s'more Sony news....
Ammut - Nov 4, 2003
racketboy | Nov 7, 2003
s'more Sony news.... wow, that's a long post. Luckily there were paragraph breaks...
Tagrineth | Nov 7, 2003
s'more Sony news....
Yeah, if it's something I care about I tend to say a lot.
Alexvrb | Nov 8, 2003
s'more Sony news....
"You've gotta be kidding me. Xbox has a severely underpowered CPU, insanely high-latency memory, and low aggregate memory bandwidth compared to PS2 and GCN."
Xbox is undoubtedly the most powerful of the three. I can't believe you'd even take a swing at that. Having the fastest CPU doesn't matter for a 3D gaming console; by that kind of reasoning, the GameCube is pathetic and underpowered.
You also claim the Xbox suffers from high-latency memory. That's not true; what is your basis for comparison here? Does the Dreamcast suffer from insanely high memory latency? If anything, an RDRAM-based solution (like the PS2) would have more trouble with that, since the technology was designed for high bandwidth and its latency is often higher than comparable DDR SDRAM - this was one of its faults. Obviously having the highest aggregate memory bandwidth isn't all-important either, or the PS2 would kill the GCN and Xbox easily.
As for the controllers, the Controller S is very similar to a GCN controller in analog and d-pad location. But the buttons on a GCN controller are just strange... I prefer a more orderly, standard layout of some sort.
Tagrineth | Nov 8, 2003
s'more Sony news....
All of Xbox's power is in its GPU. It's an unbalanced system. Oh, and GCN's CPU is more powerful than Xbox's, just FYI. And PS2's CPU is overpowered compared to its rasteriser.
Um... PS2's memory latency is actually extremely low, only slightly higher than GCN's. DRDRAM in and of itself isn't high-latency; it's the SDRAM-like PC implementation that has problems. In the PS2, the DRDRAM is in exactly two chips, each about a centimetre away from the CPU. Both chips can be active at all times, and the signal takes a minuscule amount of time to cross the RAM and return to the Emotion Engine. In PCs, the signal has to go across the mainboard, through ALL memory modules (lengthwise), then back around, and only one chip per channel can be active at a time - changing active chips adds a latency penalty. GCN's RAM is designed to be insanely low-latency; AFAIR it's the lowest-latency DRAM ever created - its spec even rivals most SRAM, which is why it's known as 1T-SRAM instead of DRAM. Xbox's DDR, on the other hand, is a UMA and is being addressed by three or four different devices at any given time. Real tests done by ERP at Beyond3D found that Xbox's memory latency is more than double that of PS2's.
It's still bulky, the stick isn't as comfortable for me (shape-wise), and there are only two shoulder buttons. Plus the Controller S's black and white buttons are placed very awkwardly. The only true, consistent advantage Xbox has over GCN is memory - 64MB > 40MB.
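The latency argument above can be put in concrete terms. A quick sketch, converting nanoseconds of memory latency into CPU clock cycles stalled; the 6.2ns figure and the "more than double PS2's" claim are the ones quoted in this thread, the clock speeds are the consoles' published CPU clocks, and the 100ns Xbox figure is purely hypothetical, chosen for illustration:

```python
# Convert a memory latency in nanoseconds into CPU clock cycles stalled.
# Latency figures are the ones quoted in this thread; clock speeds are
# the consoles' published CPU clocks.

def latency_cycles(latency_ns: float, cpu_mhz: float) -> float:
    """Cycles a CPU at cpu_mhz spends waiting out latency_ns."""
    cycle_ns = 1000.0 / cpu_mhz   # one clock period in nanoseconds
    return latency_ns / cycle_ns

# GCN: Gekko at 485 MHz, 1T-SRAM quoted at 6.2 ns sustained.
gcn = latency_cycles(6.2, 485)    # ~3 cycles

# Xbox: 733 MHz CPU. If its UMA latency were, say, 100 ns
# (a hypothetical stand-in for "more than double PS2's"):
xbox = latency_cycles(100, 733)   # ~73 cycles

print(f"GCN:  {gcn:.1f} cycles")
print(f"Xbox: {xbox:.1f} cycles")
```

The point of the exercise: the faster the CPU clock, the more cycles a given latency costs, so a high-clocked CPU on slow memory stalls disproportionately.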
Alexvrb | Nov 8, 2003
s'more Sony news.... Not all of the GCN's memory is MoSys 1T-SRAM, though much of it is. The sustainable latency isn't uber-low. The Xbox may be UMA, but the other components that share the memory aren't going to be using it nearly as much as the GPU - which is obviously efficient enough, since it solidly trounces the GC in performance. It also allows developers great flexibility. The CPU, a 733MHz Intel chip, provides good performance while being very bandwidth-efficient. At MOST it'd use up to 1GB/sec. I've looked at the performance of the GCN's 485MHz IBM "Gekko" chip, and it's not outstanding. In fact, it looks pretty weak standing at around 1125 MIPS, for starters. http://www.pcvsconsole.com/hank/answer.php?file=16...
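The "at most 1GB/sec" figure lines up with the arithmetic for a Pentium III-style front-side bus, 64 bits wide at 133MHz - a sketch under that assumption (the post itself doesn't say where the number comes from):

```python
# Back-of-the-envelope check on the "at most 1 GB/sec" CPU figure:
# the Xbox CPU sits on a Pentium III-style front-side bus,
# 64 bits wide clocked at 133 MHz.

bus_width_bytes = 64 // 8     # 8 bytes transferred per bus clock
bus_mhz = 133                 # FSB clock in MHz
peak_bytes_per_sec = bus_width_bytes * bus_mhz * 1_000_000

peak_gb_per_sec = peak_bytes_per_sec / 1e9
print(f"{peak_gb_per_sec:.3f} GB/s")   # ~1.064 GB/s peak
```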
gameboy900 | Nov 8, 2003
s'more Sony news.... The GC and Xbox have one huge advantage over the PS2 (and the DC had this too): both systems' GPUs can take compressed texture maps directly from memory, while the PS2 needs to have uncompressed texture maps sent to the GPU. This GREATLY reduces how many texture maps the PS2 can utilize each frame. An article I read years ago basically stated that the PS2 could use up to about 10MB of textures per frame, while the DC, with its PVR compressed textures, could do up to 26MB of textures per frame even with its slower memory speeds. That makes a big difference when you want to show lots of different detailed textures. The GC and Xbox can natively use DXT-compressed textures (since they are basically derivatives of DirectX-based cards) and as such have the potential to use more textures while using less bandwidth. Also, in modern consoles the CPU doesn't play as important a role as it used to. These days it's relegated to being the manager for the system, dealing with user input and AI.
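The compression advantage described above is easy to quantify. A sketch using DXT1's standard ratio (a 4x4 block of texels packed into 8 bytes, i.e. 4 bits per texel) against plain 16-bit texels; the 10MB-per-frame figure is the one the post quotes, the rest is generic arithmetic:

```python
# Effective texture data per frame: a GPU that reads compressed
# textures directly multiplies its usable texture budget by the
# compression ratio. DXT1 packs a 4x4 texel block into 8 bytes,
# i.e. 4 bits per texel, versus 16 bits for an uncompressed texel.

def effective_texels(raw_mb: float, bits_per_texel: float) -> float:
    """Millions of texels addressable from raw_mb of texture data."""
    return raw_mb * 8 * 1024 * 1024 / bits_per_texel / 1e6

uncompressed_16bpp = effective_texels(10, 16)  # PS2-style: no HW decompression
dxt1_4bpp = effective_texels(10, 4)            # same 10 MB, DXT1-compressed

print(f"{dxt1_4bpp / uncompressed_16bpp:.0f}x more texels")  # 4x
```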
Tagrineth | Nov 8, 2003
s'more Sony news....
Oh hell yes it is. You don't think sustained 6.2ns is low?
They don't use it as much... but I'm not talking about bandwidth, I'm talking about LATENCY, i.e. the amount of time between when the memory is addressed and when it returns the requested data. Xbox's UMA means a lot of things are addressing the same memory at the same time, across different bus segments, which slows things down. And the XGPU doesn't "trounce" Flipper. ERP over at Beyond3D said that there are many, many things the XGPU does faster than Flipper, BUT he also said he could think of a few cases off the top of his head where Flipper would scream past the XGPU to an extreme extent.
The test used to find that figure wasn't using Gekko's most important feature, paired singles: it can effectively perform two 32-bit operations per clock cycle. And I don't care how much bandwidth the P3/Celeron hybrid is using - it's all about latency, and Xbox's UMA has AWFUL latency.
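The paired-singles claim above translates into peak-throughput arithmetic like this. A sketch, not a benchmark: it assumes one fused multiply-add (two FLOPs per lane) issued every cycle, which is the usual way such peak figures are derived:

```python
# Peak single-precision throughput with paired singles: each FP
# instruction operates on two 32-bit floats at once, and a fused
# multiply-add counts as two operations per lane.

clock_mhz = 485    # Gekko's published clock
lanes = 2          # paired singles: two floats per instruction
ops_per_lane = 2   # multiply-add = 2 FLOPs (assumes one FMA issued per cycle)

peak_mflops = clock_mhz * lanes * ops_per_lane
print(f"{peak_mflops} MFLOPS peak")  # 1940, vs 485 for plain scalar FP
```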
Gallstaff | Nov 8, 2003
s'more Sony news.... Do these stats really matter? I thought it was about the games here at SX....
Alexvrb | Nov 8, 2003
s'more Sony news.... Yes or no: are you saying that the Gekko chip is more powerful than a 733MHz PIII with 128KB of L2 cache? Remember, the PIII has some features it can take advantage of too, like SSE. The XGPU is still overall more powerful than Flipper, and I'm not saying Flipper is a slouch. Yes, each one might excel at certain things, but the nVidia chip is more powerful overall. I'm not an nVidia/Intel/MS fanboy, far from it. But I can't believe you'd hammer the Xbox's hardware. Do what everyone else does and attack the software instead. Anyway, 6.2ns is only for the 3MB total of frame buffer + texture cache. As gameboy said, hardware texture compression greatly helps modern GPUs, but there's still a limit. The 24MB of main memory is still decent, but has a sustainable latency of around 10ns. Then there's the 16MB "A-memory", which is 81MHz DRAM - very slow. That said, I think the GCN's hardware is very efficient and great for the manufacturing cost.
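The three GameCube memory pools listed above can be laid out side by side, with each latency re-expressed in Gekko clock cycles. The sizes and the 6.2ns/10ns figures are the ones quoted in this thread; the A-memory latency is a hypothetical stand-in (one 81MHz bus clock), since the thread only calls it "very slow":

```python
# The three GameCube memory pools as quoted in this thread, with each
# latency expressed in Gekko (485 MHz) clock cycles.

GEKKO_MHZ = 485
cycle_ns = 1000 / GEKKO_MHZ   # ~2.06 ns per CPU clock

pools = {
    # name: (size in MB, quoted latency in ns)
    "embedded 1T-SRAM (frame buffer + texture cache)": (3, 6.2),
    "main 1T-SRAM": (24, 10.0),
    # A-memory runs at 81 MHz; one bus clock (~12.3 ns) is a
    # hypothetical stand-in for its real access latency.
    "A-memory (81 MHz DRAM)": (16, 1000 / 81),
}

for name, (size_mb, latency_ns) in pools.items():
    print(f"{name}: {size_mb} MB, ~{latency_ns / cycle_ns:.1f} cycles")
```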
Des-ROW | Nov 8, 2003
s'more Sony news.... The Emotion Engine is more powerful than the Gekko or the XCPU ^^!
Cloud121 | Nov 8, 2003
s'more Sony news....
Yeah, and the MegaDrive is more powerful than the Dreamcast...
Gallstaff | Nov 9, 2003
s'more Sony news.... Again, does it really matter?
it290 | Nov 9, 2003
s'more Sony news....
Dude... 75 million polygons per second.. NO PROBLEM! :wanker
gameboy900 | Nov 9, 2003
s'more Sony news.... It just can't show them to you or do anything with them.
Des-ROW | Nov 9, 2003
s'more Sony news....
Haha, the kind of answer I would expect from an ignorant person!
cww80 | Nov 9, 2003
s'more Sony news.... The newer console (in this case, the Xbox) is overall the most powerful. Why is this so hard for some people to accept?
Kinda like the Xbox with Microsoft's original claim of "300 million polygons per second", and then later 125 million...
Tagrineth | Nov 9, 2003
s'more Sony news....
Hey, I don't deny that Xbox is the most powerful taken as a whole, but it's unbalanced and there ARE cases where the GameCube can outperform it, WITH better output. Hell, I can even think of an example off the top of my head - Star Wars: Rebel Strike.
Texture compression is for bandwidth, not latency. And the PS2 does support *some* texture compression - 4- and 8-bit CLUT - but it's limited. And 10ns sustained is still obscenely low. You do realise SDRAM rated at '10ns' can't possibly sustain that? It's a maximum. And yes, the A-RAM is really, REALLY slow. *sigh* N should've skipped the A-RAM and included more 1T.
Actually, the Emotion Engine is quite a lot more powerful and flexible than Gekko and the XCPU. But only the EE as a whole; the CPU core (r5900i) can't compare at all. Having two Vector Units bolted on doesn't hurt, you know. =)
Not defending Microsoft or anything here, but IIRC that original 300 million claim was vertices, not polygons.
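The vertices-vs-polygons distinction above matters because the two counts diverge depending on how a mesh is submitted. A sketch of the standard conversion (independent triangles need 3 vertices each; one long triangle strip draws n triangles from n+2 vertices):

```python
# Vertices vs. polygons: the same vertex rate yields very different
# triangle rates depending on how the mesh is submitted.

def tris_independent(vertices: int) -> int:
    """Independent triangles: every triangle sends its own 3 vertices."""
    return vertices // 3

def tris_stripped(vertices: int) -> int:
    """One long triangle strip: n+2 vertices draw n triangles."""
    return max(vertices - 2, 0)

claimed_vertex_rate = 300_000_000   # the original 300-million/sec figure

print(tris_independent(claimed_vertex_rate))  # worst case: 100 million tris/sec
print(tris_stripped(claimed_vertex_rate))     # ideal strips: nearly 300 million
```

So a 300-million-vertex claim quietly shrinks to 100 million polygons the moment triangles aren't perfectly stripped, which is consistent with the later 125-million figure mentioned above.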
ExCyber | Nov 9, 2003
s'more Sony news....
I think this is the favorite BS marketing tactic of 3D console makers (except Nintendo; they've actually given real-world performance estimates on their spec sheets) - give your vertex processing capability and don't say anything about fill rate, producing a big impressive number that means nothing by itself.
it290 | Nov 9, 2003
s'more Sony news....
I agree.. I was just arguing with someone about Sony's 75mil figure recently, never mind the fact that it's nearly impossible to actually display that many polygons on almost any of today's display hardware (not counting obscured polygons). Correct me if I'm wrong, but I believe their claim was actually polygons, not vertices.
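The "nearly impossible to display" point comes straight out of the arithmetic. A sketch assuming 60 frames per second and a typical 640x480 TV-resolution output:

```python
# Why 75 million polygons/sec is mostly invisible on a 2003 TV:
# divide by frame rate and compare against the pixels available.

polys_per_sec = 75_000_000
fps = 60
width, height = 640, 480   # typical TV-resolution output

polys_per_frame = polys_per_sec // fps
pixels_per_frame = width * height

print(polys_per_frame)    # 1.25 million polygons per frame
print(pixels_per_frame)   # only 307,200 pixels to show them on
print(f"{polys_per_frame / pixels_per_frame:.1f} triangles per pixel")
```

At roughly four triangles per pixel, most of those polygons could never be individually visible even before accounting for overdraw of obscured geometry.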