the Black Monarch wrote: No dude, I really did mean the DESKTOP version. All 15 volts. Have you ever been to the Alienware website to look at their laptops? They don't offer the mobile P4s anymore. The Area 51-m is well known for its extremely short battery life (75 mins doing nothing, 30 if you're playing music in Windows Media Player) and large weight and volume because of the hugeass cooling system.

15 volts? You mean watts, and 15 W means it's a mobile part...trust me on this, dude. Considering a normal desktop part dissipates anywhere between 65 and 85 W, it has to be a P4 part that passed the low mobile power specs. So maybe a miscommunication here...it IS a P4 part, but it passed the mobile specs and was therefore used as one. The architecture is identical...it just consumes less power, probably because it was fabricated differently (it happens...fab variation, that is).
the Black Monarch wrote: The newspaper said that the Centrino used a completely new architecture...

Quite simply...they're wrong. That's just marketing spin. Just as they know things on our end, we know things on their end, and their Centrino "design" team was pretty small. They took the P3 core and made modifications to it. Not that that's bad, though...the P3 architecture is sound. Notice how the frequencies are also in P3 territory (around 1.6 GHz)?
the Black Monarch wrote: Umm... no... Intel has done nothing of the sort. Their commercials make absolutely no mention of clock speeds, and some of them (specifically, the ones with the Blue Man Group) don't mention the chips at all. If consumers equate bigger numbers with performance, it's because of their own stupidity and not Intel. Personally, I think Intel is winning because of its "monopolistic practices" that you mentioned. If I could have gotten an AMD in my laptop instead of a P4, I would have.

Well, for the most part I *DO* agree that the monopolistic practices, like paying off distributors not to carry AMD and skewing benchmarks, are the main reason. I still stand by the "speed" argument, though. It's a fairly well-known and accepted rumor that the main reason Intel designed such an inefficient architecture (the P4) was to chase clock speed.
As I stated in an earlier post, the P3 -> P4 transition went from 13 pipeline stages to 20. Some of these new stages just transfer data to the next stage, but in turn (I don't wanna go into design details here since it's a bit complicated), they allowed Intel to jack up the frequency beyond what the limited P3 core could reach. They knew that architecture couldn't support speeds like 2+ GHz and beyond (well, maybe 2, but not 3) and thus needed to do something.
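To illustrate (with completely made-up numbers, not anything from a real design doc): splitting the same logic across more pipeline stages means less work per stage, so the clock can tick faster. Quick Python sketch:

[code]
# Hypothetical figures: the total logic delay per instruction and the
# per-stage latch overhead are assumptions for illustration, NOT real
# P3/P4 numbers.
LOGIC_DELAY_NS = 10.0    # total combinational logic delay (assumed)
LATCH_OVERHEAD_NS = 0.1  # flip-flop/latch cost added per stage (assumed)

for stages in (13, 20):
    cycle_ns = LOGIC_DELAY_NS / stages + LATCH_OVERHEAD_NS
    print(f"{stages} stages -> cycle {cycle_ns:.2f} ns, "
          f"max clock ~{1.0 / cycle_ns:.2f} GHz")
[/code]

More stages = shorter critical path per stage = higher clock ceiling, which is exactly the trade they made.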
At the same time, I'm fairly sure marketing (some of the best in the business, I'll admit) said they needed to keep winning on speed, which furthered the push for such an inefficient design. Not surprisingly, this was right when we were beating them in the speed race, hence the need for frequency. Well...the story goes on: they pulled it off...they can squeeze out a good 200 MHz every quarter, and people are buying.
You can't tell me that when a person goes to a store and sees 3.06 GHz next to 2.25 GHz, they won't assume the 3.06 GHz chip is faster. Why do you think we adopted model numbers like 3200+? Now again, I do agree on the monopoly thing, and, well...in general they have 10x more money than we do to spend, but...the speed perception DOES exist, as demonstrated by how most (ignorant) people think about computers. Sadly, most people aren't that well educated in computer technology, but oh well...
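Think of it as throughput = IPC x clock. Rough sketch with made-up IPC values (illustrative only, not measured numbers for either chip):

[code]
# The IPC (instructions per clock) values below are assumptions for
# illustration, not benchmarks of the actual P4 or Athlon XP.
chips = {
    "P4 3.06 GHz":        {"ghz": 3.06, "ipc": 0.9},  # assumed IPC
    "Athlon XP 2.25 GHz": {"ghz": 2.25, "ipc": 1.2},  # assumed IPC
}

for name, c in chips.items():
    print(f"{name}: ~{c['ghz'] * c['ipc']:.2f} billion instructions/sec")
[/code]

With those assumed numbers, both land around 2.7 billion instructions/sec even though one clock figure looks 36% bigger on the shelf. That's the whole point of the 3200+ style model numbers.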
the Black Monarch wrote: The Itanium was not made for the mainstream. I can't remember what the hell it was supposed to be, though.

It was made for servers, and the design itself isn't that bad. It's just SLOW as hell. Now I know I said speed isn't everything (it's only half the equation), but we're talking lower than a GHz (after the bug they found). Again...it's half of the equation, so speed isn't EVERYTHING, but it is SOMETHING. Also, they were trying to force the entire industry to go 64-bit (they thought they had the power to...a common mistake a lot of companies make). We decided to go x86-64 with 32-bit support so you could choose when to move to 64-bit, since it runs both. Another reason why we're getting these design wins.
the Black Monarch wrote: dwchang, I think you should mention cache memory. If I remember correctly, the P4 has like twice as much L2 cache as the biggest Athlon, making the Athlon much more likely to choke when faced with particularly cache-intensive applications (the Quake III engine, for example, was specifically designed to take advantage of the P4's superior L2 cache).

Nah, cache IS a big deal, like you stated. I won't go into details, but cache hits and misses can cause HUGE performance differences (see the quick sketch below). I'll admit that. That's the reason we increased ours to 512 KB in the Barton core. At the same time, even if you have a big cache, if the architecture using it isn't that good, it won't be used effectively. Why do you think a processor that is 800 MHz slower and has a smaller cache (well, with Barton it's even, but take a T-bred B) can have COMPARABLE (didn't say beat) performance against a 3.06 GHz part? Because we have a more efficient architecture. As you said, in instructions per clock we're more efficient. They just have more raw speed, and thus it evens out.
Then again, they're gonna have a 1 MB L2 soon with Prescott, and then things change again.
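For anyone wondering why misses matter so much: average memory access time is hit_time + miss_rate * miss_penalty. The latencies below are ballpark assumptions for illustration, not specs for either chip:

[code]
# Assumed latencies (illustrative, era-ballpark, not P4 or Athlon specs).
HIT_TIME_CYCLES = 3        # cache hit latency (assumed)
MISS_PENALTY_CYCLES = 200  # trip out to main memory (assumed)

for miss_rate in (0.01, 0.05, 0.10):
    amat = HIT_TIME_CYCLES + miss_rate * MISS_PENALTY_CYCLES
    print(f"miss rate {miss_rate:.0%}: avg access ~{amat:.0f} cycles")
[/code]

Going from a 1% to a 10% miss rate takes you from ~5 cycles to ~23 cycles per access. That's why a bigger (or better-used) cache swings real-world performance so much.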
the Black Monarch wrote: I noticed something very interesting on the AMD website a few months ago. They like to portray their side-by-side comparisons and benchmarks as very fair and unbiased, showing where the P4 is better (like L2 cache) and where the Athlon is better (like instructions/Hz). However, I noticed that in their benchmarks, they used high-end Nvidia or ATI video cards in their own machines and used low-end Intel video cards for the Intel machines. Hmm.

Well, as I stated earlier, I wouldn't trust either of our benchmarks. I'll be the first to admit that we *try* to do the same things...it is a business, right?

