15 watts? Then it's a mobile part... trust me on this, dude. Considering a normal desktop part runs anywhere between 65 and 85 W, it has to be a P4 part that passed the low mobile specs
Whaa??? The website that I went to for such information gave 13.5 as the wattage for the mobiles and 15 for the desktops. Are you going by Intel numbers or AMD numbers? There may be a reason why you don't see many laptops with AMDs in them
Quite simply... they're wrong. That's just putting a spin on things. Just as they know things on our end, we know things on their end, and their Centrino "design" team was pretty small. They took the core P3 and made modifications to it. Not that that's bad, though... the P3 architecture is sound. Notice how the frequencies are also around the P3 range (1.6 GHz)?
Those lying motherfuckers! I should know by now not to trust a respectable newspaper. The Republicans kept telling me the media was full of crap...
I still stand by the "speed" argument. It's a fairly well-known and accepted rumor that the main reason Intel designed such an inefficient architecture (the P4) was to get clock speed. As I stated in an earlier post, going from the P3 to the P4 the pipeline went from 13 stages to 20. Some of these new stages just passed data along to the next stage, but in turn (I don't wanna go into design details here since it's a bit complicated), that allowed them to jack up the frequency beyond what the P3 could reach. They knew that architecture couldn't support speeds like 2+ GHz and beyond (well, maybe 2, but not 3) and thus needed to do something. At the same time, I am fairly sure marketing (some of the best in the business, I will even admit) said that they needed to keep delivering raw speed, and this furthered the idea of such an inefficient design
It's known that Intel is taking advantage of public stupidity. However, I've yet to see anything suggesting that Intel is in any way responsible for said stupidity. (unless one of those Blue Man Group commercials said something like "Gigahertz is everything. Do not pay attention to instructions per clock cycle, that's a bunch of crap" and I missed it...)
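The "gigahertz isn't everything" point boils down to a simple product: work done per second is clock rate times instructions per clock (IPC). A toy sketch of that arithmetic, with made-up IPC figures purely for illustration (neither chip's real IPC appears in this thread):

```python
# Hypothetical numbers for illustration only -- not measured IPC figures.
def throughput(clock_ghz, ipc):
    """Instructions retired per second (in billions) = clock * IPC."""
    return clock_ghz * ipc

p3_class = throughput(1.4, 1.0)  # hypothetical efficient, lower-clocked chip
p4_class = throughput(3.0, 0.6)  # hypothetical fast-but-inefficient chip

print(f"P3-class: {p3_class:.2f} G-instructions/s")
print(f"P4-class: {p4_class:.2f} G-instructions/s")
```

With these invented numbers the deeper-pipelined chip still wins on total throughput, which is the poster's point below about the overclocked P4.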
Why do you think we adopted model numbers like 3200+?
Because you're lying sons of bitches?

Sorry, I just couldn't help myself after the benchmark thing.
Also, they were trying to force the entire industry to go 64-bit (they thought they had the clout... a common mistake a lot of companies make). We decided to go x86-64 with 32-bit support, so you could choose when to move to 64-bit since it supported both
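The practical stakes of the 32-bit vs 64-bit split are easy to quantify: a 32-bit pointer can only name 2^32 bytes (4 GiB) of address space, while a 64-bit pointer names 2^64. A quick arithmetic sketch:

```python
# Address-space arithmetic: how much memory each pointer width can name.
addr_32 = 2 ** 32  # bytes addressable with a 32-bit pointer
addr_64 = 2 ** 64  # bytes addressable with a 64-bit pointer

print(addr_32 // 2 ** 30, "GiB")  # 4 GiB
print(addr_64 // 2 ** 40, "TiB")  # 16,777,216 TiB (16 EiB)
```

That 4 GiB ceiling is what made a migration path inevitable; x86-64's contribution was letting 32-bit code keep running in the meantime.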
Yeah, I thought it had something to do with being 64-bit, but I couldn't remember well enough to say it with any kind of certainty.
Was the Itanium II any better?
That's the reason we increased ours to 512 KB in the Barton core. Then again, they're gonna have a 1 MB L2 soon with Prescott, and then things change again
Whoa, I didn't know you'd upped your L2 cache. I need to visit those websites again.
A whole megabyte of L2 cache... oooh... drool...
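Why a bigger L2 matters can be sketched with the standard average-memory-access-time model: AMAT = hit time + miss rate × miss penalty, where a larger cache mainly buys a lower miss rate. The figures below are invented just to show the shape of the effect, not measured numbers for any of these chips:

```python
def amat(hit_ns, miss_rate, penalty_ns):
    """Average memory access time: hit cost plus expected miss cost."""
    return hit_ns + miss_rate * penalty_ns

# Invented figures: a 256 KB L2 vs a 512 KB L2 (a Barton-style bump).
small = amat(hit_ns=5.0, miss_rate=0.10, penalty_ns=100.0)  # 256 KB
large = amat(hit_ns=5.0, miss_rate=0.07, penalty_ns=100.0)  # 512 KB

print(f"256 KB L2: {small:.1f} ns average access")
print(f"512 KB L2: {large:.1f} ns average access")
```

The hit time stays put; the win comes entirely from trips to main memory that no longer happen.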
Well, as I stated earlier, I wouldn't trust either of our benchmarks. I'll be the first to admit that we *try* to do the same things... it is a business, right? Funny that I'm kind of rebutting my own company
No, it's fair enough after you slammed Intel so many times
Personally, it seems to me like you all too greatly enjoy saying things like "Yeah, our chips and their chips perform about the same, but our chips are better because they're more efficient" and "Yeah, the P4 goes way faster than the P3 ever could, but the P3 was more efficient so it's better." Something just doesn't add up there

... I mean, I like efficiency and all (my first car is going to be a Honda Insight), but come on... when you see a P4 overclocked to 4.44 GHz and the P6 core (Pro/2/3/Celeron/Centrino/whatever) hasn't even been pushed past 2.0... maybe that horrible inefficiency isn't quite as much of a drawback as you thought
If things go REALLY well in the next few weeks, I might be getting a quad Opteron desktop/server.
The only .org member to donate $1,500 and still have a donation status of "total leech"