cerbie
1000+ Head-Fier
Joined: Mar 12, 2005 · Posts: 1,219 · Likes: 12
Quote:
Originally Posted by majid
Keep in mind the true clock speed of the dual-core Opteron 280 is 2.4GHz, and that of the single-core 254 is 2.8GHz, about 30% less than Intel's. The AMD64 architecture simply gets twice as much work done per cycle and per watt. And the integrated memory controller and switched HyperTransport interconnect are way ahead of Intel's tired bus. The Indian-designed CSI bus, touted as a replacement, seems to have been canned along with the entire line of Bangalore-designed server chips. Sure, you can push the existing FSB to 1GHz, but like brute-force Detroit "muscle cars" with inefficient pushrod or hemi technology compared to the more sophisticated Japanese or European timed multi-valve designs, this comes at the cost of even worse energy efficiency.
Yes, but AMD has done better per cycle as far back as the first K6. There were many reasons not to use them (power use was certainly one back then!), such as generally being late to market, but clock efficiency was not one of them. The Pentium-M once again shows that raw clock speed just isn't important when comparing different sets of chips.
Intel lost it by chasing clock speed: they wanted to break 5GHz with the revisions following Prescott (if they could have done it, they would have at least remained performance-competitive, if not performance-per-watt competitive), and figured the 90nm shrink would get them there...even now, it takes extreme overclockers to even think about that territory. For bandwidth-centric workloads, the new Xeons won't be bad, but they would still be hard to justify over the course of a year's use.
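To put the clock-efficiency point in numbers: throughput is roughly IPC times clock, so a slower-clocked chip with higher IPC can come out ahead outright, and further ahead per watt. A minimal sketch, with made-up IPC and wattage figures purely for illustration (not benchmarks of either chip):

```python
# Performance ~ IPC x clock, so a slower-clocked chip with higher IPC
# can win outright, and wins bigger per watt. All figures below are
# illustrative assumptions, not measurements.

def perf_gips(ipc, clock_ghz):
    """Rough throughput in billions of instructions per second."""
    return ipc * clock_ghz

chips = {
    "Opteron-style": {"ipc": 1.5, "clock_ghz": 2.4, "watts": 95},    # assumed
    "NetBurst-style": {"ipc": 0.9, "clock_ghz": 3.6, "watts": 150},  # assumed
}

for name, c in chips.items():
    p = perf_gips(c["ipc"], c["clock_ghz"])
    print(f"{name}: {p:.2f} GIPS, {1000 * p / c['watts']:.1f} MIPS/W")
```

With these assumed numbers the 2.4GHz part edges out the 3.6GHz part on raw throughput and nearly doubles it per watt, which is the whole argument in miniature.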
Quote:
Energy efficiency is not just a buzzword. Intel has been making much noise about it, but the reality is, their current lineup needs twice as many watts as equivalent Opterons. I pay $4000/month for power in my data center, significantly more in a year than the cost of all my servers put together. While power-hungry hard drives make up a big proportion of my power budget, I would be prepared to pay a significant premium for more power-efficient servers, as they allow me to ramp up without having power costs explode. That is also why I am retiring servers that are fully functional: the savings from more modern and power-efficient systems outweigh the cost of the new machines. The current Intel designs are stop-gap measures to staunch the bleeding while they design a competitive CPU and memory interconnect. It will take another year or two for Intel to make up the time lost to the Itanium distraction.
Lost time? They are going to have a new Itanium out in a year or so, and it will be successful (this time, yeah, it'll work...)!
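On the power-bill point, the arithmetic is easy to sanity-check. A back-of-the-envelope sketch, with assumed wattages, hardware prices, and electricity rates (not majid's actual numbers):

```python
# Back-of-the-envelope payback for retiring working-but-inefficient
# servers. All inputs below are illustrative assumptions, not data
# from the post.

HOURS_PER_MONTH = 24 * 30
COOLING_FACTOR = 2.0  # rough rule of thumb: cooling doubles the raw power cost

def payback_months(old_watts, new_watts, new_server_cost, price_per_kwh=0.12):
    """Months until electricity savings cover the new hardware."""
    saved_kw = (old_watts - new_watts) / 1000.0
    monthly_savings = saved_kw * HOURS_PER_MONTH * price_per_kwh * COOLING_FACTOR
    return new_server_cost / monthly_savings

# Scenario 1: straight swap, one 400 W box -> one 250 W box costing $2500.
print(f"Swap: {payback_months(400, 250, 2500):.0f} months")

# Scenario 2: consolidate three 400 W boxes onto one 300 W dual-core
# machine -- this is where retiring fully functional servers pays off fast.
print(f"3:1 consolidation: {payback_months(3 * 400, 300, 2500):.0f} months")
```

A straight swap takes years to pay back, but consolidation onto denser, more efficient machines gets the break-even down to months, which is exactly the case for retiring servers that still work.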

Quote:
This is not a new story - read Tracy Kidder's "The Soul of a New Machine" for another good example of a company that almost died betting on an overly ambitious green-field processor design instead of a more pragmatic one designed for maximal compatibility. What is more surprising is that Intel got caught in the same trap, as the reason they are number 1 today is precisely that they always emphasized compatibility in the past. The only explanation is complacency and arrogance: at some point they assumed their success was due to their superiority rather than to the strategy of maximizing compatibility, and thus believed the lessons of others' failures did not apply to them.
Also, it is becoming more and more about vendors, and Intel is still working with the "we say how it will be" paradigm. IBM v. Sun v. HP v. Dell matters more, usually, than Opteron v. Xeon. This puts any potential compatibility issues to rest, and gives the vendors freedom to do some good things that customers will start to appreciate, and to sell based on what the machine does and how well it does it. If everyone were as worried about Intel Inside as they were when the Athlon hit the streets, it could be 10W v. 100W and no one would touch the AMD (VIA shares 90% of the blame for that; I've had pretty much no trouble with real AMD chipsets, but always something with VIA).
Part of this is simply due to cheaper parts being fairly good. You can throw anything together and expect good performance and good uptime on the cheap. Excepting the bad-capacitor mess, most parts have become so compatible and reliable that having the 'right' ones is not very important, and is not a cause for worry.
Intel still has the average joe by the balls (eMachines is helping, though), but more and more businesses are caring more about what a machine can do than what makes it do it. Intel really was short-sighted not to deal with SMP for the Pentium Ms. If they'd had multi-CPU blades out around the Dothan release, they could have adapted them to standard Xeons, eaten some minor profit margins, and been very competitive, totally stalling AMD's serious gaining of credibility. Oh well.
