New Opteron EE processors attempt to carve out an 'ultra-low-power' niche

Already, AMD has a low-power segment in its Opteron server processor line: the HE series. With Intel's Xeon processors still holding a measurable performance lead -- especially with models that command a comfortable premium -- AMD needs to compete efficiently and maintain its goal of 40% gross margin (it made 43% last quarter). To do that, the company feels it needs a new product category for a segment of customers who may be willing to pay a bit extra for something particularly useful.
If that something isn't performance, then for now, maybe it can be very low power consumption. This afternoon, the company announced a new and exclusive segment of quad-core Opteron EE processors that are intentionally dialed down, drawing 40 watts of average CPU power (ACP, AMD's own metric) versus 75 watts for the standard Opteron and 55 watts for the company's Opteron HEs -- which will continue to exist as an in-between choice.
"Some people's philosophy in the high-performance computing world is, I'm going to take a cluster and just run it as maxxed out as I can. If you go talk to people in the high-performance computing world, they rotate their clusters frequently. They're always going for new, fast parts. But if you talk to regular IT guys, they're actually looking to achieve an environment that's very predictable," remarked Margaret Lewis, AMD's long-time server product director, in an interview with Betanews. "Their goal would be to have servers where they could predict they're running between, let's say, 50 and 70% utilization, and they'd love to be able to predict what the draw of power is, and try to level it out so that you're not doing such peaks and valleys. Because if you peak with power, then you have to work your air conditioner harder; and if you peak that power to [the point where] you can't control it with an air conditioner, that's when all the warning bells go off. Things start clocking themselves down.
"So if I could design a cluster of these low-power, solid-performing but not top-of-the-line processors, and I clocked those over a period of time in terms of utility and performance, the curve is going to end up being better than if you're jumping from peak to peak."
The first two CPU models to bear the EE banner will be the 2.3 GHz Opteron 2377 EE, priced at $698 in 1,000-unit quantities (single-unit street prices will be higher), and the 2.1 GHz Opteron 2373 EE, priced at $377. For the 2373, that's nearly a 20% premium at current prices over AMD's standard 75W parts at 2.1 GHz. But for the 2377 -- at least today -- the price premium is a whopping seven bucks over the 2.3 GHz Shanghai-series Opteron at 75W. That 75W part's price could come down soon, though, especially with all the other product lines AMD finds itself making room for, also announced this afternoon.
AMD is making a big bet that there are customers who are building their own cloud services, but who may not necessarily be building high-performance computing services on that cloud. For those customers, there are still going to be very tight deployments with hot CPUs running close to hot hard drives and hard cases, and if the opportunity arises to make those operations cooler yet, some customers may just take it. "What you're going to be looking at is, what are you trying to serve out?" Lewis told us.
"The interesting thing about cloud computing is that there's really many different use cases that fall under a cloud. The use case we all know and love is search, [which] is a computationally intensive activity. Now, you could also be doing [something] like Gmail or Yahoo Mail. Eh, e-mail is a different kind of activity. You're making a query and it's streaming a certain amount of data back to you -- that store/forward type of activity. You could be using a cloud for a consumer purpose, where you're streaming a movie over to a set-top box or an entertainment PC -- like Time Warner Cable. What the cloud is doing is going to dictate [your service load]. It might be when you're doing those searches, you might want an HE part, because you'd be looking at 2.5 GHz capabilities of that part and saying, I really want that realm of performance over the 2.3. But if you're just doing Web serving or free e-mail, you can come down and take an EE part, because it's an active transactional environment -- it isn't necessarily CPU demanding."
Lewis' background is in high-performance computing -- which is the big reason AMD employed her -- and she was all about virtualization when virtualization wasn't, well, cool. One big gain that virtualization has made possible in the data center, she reminded us, is boosting utilization rates for CPUs, specifically by enabling hypervisors to make better use of system idle time. Meanwhile, AMD has been responding to idle CPUs with a technology it rechristened AMD-P just today, which lets operating systems detect when utilization drops and step down the power to compensate.
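To make that mechanism concrete, here is a minimal, purely illustrative sketch of utilization-driven power stepping in the spirit of what Lewis describes. The P-state table, thresholds, and helper functions (read_utilization, set_pstate) are hypothetical stand-ins for this article, not AMD-P's or any operating system's actual interface.

```python
import time

# Illustrative only: a simplified, hypothetical utilization-driven governor loop.
# The P-state table, thresholds, and helpers are stand-ins, not a real interface.
P_STATES_MHZ = [2300, 1800, 1400, 800]   # hypothetical core clocks, fastest first
STEP_UP_ABOVE = 0.70                     # raise the clock when utilization exceeds 70%
STEP_DOWN_BELOW = 0.50                   # lower it when utilization falls below 50%

def read_utilization() -> float:
    """Placeholder: average CPU utilization over the last sampling interval (0.0-1.0)."""
    raise NotImplementedError

def set_pstate(index: int) -> None:
    """Placeholder: request that the hardware move to P-state `index`."""
    raise NotImplementedError

def governor_loop(interval_s: float = 1.0) -> None:
    state = 0                            # start on the fastest clock
    while True:
        util = read_utilization()
        if util > STEP_UP_ABOVE and state > 0:
            state -= 1                   # demand rising: pick a faster P-state
        elif util < STEP_DOWN_BELOW and state < len(P_STATES_MHZ) - 1:
            state += 1                   # demand falling: step the power down
        set_pstate(state)
        time.sleep(interval_s)
```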
That's all very nice, except that as data centers use more virtualization, those steps down happen less frequently. "It's great that we can make power go low for an idle server," argued Lewis, "but a lot of people will say, 'I don't want my servers idle. What good is an idle server?' Isn't the underutilized server the symptom of the problem? When we talk to customers, they like this idea of a 50 to 70% utilization range. If they can fall into that range, they're really happy because that gives them enough headroom to do a peak. But it keeps a constant amount of work coming out."
Keeping CPUs moderately utilized creates an additional burden on small data centers, which must also be kept cool. So, as Lewis admitted to us, it's theoretically possible for a data center to deploy a hybrid of Opteron HE and EE server CPUs, along with management software that leans on AMD-P to load-balance between the two, shifting work to the HEs when data processing becomes more critical and back to the EEs for more general processing periods. This would bring back the kind of cluster administration that took place in the high-performance data centers Lewis helped manage prior to her current assignment with AMD.
"If you know that you have a peak workload for two hours every day," she conjectured, "this is the beauty of virtualization. You could roll some of those virtual machines over to the HE server and have them run a little bit faster, then roll them back over to the EE for the rest of the day. This is that intelligent data center dynamic that everybody keeps alluding to."