Internet data centre power requirements are increasing by as much as 20 per cent a year; by some expert estimates, these facilities already consume, in aggregate, as much electricity as entire countries, including Iran, Mexico, Sweden and Turkey. The industry hopes to reverse the trend by revisiting the design of cooling systems, power supplies and server architectures.
In servers, the notoriously voracious microprocessor is passing the power-hog mantle to the DRAM, which offers fast data access but requires a heat-generating refresh every few milliseconds. Thus the greening of the data centre includes a focus on lower-voltage DRAMs, nonvolatile alternatives and the emerging category of storage-class memories.
Whether the green-memory movement thrives or dies on the vine, the DRAM status quo could be uprooted.
The DRAM's power appetite is not its only problem. As the recession wears on, OEMs are keeping a nervous eye on struggling memory suppliers. "It's not pleasant to see our partners suffer so badly," said Tom Lattin, director of strategic commodities for industry-standard servers at Hewlett-Packard Co.
DRAM scaling, meanwhile, could hit a wall as it becomes increasingly difficult to shrink the capacitor within the device. That could fuel the need for such alternatives as ferroelectric, magnetoresistive, phase-change and resistive RAM.
Don't look for the DRAM to disappear, said Bob Merritt, an analyst with research firm Convergent Semiconductors who believes DRAMs will scale to 20 nanometers. "There will be DRAM applications for the next 10 years," Merritt said, but "you will also see applications" that will turn to nonvolatile alternatives (which don't require refresh to maintain the data) for server main memory.
Bill Tschudi, program manager at Lawrence Berkeley National Laboratory, said the drive to make data centres more power efficient will include better IT practices, new power distribution schemes, higher processor utilisation rates and "advancements on the memory side."
"Memory power is a significant portion of platform power," noted Dileep Bhandarkar, distinguished engineer with Microsoft Corp.'s Global Foundation Services unit. "As processor performance increases and virtualisation takes off, the memory footprint will increase. There is a need for lower-voltage DRAMs."
DRAM makers have responded with lower-voltage DDR3 synchronous DRAMs, which have found a home in servers from such vendors as HP, IBM, SGI and Sun.
Meanwhile, solid-state drives (SSDs) and I/O accelerators could shake up the memory and storage hierarchy. And server start-ups Schooner Information Technology Inc. and Virident Systems Inc. have released data centre servers that promise to cut hardware costs as well as power consumption. The potential of the technology has prompted IBM to form an alliance with Schooner.
In theory, green servers could replace traditional X86- or RISC-based systems, possibly displacing DRAM in the process. Schooner and Virident use lower-power, nonvolatile, "storage class" memory to handle the search index and other tasks usually relegated to DRAM.
Market watcher Frost & Sullivan estimates that a typical server farm of 5,000 systems with 32 GB of DRAM each could be reduced to 1,250 systems with 128 GB each of nonvolatile memory, resulting in a 75 per cent reduction in energy over four years, a 75 per cent reduction in the cost of physical space and a 45 per cent reduction in capital expenditures.
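As a sanity check on that arithmetic, note that the two configurations hold the same aggregate memory; the sketch below works through the numbers (the assumption that energy and floor space scale with server count is ours, not Frost & Sullivan's, and the figures are illustrative):

```python
# Back-of-envelope check of the Frost & Sullivan consolidation scenario.
# Aggregate memory is held constant across both farms.

dram_servers, dram_gb = 5000, 32       # baseline farm
nvm_servers, nvm_gb = 1250, 128        # consolidated farm

assert dram_servers * dram_gb == nvm_servers * nvm_gb  # both hold 160 TB

server_reduction = 1 - nvm_servers / dram_servers
print(f"Server count cut by {server_reduction:.0%}")   # 75%

# If energy and floor space scale roughly with server count, a 4:1
# consolidation yields the quoted 75 per cent reductions in both.
```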
Troubling trends

Such reductions would be welcome news for U.S. data centres, which spend Rs.14,696.08 crore ($3 billion) per year on electricity alone, according to the Environmental Protection Agency. The EPA sees U.S. data centre power consumption rising from 6,100 crore kilowatt-hours today to 10,000 crore kWh in 2011. Meanwhile, Frost & Sullivan projects that the total installed base of data centre servers will rise from 22 lakh units in 2007 to 68 lakh units next year.
The typical server consumed about 50 watts before 2000 but draws some 250 W today, according to "Energy Efficiency for Information Technology," a new book published by Intel Corp. And SGI, formerly Rackable Systems Inc., estimates that for every 100 W to power a server, a further 60 to 70 W are needed to cool it.
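Those two figures compound: every watt of IT load carries a cooling surcharge of roughly 0.6 to 0.7 W. A rough sketch of the combined draw for one rack (the rack size is an assumed figure for illustration):

```python
# Rough facility-level draw for a rack of modern servers, using the
# figures quoted above; the rack size is an assumed example.

servers_per_rack = 40
server_watts = 250                     # typical server today, per Intel
cooling_per_100w = (60, 70)            # SGI's estimate: 60-70 W per 100 W

it_load = servers_per_rack * server_watts            # 10,000 W of IT load
total = [it_load * (1 + c / 100) for c in cooling_per_100w]
print(f"IT load: {it_load / 1000:.0f} kW; with cooling: "
      f"{total[0] / 1000:.0f}-{total[1] / 1000:.0f} kW")  # 16-17 kW per rack
```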
Processor power consumption ranges from 45 to 200 W, according to Intel. In a server with eight 1-GB dual in-line memory modules, the DIMMs can contribute a further 80 W to the power budget. In large servers with up to 64 DIMMs, the result could be "more power consumption by memory than processors," Intel notes.
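Scaling Intel's per-DIMM figure shows how quickly memory overtakes the processor in large configurations (a sketch assuming power grows linearly with DIMM count, which is a simplification):

```python
# Intel's figures: eight 1-GB DIMMs contribute ~80 W, i.e. ~10 W per DIMM.
# Assuming linear scaling, a fully populated large server tips the
# power balance toward memory.

watts_per_dimm = 80 / 8                # ~10 W per DIMM, from Intel's example
cpu_watts_range = (45, 200)            # Intel's processor power range

for dimms in (8, 32, 64):
    mem_watts = dimms * watts_per_dimm
    print(f"{dimms:>2} DIMMs: {mem_watts:.0f} W for memory "
          f"(processors: {cpu_watts_range[0]}-{cpu_watts_range[1]} W)")
# At 64 DIMMs, memory draws ~640 W, more than any single processor.
```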
Intel incorporates "automatic memory throttling" on its processors to reduce heat. DRAM vendors are also reducing heat generation in their latest 50-nm-class parts, exemplified by those from Hynix, Micron Technology and Samsung.
Meanwhile, server vendors have been migrating from DDR2 SDRAMs to 1.5-volt and, more recently, 1.35-V DDR3 SDRAMs. DDR3 doubles performance and provides a 60 per cent improvement in power consumption (for the 1.35-V version) over DDR2, said Jim Elliott, vice president of memory marketing for Samsung Semiconductor Inc.
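Much of that improvement falls out of the voltage drop alone, since dynamic CMOS power scales roughly with the square of supply voltage; a quick check (the remainder of the quoted 60 per cent presumably comes from process and architectural gains):

```python
# Dynamic power scales roughly as V^2 (P ~ C * V^2 * f), so dropping
# from DDR2's 1.8 V to DDR3's 1.35 V buys a large share of the quoted
# 60 per cent improvement on its own.

ddr2_v, ddr3_v = 1.8, 1.35
voltage_savings = 1 - (ddr3_v / ddr2_v) ** 2
print(f"Voltage scaling alone: ~{voltage_savings:.0%} lower power")  # ~44%
```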
By next year, DDR3 modules could migrate from conventional to load-reduced DIMMs, which could boost memory capacity fourfold. And by 2011, vendors could unveil DDR4 SDRAMs, reportedly a 1.2-V technology.
But those developments won't take all the pressure off DRAMs. Data centre servers' use of virtualisation, which enables multiple operating systems to run on the same computer, reduces hardware costs but slices up the system workload; not all processors run the same tasks at the same time. Server utilisation ranges from 10 to 30 per cent in a data centre, according to the Uptime Institute.
The use of virtualisation, along with complex multi-core processors, heightens the need for more-efficient memory, said Michael Sporer, director of marketing for enterprise memory at Micron Technology Inc.
"Today, the bottleneck is in the disc and the disc sub-system," Sporer said. "The next bottleneck may be in memory performance, rather than capacity."
Server start-ups Schooner and Virident are pushing similar concepts to address the looming performance squeeze.
Virident's GreenCloud

In April, Virident rolled out its GreenCloud line of X86-based data centre servers, said to deliver up to 70 times the performance of traditional systems. The line uses storage-class memory, which bridges the performance gap between DRAM and mass storage. Virident said the architecture boosts processor utilisation and eliminates I/O overhead by providing random word-level access to large data sets.
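That "random word-level access" is the key architectural difference from block-oriented storage: software can touch a large data set the way it touches RAM. As a loose analogy only (memory-mapping a file is not Virident's mechanism), the snippet below shows the access pattern, assuming a hypothetical index file:

```python
# Conceptual illustration only: memory-mapping a file gives software
# byte-level random access without explicit block I/O, which is the
# access pattern storage-class memory offers in hardware.

import mmap

with open("search_index.bin", "r+b") as f:      # hypothetical index file
    with mmap.mmap(f.fileno(), 0) as index:
        word = index[1_000_000:1_000_008]       # read 8 bytes anywhere
        print(word)
        # No read() syscall per lookup, no block-sized transfers:
        # the data set is addressed like ordinary memory.
```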
Virident's systems still use DRAM for some functions, but storage-class memory is more efficient for search-index and related applications, said president and CEO Raj Parekh. Virident's initial systems use Spansion Inc.'s EcoRAM NOR devices, but the start-up also expects to use NAND and phase-change memory from Numonyx Inc.
Over time, Virident's systems will variously support a single memory technology or a mixture of device types, depending on the application. NAND reads small data chunks at high rates, for example, while NOR is ideal for random read searches and phase-change memory offers high write speeds, Parekh said.
Hewlett-Packard, meanwhile, is putting a new twist on a conventional approach with its new ProLiant G6 servers. Based on Intel's Xeon 5500 processors, the G6 deploys thermal sensors and a technology that caps the power drawn by the server.
The servers also use DDR3 memory, which Jimmy Daley, marketing manager for industry-standard servers at HP, called a "major step forward" over DDR2.
HP stopped short of endorsing storage-class memory, but it offers an optional I/O accelerator from Fusion-io Inc. The sub-system, based on a redundant NAND architecture, does not replace the hard drive but sits between the memory and storage system to alleviate system I/O bottlenecks, said David Flynn, chief technology officer for Fusion-io. The accelerator is said to provide more than 10 lakh I/O operations per second in the HP servers.
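Conceptually, such an accelerator acts as a fast middle tier: reads are served from NAND when possible and fall through to disk otherwise. A minimal sketch of that read path (the class and method names are hypothetical, not Fusion-io's API):

```python
# Minimal sketch of a read path with a flash tier between DRAM and disk.
# The names here are hypothetical, not Fusion-io's API; dicts stand in
# for the actual devices.

class TieredStore:
    def __init__(self, flash_tier, disk):
        self.flash = flash_tier        # fast NAND tier
        self.disk = disk               # slow backing store

    def read(self, block_id):
        if block_id in self.flash:     # hit: served at flash latency
            return self.flash[block_id]
        data = self.disk[block_id]     # miss: fall through to disk
        self.flash[block_id] = data    # promote for future reads
        return data

store = TieredStore(flash_tier={}, disk={7: b"page-7-contents"})
print(store.read(7))                   # first read falls through to disk
print(store.read(7))                   # second read served from flash
```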
SGI is keeping an eye on Spansion's EcoRAM and the Fusion-io accelerator, said Geoff Noer, vice president of product management at the server maker. EcoRAM could address "some opportunities," Noer said, but "I don't see it as a mainstream solution."
SGI's new CloudRack C2 is a cabinet design that packs dense, rack-mount servers into a single unit, supporting up to 1,280 processor cores per cabinet. To handle the heat, the X86-based offering uses redundant fan arrays and DC power supplies.
The C2 supports DDR3 SDRAMs, and Noer said he is also bullish on solid-state storage. Between 2008 and 2013, according to iSuppli Corp., the use of SSDs could allow data centres to reduce power consumption by a combined 166,643 MWh—slightly more than the total megawatt-hours of electricity generated in the nation of Gambia in 2006.
That's good news. But even as the server supply chain finds ways to rein in power, more data centres will be built, turning up the heat.
That has industry jokesters quipping that perhaps Google should look to erect its next data centre on a rig off the coast of Iceland. Or why not the moon?