Hybrid Memory Cube spec makes DRAM 15 times faster

Final spec for three-dimensional DRAM is backed by memory makers Micron, Samsung, and Hynix

Backed by 100 tech companies, the three largest memory makers announced the final specification for three-dimensional DRAM, which is aimed at increasing performance in the networking and high-performance computing markets.

Micron, Samsung and Hynix are leading the technology development efforts backed by the Hybrid Memory Cube Consortium (HMC). The technology, called a Hybrid Memory Cube, will stack multiple volatile memory dies on top of a DRAM controller.


The DRAM is connected to the controller by way of the relatively new through-silicon via (TSV) technology, a method of passing electrical connections vertically through a silicon wafer.

Mike Black, chief technology strategist for Micron's Hybrid Memory Cube team, said the developers changed the basic structure of DRAM.

"We took the logic portion of the DRAM functionality out of it and dropped that into the logic chip that sits at the base of that 3D stack," Black said. "That logic process allows us to take advantage of higher performance transistors ... to not only interact up through the DRAM on top of it, but in a high-performance, efficient manner across a channel to a host processor.

"So that logic layer serves both as the host interface connection as well as the memory controller for the DRAM sitting on top of it," he added.

The DRAM is broken into 16 partitions, each one with two I/O channels back to the controller. Each Hybrid Memory Cube -- there are two prototypes -- has either 128 or 256 memory banks available to the host system.
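
As a rough sketch of that organization (the partition count and channel pairing come from the description above; the even split of banks across partitions is an assumption for illustration), the numbers work out as follows:

```python
# Back-of-envelope model of the cube organization described above.
# Assumption: banks are spread evenly across the 16 partitions; the
# variable names are illustrative, not taken from the HMC spec itself.

PARTITIONS = 16              # DRAM partitions per cube
CHANNELS_PER_PARTITION = 2   # I/O channels from each partition to the logic base

for total_banks in (128, 256):  # the two prototype configurations
    banks_per_partition = total_banks // PARTITIONS
    print(f"{total_banks} banks: {banks_per_partition} banks per partition, "
          f"{PARTITIONS * CHANNELS_PER_PARTITION} channels to the controller")
```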

The first Hybrid Memory Cube specification will deliver 2GB and 4GB of capacity, providing aggregate bidirectional bandwidth of up to 160GBps, compared with DDR3's 11GBps of aggregate bandwidth and DDR4's expected 18GBps to 20GBps, Black said.
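
A quick, back-of-the-envelope comparison using those peak figures (which ignores protocol overhead and real-world workload effects) shows where the roughly 15-times headline number comes from:

```python
# Peak aggregate bandwidth figures quoted above, in GB/s.
hmc_bw  = 160         # first-generation Hybrid Memory Cube, bidirectional aggregate
ddr3_bw = 11          # DDR3 aggregate bandwidth cited above
ddr4_bw = (18, 20)    # DDR4 range cited above

print(f"HMC vs DDR3: {hmc_bw / ddr3_bw:.1f}x")                                        # ~14.5x
print(f"HMC vs DDR4: {hmc_bw / max(ddr4_bw):.1f}x to {hmc_bw / min(ddr4_bw):.1f}x")   # ~8x to ~8.9x
```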

Jim Handy, director of research firm Objective Analysis, said the Hybrid Memory Cube technology solves some significant memory issues. Today's DRAM chips are burdened with driving circuit board traces -- copper electrical connections -- and the I/O pins of numerous other chips to force data down the bus at gigahertz speeds, which consumes a lot of energy.

"The Hybrid Memory Cube technology reduces this task to make the DRAM drive only tiny TSVs, which are connected to much lower loads over shorter distances," he said. A logic chip at the bottom is the only one burdened with driving the circuit board traces and the processor's I/O pins.

"The interface is 15 times as fast as standard DRAMs ... while reducing power by 70 percent," Handy said "Basically, the beauty of it is that it gets rid of all the issues that were keeping DDR3 and DDR4 from going as fast as they could."

For example, Handy said, instead of having multiple DIMMs (anywhere from one to four) on a motherboard, you would need only one Hybrid Memory Cube, cutting down on the number of interfaces to the CPU.

The HMC has defined two physical interfaces back to a host system processor: a short reach and an ultra-short reach. The short-reach interface is similar to most motherboard designs today, where the DRAM sits within eight to 10 inches of the CPU. It is aimed mainly at networking applications and has the goal of boosting throughput from 15Gbps to as much as 28Gbps per lane in a four-lane configuration.

"The first package we're going to launch commercially in the second half of this year is in a fairly large package because fundamentally the networking base doesn't want package pitch lower than 1 millimeter on the ball pitch for the bottom of the ball grid array," Black said. "So physically the logic chip and the DRAM die are in the 100 square-millimeter size sitting on a bigger package to accommodate the ball-out requirements for a short reach design in a networking platform."

The ultra-short-reach interconnect definition is focused on low-energy, close-proximity memory designs supporting FPGAs, ASICs and ASSPs for uses such as high-performance networking and test-and-measurement applications. It will have a one- to three-inch channel back to the CPU and a throughput goal of 15Gbps per lane.

"It's optimized at very low energy signaling for multi-chip modules," Black said. "That's where you'll see a very small package form factor where you're sub-300 micron ball pitch."

While 3D DRAM will cost more to make than conventional DRAM, Black pointed out that reaching the same aggregate bandwidth with standard DRAM modules would cost even more.

"If you look at the total cost of offering a cube, versus trying to get to that kind of bandwidth with traditional DRAM technology, we can in many cases show the total system cost as being much better with Hybrid Memory Cube," he said.

Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is lmearian@computerworld.com.



This story, "Hybrid Memory Cube spec makes DRAM 15 times faster" was originally published by Computerworld.

Copyright © 2013 IDG Communications, Inc.