HPE believes it can easily scale the architecture to deliver an exabyte (one billion gigabytes) single-memory system

Dan Robinson
16 May 2017

Hewlett Packard Enterprise (HPE) has unveiled a new prototype of The Machine, its memory-driven research project. The new version operates with 160TB of shared memory, which makes it the world’s largest single-memory computer, HPE claims.

The Machine was first demonstrated at the HPE Discover conference in London last year, but the new incarnation has been designed to demonstrate that its memory-driven architecture can scale to accommodate very large data sets, which HPE sees as a key issue for the future of computing.

Based on the current prototype, HPE believes it can easily scale the architecture to deliver an exabyte (one billion gigabytes) single-memory system, and possibly beyond.

“We believe memory-driven computing is the solution to move the technology industry forward in a way that can enable advancements across all aspects of society,” said HPE’s chief technology officer and director of Hewlett Packard Labs, Mark Potter.

Like the version shown at HPE Discover, the new prototype is based on an architecture that links processors and a shared global memory pool through a memory fabric interconnect, designed to be fast enough to overcome the memory bottleneck in today’s systems.

That 160TB of shared memory is spread across 40 separate nodes in several racks, all connected using the memory fabric interconnect.
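A back-of-envelope calculation (using only the figures quoted in the article; the per-node capacity is inferred, not stated by HPE) illustrates the scaling claim:

```python
# Figures from the article: 160 TB of shared memory spread across 40 nodes.
TB = 10**12  # decimal terabyte, in bytes
EB = 10**18  # decimal exabyte, in bytes

total_memory = 160 * TB
nodes = 40
per_node = total_memory // nodes  # inferred: 4 TB of fabric-attached memory per node

# At the same memory density, an exabyte-scale system would need this many nodes:
nodes_for_exabyte = EB // per_node

print(per_node // TB)        # 4 (TB per node)
print(nodes_for_exabyte)     # 250000
```

In other words, reaching a single exabyte at the prototype's density would take on the order of 250,000 such nodes, which is why HPE's claim hinges on the memory fabric scaling well beyond a few racks.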

HPE disclosed that this prototype of The Machine is built using Cavium’s ThunderX2 processor, an ARM-based system-on-a-chip with up to 54 cores. It also includes working optical links using HPE’s X1 silicon photonics module, and runs an optimized version of Linux.

While this prototype is still very much a research vehicle, it points the way to future systems with a large enough memory capacity to allow the processing of massive data sets in memory, instead of having to shuffle data in and out of storage. HPE envisions future versions using some form of non-volatile memory in place of standard DRAM.

http://www.datacenterdynamics.com/co.../98319.article