At HPE Discover earlier this month, HPE (or Hewlett Packard Enterprise, as we now have to call them) announced the next evolution in managed infrastructure, for which the company has coined the term Composable Infrastructure. The idea is to break technology down into its constituent parts and build, or “compose”, the pieces that allow on-demand provisioning of servers, virtual machines and containers, as well as the deployment of applications running on top of whatever environment is built. As part of the announcement, HPE debuted Synergy, a new hardware platform for delivering the composable strategy.
The “composable” concept echoes the way in which public cloud providers manage their own offerings. Customers log into services such as AWS or Azure and, behind the scenes, their virtual machine, container environment or entire application is built to requirements from template builds that the CSPs and their marketplace partners have put together. Orchestration and automation processes bring together compute, storage and networking based on templates set up by an administrator, allowing end users to deploy entire applications on demand. Synergy and the software surrounding it are looking to do the same for on-premises infrastructure.
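To make the template idea concrete, here is a minimal sketch of how an administrator-defined template might be “composed” into a running environment from shared resource pools. All names here are illustrative assumptions, not an HPE (or AWS/Azure) API:

```python
from dataclasses import dataclass

# Hypothetical sketch: a template describes what an end user wants,
# and compose() carves those resources out of shared pools, the way
# an orchestration layer would behind a service catalogue.

@dataclass
class Template:
    name: str
    cpus: int
    storage_gb: int
    network: str

@dataclass
class ResourcePools:
    free_cpus: int
    free_storage_gb: int

def compose(template: Template, pools: ResourcePools) -> dict:
    """Allocate the requested resources from the pools, or fail."""
    if template.cpus > pools.free_cpus or template.storage_gb > pools.free_storage_gb:
        raise RuntimeError(f"insufficient capacity for template {template.name}")
    pools.free_cpus -= template.cpus
    pools.free_storage_gb -= template.storage_gb
    return {"name": template.name, "cpus": template.cpus,
            "storage_gb": template.storage_gb, "network": template.network}

pools = ResourcePools(free_cpus=64, free_storage_gb=4000)
web = compose(Template("web-tier", cpus=8, storage_gb=200, network="prod-vlan"), pools)
```

The point of the sketch is the separation of concerns: the administrator defines the template once, and every subsequent deployment is an automated draw against the pools rather than a manual build.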
In the presentation “Beyond convergence, is your infrastructure composable?”, HPE presented a number of metrics: 10 million unused “zombie” servers worldwide; $30 billion in wasted power; one financial company with 70% oversubscription on hardware because of the many “stranded” resources in its enterprise. Potential financial savings from implementing Composable IT are quoted as high as 30%, with 17% coming from the move to Synergy hardware alone. Throw in Gartner’s claim that 80% of all enterprise downtime is caused by human intervention and the case for full automation is quickly made.
So what is Synergy? Watch the video quoted above and you will see a brief demonstration. It is new hardware consisting of compute and storage modules in a bespoke chassis. Each compute module runs “stateless” (although it can hold state on local storage, the blade is expected to be wiped and reused for the next application) and has either two or four sockets. The storage modules hold 20 drives per bay, and each drive can be presented as a DAS device to any of the compute modules; 3PAR integration is to come later. There is no networking as such, and no top-of-rack switch: all networking is handled by a “fabric” on the chassis backplane using Virtual Connect.
The Synergy system is managed by two blade controllers. The first is the OneView “Composer”, which has a view of the entire system and manages the provisioning of resources; Composers are recommended to be installed in pairs for resiliency. The second is the Image Streamer, which is essentially the source of build images (think ISOs, PXE builds and so on) for the VMs and applications the system supports.
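The interplay between the two controllers can be sketched as follows. This is an illustrative model of the flow described above, with hypothetical names (it is not the OneView API): the Composer picks a stateless compute module, attaches DAS drives from a storage module, and asks the Image Streamer for the boot image:

```python
# Hypothetical sketch of the Composer / Image Streamer provisioning flow.
# Class and method names are assumptions for illustration only.

class ImageStreamer:
    """Serves build images (ISOs, PXE builds, etc.) by name."""
    def __init__(self, images):
        self.images = images  # e.g. {"esxi-6.0": "esxi.iso"}

    def boot_image(self, name):
        return self.images[name]

class Composer:
    """Has a view of the whole system and provisions resources from it."""
    def __init__(self, compute_modules, free_drives, streamer):
        self.compute = list(compute_modules)  # stateless blades
        self.drives = list(free_drives)       # drives in the storage modules
        self.streamer = streamer

    def provision(self, image, drive_count):
        blade = self.compute.pop()            # wipe-and-reuse blade
        das = [self.drives.pop() for _ in range(drive_count)]
        return {"blade": blade, "drives": das,
                "image": self.streamer.boot_image(image)}

streamer = ImageStreamer({"esxi-6.0": "esxi.iso"})
composer = Composer(["blade-1", "blade-2"], ["d1", "d2", "d3"], streamer)
node = composer.provision("esxi-6.0", drive_count=2)
```

The design choice worth noting is that the blade carries no identity of its own; everything that makes the node what it is (its storage and its image) is attached at provisioning time, which is what makes wipe-and-reuse possible.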
The announcement of Synergy (which is apparently the project code name) is part of an ongoing development consisting of four phases, the first of which was announced at HP Discover in Las Vegas earlier this year. That was the start of the automation process, through partnerships with the likes of Docker, Chef and Ansible, and ongoing development on OpenStack and VMware’s cloud platforms. Phase 2 is Synergy itself (expected to be generally available in 2Q2016), with phase 3 delivering continuous service delivery and phase 4 “HPE machine technology integration”. I’m quoting the HPE slide for the last two phases; however, I expect the final phase refers to The Machine, which we talked about last year.
The Architect’s View
What HPE are looking to do here isn’t entirely new or groundbreaking. Hitachi’s UCP Director, for example, offers much of the same functionality, allowing physical and logical resources to be built dynamically and automatically. On-demand provisioning of resources from a service catalogue is already achievable in the public cloud and through private deployments such as CSC’s Agility. It also seems to me that HPE decided they couldn’t compete in the public cloud space and so reused their Helion assets (from the Eucalyptus acquisition) as the basis for the Composable project, so it isn’t actually developed from scratch.
Having said all that, it is an attempt to stay relevant in a world increasingly moving to new consumption and deployment models for technology. One potential roadblock for HPE could be the politics of getting large enterprises to adopt the technology, particularly those with existing towers of separate compute, networking and storage teams. Synergy introduces simplicity but also requires a comprehensive understanding of the entire infrastructure, from hardware through networking and storage to applications. That means a new breed of “super architects” who can see the whole solution at once; Composable IT introduces simplification and complexity at the same time, but that’s a debate for another post.