IBM is enjoying an uptick in revenues from the sale of … wait for it … mainframes.

by Jon William Toigo
10/21/2015

...

Return of the Mainframe

But then something happened. I looked around at some other data points. For one, IBM is enjoying an uptick in revenues from the sale of … wait for it … mainframes. The z13, introduced at the beginning of the year, has become a darling of many companies that are switching off hundreds of servers and moving their virtual machines (VMs) over to KVM on the mainframe, where the cost model is better.

IBM claims you can stand up thousands of VMs on a z Systems mainframe for less than $100 each; that's significantly less than the cost of hypervisor licenses from VMware. Moreover, mainframes generally require a much smaller complement of staff to manage and operate, they tend to have far better uptime than x86 tinkertoy servers, and software costs for the platform -- a key reason folks migrated off it in the '80s and '90s -- have come back down to earth. We've come back to the future, and it looks a lot like the past. Oldsters, with their knowledge of technology and best practices in the mainframe world, are suddenly in high demand.
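
Just to make that arithmetic concrete, here's a rough back-of-envelope comparison in Python. Only the roughly $100-per-VM figure comes from IBM's claim; the x86 server cost, per-socket hypervisor license price and consolidation ratio are illustrative assumptions I've plugged in, not quoted numbers.

```python
# Back-of-envelope per-VM cost comparison.
# Only the ~$100/VM mainframe figure comes from IBM's claim above;
# every x86 number below is an illustrative assumption, not a quote.

VM_COUNT = 2000                       # "thousands of VMs"

# Mainframe side: IBM's claimed per-VM cost.
MAINFRAME_COST_PER_VM = 100           # USD, per IBM's claim

# x86 side: hypothetical figures for the sake of the math.
X86_HOST_COST = 8000                  # USD per two-socket server (assumed)
HYPERVISOR_LICENSE_PER_SOCKET = 3500  # USD per socket (assumed)
VMS_PER_HOST = 25                     # consolidation ratio (assumed)

hosts_needed = -(-VM_COUNT // VMS_PER_HOST)   # ceiling division
x86_total = hosts_needed * (X86_HOST_COST + 2 * HYPERVISOR_LICENSE_PER_SOCKET)
x86_per_vm = x86_total / VM_COUNT

print(f"Mainframe: ~${MAINFRAME_COST_PER_VM}/VM")
print(f"x86 estimate: {hosts_needed} hosts, ~${x86_per_vm:,.0f}/VM")
```

Swap in your own numbers; the point is only that per-VM cost, not hardware sticker price, is the comparison that matters.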

Legacy Storage Is Not the Problem

If you stick with x86 and virtualization, you may be concerned about the challenge of achieving decent throughput and application performance, which your hypervisor vendor has lately been blaming on legacy storage. That accusation is usually groundless. The problem typically sits above the storage infrastructure in the I/O path, at the hypervisor and application software layer.

To put it simply, hypervisor-based computing is the last expression of the sequentially executing workloads optimized for the unicore processors introduced by Intel and others in the late '70s and early '80s. Unicore processors, with their transistor counts doubling every 24 months (Moore's Law) and their performance doubling every 18 months (House's corollary to Moore's Law), created the PC revolution and defined the architecture of the servers we use today. All applications were written to execute sequentially, with some clever time slicing to give the appearance of concurrency and multithreading.
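
If you want to see what those two doubling rates imply, here's a quick bit of illustrative arithmetic in Python; the 1980 starting point is arbitrary and the factors are relative, not historical data points from the column.

```python
# Illustrative arithmetic only: what the two doubling rates imply,
# relative to an arbitrary 1980 starting point.
BASE_YEAR = 1980

for year in (1985, 1990, 1995, 2000):
    months = (year - BASE_YEAR) * 12
    transistor_factor = 2 ** (months / 24)    # Moore's Law: 2x every 24 months
    performance_factor = 2 ** (months / 18)   # House's corollary: 2x every 18 months
    print(f"{year}: transistors x{transistor_factor:,.0f}, "
          f"performance x{performance_factor:,.0f}")
```

Twenty years of that compounding is why nobody bothered with parallelism: waiting for the next chip was cheaper than rewriting the software.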

This model is now reaching end of life. We ran out of clock speed improvements in the early 2000s, and unicore chips became multicore chips with no real clock speed gains. Basically, we're back to the situation that confronted us way back in the '70s and '80s, when everyone was working on parallel computing architectures to gang together many low-performance CPUs for faster execution.
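
For anyone who missed that era the first time around, here's a minimal, vendor-neutral sketch of the gang-many-CPUs idea in Python: the same CPU-bound work run once sequentially and once spread across cores with a process pool. It illustrates the concept only and has nothing to do with any particular product.

```python
# Minimal sketch: run the same CPU-bound work sequentially, then spread
# it across cores with a process pool -- the "gang many CPUs together"
# idea, not any particular vendor's implementation.
import time
from concurrent.futures import ProcessPoolExecutor

def burn(n: int) -> int:
    """A deliberately CPU-bound chunk of work."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    chunks = [2_000_000] * 8

    t0 = time.perf_counter()
    sequential = [burn(c) for c in chunks]       # one core, one chunk at a time
    t1 = time.perf_counter()

    with ProcessPoolExecutor() as pool:          # one worker per core
        parallel = list(pool.map(burn, chunks))
    t2 = time.perf_counter()

    assert sequential == parallel
    print(f"sequential: {t1 - t0:.2f}s, parallel: {t2 - t1:.2f}s")
```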

A Parallel Comeback

Those efforts ground to a halt with unicore's success, but now, with innovations from oldsters who remember parallel, they're making a comeback. As soon as the Storage Performance Council audits some results, I'll have a story to tell you about parallel I/O and the dramatic improvements in performance and cost it brings to storage in virtual server environments. It's a real breakthrough, enabled by folks at DataCore who remember what we were working on in tech a couple of decades back.

Also on the cutting edge is new/old network technology. Here, I'm referring to the tech from a company called RockPort Networks, which has a cadre of former Lucent folks working on a next-generation network technology derived from the work of Stanford University and IBM several decades ago. Look it up: torus networking.

Cray was developing the technology for its supercomputers; it involves peer-to-peer, hub-free networks with three-dimensional node connections. If RockPort gains mindshare (and finds additional customers and investors), its technology (currently implemented as software on any NIC) promises to revolutionize network bandwidth and throughput while slicing network costs to the bone. Cisco won't like it a bit, I guess, since switch-based leaf-and-spine nets could become a thing of the past; but again, this is an example of a new/old technology being brought back to the future to solve new problems and challenges.
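
For readers who haven't seen a torus since the Cray era, here's a tiny, vendor-neutral sketch of the topology itself in Python (emphatically not RockPort's code): every node in a 3-D grid links directly to six neighbors, and the grid wraps around at the edges, so no central switch sits in the data path.

```python
# Vendor-neutral sketch of a 3-D torus topology (not RockPort's code):
# each node connects directly to six neighbors, and the grid wraps
# around at the edges, so no central switch sits in the data path.
from itertools import product

def torus_neighbors(node, dims):
    """Return the six wrap-around neighbors of `node` in a 3-D torus."""
    x, y, z = node
    dx, dy, dz = dims
    return [
        ((x + 1) % dx, y, z), ((x - 1) % dx, y, z),
        (x, (y + 1) % dy, z), (x, (y - 1) % dy, z),
        (x, y, (z + 1) % dz), (x, y, (z - 1) % dz),
    ]

DIMS = (4, 4, 4)                     # a 64-node 4x4x4 torus
nodes = list(product(*(range(d) for d in DIMS)))
links = {frozenset((n, m)) for n in nodes for m in torus_neighbors(n, DIMS)}

print(f"{len(nodes)} nodes, {len(links)} direct links, 0 switches")
```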

Are 8-Tracks Next?

Finally, I'd just point out that tape is enjoying a renaissance. With the specter of 20 to 60 zettabytes of data (a zettabyte is a 1 followed by 21 zeros' worth of bytes, or one billion terabytes) looming in the 2020 timeframe, tape is the only technology that will be able to meet the space requirements (I'll break it down in a future column). Again, the tape mavens -- defined as oldsters by their inherently un-hip storage technology -- have been vindicated. Everything old is new again.
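
For the skeptical, here's a quick unit check on that zettabyte figure in Python; the per-cartridge capacity is an assumption I've added for illustration (roughly an LTO-7-class tape), not a number from this column.

```python
# Quick unit check on the zettabyte figure. The cartridge capacity is an
# illustrative assumption (roughly an LTO-7-class tape), not a figure
# from the column.
ZB = 10 ** 21                      # 1 zettabyte = 10^21 bytes
TB = 10 ** 12                      # 1 terabyte  = 10^12 bytes

data_bytes = 40 * ZB               # midpoint of the 20-60 ZB forecast
print(f"1 ZB = {ZB // TB:,} TB")   # one billion terabytes, as stated

CARTRIDGE_TB = 6                   # assumed native capacity per cartridge
cartridges = data_bytes / (CARTRIDGE_TB * TB)
print(f"40 ZB ~ {cartridges:,.0f} cartridges at {CARTRIDGE_TB} TB each")
```
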
https://virtualizationreview.com/art...d-storage.aspx