Howard Marks
Jun 22 2017

AMD this week finally announced its new Epyc line of server processors, and for the first time in a long time it appears AMD has processors that can actually compete with Intel's data center-dominating Xeons. While most of the attention so far has been on the top-of-the-line 32-core model, if Epyc has an epic effect on AMD's bottom line, it will be with the models that provide the power, and PCIe lanes, that used to require two Xeons, in a cheaper, one-socket server.

Recent AMD processors have only been truly competitive at the low end of the desktop market, where price is the number one decision factor. Yet as recently as 10 short years ago, AMD was giving Intel a run for its money in x86 processors on both desktops and servers.

In the early 2000s, Xeons were basically higher-cache versions of Intel's Pentium desktop processors. Then came AMD's Opteron, with the expanded memory address space of a 64-bit instruction set, high-speed HyperTransport links between multiple processors in a server, and an integrated memory controller. HyperTransport and the on-die memory controller let AMD eliminate the bottleneck of the front-side bus and northbridge chip that separated the Xeons of the day from their memory.

In 2009, Intel released the Xeon 5500 (Nehalem) processors with an integrated memory controller and QPI. While AMD held a core-count advantage for a few more years, with Nehalem Intel caught up, and until Epyc AMD just didn't have a viable server processor. As a result, leading vendors like Dell don't even have AMD-based servers in their product lines. Today, Xeon servers account for 99.3 percent of the servers shipped, and Intel's data center group enjoys gross margins over 50%.

With Epyc, AMD is looking to return to those glory days and regain a decent piece of the server market, and at first glance it has the tech to do it. Each Epyc processor actually combines four Zen-based dies with 2, 4, 6, or 8 cores each, for 8-32 cores per processor in steps of eight. Like the Xeons, each core can handle two compute threads, so the top-of-the-line Epyc 7601 can run 64 execution threads at 2.2GHz, with the ability to boost some cores to 3.2GHz.

While core counts are all well and good, we all know those top-of-the-line processors cost a pretty penny; Intel's Xeon E5-2699 v4 lists for over $4,500. Epyc's real advantage comes not in raw compute but in memory addressing and I/O. Every Epyc processor, even the eight-core entry point 7251, has eight DDR4 memory channels (2TB max) and a whopping 128 PCIe lanes.

The competing Xeon E5 v4s from Intel have only four channels (1.5TB max) and 40 PCIe lanes. One big architectural difference between the Intel and AMD designs is that Intel has dedicated QPI channels for inter-processor communication, where AMD uses 64 of each Epyc's PCIe lanes to create its Infinity Fabric link in multiprocessor systems. A dual-processor Xeon system will, therefore, have 80 PCIe lanes for I/O, where a dual-socket Epyc system will have the same 128 as a single-processor system.
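That lane math is worth spelling out. A quick back-of-the-envelope sketch (the per-socket lane counts come from the published specs; the rest is just arithmetic) shows why adding a second Epyc socket doesn't add any I/O lanes:

```python
# Why a dual-socket Epyc system exposes the same 128 PCIe lanes as a
# single-socket one: 64 of each processor's lanes are repurposed as the
# Infinity Fabric link between the sockets.

XEON_LANES_PER_SOCKET = 40   # Xeon E5 v4; QPI is a separate, dedicated link
EPYC_LANES_PER_SOCKET = 128
FABRIC_LANES = 64            # lanes each Epyc socket gives up for the fabric

dual_xeon_io = 2 * XEON_LANES_PER_SOCKET                    # 80 lanes
dual_epyc_io = 2 * (EPYC_LANES_PER_SOCKET - FABRIC_LANES)   # 128 lanes

print(f"Dual-socket Xeon I/O lanes: {dual_xeon_io}")
print(f"Dual-socket Epyc I/O lanes: {dual_epyc_io}")
```

So the second Xeon socket doubles I/O lanes, while the second Epyc socket adds cores and memory channels but no lanes.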

With NVMe SSDs, RDMA over 100Gbps or faster Ethernet, and storage-class memory coming soon to a data center near you, PCIe lanes are becoming the bottleneck for everything from software-defined storage systems to dedicated compute servers connecting to NVMe fabrics.

With up to 32 cores, 2TB of RAM, and 128 lanes of PCIe, Epyc puts the power of today's dual-socket servers into a single socket. The most popular Xeon servers today ship with dual 10- or 12-core processors. Since Epyc includes much of the support circuitry that's normally part of the Intel server chipset, a single-socket Epyc server with 24 cores should cost several hundred dollars less than the dual-Xeon server it would replace.

Since much of the software we put on every server, most significantly VMware's vSphere, is licensed per socket, users will save more on software than the few hundred dollars they save on hardware. A single-socket server with a 24-core Epyc 7401P could deliver the same performance as an Intel server with two 12-core Xeon E5-2650 processors while saving $10,000 or more in software licensing costs.
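The per-socket licensing effect can be sketched with a simple cost model. All the dollar figures below are illustrative assumptions, not quoted prices; the point is only that halving the socket count halves the per-socket license bill:

```python
# Back-of-the-envelope cost comparison: single-socket Epyc vs. dual-socket
# Xeon, when software is licensed per socket. All prices are hypothetical
# placeholders chosen for illustration only.

def total_cost(hardware: int, sockets: int, license_per_socket: int) -> int:
    """Hardware price plus per-socket software licenses."""
    return hardware + sockets * license_per_socket

LICENSE_PER_SOCKET = 5_000  # assumed per-socket license cost, illustrative

dual_xeon = total_cost(hardware=12_000, sockets=2,
                       license_per_socket=LICENSE_PER_SOCKET)
single_epyc = total_cost(hardware=11_500, sockets=1,
                         license_per_socket=LICENSE_PER_SOCKET)

print(f"Dual-socket Xeon:   ${dual_xeon:,}")
print(f"Single-socket Epyc: ${single_epyc:,}")
print(f"Savings:            ${dual_xeon - single_epyc:,}")
```

With pricier per-socket suites, or licenses for several products stacked on one server, the software savings quickly dwarf the hardware difference.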

It's in those single-socket systems that AMD is looking to gain a foothold, introducing three Epyc models, including the 7401P mentioned above, designed specifically for the single-socket server market. Other than having the Infinity Fabric interprocessor link disabled, they have all the functionality of the mainline processors.

By comparison, Intel's Xeon E3 and Xeon D processors that target the one-processor market have always been severely limited compared to the mainline E5s. The current E3 v6 processors have just four cores and can address just 64GB of RAM. The Xeon Ds are a bit more powerful, with up to 16 cores and built-in 10Gbps Ethernet, but they're still limited to 128GB of RAM, which just isn't enough for any but the smallest HCI environments.

Of course, Intel has been dropping hints about a new line of Scalable Xeons that will replace the E3, E5, E7 naming convention with a very MasterCard-like bronze, silver, gold, and platinum scheme. Dell and HPE talked up next-generation servers at their recent customer events, but since Intel hasn't released specs, both companies were long on platitudes but short on details.

The best rumors we've seen show the new gold Xeons, the ones designed for dual-socket servers, topping out at 22-28 cores and 48 lanes of PCIe. Vendors looking to build high-performance storage systems with more than a handful of NVMe SSDs will need to add PCIe switch chips to connect them all, limiting overall bandwidth to the 8 or 16 lanes feeding each PCIe switch from the processors.

A single-socket Epyc server, by contrast, could drive 24 U.2 NVMe SSDs at four lanes each and still have 32 lanes left over for 25/100Gbps Ethernet connections.
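The lane budget for that hypothetical storage server works out as follows (U.2 NVMe drives use a x4 link; the rest is arithmetic):

```python
# Lane budget for a single-socket Epyc storage server:
# 24 U.2 NVMe SSDs at x4 each, with the remainder left for networking.

TOTAL_LANES = 128   # PCIe lanes per Epyc processor
LANES_PER_SSD = 4   # U.2 NVMe drives connect over a x4 link
NUM_SSDS = 24

ssd_lanes = NUM_SSDS * LANES_PER_SSD   # lanes consumed by storage
remaining = TOTAL_LANES - ssd_lanes    # lanes left for NICs and other I/O

print(f"SSD lanes:       {ssd_lanes}")
print(f"Remaining lanes: {remaining}")
```

Those 32 leftover lanes are enough for, say, two x16 NICs, with no PCIe switch chips in the data path.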

If, as we expect, AMD is aggressive with its pricing, that one-socket Epyc server could, after software licenses, cost as little as half as much as the dual-socket Xeon server it will end up competing with. We'll have two viable vendors for server processors, and the competition will be good for us all.