  1. #1

    [EN] IBM And Lenovo Data Center Strategies

    Paul Teich
    Oct 7, 2016

    I often hear about “win-win” business deals, but it is difficult to find good examples involving direct competitors. Two years ago, IBM closed the sale of its x86 server business to Lenovo, and after recently attending both IBM’s Edge 2016 conference and Lenovo’s 2016 Industry Advisory Council, I think this deal qualifies as a win-win. I heard very different strategic narratives at the two events, but one point was crystal clear – the sale has enabled both companies to focus their respective attention and resources on their differentiated strategies.

    IBM has been perfecting the art of delivering its message to press, analysts and customers for over a century. Its Las Vegas event had over 5,500 attendees, including scores of analysts and press. Lenovo’s advisory council was a more intimate affair, with a dozen or so top industry analysts mixing it up with Lenovo execs in and around the company’s global headquarters in Morrisville, North Carolina.

    A key driver of IBM’s pursuit of the x86 server unit sale was the resource and messaging conflict within IBM between its home-grown POWER processor ecosystem and Intel’s de facto standard Xeon architecture and ecosystem. The phrase “industry standard” neatly side-steps the fact that both Intel’s Xeon and IBM’s POWER are proprietary architectures.

    As AMD’s server market share declined in the early 2010s, IBM decided to double down on creating a viable non-Intel data center ecosystem with OpenPOWER. IBM realized it could not execute an OpenPOWER strategy with the dedication required to drive success while at the same time retaining its x86 server business. IBM, Google, Mellanox, NVIDIA and Tyan announced the OpenPOWER Foundation in August of 2013, a year before the sale of IBM’s x86 server business to Lenovo closed, but with that eventual sale in mind.

    IBM

    Advanced analytics and the emerging “cognitive era” are IBM’s big bet for its future. For advanced analytics, IBM’s POWER processor architecture has co-opted Intel’s own Xeon processor story – that is, resource-rich processors with lots of “fat” threads running at high frequencies and a high-performance data center architecture. I have not run into a data center customer yet who does not believe IBM POWER or the OpenPOWER ecosystem can also run high-performance workloads. Similarly, IBM and the OpenPOWER Foundation are working closely with NVIDIA, Xilinx and other neural network accelerator hardware vendors to rapidly deploy advanced machine learning and deep learning capabilities.

    Over the past three years, the OpenPOWER Foundation has created a viable cloud server ecosystem, starting, pretty much, from scratch.

    I can only point to one inhibiting factor for more widespread cloud adoption – third-party OpenPOWER server system on chip (SoC) designs based on IBM’s POWER8 and POWER9 processor cores will not enter the market until 2018. That’s perhaps a five-quarter wait for volume production. In about three quarters we should start hearing about engineering samples and beta deployments.

    Until then, OpenPOWER server designs will be based on IBM’s current POWER8 processor cores, followed by POWER9 cores in 2017.

    Applications do need to be recompiled from x86 to POWER architecture, but that is not a strong inhibitor for public cloud adoption, and perhaps not for some private cloud deployments either.
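
    To make the recompile point concrete, here is a minimal sketch (the file name, compiler flags and commands are my own illustration, not from the article): portable C source generally just rebuilds on POWER, and the real porting effort concentrates in x86-specific intrinsics or assembly. The preprocessor macros shown are standard GCC definitions.

```c
/* arch_probe.c - illustrative only: the same portable source builds unchanged
 * on an x86_64 host and on a POWER (ppc64le) host.
 *
 *   x86_64 host:  gcc -O2 arch_probe.c -o arch_probe
 *   POWER host:   gcc -O2 arch_probe.c -o arch_probe   (same command, same source)
 */
#include <stdio.h>

int main(void)
{
#if defined(__powerpc64__)
    const char *arch = "POWER (ppc64)";
#elif defined(__x86_64__)
    const char *arch = "x86_64";
#else
    const char *arch = "other";
#endif

#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
    const char *endian = "little-endian";
#else
    const char *endian = "big-endian";
#endif

    /* Plain C like this only needs a recompile; the code that requires real
     * porting work is typically hand-tuned SSE/AVX intrinsics or x86 assembly. */
    printf("built for %s, %s\n", arch, endian);
    return 0;
}
```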

    Lenovo

    Lenovo, on the other hand, has been fairly quiet in the U.S. over the past two years. Other than the System x brand and a handful of Xeon-based server SKUs, what did Lenovo get from acquiring IBM’s x86 business?

    IBM has been a leader in high performance computing (HPC) architecture for decades. IBM’s x86 server team had access to that experience and knowledge base; they are a first-rate server design team by anyone’s standards. In September 2013 this team unveiled IBM’s NeXtScale converged x86 server architecture, targeted at dense enterprise data centers. This architecture was a better fit for dense enterprise deployments than HP’s original Moonshot architecture, and it was a year earlier to market than Dell’s FX architecture. In addition, in August 2014 IBM introduced the Flex System for both POWER7+ and x86 architectures. Lenovo also inherited the Flex System chassis and System x M5 server blades, which launched with a configuration tuned specifically for Microsoft Hyper-V.

    Lenovo and Cineca announced the first phase of their Lenovo-based “Marconi” supercomputer in April of this year, and it debuted at number 46 on the Top500 list of supercomputers in June. Marconi is based on Lenovo’s NeXtScale nx360 M5 server, which uses dual-socket Xeon E5-2600 v4 server nodes interconnected using Intel’s Omni-Path Architecture (OPA). Marconi is the largest OPA installation in the world, and Cineca is scheduled to add Knights Landing (KNL) generation Xeon Phi processors in 2017. Marconi uses Lenovo’s GSS storage subsystem, which integrates IBM’s Spectrum Scale (GPFS) file system and is also connected via OPA.

    Lenovo has for years been a supplier to Chinese public cloud giants such as Baidu, Alibaba, and Tencent. Last week at Microsoft’s Ignite conference, Microsoft announced its Azure Stack Technical Preview 2 (TP2) by showcasing half-racks from three server OEMs – Dell, HPE and Lenovo. This is a clear statement of Lenovo’s intent to invest in private cloud success, and a clear statement from Microsoft that it considers Lenovo’s Azure Stack capabilities to be top-tier and worth Microsoft’s investment of time and resources. You can read more about Azure Stack and the hardware in those OEM half-racks here.

    The Take-Away

    Lenovo is getting focused and aggressive. Their integration of IBM’s x86 server team is complete, and they are about to shift into a higher gear just as Dell and EMC are merging and HPE is shedding business units.

    At the same time, IBM has been able to refine and hone its POWER and OpenPOWER messages. IBM had no chance of success for POWER architecture with one foot in Intel’s x86 world – and while success is still not a sure thing, they have put their resources into a single coherent strategy.

    One thing is certain in all of this – the sale of IBM’s x86 server division to Lenovo was a good deal for both parties; both are stronger because of it.

    – The author and members of the TIRIAS Research staff do not hold equity positions in any of the companies mentioned. TIRIAS Research tracks and consults for companies throughout the electronics ecosystem from semiconductors to systems and sensors to the cloud.

    TIRIAS Research

    http://www.forbes.com/sites/tiriasre...wo-years-after

  2. #2

    IBM Power System S822LC for High Performance Computing uses dual-socket IBM POWER8 processors and four NVLink-connected NVIDIA P100 GPU accelerator modules [photos: TIRIAS Research]

    Lenovo’s Flex System chassis and x240 M5 blade [photos: TIRIAS Research and Lenovo]

    Lenovo’s NeXtScale nx360 M5 v4 WCT water-cooled sled with two dual-socket Xeon processor nodes; processors and memory are water-cooled [photo: Lenovo]

  3. #3

    Opening Up The Server Bus For Coherent Acceleration

    Timothy Prickett Morgan
    October 17, 2016

    When IBM started to use the word “open” in conjunction with its Power architecture more than three years ago with the formation of the OpenPower Foundation, Big Blue was not confused about what that term meant. If the Power architecture was to survive, it would do so by having open specifications that would allow third parties to not only make peripherals, but also to license the technology and make clones of Power8 or Power9 processors.

    One of the key technologies that IBM wove into the Power8 chip that differentiates it from Xeon, Opteron, ARM, and Sparc processors is the Coherent Accelerator Processor Interface (CAPI), an overlay that uses the PCI-Express bus as a transport but cuts out all of the overhead needed for supporting legacy I/O and storage devices, allowing accelerators hanging off the CAPI protocol to have coherent access to the main memory in the Power8 complex and the Power8 cores to have access to the memory in the accelerators themselves. (IBM has called the CAPI bus a “hollow core,” which is a good metaphor for how it makes adjunct computing devices look from the point of view of the Power8 processor complex.)
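
    To make the coherence idea concrete, here is a toy, self-contained C analogy; this is not actual CAPI or libcxl code, and the “accelerator” is just an ordinary host function standing in for an FPGA or GPU AFU. The contrast it illustrates is the one described above: a conventional I/O model stages copies of data for the device, while a CAPI-style coherent model simply hands the accelerator a pointer into host memory.

```c
/* coherence_toy.c - a toy analogy only; no real CAPI/libcxl calls are used. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define N 8

/* Stand-in for an accelerator kernel: doubles every element it can see. */
static void fake_afu_double(long *data, size_t n)
{
    for (size_t i = 0; i < n; i++)
        data[i] *= 2;
}

/* Conventional I/O model: stage a copy for the device, then copy results back. */
static void run_copy_model(long *host, size_t n)
{
    long *staging = malloc(n * sizeof *staging);   /* "device-visible" buffer  */
    memcpy(staging, host, n * sizeof *staging);    /* host -> device copy      */
    fake_afu_double(staging, n);                   /* device works on the copy */
    memcpy(host, staging, n * sizeof *staging);    /* device -> host copy      */
    free(staging);
}

/* CAPI-style coherent model: the accelerator is handed a pointer into host
 * memory and reads and writes it directly, with no staging copies. */
static void run_coherent_model(long *host, size_t n)
{
    fake_afu_double(host, n);
}

int main(void)
{
    long a[N], b[N];
    for (int i = 0; i < N; i++)
        a[i] = b[i] = i;

    run_copy_model(a, N);
    run_coherent_model(b, N);

    printf("copy model:     a[7] = %ld\n", a[7]);
    printf("coherent model: b[7] = %ld\n", b[7]);   /* same result, no copies */
    return 0;
}
```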

    While CAPI has been a key part of IBM’s renewed push in the high performance computing segment, Big Blue wants more from CAPI, so much so that it is willing to create a brand new consortium that will steer its development and foster widespread adoption across all processor makers – including archrival Intel, which has emerged as IBM’s biggest foe in the HPC segment. And OpenCAPI, as the standard that IBM and its systems friends are putting forth is called, seeks to be one of the key protocols for linking components together in systems, along with the CCIX coherent attachment specification announced in May and the Gen-Z storage class memory standard that also debuted last week.

    “What is going on here is that we are re-architecting the system,” Brad McCredie, the former president of the OpenPower Foundation and an IBM Fellow and vice president of development in IBM’s Power Systems division, tells The Next Platform. “No matter how you slice it, the processor is not improving cost/performance at the incredible rate and pace that it had in the past. And yet there is still this ever-growing demand for more and more compute capability, driven by these new workloads of artificial intelligence as well as simulation and analytics, there is this huge appetite to still produce more advanced systems. Less and less value is being driven by CPUs and more and more is being generated by accelerators, new memory technologies, and lots of crazy innovations that are attaching to processors. The value is shifting around, and in order to have the ability to mix and match technologies, we are going to need standards and Google, Hewlett Packard Enterprise, and Dell EMC see this very plainly.”

    McCredie is referring to IBM’s peers in the formation of the OpenCAPI consortium, who were joined by networker Mellanox Technologies, memory maker Micron Technology, GPU accelerator maker Nvidia, and FPGA maker Xilinx in lining up behind CAPI as a key in-system protocol for linking accelerators and memory of all kinds into the processing complex inside of a system node.

    There is a certain amount of confusion about what CCIX, Gen-Z, and OpenCAPI are attempting to do for hybrid systems, and McCredie offered his view on what it all means and how they all fit together, much as some of the top engineers at HPE and Dell did when discussing the Gen-Z protocol with us last week. There seems to be agreement that these standards bodies and their protocols for lashing different things together in systems will be designed to work together, but clear boundaries have yet to be drawn, and frankly, markets will decide – as they always do – what protocols are used where, and how. From his perspective, Gen-Z protocols will be used over various kinds of switched and fabric interconnects, starting with Ethernet, primarily to link various kinds of compute elements, including processors, FPGAs, GPUs, and such, to various kinds of storage class memories that are pooled at the rack level. But OpenCAPI will be concerned primarily with attaching various kinds of compute to each other and to network and storage devices that have a need for coherent access to memory across a hybrid compute complex. (Or, if need be, a monolithic one just made up of processors.)

    (cont)

  4. #4
    “Gen-Z’s primary focus is a routable, rack-level interconnect to give access to large pools of memory, storage, or accelerator resources,” says McCredie. “Think of this as a bus that is highly optimized for talking outside of the node at the rack level. OpenCAPI is optimized for inside the node, and it is trying to deal with the fact that we have I/O buses that have one-tenth the bandwidth of main storage buses. With OpenCAPI, I/O bandwidth can be proportional to main store bandwidth, and with very low 300 nanosecond to 400 nanosecond latencies, you can put storage devices out there, or big pools of GPU or FPGA accelerators, and let them have access to main store and just communicate to it seamlessly. OpenCAPI is a ground-up design that enables extreme bandwidth that is on par with main memory bandwidth. CCIX is taking a very similar approach, but it is evolving over preexisting standards, like IBM did by having the original CAPI run on top of the PCI-Express 3.0 bus. We are very supportive of CCIX and we think we need to keep moving that standard forward as well. But we also believe there is a need to build on fresh ground for ultimate performance and lowest latency.”
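
    As a rough sanity check of the “one-tenth” figure (the 230 GB/sec POWER8 sustained memory bandwidth number below is a commonly cited marketing figure, used here as an assumption rather than taken from the article):

```c
/* bus_ratio.c - rough check of the I/O-vs-memory bandwidth gap quoted above. */
#include <stdio.h>

int main(void)
{
    /* PCI-Express 3.0 x16: 8 GT/sec per lane, 128b/130b encoding, 16 lanes. */
    double pcie3_x16 = 8.0 * (128.0 / 130.0) / 8.0 * 16.0;  /* ~15.75 GB/sec per direction */

    /* Assumed per-socket sustained memory bandwidth for POWER8. */
    double power8_mem = 230.0;                              /* GB/sec */

    printf("PCIe 3.0 x16 : %.2f GB/sec\n", pcie3_x16);
    printf("memory / I/O : %.1fx\n", power8_mem / pcie3_x16);  /* roughly an order of magnitude */
    return 0;
}
```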

    The initial CAPI ports on IBM’s Power8 processors used the on-chip PCI-Express 3.0 controllers as the transport layer for the minimalist coherency protocol it developed to link accelerators and network cards to the Power8 processing complex. With the Power9 chip due in the second half of next year, there will be a slew of different interconnect ports available for linking the processor to other devices. An updated CAPI 2.0 protocol will run atop PCI-Express 4.0 links, which will have twice the bandwidth of the PCI-Express 3.0 ports. That is 16 GT/sec instead of 8 GT/sec, which equates to 31.5 GB/sec of bandwidth in an x16 slot compared to 15.75 GB/sec for a PCI-Express 3.0 slot. So this upgraded CAPI will be no slouch when it comes to bandwidth, at least at peak bandwidth. (Sustained will probably be a lot lower because of peripheral driver and operating system overhead.)
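
    The per-slot arithmetic in that paragraph is easy to reproduce; a small check using the standard PCI-Express math (transfer rate per lane, 128b/130b encoding, 16 lanes – the formula is general bus arithmetic, not from the article) gives the same 15.75 GB/sec and 31.5 GB/sec per-direction figures:

```c
/* pcie_bw.c - reproduces the x16 peak-bandwidth figures quoted above. */
#include <stdio.h>

/* Peak GB/sec per direction for an x16 slot: GT/sec per lane, 128b/130b
 * encoding, 8 bits per byte, 16 lanes. */
static double pcie_x16_gbytes(double gtransfers_per_sec)
{
    return gtransfers_per_sec * (128.0 / 130.0) / 8.0 * 16.0;
}

int main(void)
{
    printf("PCIe 3.0 x16 (8 GT/sec)  : %.2f GB/sec\n", pcie_x16_gbytes(8.0));   /* ~15.75 */
    printf("PCIe 4.0 x16 (16 GT/sec) : %.2f GB/sec\n", pcie_x16_gbytes(16.0));  /* ~31.51 */
    return 0;
}
```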

    But with New CAPI, as the OpenCAPI port was called when the Power9 chip was unveiled in April at the OpenPower Summit and then talked about in greater detail at the Hot Chips conference in August, IBM is etching new “BlueLink” 25 Gb/sec ports on the chip specifically for coupling accelerators, network adapters, and various memories more tightly to the Power9 chip than was possible with the prior CAPI 1.0 and CAPI 2.0 protocols over PCI-Express 3.0 and 4.0, respectively. It is interesting to note that these same 25 Gb/sec BlueLink ports on the Power9 chip will be able to run the updated NVLink 2.0 protocol from Nvidia. NVLink provides high-bandwidth, coherent memory addressing across teams of Tesla GPUs, with NVLink 1.0 supported in the current “Pascal” Tesla P100s and the faster NVLink 2.0 coming with the “Volta” Tesla V100s due next year.

    The combination of Power9 and Tesla V100s is the computing foundation of the US Department of Energy’s “Summit” and “Sierra” supercomputers, and from the looks of things, hyperscalers like Google might be interested in similar hybrid machines for their machine learning training. On the Power9 chips aimed at scale-out, two-socket servers, the chips have 48 lanes of PCI-Express 4.0 peripheral I/O per socket, for an aggregate of 192 GB/sec of duplex bandwidth. In addition to this, the base Power9 chip will support 48 lanes of 25 Gb/sec BlueLink bandwidth per socket for other connectivity, with an aggregate bandwidth of 300 GB/sec across those lanes. On the larger scale-up Power9 chips, these 25 Gb/sec BlueLink ports are used to do NUMA communications across four-node clusters; the Power9 chip has separate 16 Gb/sec links to hook four socket modules together gluelessly. So it looks like the scale-out Power9 chips – the ones that Google and Rackspace Hosting are interested in for their “Zaius” server – will be the only ones supporting the BlueLink ports and the OpenCAPI protocol on top of them.
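
    The per-socket aggregates quoted for the scale-out parts check out if they are read as raw signaling rates with both directions counted (that reading, and ignoring encoding overhead, are my assumptions):

```c
/* p9_aggregate.c - quick check of the per-socket aggregate figures above,
 * assuming raw signaling rates, both directions counted, no encoding overhead. */
#include <stdio.h>

int main(void)
{
    /* 48 lanes of PCI-Express 4.0 at 16 GT/sec per lane. */
    double pcie4_duplex = 48 * 16.0 / 8.0 * 2.0;      /* -> 192 GB/sec */

    /* 48 lanes of 25 Gb/sec BlueLink signaling. */
    double bluelink_duplex = 48 * 25.0 / 8.0 * 2.0;   /* -> 300 GB/sec */

    printf("48x PCIe 4.0 lanes : %.0f GB/sec duplex\n", pcie4_duplex);
    printf("48x BlueLink lanes : %.0f GB/sec duplex\n", bluelink_duplex);
    return 0;
}
```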

    Unless, of course, other processor makers adopt OpenCAPI and either run it on their own communications circuits or license the BlueLink ports from Big Blue. Neither is unreasonable, and the former especially so given that both 100 Gb/sec Ethernet and 100 Gb/sec InfiniBand transports use 25 Gb/sec signaling now. (This is one reason why the Gen-Z storage class memory interconnect is running first atop Ethernet, although it is not limited to that.) IBM has intentionally separated the OpenCAPI consortium from the OpenPower Foundation so those supporting OpenCAPI don’t have to be part of the Power architecture community.

    “We are creating a completely open spec, and everybody is invited, including Intel, to come participate in our spec,” says McCredie, who adds that IBM is perfectly happy to license its BlueLink ports and any microcode and circuitry relating to CAPI 1.0, CAPI 2.0, or OpenCAPI (which is CAPI 3.0) to any company that is interested. (An open specification does not mean open source, of course.) Those who join the OpenCAPI consortium will get an FPGA implementation of the receiver for the OpenCAPI protocol for free, which will allow makers of accelerators, network cards, and storage devices to add OpenCAPI ports on their devices so they can talk to processors and other accelerators that have OpenCAPI ports.

    The interesting bit will be when other processor and accelerator makers step up to implement OpenCAPI target ports on their devices, not just receivers. IBM is running the NVLink 2.0 protocol over its BlueLink ports, and there is no reason that anyone – AMD with its Zen Opteron and K12 ARM chips, Intel with Xeon and Xeon Phi chips, and a slew of ARM server chip makers – can’t add 25G ports and run OpenCAPI or NVLink on them.

    In a way, any processor maker could get a lot of the benefits of the Power9 architecture through OpenCAPI, and while IBM is not begging them to, they would be kind of foolish not to. It is not like IBM and Intel have not cooperated on interconnects and protocols before – both made InfiniBand come into being by merging warring protocols together in 1999, and thought then that the PCI bus had run out of gas. This time, they might be right, or more accurately, that the time has come to start with a clean slate and reuse signaling pins and communications PHYs on chips in many different ways. The PCI-Express stack is a limiter in terms of latency, bandwidth, and coherence. This is why Google and Rackspace are putting OpenCAPI ports on their co-developed Power9 system, and why Xilinx will add them to its FPGAs, Mellanox to its 200 Gb/sec InfiniBand cards, and Micron to its flash and 3D XPoint storage.

    We think there will be bridging between OpenCAPI and Gen-Z interconnects and devices, too, much as Ethernet and InfiniBand networks can bridge between each other and, if need be, encapsulate the traffic of each other. Time will tell.

    http://www.nextplatform.com/2016/10/...-acceleration/

  5. #5

    Azure Stack private cloud hardware

    Microsoft puts Stamp on private cloud

    Paul Teich
    Oct 3, 2016

    Jason Zander, corporate vice president for Microsoft’s Azure Web service, showed three nearly identical half-rack servers from Dell, HPE and Lenovo at Microsoft’s Ignite conference. They were working testbeds for Microsoft’s anticipated Azure Stack private cloud, due for release in mid-2017.

    The systems, when combined with code from Microsoft’s new partnership with Docker, are going to blindside the rest of the private-cloud infrastructure market with a nicely integrated suite.

    Azure Stack will initially be a subset of Microsoft’s Azure services that customers will own and operate themselves as private cloud instances. Over time Microsoft will release a more complete set of Azure services into Azure Stack. The benefit to private cloud customers will be the ease of migrating applications on-demand from Azure Stack private clouds into Microsoft’s Azure public cloud.

    Azure Stack targets industrial, global transportation and smart-city markets. These customers need local hardware either because they must continue to operate if they are unintentionally disconnected from the network, or because by the nature of their use they are only occasionally connected.

    The Azure Stack systems from Dell, HPE and Lenovo have a lot in common, as the table below shows.

    Configurations may change before servers ship. (Images: TIRIAS Research)

    All of the servers use dual-socket Intel Xeon processors. The three vendors will compete on the detailed specs of the eight 2U systems. Final specifications of the servers will change over the next few months.

    In the public cloud world, where companies such as Google, Facebook, Alibaba and Baidu purchase at hyperscale, there is a concept of a stamp. A stamp is a pre-configured rack, or perhaps several racks, purchased at one time to serve a specific set of workloads.

    These Azure Stack OEM systems are clearly intended to provide a default certified private cloud stamp that each vendor can then lightly customize. Their half-rack size means they can be deployed more easily at an enterprise-friendly scale than the full-height racks intended for web-scale service providers.
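
    Purely as an illustration of the stamp idea (the struct fields and values below are hypothetical, loosely assembled from the configuration details mentioned in this article, not a vendor specification), a certified stamp can be thought of as a fixed bill of materials that is ordered, certified and deployed as a single unit:

```c
/* stamp_sketch.c - illustrative only; fields and values are hypothetical. */
#include <stdio.h>

struct stamp {
    const char *vendor;           /* OEM supplying the certified half-rack   */
    int         node_count;       /* 2U server nodes in the half-rack        */
    int         sockets_per_node; /* all three previews use dual-socket Xeon */
    const char *cpu_family;
    const char *certified_for;    /* the workload the stamp is sold to run   */
};

int main(void)
{
    /* One hypothetical entry per OEM shown at Ignite; specs are placeholders. */
    struct stamp previews[] = {
        { "Dell",   8, 2, "Intel Xeon", "Azure Stack TP2" },
        { "HPE",    8, 2, "Intel Xeon", "Azure Stack TP2" },
        { "Lenovo", 8, 2, "Intel Xeon", "Azure Stack TP2" },
    };

    for (int i = 0; i < 3; i++)
        printf("%-6s : %d x %d-socket %s nodes, certified for %s\n",
               previews[i].vendor, previews[i].node_count,
               previews[i].sockets_per_node, previews[i].cpu_family,
               previews[i].certified_for);
    return 0;
}
```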

    Certification will be important because Azure Stack is not a general purpose server OS. Its first incarnation will be based on a minimum viable set of Microsoft’s Azure cloud stack services. There will be other services at general availability, but not the full set of services supported in the Azure public cloud.

    Microsoft is working with the OEMs to ensure that they can differentiate from each other. The last thing that OEMs want is a race to a pricing floor with completely undifferentiated hardware.

    http://www.eetimes.com/author.asp?se...doc_id=1330555
