Results 1 to 2 of 2
  1. #1
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    18,423

    [EN] The Next IT Frontier: Adaptive Orchestration

    Getting a handle on skyrocketing workloads.

    Trevor Pott
    06/01/2017

    Distributed computing is hard, large clusters are hard, parallel computing is really hard, and Gustafson's law is a pain in the ASCII. With containers, we're already at the point that we can stuff tens of thousands of workloads into 10 rack units of space, and to be brutally honest, we're not very good at coordinating that. What happens when tens of thousands becomes hundreds of thousands, or millions, of workloads?

    Today's infrastructure is experiencing a generational leap in capability. Breakneck speeds bring their own problems to datacenter design, which ensures, among other things, that we'll always need systems integrators. Hardware Compatibility Lists are kind of crap, which makes rolling our own next-generation software-defined win machine somewhat problematic. On top of it all, increased workload density is both blessing and curse.

    After more than a decade of relatively steady -- bordering on stagnant -- increases in systems capability, everything is coming to a head. The last time we ran across a change of this magnitude we were consolidating workloads from bare metal using virtualization, and driving performance with increasingly fancy centralized storage. We had to change everything about how we managed workloads then. We're going to have to now as well.

    The Lethargy of Volume

    At the turn of the millennium we had to dramatically rethink storage because of virtualization. Not only did shared storage enable critical functionality such as vMotion and High Availability (HA), but we started cramming rather a lot of workloads onto a limited number of storage devices. The SCSI RAID cards we jammed into each host just wouldn't cut it as centralized storage solutions.

    Fibre Channel SANs got lots of love; the Gigabit Ethernet of the day was just not up to the task. Demand on storage and networking both increased. Fibre Channel speeds increased every few years. Gigabit Ethernet begat 10 Gigabit Ethernet (10GbE). Eventually, 40GbE and 100GbE were also born.

    10GbE saw reasonably widespread adoption, but 40GbE and 100GbE didn't see much love. Prices were too high. That was okay; storage and networking didn't impinge upon one another, and most of us weren't really stressing our networks.

    Eventually, Fibre Channel vendors got greedy. The push to do storage over Ethernet became serious, made all the more so by the emergence of software-defined storage (SDS) and hyperconvergence. Ethernet switches evolved to solve latency and microbursting issues, and The Great Jumbo Frames Debate almost became a thing.

    Just as it looked like we might have to either start actually caring about jumbo frames or kowtow to the Fibre Channel mafia, a new networking standard with 25GbE, 50GbE and a new 100GbE emerged. Significantly cheaper than its predecessors, it will let us stave off efficiencies like jumbo frames for a few more years.

    For the longest time, the storage side wasn't much different. Fibre Channel doubled in speed every now and again, but the bottleneck was in the box. IDE to SATA, SCSI to SAS; iteration after iteration slowly allowed for faster systems. Hard drives never really got much faster; for years we just kept adding shelves full of disks in a desperate attempt to solve the performance problems brought about by continuing consolidation and the ceaseless demand for more workloads.

    Then along came flash.

    Software-Defined Transformation

    Flash is too fast for SATA, and it's too fast for SAS. We didn't really notice this in the beginning because, in addition to using standards that couldn't take advantage of flash disks, the RAID controllers and HBAs of the day had such low queue depths that they couldn't even make full use of those standards. This was around 2010. Hyperconverged vendors yelled at storage controller vendors, and things slowly started to suck less. NVMe came out. We hooked flash up directly to the PCIe bus. This broke our SDS solutions because we couldn't get enough networking in. We lashed 10GbE ports together, and eventually 100GbE ports. Containerization went mainstream, and suddenly 150 workloads per box became 2,500.

    Snowden happened. American judges started making extrajudicial demands for data. Brexit. Trump. The fig leaf of Safe Harbor was blown away and the EU's General Data Protection Regulation (GDPR) demanded both privacy and security by design and by default.

    Now we not only needed to store and move and run everything at the power of raw ridiculosity, we needed to encrypt it, too. Table stakes became encryption at rest and encryption in flight, with real-time deduplication and compression to boot. The slow, steady pace of the early 21st century wasn't -- isn't -- enough.
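
    The ordering buried in that sentence is the hard part: deduplication and compression only work on plaintext, because well-encrypted data neither deduplicates nor compresses. Below is a minimal Python sketch of the at-rest half, assuming a fixed chunk size and the third-party cryptography package; both are illustrative choices of mine, not anything from the article.

    ```python
    # Sketch: dedupe, compress, then encrypt fixed-size chunks, in that order.
    # Chunk size, key handling and the "cryptography" dependency are
    # illustrative assumptions, not part of the article.
    import hashlib
    import zlib
    from cryptography.fernet import Fernet  # pip install cryptography

    CHUNK = 64 * 1024
    fernet = Fernet(Fernet.generate_key())

    def store(stream: bytes, seen: set, blocks: dict) -> dict:
        """Deduplicate, compress and encrypt the chunks of a byte stream."""
        for offset in range(0, len(stream), CHUNK):
            chunk = stream[offset:offset + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest in seen:        # dedupe hit: keep only the reference
                continue
            seen.add(digest)
            compressed = zlib.compress(chunk)            # compress plaintext...
            blocks[digest] = fernet.encrypt(compressed)  # ...then encrypt at rest
        return blocks

    # Encrypting first would wreck both dedupe ratios and compressibility,
    # which is why "to boot" is harder than it sounds.
    ```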

    That jumbo frames debate will be back soon enough. RDMA, NVMe and other technologies that wring better throughput and lower latency out of sheer efficiency, without needing new standards, are about to become the default rather than an add-on used only by the dark priests of niche datacenters.

    A Game of Risk

    The 00s were a slow, even boring march of sequential iterations in technology. This predictability had value. We knew what to expect, so we could make reasonable judgements about technology investments without taking a big risk that we'd be caught unaware by some massive leap in capability. Many of us even became comfortable stretching out beyond the vendors' preferred three-year refresh horizon.

    This is no longer the case. When we stop solving problems by throwing raw throughput at them, we have to learn something new. When we stopped simply ramping up the clock frequency of processors, we had to learn to get good at multiple cores. Physics said no, and we adapted.

    This time, it's not just CPUs that are hitting the wall. Today's equipment might not speak the more efficient protocols of tomorrow, and we need to start thinking about that. Today the focus is on building private clouds and trying to treat all infrastructure merely as a commodity to be consumed.

    The next hurdle will be how to maintain those pools of resources when the older chunk of our cloud doesn't speak the same language as the newer bit. You wouldn't, for example, want your front-end application living on a network cluster that spoke jumbo frames and your database cluster on one that didn't.

    The Next Big Thing is Adaptive Orchestration: automatically identifying workload interdependencies and then enforcing locality based upon performance, latency, risk profile, resiliency requirements, RPO and RTO, as well as regulatory, governance and data residency requirements.
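
    What that might look like is easiest to see with a toy example. The Python sketch below is hypothetical: the workload and cluster attributes and the scoring rule are invented for illustration, not how any particular orchestrator works. It filters out clusters that violate hard constraints (data residency, jumbo-frame parity, RPO) and then prefers whichever surviving cluster already hosts the workload's dependencies.

    ```python
    # Hypothetical constraint-driven placement: enforce hard requirements,
    # then rank survivors by locality and latency. Names and fields are
    # illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Cluster:
        name: str
        region: str
        jumbo_frames: bool
        replication_rpo_s: int   # best RPO the cluster's storage can deliver
        latency_ms: float

    @dataclass
    class Workload:
        name: str
        residency: set           # regions where the data may legally live
        needs_jumbo: bool        # must match its interdependent peers
        max_rpo_s: int
        depends_on: list         # workloads it talks to constantly

    def place(workload, clusters, placements):
        """Return the best cluster for a workload, or None if none qualify."""
        candidates = [
            c for c in clusters
            if c.region in workload.residency
            and c.jumbo_frames == workload.needs_jumbo
            and c.replication_rpo_s <= workload.max_rpo_s
        ]
        def score(c):
            # Prefer the cluster already hosting this workload's dependencies
            # (locality), then the lowest-latency cluster.
            colocated = sum(1 for dep in workload.depends_on
                            if placements.get(dep) == c.name)
            return (-colocated, c.latency_ms)
        return min(candidates, key=score, default=None)
    ```

    Fed the jumbo-frames example from above, a front end and the database it depends on could only land on clusters with matching frame settings, and the locality term pulls them together whenever more than one cluster qualifies.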

    Pets vs. cattle defined the transition from a few hundred to tens of thousands of workloads. Adaptive Orchestration will enable us to handle millions.

    https://virtualizationreview.com/art...form=hootsuite
    Last edited by 5ms; 02-06-2017 at 16:42.

  2. #2
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    18,423

    Mainframe or Cloud?

    It Isn’t An All-or-Nothing Decision

    Christopher O’Malley | CEO of Compuware
    June 1, 2017

    Recently there has been much discussion in IT circles, particularly in government, about the need to modernize legacy technologies and leverage newer alternatives. This desire is at the heart of the Modernizing Government Technology (MGT) Act, which is currently working through Congress. Here at Compuware, we are advocates of this bill, and believe modernization is the key to delivering the type of fast, convenient services that put customers’ and citizens’ needs first.

    Indeed, there are technologies in use in both the private and public sectors that could be made more efficient. For example, virtualized x86 environments are prone to sprawl and demand constant attention. So the decision to consume the IT services associated with these costly environments (email or HR for example) from the cloud can offer many benefits, including flexibility and instant scalability.

    However, there’s a dangerous element of “group think” that often accompanies IT modernization discussions across market sectors – that “new” automatically equates to “better.” There’s also the naïve tendency toward generalizing and believing that any technology that doesn’t include buzzwords like “cloud,” “machine learning,” “container” or “chatbot” must be replaced.

    This isn’t always true. Sometimes there is simply no substitute for a modern version of the original, which is inherently better and doesn’t need to be replaced, but rather simply requires sincere stewardship. The mainframe is one example, having consistently defied predictions of its demise. Mainframes continue to support 70 percent of all enterprise data and 71 percent of all Fortune 500 companies’ core business processes.

    The reasons for this longevity are simple. Mainframes, with new leading-edge models delivered every few years, have proven to be inherently more secure, powerful and reliable than the cloud and distributed architectures, even though these alternatives are often perceived to be more modern. In addition, many organizations that have tried to move their critical systems-of-record off the mainframe have found the process to be altogether too risky, expensive and time-consuming. And, even if successful, they find that the systems they are left with are even more complex than the original, of lower quality and therefore even more difficult and costly to maintain. There is no honor or reward in being successful at being unsuccessful.

    In computing, the term “legacy” connotes an old, outdated technology, computer system or application program. But the post-modern mainframe is the most reliable, scalable and securable platform on the planet. To consider it outdated or unsupported technology is malarkey with a motive. According to one recent global CIO survey, 88 percent of respondents noted they expect their mainframe to continue to be a key business asset over the next decade; another 81 percent reported that their mainframes continue to evolve—running more new and different workloads than they did just a few years ago. The mainframe remains the only platform in the world that is capable of handling the huge surge in computing volumes brought on by mobile, and numerous studies have shown it to be more cost-effective in the long run than alternative architectures.

    So in reality, the post-modern mainframe is anything but legacy, although its tools and processes do have to be modernized, allowing newer generations of developers to work on it as nimbly and confidently as they do on other platforms. This means replacing the antiquated green screen development environment with a modern, familiar IDE. It also means leveraging Java-like technologies that provide visualization capabilities to help developers understand poorly documented mainframe applications, along with unit testing to maintain the quality of the code. “Agile-enabled” source code management and code deployment that ensure mainframe development organizations can participate 100 percent in DevOps processes are also required. We call this “mainstreaming the mainframe.”

    The prospect of modernizing on the mainframe rather than moving off may leave many mainframe-based organizations feeling like they are at a crossroads. They see the benefits of keeping their mainframe – and understand it can be a competitive asset – but does this mean they can’t leverage the benefits of the cloud? The honest answer is no, because choosing between the cloud and the mainframe does not need to be an either/or scenario. Certain applications and workloads are better suited to the mainframe, while others are better suited to the cloud. The key is knowing the difference and devising a strategy that puts the worthy ideas of serving customers and citizens first and foremost.

    This “two-platform IT” strategy entails keeping all mission-critical, competitively differentiating applications running on-premise, on the mainframe. Recent breaches and outages – most notably the Amazon S3 outage on February 28, which caused widespread availability issues for thousands of websites, apps, and IoT devices – should give any company pause before relegating their most critical computing systems, where near-perfect reliability is a must and mainframes thrive, to the cloud. On the other hand, non-mission critical and non-differentiating applications, such as email or HR, are better suited to the cloud’s economy of scale. Using the cloud in concert with the mainframe is where smart money invests.
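
    As an illustration of that two-platform split, here is a minimal Python sketch of the triage it implies; the application attributes and the decision rule are assumptions invented for the example, not Compuware guidance.

    ```python
    # Illustrative "two-platform IT" triage: mission-critical, differentiating
    # systems of record stay on the mainframe; commodity, elastic services go
    # to the cloud. Field names and the rule are assumptions for this sketch.
    from dataclasses import dataclass

    @dataclass
    class App:
        name: str
        mission_critical: bool   # an outage directly halts the business
        differentiating: bool    # competitive advantage lives in this code
        elastic_demand: bool     # benefits from on-demand burst capacity

    def target_platform(app: App) -> str:
        if app.mission_critical and app.differentiating:
            return "mainframe"            # system of record, keep on-premise
        if app.elastic_demand and not app.mission_critical:
            return "cloud"                # commodity service, buy the scale
        return "case-by-case review"      # no blanket answer, per the article

    for app in [App("core-banking-ledger", True, True, False),
                App("corporate-email", False, False, True),
                App("hr-portal", False, False, True)]:
        print(f"{app.name} -> {target_platform(app)}")
    ```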

    In summary, when it comes to IT modernization, we must avoid becoming blindly enamored by the newest technologies, and getting caught up in sweeping generalizations. “New” does not necessarily mean “better” and the utility of longstanding technologies must be evaluated on a case-by-case basis, instead of slapping on labels like “legacy” and automatically dismissing them. The post-modern mainframe is a key asset, and combining it with the cloud in a smart, comprehensive strategy can yield an optimum resolution and a true win-win scenario.

    http://www.datacenterknowledge.com/a...hing-decision/
