  1. #1
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    15,033

    Server manufacturers start competing with their own customers for the cloud market

    The largest server vendors are now embracing a movement that at one time represented a threat: cloud computing. In doing so, these OEMs (original equipment manufacturers) are getting into the same business as some of their largest customers, which can be a risky proposition in the IT world. But in recent years, large cloud players have used their leverage to squeeze margins, threatening to commoditize servers. As margins have narrowed, cloud has altered the economics of the server world, and leading marquee server brands have been emboldened to launch their own cloud offerings.

    HP, Dell and IBM have all turned to public clouds based on OpenStack to remain relevant and position themselves to capitalize on enterprise hybrid strategies.

    It’s no secret that the cloud has the potential to commoditize infrastructure, and being “cloud ready” is how these vendors hedge that bet. Enterprises will always trust brand names above all, so being cloud ready ensures they won’t abandon the gear and vendors they’re already using. There’s broad acceptance that cloud isn’t a fad, so these OEMs want to make sure that, as this great shift takes place, their equipment remains in the picture.

    As Cloud Wars Heat Up, Server OEMs Bet on OpenStack

  2. #2
    Moderator
    Join Date
    Oct 2010
    Location
    Rio de Janeiro
    Posts
    2,679
    In other words: whoever wants to won’t be able to compete with them. First, because they can build their own hardware and buy from themselves at the lowest possible price, then raise the price when selling to the competition; second, because they will always have the top-of-the-line gear in their own hands...

    Given that, I’d rather stick with Supermicro, which keeps to its role of being just a hardware manufacturer and not a service provider...

  3. #3
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    15,033
    I think there is a much bigger problem under way: the lack of evolution in server architecture. A common criticism is that today’s servers are nothing more than souped-up PCs. The conventional data center is built on 30-year-old server hardware architecture, often running operating systems with roots in the 1960s. Not to mention that IBM was already doing virtualization at least 40 years ago. This “fight” between Dell, HP, and IBM won’t be for real until they launch genuinely innovative hardware and software solutions. For more of the same, there’s always a Supermicro.
    Last edited by 5ms; 31-03-2013 at 00:53.

  4. #4
    Moderator
    Join Date
    Oct 2010
    Location
    Rio de Janeiro
    Posts
    2,679
    The problem with reinventing the wheel is finding anyone willing to write code for it... Otherwise you end up with middleware that’s slow as hell, like the white elephant Java turned into.

  5. #5
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    15,033
    Quote Originally Posted by cresci View Post
    The problem with reinventing the wheel is finding anyone willing to write code for it... Otherwise you end up with middleware that’s slow as hell, like the white elephant Java turned into.
    The wheel has already been reinvented by Amazon, Microsoft, and Google.

  6. #6
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    15,033
    In a comment about Coca Cola’s ~130 years:

    "In a world where 80% of the products we use today did not exist ten years ago ..."

    Really? If that’s true, people out there keep reinventing the wheel.

  7. #7
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    15,033
    http://www.focusbankers.com/telecom/...n_Winter12.pdf

    Sector Overview

    The Market: Target Customers

    Historically, the target customer groups for the subsectors were largely mutually exclusive. Data center real estate firms served the largest enterprises, particularly media and technology companies, as well as their colocation and managed hosting brethren. Colocation/interconnection providers served network service providers and SMEs. Finally, managed hosters served the SMB market.

    The lines have blurred between the subsectors. DCREs and colocation providers are increasingly competing with each other for customers at the low end of the REIT market and the high end of the colocation market. Likewise, managed hosters are moving up-market to include the low end of the enterprise market and are now competing with colocation providers. And many colocation providers are moving into the managed hosting market in order to capture higher revenues per square foot at their costly facilities.

    As noted previously, there are three clear business models in CMH. While the subsector lines are blurring, we believe these three distinct business models remain attractive and sustainable, albeit some more than others.
    Last edited by 5ms; 31-03-2013 at 16:54.

  8. #8
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    15,033

    Designing For Dependability In The Cloud

    David Bills is Microsoft’s chief reliability strategist and is responsible for the broad evangelism of the company’s online service reliability programs.

    This article kicks off a three-part series on designing for dependability. Today I will provide context for the series and outline the challenges facing all cloud service providers as they strive to provide highly available services. In the second article of the series, David Gauthier, director of data center architecture at Microsoft, will discuss the journey that Microsoft is on in our own data centers, and how software resiliency has become more and more critical in the move to cloud-scale data centers. Finally, in the last piece, I will discuss the cultural shift and the evolving engineering principles that Microsoft is pursuing to help improve the dependability of the services we offer.

    Matching the Reliability to the Demand

    As the adoption of cloud computing continues to grow, expectations for utility-grade service availability remain high. Consumers demand access to their digital lives 24 hours a day, seven days a week, and outages can have a significant negative impact on a company’s financial health or brand equity. But the complex nature of cloud computing means that cloud service providers, regardless of whether they sell offerings for infrastructure as a service (IaaS), platform as a service (PaaS), or software as a service (SaaS), need to be mindful that things will go wrong: it’s not a case of “if,” it’s strictly a matter of “when.” This means, as cloud service providers, we need to design our services to maximize reliability and minimize the impact on customers when things do go wrong. Providers need to move beyond the traditional premise of relying on complex physical infrastructure to build redundancy into their cloud services, and instead utilize a combination of less complex physical infrastructure and more intelligent software that builds resiliency into their cloud services and delivers high availability to customers.

    The reliability-related challenges that we face today are not dramatically different from those that we’ve faced in years past, such as unexpected hardware failures, power outages, software bugs, failed deployments, people making mistakes, and so on. Indeed, outages continue to occur across the board, reflecting not only on the company involved, but also on the industry as a whole.

    In effect, the industry is dealing with fragile (sometimes referred to as brittle) software. Software continues to be designed, built, and operated based on what we believe is a fundamentally flawed assumption: that failure can be avoided by rigorously applying well-known architectural principles as the system is being designed, testing the system extensively while it is being built, and relying on layers of redundant infrastructure and replicated copies of the data. Mounting evidence further invalidates this assumption: articles regularly appear describing failures of heavily relied-on online services, and service providers routinely supply explanations of what went wrong, why it went wrong, and the steps taken to avoid repeat occurrences. The media continues to report failures, despite the tremendous investment that cloud service providers continue to make as they apply the same practices noted above.

    Resiliency and Reliability

    If we assume that all cloud service providers are striving to deliver a reliable experience for their customers, then we need to step back and look at what really comprises a reliable cloud service. It’s essentially a service that functions as the designer intended it to, functions when it’s expected to, and works from wherever the customer is connecting. That’s not to say every component making up the service needs to operate flawlessly 100 percent of the time though. This last point is what brings us to needing to understand the difference between reliability and resiliency.

    Reliability is the outcome that cloud service providers strive for. Resiliency is the ability of a cloud-based service to withstand certain types of failure and yet remain fully functional from the customers’ perspective. A service could be characterized as reliable simply because no part of it (for example, the infrastructure or the software that supports it) has ever failed, and yet not be regarded as resilient, because that characterization completely ignores the notion of a “Black Swan” event: something rare and unpredictable that significantly affects the functionality or availability of one or more of the company’s online services. A resilient service assumes that failures will happen, and for that reason it has been designed and built to detect failures when they occur, isolate them, and then recover from them in a way that minimizes impact on customers. To put the relationship between these terms differently, a resilient service will, over time, come to be viewed as reliable because of how it copes with known failure points and failure modes.
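
    To make the detect / isolate / recover loop described above concrete, here is a minimal circuit-breaker sketch in Python. It is not Microsoft’s implementation; the class, the thresholds, and the simulated flaky dependency are illustrative assumptions only.

    ```python
    import time

    class CircuitBreaker:
        """Minimal sketch: detect repeated failures in a dependency, isolate it
        while it is unhealthy, and probe it again after a cool-down period."""

        def __init__(self, failure_threshold=3, recovery_timeout=30.0):
            self.failure_threshold = failure_threshold  # consecutive failures before opening
            self.recovery_timeout = recovery_timeout    # seconds to keep the breaker open
            self.failure_count = 0
            self.opened_at = None                       # None means closed (dependency assumed healthy)

        def call(self, operation, fallback):
            if self.opened_at is not None:
                # Isolate: while the breaker is open, skip the failing dependency
                # and serve a degraded but functional response instead.
                if time.monotonic() - self.opened_at < self.recovery_timeout:
                    return fallback()
                # Recover: after the cool-down, close the breaker and probe again.
                self.opened_at = None
                self.failure_count = 0
            try:
                result = operation()
                self.failure_count = 0                  # a healthy call resets the count
                return result
            except Exception:
                # Detect: count consecutive failures; open the breaker at the threshold.
                self.failure_count += 1
                if self.failure_count >= self.failure_threshold:
                    self.opened_at = time.monotonic()
                return fallback()

    # Toy usage: the dependency always fails, yet the caller keeps answering.
    breaker = CircuitBreaker(failure_threshold=2, recovery_timeout=5.0)

    def flaky_dependency():
        raise TimeoutError("simulated backend outage")

    for _ in range(4):
        print(breaker.call(flaky_dependency, fallback=lambda: "served from cache"))
    ```

    From the customer’s perspective the service keeps responding (reliability), even while one of its components is failing (resiliency).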

    Changing Our Approach

    As an industry, we have traditionally relied heavily on hardware redundancy and data replication to improve the resiliency of cloud-based services. While cloud service providers have had success applying these design principles, and hardware manufacturers have contributed significant advancements in these areas as well, we cannot become overly reliant on these solutions as the path to a reliable cloud-based service.

    It takes more than just hardware-level redundancy and multiple copies of data sets to deliver reliable cloud-based services — we need to factor resiliency in at all levels and across all components of the service.

    That’s why, at Microsoft, we’re changing the way we build and deploy services that are intended to operate at cloud scale. We’re moving toward less complex physical infrastructure and more intelligent software to build resiliency into cloud-based services and deliver highly available experiences to our customers. We are focused on creating an operating environment that is more resilient and enables individuals and organizations to better protect information.

    In the next article of this series, David Gauthier, director of data center architecture at Microsoft, discusses the journey that Microsoft is making with our own data centers. This shift underscores how important software-based resiliency has become in the move to cloud-scale data centers.
    Designing For Dependability In The Cloud

  9. #9
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    15,033

    Why is Data Storage Such an Exciting Space?

    For a while, the storage industry appeared to be fairly stable (read: little technology innovation), with consolidation around a few large players. Several smaller companies were bought out by larger players: 3PAR by HP, Isilon by EMC, Compellent by Dell. However, in the last year we’ve seen renewed activity in the space, with promising new start-ups dedicated to solving the storage problems of new-age data centers. So, what exactly is the problem with legacy storage solutions in new-age data centers?

    Evolution of Storage Technology

    For better perspective, let’s start with a quick recap of the evolution of data storage technology. In the late 1990s and early 2000s, storage was first separated from the server to remove bottlenecks on data scalability and throughput. NAS (Network Attached Storage) and SANs (Storage Area Networks) came into existence, Fibre Channel (FC) protocols were developed, and large-scale deployments followed. With a dedicated external controller (SAN) and a dedicated network (based on FC protocols), the new storage solutions provided data scalability, high availability, higher throughput for applications, and centralized storage management.

    Server Virtualization and the Inadequacy of Legacy Solutions

    Legacy SAN/NAS-based storage solutions scaled well and proved adequate until the advent of server virtualization. With server virtualization, the number of applications grew rapidly, and external storage was now being shared among multiple applications to manage costs. Here, the monolithic controller architecture of legacy solutions proved a misfit, as it resulted in noisy neighbor issues within the shared storage. For example, if a backup operation was initiated for one application, other applications were starved of storage access and eventually timed out. Further, storage could no longer be tuned for a particular workload, since applications with disparate workloads shared the same storage platform.
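
    A back-of-the-envelope sketch in Python of the contention described above. The FIFO controller model and all of the numbers are made up for illustration; real arrays schedule IO far more elaborately.

    ```python
    from collections import deque

    # Illustrative only: a shared controller that serves requests strictly in
    # arrival order (FIFO), with made-up capacity and timeout figures.
    CONTROLLER_IOPS = 500   # requests the controller can complete per second
    APP_TIMEOUT_S = 1.0     # latency budget of an interactive application

    queue = deque()
    queue.extend(("backup", i) for i in range(5000))   # backup job floods the shared queue
    queue.append(("interactive-app", 0))               # one latency-sensitive read lands behind it

    # With FIFO service and no per-application isolation, the interactive request
    # waits for the entire backup backlog to drain first.
    position = list(queue).index(("interactive-app", 0))
    wait_seconds = position / CONTROLLER_IOPS
    verdict = "TIMED OUT" if wait_seconds > APP_TIMEOUT_S else "ok"
    print(f"interactive read waits ~{wait_seconds:.1f}s (budget {APP_TIMEOUT_S}s) -> {verdict}")
    ```

    Nothing is broken in this picture, yet one tenant’s bulk workload pushes another tenant past its timeout: the noisy neighbor effect in miniature.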

    Rising Costs and Nightmarish Management

    Legacy vendors attacked these issues through several workarounds, including faster controller CPUs and recommendations for additional memory with fancy acronyms. Though these workarounds helped to an extent, the brute-force way to guarantee storage quality of service (QoS) was either to ridiculously over-provision storage controllers (with utilization below 30-40 percent) or to dedicate physical storage to performance-sensitive applications. Obviously, these negated the very purpose of sharing storage and containing storage costs in virtualized environments. As a result, storage costs relative to overall data center costs increased dramatically. With their businesses built on hardware, legacy vendors didn’t see any reason to change this situation. With dedicated storage for different workloads, data centers ended up with several storage islands that were chronically underutilized. Soon, “LUN” management became both a hot new skill and a nightmare for storage administrators.

    The New-Age Storage Solutions

    With the advent of the cloud, today’s data centers typically run hundreds of VMs that require guaranteed storage access, performance, and QoS. Given the inability of legacy solutions to scale in these virtualized environments, it was inevitable that a new breed of storage start-ups would crop up. Many of these start-ups chose to simplify the “nightmarish” management, either by providing tools to observe and manage “hot LUNs” (a term for LUNs that serve demanding VMs) or by providing granular storage analytics on a per-VM basis. However, the management approach does not really cure the “noisy neighbor” issues, leaving a lot of other symptoms unresolved.

    Multi-tenant Storage Controllers

    There is a desperate need for solutions that attack the noisy neighbor problem at its root cause, i.e., by making storage controllers truly multi-tenant. These controllers should be able to isolate and dedicate storage resources to every application based on its performance demands. Here, storage endpoints (LUNs) will be defined in terms of both capacity and performance (IOPS, throughput, and latency). These multi-tenant controllers will then be able to guarantee storage QoS for every application right from a shared storage platform.
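
    As a sketch of the admission-control idea behind such multi-tenant controllers, the Python below defines a LUN by capacity and performance, and only accepts it if the reservation still fits within the controller’s physical limits. The class names, fields, and numbers are illustrative assumptions, not any vendor’s API.

    ```python
    from dataclasses import dataclass

    @dataclass
    class LunSpec:
        """A storage endpoint defined by capacity *and* performance."""
        name: str
        capacity_gb: int
        iops: int             # guaranteed IO operations per second
        throughput_mbps: int  # guaranteed bandwidth
        max_latency_ms: int   # latency ceiling the application expects

    class MultiTenantController:
        """Provision a LUN only if its reservation, added to the existing
        reservations, still fits inside the controller's physical limits."""

        def __init__(self, total_iops, total_throughput_mbps, total_capacity_gb):
            self.total_iops = total_iops
            self.total_throughput_mbps = total_throughput_mbps
            self.total_capacity_gb = total_capacity_gb
            self.luns = []

        def provision(self, lun):
            if (sum(l.iops for l in self.luns) + lun.iops > self.total_iops
                    or sum(l.throughput_mbps for l in self.luns) + lun.throughput_mbps > self.total_throughput_mbps
                    or sum(l.capacity_gb for l in self.luns) + lun.capacity_gb > self.total_capacity_gb):
                return False  # reject rather than let tenants degrade each other
            self.luns.append(lun)
            return True

    controller = MultiTenantController(total_iops=100_000,
                                       total_throughput_mbps=4_000,
                                       total_capacity_gb=50_000)
    print(controller.provision(LunSpec("oltp-db",   2_000, 40_000,   800,  5)))  # True
    print(controller.provision(LunSpec("vdi-pool", 10_000, 50_000, 1_500, 10)))  # True
    print(controller.provision(LunSpec("backup",   20_000, 30_000, 2_500, 50)))  # False: would exceed IOPS and bandwidth
    ```

    Because nothing beyond the controller’s capability is ever promised, each application keeps its guaranteed IOPS, throughput, and latency even while sharing the same physical platform.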
    Why is Data Storage Such an Exciting Space?
