Results 1 to 3 of 3
  1. #1
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    15,008

    [EN] How Cloud Computing Alters Data Center Design

    by Scott Fulton
    September 12, 2016

    There are two principal classes of data center customers. First, there are service providers, whose consumption patterns are relatively rigid and whose requirements are spelled out in their SLAs. Second, there are enterprises, whose utilization and resource usage patterns — due in large part to the cloud service delivery platforms upon which they rely — can be all over the map.

    Should a data center provider compartmentalize its operations to serve the needs of both customer classes separately? Or should it instead implement a single design that’s flexible, elastic, and homogeneous enough to address both classes — even if it means deploying more sophisticated configuration management and more hands-on administration?

    “In a multi-tenant world, you design for the latter,” responded Dave Leonard, ViaWest’s chief data center officer. “And even in a single-tenant world, I’m convinced that it’s the wrong answer to go for the former.”

    Many of the major data center providers in today’s market are inclined to center their design efforts on one big template — for instance, a 10,000 square foot, single-tenant hall with 1100 kW of UPS power, he said. Realistically, Leonard argued, it isn’t practical for such a provider to make that kind of facility multi-tenant.

    “So say you get a software-as-a-service company. They can only buy one thing: 10,000 square feet and 1100 kW. And on day one, that might fit their needs perfectly, or maybe they can architect their application to where that’s perfect. But what happens when they re-architect their application and their hardware, and now they consume double the watts per square foot?

    “Well, they’ve just stranded half of that space,” Leonard answers himself. “Who pays for that space that’s stranded? Well, they have to pay for it, because there’s no flexibility there.”
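
    To make the stranded-capacity arithmetic concrete, here is a back-of-the-envelope sketch in Python. The 10,000 sq ft and 1100 kW figures come from Leonard's example above; the doubling of watts per square foot is the illustrative scenario he describes, not measured data.

```python
# Back-of-the-envelope stranded-capacity check (figures from Leonard's
# example; the doubling scenario is illustrative, not measured data).
hall_area_sqft = 10_000      # single-tenant hall
ups_power_kw = 1_100         # UPS power provisioned for that hall

design_density_w_sqft = ups_power_kw * 1000 / hall_area_sqft
print(f"Design density: {design_density_w_sqft:.0f} W/sq ft")   # ~110 W/sq ft

# The tenant re-architects and now draws twice the watts per square foot.
new_density_w_sqft = 2 * design_density_w_sqft
usable_sqft = ups_power_kw * 1000 / new_density_w_sqft          # floor the UPS can still feed
stranded_sqft = hall_area_sqft - usable_sqft

print(f"Usable floor at {new_density_w_sqft:.0f} W/sq ft: {usable_sqft:.0f} sq ft")
print(f"Stranded floor: {stranded_sqft:.0f} sq ft")             # half the hall
```

    At roughly 110 W/sq ft of design density, a tenant that doubles its draw can power only about half the hall, which is the stranded space Leonard refers to.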

    Now, certain well-known data center customers — Leonard cites Akamai as one example — are moving from a 12-15 kW per rack power usage profile down to about 9 kW/rack. Service providers are capable of making such deliberate changes to their applications to enable this kind of energy efficiency.

    Suppose a hypothetical SP customer of this same data center is inspired by Akamai, re-architects its application, and lowers its power consumption. “Well, now they can’t use the power that’s in that space,” argues Leonard.

    “Creating space where power and cooling are irretrievably tied to the floor space that is being delivered on is a really bad idea. When the use of that floor space, power, and cooling changes over time — and there’s a dozen dimensions that can cause it to change — those data centers are rigid and inflexible in their ability to react to those changes.”

    Yes, cloud application architectures have bifurcated the market for data center facilities. But the phenomenon arising from this alteration is essentially a single trend. Leonard believes a facilities or colocation provider should engineer adaptability into its design, so that what it offers customers can track changes in their consumption profiles.

    Like many data center providers, ViaWest is noticing a sharp uptick in what Leonard calls “Amazon graduates”: SaaS and IaaS customers who were either born in the cloud or migrated to the public cloud when it was cost-effective, but found themselves moving back off once their consumption profiles evolved past that point.

    “They realize, especially as they end up with a lot of data on those clouds,” said Leonard, “that it becomes uneconomic at a certain scale. It becomes more economic to take that back and move it into a private cloud that is dedicated to them, or move it back onto their own hardware [with] co-location.”

    ...

    “I don’t say that there’s a cloud data center,” ViaWest’s CDCO told us, “and you build a cloud data center in a particular way. There’s data centers that are able to adapt to changing needs — some driven by cloud users, some driven by SaaS or IaaS users, some driven by enterprises as they change over time. There’s characteristics that all these different users drive into the physical design of their data centers, that are more important to accommodate now than was the case five or ten years ago.”

    http://www.datacenterknowledge.com/a...center-design/
    Last edited by 5ms; 18-09-2016 at 00:24.

  2. #2
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    15,008

    Seven Ways Microsoft Is Redefining Azure For The Enterprise

    Amazon AWS and Microsoft Azure are the first cloud platforms proving they can scale globally to support enterprises’ vision of world-class cloud app portfolio development.


    Louis Columbus
    September 18, 2016

    451 Research’s latest study of cloud computing adoption in the enterprise, The Voice of the Enterprise: Cloud Transformation – Workloads and Key Projects, provides insights into how enterprises are changing their adoption of public, private and hybrid cloud for specific workloads and applications. The research was conducted in May and June 2016 with more than 1,200 IT professionals worldwide. The study illustrates how quickly enterprises are adopting cloud-first deployment strategies to accelerate time-to-market of new apps, reduce IT costs, and launch new business models that are by nature cloud-intensive. Add to this the need all enterprises have to forecast and track cloud usage, costs and virtual machine (VM) usage and value, and it becomes clear why Amazon Web Services (AWS) and Microsoft Azure are now leaders in the enterprise.

    Being able to innovate faster by building, deploying and managing applications globally on a single cloud platform is what many enterprises are after today. And with over 100 potential apps on their cloud roadmaps, development teams are evaluating cloud platforms based on their potential contributions to new app development and business models first.

    AWS and Microsoft Azure have proven their ability to support new app development and deployment, and they are the two most-evaluated cloud platforms among the dev teams I’ve talked with today. Of the two, Microsoft Azure is gaining momentum in the enterprise.

    Here are the seven ways Microsoft is making this happen:

    • Re-orienting Microsoft Azure Cloud Services strategies so enterprise accounts can be collaborators in new app creation. Only Microsoft approaches selling Cloud Services in the enterprise from the standpoint of helping customers do what their senior management teams want most: make their app roadmaps a reality. AWS is excellent at ISV and developer support, setting a standard in this area.


    • Giving enterprises the option of using existing relational SQL databases, noSQL data stores, and analytics services when building new cloud apps. All four dominant cloud platforms (AWS, Azure, Google, and IBM) support architectures, frameworks, tools and programming languages that enable varying levels of compatibility with databases, data stores, and analytics. Enterprises that have a significant amount of their legacy app inventory in .NET are choosing Azure for cloud app development. Microsoft’s support for Node.js, PHP, Python and other development languages is at parity with other cloud platforms. Microsoft Azure is winning in this area because of its designed-in support for the legacy Microsoft architectures that enterprises standardized their IT infrastructure on years before. Microsoft is selling a migration strategy here and is providing the APIs, web services, and programming tools that enable enterprises to deliver their cloud app roadmaps faster as a result. Like AWS, Microsoft has also created a global development community that is developing and launching apps specifically aimed at enterprise cloud migration. Due to all of these factors, both AWS and Microsoft are often considered more open cloud platforms by enterprises than others. In contrast, Salesforce platforms are increasingly viewed as proprietary, charging premium prices at renewal time. An example of this strategy is the extra 20% Salesforce charges for the Lightning experience at renewal time, according to Gartner’s recent report, Salesforce Lightning Sales Cloud and Service Cloud Unilaterally Replaced Older Editions; Negotiate Now to Avoid Price Increases and Shelfware, published 31 May 2016 and written by analysts Jo Liversidge and Adnan Zijadic.


    • Simplifying cloud usage monitoring, consolidating views of cloud fees and costs (including cost predictions), and working with enterprises to create greater cloud standardization and automation. AWS’ extensive partner community has solutions that address each of these areas, and AWS’ roadmap reflects that this is a core focus of current and future development; the AWS platform has standardization and automation as design objectives. Enterprises evaluating Azure are running pilots to test the Azure Usage API, which allows subscribing services to pull usage data. This API supports reporting down to the hourly level, provides resource metadata information, and supports Showback and Chargeback models. Azure deployments in production and pilots I’ve seen are using the API to build web services and dashboards to measure and predict usage and costs (see the sketch after this list).


    • Openly addressing Total Cost of Ownership (TCO) concerns and providing APIs and Web services to avoid vendor lock-in. Questions of data independence and TCO dominate every decision about sustaining and expanding a cloud deployment. From the CIOs, CFOs and design teams I’ve spoken with, Microsoft and Amazon are providing enterprises assistance in defining long-term cost models and are willing to pass along the savings from economies of scale achieved on their platforms. Microsoft Azure is also accelerating in the enterprise due to the pervasive adoption of cloud-based Office 365 subscriptions, which enables enterprises to begin moving their workloads to the cloud.


    • Having customer, channel, and services all on a single, unified global platform to gain greater insights into customers and deliver new apps faster. Without exception, every enterprise I’ve spoken with regarding their cloud platform strategy has multichannel and omnichannel apps on their roadmap. Streamlining and simplifying the customer experience and providing them with real-time responsiveness drive the use cases of the new apps under development today. Salesforce has been successful using their platform to replace legacy CRM systems and build the largest community of CRM and sell-side partners globally today.


    • Enabling enterprise cloud platforms and apps to globally scale. Nearly every enterprise looking at cloud initiatives today needs a global strategy and scale. From a leading telecom provider based in Russia looking to scale throughout Asia to financial services firms in London looking to address Brexit issues, each of these firms’ cloud app roadmaps is based on global scalability and regional requirements. Microsoft has 108 data centers globally, and AWS operates 35 Availability Zones within 13 geographic Regions around the world, with 9 more Availability Zones and 4 more Regions coming online throughout the next year. To expand globally, Salesforce chose AWS as its preferred cloud infrastructure provider. Salesforce is not putting its IoT and earlier Heroku apps on Amazon. Salesforce’s decision to standardize on AWS for global expansion and Microsoft’s globally distributed data centers show that these two platforms have achieved global scale.


    • Enterprises are demanding more control over their security infrastructure, network, data protection, identity and access control strategies, and are looking for cloud platforms that provide that flexibility. Designing, deploying and maintaining enterprise cloud security models is one of the most challenging aspects of standardizing on a cloud platform. AWS, Azure, Google and IBM are all prioritizing research and development (R&D) spending in this area. Among the enterprises I’ve spoken with, there is an urgent need to be able to securely connect virtual machines (VMs) within a cloud instance to on-premises data centers. AWS, Azure, Google, and IBM can all protect VMs and their network traffic from on-premises to cloud locations. AWS and Azure are competitive with the other two cloud platforms in this area, have enterprises running millions of VMs concurrently in this configuration, and often use that as a proof point with new customers evaluating their platforms.
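
    As referenced above, here is a minimal sketch of pulling hourly usage aggregates from the Azure Usage API (the Microsoft.Commerce UsageAggregates endpoint). The api-version, parameter names, and response fields reflect the preview API as documented around 2016 and should be treated as assumptions to verify against current Azure documentation; the subscription ID and bearer token are placeholders.

```python
# Minimal sketch: pull hourly usage aggregates from the Azure Usage API
# (Microsoft.Commerce/UsageAggregates). api-version, parameters, and field
# names are assumptions to verify against current Azure documentation.
import requests

SUBSCRIPTION_ID = "<subscription-guid>"     # placeholder
ACCESS_TOKEN = "<azure-ad-bearer-token>"    # obtained via an Azure AD OAuth flow

url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
       "/providers/Microsoft.Commerce/UsageAggregates")
params = {
    "api-version": "2015-06-25-preview",
    "reportedStartTime": "2016-09-01T00:00:00+00:00",
    "reportedEndTime": "2016-09-02T00:00:00+00:00",
    "aggregationGranularity": "Hourly",     # hourly-level reporting, as noted above
    "showDetails": "true",                  # include resource metadata
}
resp = requests.get(url, params=params,
                    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()

# Print one line per usage record: start time, meter, and consumed quantity.
for item in resp.json().get("value", []):
    props = item["properties"]
    print(props["usageStartTime"], props["meterName"], props["quantity"])
```

    A dashboard of the kind described above would typically aggregate these hourly records by meter and resource metadata to produce Showback or Chargeback reports.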


    https://softwarestrategiesblog.com/2...rged-a-leader/
    Last edited by 5ms; 20-09-2016 at 01:31.

  3. #3
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    15,008

    “Right-Sizing” The Data Center

    Tim Kittila
    October 4, 2016

    Overprovisioned. Undersubscribed. Those are some of the most common adjectives people apply when speaking about IT architecture or data centers. Both conditions can cause operational problems, ranging from milder reliability issues in the mechanical and electrical infrastructure to outright outages. The simple solution to this problem is to “right-size your data center.”

    Unfortunately, that is easier to say than to actually do. For many, the quest to right-size turns into an exercise akin to a dog chasing its tail. So, we constantly ask ourselves the question: Is right-sizing a fool’s errand? From my perspective, the process of right-sizing is invaluable; the process provides the critical data necessary to build (and sustain) a successful data center strategy.

    When it comes to right-sizing, the crux of the issue always comes down to what IT assets are being supported and what applications are required to operate the organization. However, with the variability in compute load, the ability to load-balance and shift loads within the data center without any disruption to operations, and even the ability to direct these IT loads to other data centers, picking the size of the mechanical/electrical infrastructure is the real challenge.

    When it comes to poor performance of IT applications, too often the knee-jerk reaction is to “throw hardware” at the problem. This becomes a challenge for the facilities team, as we end up chasing phantom IT loads. Moreover, when identifying the IT load for a data center, whether a new build or a colo, the IT architecture is often sloppy and over-projected. Facilities engineers, knowing this, over-project the mechanical and electrical infrastructure in turn, which exacerbates the problem.

    For example, my team was recently commissioned to analyze an application that was underperforming. Users were complaining of slow response times, inability to use the application during peak load, and general system underperformance. Before we could even complete our analysis, the prevailing opinion in the IT department was that the system was under-provisioned from a server standpoint and that we were just there to validate their assumption. However, when the analysis was completed, the results showed a different picture. In fact, from a server standpoint there was plenty of capacity. The root cause of the problem resided in how the application used the memory available to the system: paging to spinning disk at a slower bit rate was what was causing the issues with the end-user experience. The issue really boiled down to how the virtual machine was configured. The analysis proved its worth, as it kept IT from throwing additional hardware at the issue.
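
    To illustrate the kind of check involved, here is a minimal sketch of a paging spot-check on a suspect VM. It assumes a Linux guest with the third-party psutil package installed, and the 1 MB/s threshold is purely illustrative; it is not presented as the method Kittila's team used.

```python
# Quick paging spot-check on a suspect VM: a minimal sketch assuming a Linux
# guest with the third-party psutil package installed. Sustained swap-in/out
# traffic points at memory pressure rather than a lack of server capacity.
import time
import psutil

def swap_rate(interval_s: float = 10.0):
    """Return (swap-in, swap-out) rates in bytes/second over a short window."""
    before = psutil.swap_memory()
    time.sleep(interval_s)
    after = psutil.swap_memory()
    sin_rate = (after.sin - before.sin) / interval_s
    sout_rate = (after.sout - before.sout) / interval_s
    return sin_rate, sout_rate

if __name__ == "__main__":
    sin_rate, sout_rate = swap_rate()
    mem = psutil.virtual_memory()
    print(f"RAM used: {mem.percent:.0f}%  "
          f"swap-in: {sin_rate / 1e6:.2f} MB/s  swap-out: {sout_rate / 1e6:.2f} MB/s")
    if sin_rate + sout_rate > 1e6:   # arbitrary 1 MB/s threshold, for illustration only
        print("Sustained paging to disk: revisit VM memory sizing before adding servers.")
```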

    This example is all too common in the industry: the application is slow, so throw hardware at it. But IT isn’t always to blame. Data center architects also can overprovision from a mechanical and electrical systems standpoint. Granted, we get our data from IT and sometimes the load never shows up, but it should be our passion to make our big-M, big-E systems flexible, scalable, and able to handle low-load conditions. The fault can reside on both sides of the table (IT and Facilities) if we do not design for these variations and changes in technology.

    Reliability is almost always the number one goal, but efficiency of operations is a close second. When it comes to efficiency, the biggest part of the equation is right-sizing the equipment to match the load. But you also need to factor in growth potential. Not provisioning enough capacity in your data center (IT or facilities) and then being forced into an early and unplanned capital expenditure could be fatal to an organization.

    The same can be said if you are heading into a colo solution. The process of right-sizing allows the colo to plan better and ensures that you are not reserving capacity that you may never use. This type of over-provisioning hurts not only your bottom line, but also leaves the colo with stranded power and/or space.

    When it comes to approaching the right-sizing process, here are some key steps to consider:

    #1 Identify and Assess

    Get the IT inventory. Do an analysis of that inventory. Go beyond the nameplate rating; do your homework on how the equipment is truly operating. Assess how IT plans to use this load and know the applications being demanded by that specific data center’s functions (a simple comparison of nameplate versus measured draw is sketched below).
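
    As referenced above, a minimal sketch of the nameplate-versus-measured comparison. The asset list, field names, and numbers are purely illustrative assumptions; a real inventory would come from a DCIM tool or metered PDUs.

```python
# Step #1 sketch: compare nameplate ratings with measured draw for an IT
# inventory. Asset names and wattages are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    nameplate_w: float   # rated power from the spec sheet
    measured_w: float    # averaged draw from metered PDUs / DCIM

inventory = [
    Asset("db-node-01", nameplate_w=750, measured_w=420),
    Asset("web-node-01", nameplate_w=500, measured_w=180),
    Asset("san-array-01", nameplate_w=1200, measured_w=800),
]

nameplate_kw = sum(a.nameplate_w for a in inventory) / 1000
measured_kw = sum(a.measured_w for a in inventory) / 1000

print(f"Nameplate total: {nameplate_kw:.2f} kW")
print(f"Measured total:  {measured_kw:.2f} kW "
      f"({measured_kw / nameplate_kw:.0%} of nameplate)")
```

    The gap between the two totals is the margin that, left unexamined, gets baked into over-projected mechanical and electrical infrastructure.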

    #2 Know your Data Center Architecture

    If this is your data center, understand the minimum load required to provide a stable environment from both IT and facilities considerations. Know the bare minimum for IT requirements, know the maximum, and work within those bounds. And by all means, collaborate with the teams responsible for those decisions. Too many times we see over-provisioned chiller plants that cannot run stably at low load, especially in a dual-path configuration. This can cause all sorts of issues, from corroded pipes to frozen cooling towers. The same goes for generators running at low load; these types of situations can be detrimental to a data center’s reliability.

    #3 Know your Colo

    If it’s a colo situation, understand the provider’s system limitations for minimums. This is a question that is too infrequently asked; many times all that is asked or considered is the maximum density allowed. Asking it will no doubt impress the colo as a good customer, and it also gives your team an opportunity to work with them on a scalable contract. In other cases, be sure to set up a contract that works with your actual demand.

    #4 Think about Efficiency

    The proverbial target is not fully on data centers yet to reduce energy use, though arguably that time is already here. It all begins with IT provisioning and matching the infrastructure to this load. The IT equipment in our data centers has some of the best technology to “scale” load (think of it as a VFD for IT processing). Big-M and big-E equipment is starting to follow suit as we adopt variable refrigerant flow technology, improved IGBT technology, and DC power provisioning.

    Despite seeming like a fool’s errand, right-sizing your data center is a critical step before determining your strategy – whether it’s build, colo or cloud. And while right-sizing can help provide a more efficient operation, it is also critical for ensuring the overall reliability of data center operations.

    http://www.datacenterknowledge.com/a...-fools-errand/
