Results 1 to 4 of 4
  1. #1
    WHT-BR Top Member

    [EN] Re-evaluating the love affair with the public cloud

    High-profile public cloud failures may have companies re-examining business needs

    David Chernicoff
    March 21, 2017

    It's taken a surprisingly long time and some very public failures for businesses to step back and reflect on the cloud's potential problems and not just its advantages. The recent Amazon Web Services (AWS) failure was not the first to cause a large number of cloud-based services to shut down for a significant period of time, but it is the highest profile failure of the public cloud to date.

    Human error was to blame for the initial AWS failure, which is often the case with cloud failures, but it was followed by unexpected issues that prevented a rapid recovery. Amazon's explanation for the failure can be found here, but the technical media's reaction to the failure is more interesting and points to why companies may need to re-examine business needs.

    A few second thoughts

    The frequency with which major cloud services experience significant outages is lost on much of the general media, but the tech media outlets seem to be re-evaluating their love affair with the public cloud. One reason mentioned in published articles is that the nature of the massive public cloud removes one of the strongest benefits of Internet-aware services: a distributed computing model with a high tolerance for failure. Depending on a single public cloud provider is the equivalent of putting all of your eggs in one basket; many types of failures within that cloud can bring down your services completely. As such, a strong public/private cloud mix might be the most appropriate direction for the future. What is not up for debate is that the businesses positioned to manage, manipulate, and make use of the massive increase in data and devices represented by technologies such as the Internet of Things will be the most successful.

    The public cloud offers real advantages: rapid prototyping, easy availability, and a mix of services that organizations can use to build complete entry-level enterprise services without strong internal IT help. But building a reliable service infrastructure using public cloud resources takes significant time and money, and you may be shortchanging your business if you limit your goals to what is available via public cloud services.

    Building your own hybrid public/private cloud infrastructure can be the logical growth path for businesses looking to leverage the public cloud. Technologies such as hyperconvergence and composable infrastructure provide the basis for developing an on-premises private cloud. Initially, enterprises considering hybrid cloud environments looked at them as a way to keep running legacy applications that might not be cloud-suitable alongside the latest cloud-based technologies. But the hybrid path also offers flexibility and reliability benefits that pure cloud solutions can't, from running your existing legacy applications to allowing the buildout of customized networking and cloud resources targeted at specific business needs. Companies also use hybrid clouds to meet security, compliance, performance, and cost requirements that the public cloud alone cannot.

    Public cloud not the only game in town

    New technologies are re-invigorating traditional data centers. Advances in hyperconverged architectures and the ability to deliver, with minimal coding, composable infrastructures that provide highly flexible environments are giving the public cloud a run for its money. These technologies allow organizations to deploy private clouds that offer the benefits of public cloud but with greater security, compliance, and performance, and often at a lower cost.

    When companies combine private cloud software from major cloud platforms such as Microsoft Azure with these kinds of hardware and software solutions, they can deploy services and applications to the most suitable environment, whether that's an advanced private cloud running on premises or a public cloud.

    Understanding the process and management techniques necessary to deploy the most efficient and effective hybrid cloud will give businesses the edge they need when moving toward cloud technology. A hybrid cloud that includes a mix of private and public cloud can offer a more resilient environment, greater security, higher performance, and the ability to meet regulatory requirements. It also provides one of the major benefits of the public cloud: capacity on demand. Workloads can continue to exist primarily on owned devices and spike demand can be pushed to resources available from the public cloud.
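    A minimal sketch of that capacity-on-demand pattern, in Python: steady-state work stays on owned infrastructure, and anything beyond local capacity bursts to the public cloud. The capacity figure and the two submit functions are hypothetical placeholders, not any particular vendor's API.

    ```python
    # Hypothetical cloud-bursting dispatcher: steady-state load runs on
    # owned infrastructure; demand beyond local capacity bursts to public cloud.
    ON_PREM_CAPACITY = 100  # assumed number of jobs the private cloud can run at once

    def submit_on_prem(job):
        print(f"running {job} on the private cloud")

    def submit_public_cloud(job):
        print(f"bursting {job} to the public cloud")

    def dispatch(job, on_prem_running):
        """Route one job: private cloud until capacity, then burst to public."""
        if on_prem_running < ON_PREM_CAPACITY:
            submit_on_prem(job)
            return "on_prem"
        submit_public_cloud(job)
        return "public"
    ```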

    Another reason to go the hybrid IT route: businesses often move data backup and disaster recovery processes to the cloud. But if the cloud goes down, what happens to the business? The cost of downtime has been reported to average $8,000 per minute, and for Amazon itself a website outage can cost almost $5 million per hour. For the typical enterprise, the costs likely fall somewhere in between.
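    Taking the article's figures at face value, a quick back-of-the-envelope comparison (the per-hour and per-minute normalization is mine) shows how wide that middle is:

    ```python
    # Downtime cost figures quoted above, normalized to a common unit.
    average_per_minute = 8_000        # reported average cost of downtime, USD per minute
    amazon_per_hour = 5_000_000       # reported cost of an Amazon website outage, USD per hour

    print(f"average enterprise: ${average_per_minute * 60:,} per hour")   # $480,000
    print(f"Amazon:             ${amazon_per_hour:,} per hour")           # $5,000,000
    print(f"Amazon:             ${amazon_per_hour // 60:,} per minute")   # $83,333
    ```

    Even at the reported average, an hour of downtime approaches half a million dollars, which is why contingency planning heads the lessons list below.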

    But building hybrid IT can mean that the reliability advantage goes both ways: if the public cloud becomes unavailable, your private cloud infrastructure can pick up the slack, should you choose to build this model.
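    A minimal sketch of that two-way failover model, assuming each side exposes a health-check endpoint; the URLs are placeholders, not real services.

    ```python
    # Hypothetical hybrid failover: try the preferred cloud first, fall back
    # to the other when its health check fails.
    import urllib.request

    ENDPOINTS = [
        "https://public.example.com",   # placeholder public cloud endpoint
        "https://private.example.com",  # placeholder on-premises endpoint
    ]

    def healthy(base_url, timeout=2):
        """Return True if the endpoint answers its health check."""
        try:
            with urllib.request.urlopen(base_url + "/health", timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    def pick_endpoint():
        for url in ENDPOINTS:
            if healthy(url):
                return url
        raise RuntimeError("no healthy endpoint: both clouds are down")
    ```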

    Cloud realities: Lessons for leaders

    • Cloud failure contingency planning is critical.
    • The one-size-fits-all approach of the public cloud limits operational flexibility.
    • New on-premises hardware and software technologies, including hyperconverged and composable infrastructures, combined with private/public cloud deployment, offer the highest level of flexibility, security, and control.


    https://insights.hpe.com/content/hpe...lic-cloud.html

  2. #2
    WHT-BR Top Member
    What's striking about David Chernicoff's (commissioned) article isn't that it serves as a hook for a thinly veiled promotion of HP + Azure Stack, but that it never mentions OpenStack.

  3. #3
    WHT-BR Top Member

    Edge vs. central IT

    Why edge computing matters

    Pedro Pereira
    February 14, 2017

    So where do your apps and services belong? Some will inevitably reside in the cloud, but cloud infrastructures cannot efficiently handle the massive loads of data the IoT is expected to generate. Despite the cloud’s scalability, cost-effectiveness, and support for future architectures, latency issues can get in the way of the real-time processing necessary for IoT implementations.

    That won’t be happening at the core network, either. The cloud’s raison d’être, after all, is to relieve central IT of ever-increasing demands for data processing, analysis, and storage. Some other solution is needed between the core and cloud. And that’s where edge computing comes in.

    Edge allows you to place compute power closer to the action—the network edge. This is where many of the IoT’s analytics and monitoring applications will reside to enable real-time decision-making. As IoT implementations get under way, a web of micro data centers will sprout at the edge. They will act as way stations between cloud servers, core IT, and the vast networks of sensors and monitors that capture and transmit data. Like a rail system with stops between major hubs, these micro data centers will ideally transform the IoT into a well-organized data delivery system.

    Edge computing promises to play an essential role in the network of the future as it evolves to accommodate IoT needs. That network will be a hybrid combining cloud, edge, and central IT components, with applications—or pieces of applications—residing in these distinct but integrated areas.

    Location, location, location

    As in real estate, edge computing comes down to location. The closer you place processing and data, the more agile your organization becomes. Now you don’t have to wait for data to travel from the source for hundreds or thousands of miles to a cloud data center to be processed and redirected to a technician or analyst in front of a dashboard somewhere else.

    Funneling data to the cloud potentially wastes precious seconds—or even minutes—that can make a real operational difference. A driverless car at an intersection can’t wait several seconds for information from the cloud before it starts moving again. If the vehicle sits there too long waiting for data, it is bound to cause a traffic snag or even an accident.
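    A rough latency budget makes the point concrete. The numbers below are illustrative assumptions, not measurements:

    ```python
    # Illustrative round-trip budgets for a vehicle decision, in milliseconds.
    edge_round_trip_ms = 10     # assumed: micro data center at a nearby base station
    cloud_round_trip_ms = 150   # assumed: regional cloud data center far from the road
    decision_budget_ms = 100    # assumed: how long the vehicle can afford to wait

    for name, rtt in [("edge", edge_round_trip_ms), ("cloud", cloud_round_trip_ms)]:
        verdict = "within budget" if rtt <= decision_budget_ms else "too slow"
        print(f"{name}: {rtt} ms round trip -> {verdict}")
    ```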

    As connected cars become more sophisticated, they will be able to communicate with each other about road and weather conditions. For instance, location services company HERE has teamed with Nokia Liquid Applications to use an LTE network to warn vehicles as they approach road hazards.

    “Edge computing is used to extend existing cloud services into the highly distributed mobile base station environment, so that road hazards can be recognized and warnings can be sent to nearby cars with extremely low latency,” according to a Nokia blog. Google's Waze mobile navigation application performs similar services, although it relies on humans to inform the system about traffic slowdowns and potential road hazards.

    Edge computing has a place not only on regular roadways, but also on the racetrack, where cars running at 140 mph can transmit sensor data to the pit crew. This scenario is already a reality in Formula E, where the DS Virgin Racing team uses the compute power of a trackside data center provided by Hewlett Packard Enterprise to optimize car performance.

    “Streaming data is analyzed at the point of collection, providing real-time insight that allows [the team] to make real-time adjustments to maximize the systems that control their car, and hopefully win the race," says Kelly Pracht, senior manager of HPE Edgeline IoT Systems, in a recent blog. "After the race, aggregate data is analyzed for deeper insights.”

    The power of immediacy

    Away from roadways and racetracks, edge computing is starting to make a difference in other industries. For example, healthcare providers increasingly rely on connected devices that deliver vital information to applications monitored by medical personnel.

    At-home monitoring devices track patients’ weight, blood pressure, heart rate, insulin levels, and other metrics. The data travels to software monitoring systems that issue alerts to the smartphones, tablets, and stationary monitors of nurses and doctors if intervention is needed. Any latency here is potentially a life-and-death situation. The same is true in tele-ICU, which allows critical care medical personnel to connect remotely with intensive care unit patients through real-time audio, visual, and data links.
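    A minimal sketch of that monitoring pattern: evaluate vital-sign thresholds at the edge and raise the alert immediately, instead of round-tripping every reading through a distant cloud. The metric names and ranges are hypothetical, for illustration only.

    ```python
    # Hypothetical edge-side rule: flag out-of-range vitals locally; the raw
    # readings can still be forwarded to the cloud for later analysis.
    THRESHOLDS = {
        "heart_rate": (40, 140),    # assumed acceptable range, beats per minute
        "systolic_bp": (90, 180),   # assumed acceptable range, mmHg
    }

    def check_reading(reading):
        """Return alert strings for every metric outside its assumed range."""
        alerts = []
        for metric, (low, high) in THRESHOLDS.items():
            value = reading.get(metric)
            if value is not None and not low <= value <= high:
                alerts.append(f"{metric}={value} outside [{low}, {high}]")
        return alerts

    for alert in check_reading({"heart_rate": 152, "systolic_bp": 118}):
        print("ALERT:", alert)   # pushed straight to nearby clinician devices
    ```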

    Slow-loading screens or pixelated video images won’t cut it in these scenarios. However, not all edge computing instances come down to life and death. In retail environments, for example, the combination of Wi-Fi and smartphones can create Internet-like shopping experiences.

    A shopper who has previously registered for the store’s Wi-Fi connection will be recognized by the network as she walks in. Wi-Fi analytics software brings up relevant information such as previous purchases and time spent at the store. The system tracks the shopper through the store and sends promotional information to nearby digital displays or texts coupons to her smartphone. The goal is to get the customer to spend more money while feeling the retailer is attuned to her needs and wants.

    Where the cloud excels

    Edge data centers will be essential to IoT adoption in hybrid environments where real-time decisions are paramount. However, cloud infrastructures will still provide essential scalability, flexibility, and storage for certain applications.

    The cloud can handle massive volumes of data for which no immediate action is required. Analysts can mine that data later to identify patterns and trends that can be used for preventive maintenance and predictive purposes. For instance, cybersecurity solutions are being developed that identify the sources and methods of attacks to forecast future attacks, giving organizations a greater chance at preventing breaches.

    Long term, large-scale data storage will remain an essential cloud function. So will web applications affected by seasonal fluctuations, such as retail websites that require extra capacity during the holidays or accounting systems that need to scale up during tax filing season.

    The cloud also makes sense for applications for which demand is hard to predict, along with testing environments and—increasingly—mobile app development and management. Cloud-based software development accelerates the development process and keeps down costs, helping organizations achieve the agility they need to compete in fast-paced markets.

    What to keep in-house

    At least for the foreseeable future, certain applications will need to stay on premises. There are compelling reasons for this. In some cases, it's more expensive to move applications to the cloud or replace them with cloud apps. Some executives still get nervous about giving up direct control of assets by moving them elsewhere. And there are lingering concerns about cloud security, privacy, and regulatory compliance.

    From a technical standpoint, a compelling case for keeping applications in house can be made based on these criteria (a minimal scoring sketch follows the list):

    • Applications needing extensive redevelopment and integration to run efficiently in a cloud environment
    • Applications requiring extensive customization to meet corporate requirements
    • Applications tightly linked to vast, complex databases
    • Applications whose comparable cloud-based alternatives lack required functionality
    • Mainframe applications that serve as hubs of data integration, such as enterprise service bus software, and can’t be moved without also moving all dependent applications
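    One way to operationalize the checklist above is a rough per-application score; the criteria wording and the threshold are assumptions for illustration, not a published methodology.

    ```python
    # Hypothetical keep-in-house score: one point per criterion that applies.
    CRITERIA = [
        "needs extensive redevelopment and integration to run in the cloud",
        "requires extensive customization for corporate requirements",
        "tightly linked to vast, complex databases",
        "comparable cloud apps lack required functionality",
        "data-integration hub with many dependent applications",
    ]

    def keep_in_house_score(flags):
        """flags maps each criterion to True/False for a single application."""
        return sum(1 for criterion in CRITERIA if flags.get(criterion))

    flags = {CRITERIA[0]: True, CRITERIA[2]: True}
    score = keep_in_house_score(flags)
    print(f"score {score}/{len(CRITERIA)}:",
          "lean toward keeping in-house" if score >= 2 else "cloud candidate")
    ```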


    Hybrid future

    Hybrid environments combining edge, cloud, and in-house assets will become as commonplace as client-server systems were not long ago. Years from now we won't even use the word "hybrid" to describe these environments. Instead we'll call them "the network."

    https://insights.hpe.com/articles/ed...long-1702.html

  4. #4
    WHT-BR Top Member

    VMware takes aim at enterprise hybrid cloud

    All organisations begin to realise that all of those things that IT does equally apply to these new types of infrastructure that are being built up.

    Colin Barker
    April 3, 2017

    VMware CTO on why the company will play a big role as enterprises embrace cloud computing

    VMware is part of Dell Technologies and is probably best known for its virtualisation technology -- but it also wants to be the platform for enterprises' hybrid cloud deployments, along with other areas such as mobile device management. ZDNet caught up with the company's CTO Ray O'Farrell to find out where its ambitions lie.

    What's the current state of play with VMware?

    People think of VMware as the virtualisation company, but in practice VMware has significantly expanded beyond that core x86 virtualisation story -- you will have seen it over the years with things like SDDC and the software-defined internet.

    Right now, we have a focus on things like private cloud and the virtualisation of [core systems], bringing all those together with management and so on. We think we are putting together a large private cloud story enabling you to set policy, security, and so on.

    About a year ago, we began to look at what is going on in the industry in terms of what we hear from our customers.

    We got a strong message that a lot of them want to leverage cloud, not necessarily in private/public... but in terms of the experience of cloud. They want to get the experience, the agility, and build their organisation based on that.

    But now they are facing the management challenge. Now they are building two silos and in some cases they are not even coming from the same organisation -- some are from the IT organisation, some from the line of business.

    We are focused on hybrid cloud, meaning that it is our expectation that enterprises will have assets in both and will try to figure out the best way to leverage that.

    It's not too unlike what we saw in the early days when you had multiple servers, multiple storage, and all of them operating as islands, silos being managed by different people.

    What we are seeing now in the cloud is a new requirement for software that lets you do the same thing: manage the infrastructure as one, across a mixture of private and public cloud.

    Isn't one of the issues the problem of prioritisation? People want to bring private and public cloud together, but what parts of the infrastructure should they focus on first?

    In some ways you need to ask: 'how did you get here?' As a company or a business, you will be going through this digital transformation. In the past, the impulse was probably coming from lines of business which would cause frustration with the IT department: 'How fast can I get this done?'

    You wound up building these separate silos. For a while, organisations were investing in two modes of operation: you would have DevOps versus non-DevOps, for example. But all organisations begin to realise that all of those things that IT does -- security, making sure that this makes sense from an economic model, making sure that regulations are complied with -- equally apply to these new types of infrastructure that are being built up.

    I think that there is a détente being worked out here as organisations realise that their IT organisations need to be responsive, and the lines-of-business are realising that they need somebody to take care of the security, which has always been run by the CIO or IT, and you can see that blend come together. I definitely see that shift beginning to occur.

    Now, if that's the case, what would your advice to IT managers be?

    One of the things to realise very quickly is that if you are truly going through this digital transformation, the fundamental role of technology -- and that's not just IT -- has changed. It's not just about 'how do I run some back-end process, how do I change something to speed up the stock management'.

    Now it's about technology being at the very centre of the customer intimacy: how do I know the customer? How do I personalise my products? And that means that IT is right at the centre of the competitive stance of the company. The IT organisation needs to think through that and say: 'I am not some back-end thing, I am something at the core of what the company is doing. The company is our digital business and I manage the digital assets of the company'.

    That changes the thinking of the IT department so that they can look at things from a business point of view. It's not just about saving money everywhere -- I have to invest and that investment will have a return, because I invest in better analytics, target different customer sets, and get a return on that investment.

    I would say that IT needs to recognise who their new customer is and needs to focus on making that customer happy.

    So what is the main focus for you?

    A lot of it is, in some ways, a sense of trust. A lot of companies have been running their IT infrastructure for a decade or more and so a lot of them have that trust with us and so they want help with that transition.

    A lot of it is very practical things. Over the last two years, IT has had to cope with the presence of a lot more mobile devices, phones, laptops, and so on. That was not VMware's story, right? But we have realised that the new role of IT is to manage all of this infrastructure and so we have built teams and made acquisitions to focus on systems that still centre round that sense of enterprise trust. That's why you see us doing these things like AirWatch and Workspace ONE. It's driven by the fact that our customers are going through that transition and we need to be there to help them.

    When you look at the same thing in cloud, VMware made a very large decision late last year to go into partnership deals with Amazon. The aim was to bring together the best in private cloud and the largest in public cloud. That decision was driven because customers face certain challenges and they want to make certain things happen and they want companies to help them go through that transition.

    Customers know you as a reliable company but one that has a particular niche in their datacentre. How do you get them to recognise that you could offer more?

    First of all, if you look at it from the customer's point of view, it's not just IT. There is a much broader point here.

    In its essence, one of [VMware's] key strengths was, if you had a problem with your datacentre -- siloing and so on, multiple servers, multiple storage -- you bought VMware, you put this software on, and it solved a whole bunch of problems for you.

    The first thing was that we said, we work with what IT needs to manage. The new IT needs to work with mobile devices so we made sure that we could too. We are saying to IT, we know what it is that you need to do so we're doing that as well.

    When we looked at networking, the very same thing occurred. We did some analysis on virtual machines and asked: where are the bottlenecks? It turned out that, while we had put a lot of focus on storage and compute, the bottleneck wasn't there. You could spin up a virtual machine in two seconds, but you had to wait for the networking guy to get back to you before you could wire up the firewall, reconfigure the network, and so on. So that's when we said 'OK, let's virtualise the network as well'.

    It just showed that you had to get a level of abstraction before you could analyse the network and get it working efficiently.
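    The bottleneck O'Farrell describes is the gap between a two-second VM spin-up and a ticket-driven network change. A hedged sketch of what closing that gap looks like in principle, treating the network and firewall rule as data applied in the same step as the VM; the types and the apply function are hypothetical, not VMware's API:

    ```python
    # Hypothetical network-as-code: the firewall change that once meant waiting
    # for the networking team becomes data applied alongside the VM itself.
    from dataclasses import dataclass

    @dataclass
    class FirewallRule:
        name: str
        port: int
        allow_from: str  # CIDR of permitted sources

    @dataclass
    class VirtualNetwork:
        name: str
        cidr: str
        rules: list

    def apply(network: VirtualNetwork):
        # A real SDN stack would call its controller's API here;
        # this sketch just prints the intended configuration.
        print(f"creating network {network.name} ({network.cidr})")
        for rule in network.rules:
            print(f"  allow {rule.allow_from} -> port {rule.port} ({rule.name})")

    apply(VirtualNetwork(
        name="app-tier",
        cidr="10.0.1.0/24",
        rules=[FirewallRule("web", 443, "0.0.0.0/0")],
    ))
    ```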

    Look at IoT. You might ask, what has VMware to do with IoT? But when you look at it you very quickly get into work around sensors and they produce data, you have analytics and then feedback to a user and so on. But when you look at that, that is really another big infrastructure problem.

    A lot of people focus on the outputs of this data: making decisions. What we focus on is: how do you know the sensors are secure? How do you know the data is up to date? How do you know the gateways associated with that are up to date?

    What we are really saying is, once the data extends out beyond core IT, the role of IT will change, and once again the same questions need to be addressed. That's where we bring our existing strengths into that mix. So, our focus is: what are we good at? What problems do we need to solve?

    http://www.zdnet.com/article/vmware-...-hybrid-cloud/
