  1. #1
    WHT-BR Top Member
    Join Date: Dec 2010

    [EN] Azure Container Service: the cloud’s most open option for containers

    Corey Sanders
    November 7, 2016

    Containers are the next evolution in virtualization, enabling organizations to be more agile than ever before. I see this from customers every day! They can write their app once and deploy everywhere, whether dev, test or production. Containers can run on any hardware, on any cloud, and in any environment without modification. In short, they offer a truly open and portable solution for agile DevOps.

    With Azure Container Service (ACS), we provide customers a unique approach to managing containers in the cloud by offering a simple way for them to scale containers in production through proven open source container orchestration technology. Today we are announcing a series of updates to ACS that continue to demonstrate ACS is the most streamlined, open and flexible way to run your container applications in the cloud — providing even more customer choice in their cloud orchestrator. These updates, available today, include:

    • Kubernetes on Azure Container Service (preview): In July 2014, roughly a month after Kubernetes became publicly available, we announced support for Kubernetes on Azure infrastructure. Kubernetes 1.4 offered support for native Azure networking, load-balancer and Azure disk integration. Today, we are taking this support even further and announcing the preview release of Kubernetes 1.4 on Azure Container Service. This deeper and native support of Kubernetes will provide you another fully open source choice for your container orchestration engine on Azure. Now, customers will have more options to choose their cloud orchestrator with ACS providing support for three fully open source solutions in DC/OS, Docker Swarm and Kubernetes. You can read more here from Brendan Burns, one of the founders of Kubernetes, for his view on Kubernetes on ACS.
    • DC/OS Upgrade to 1.8.4: We’re pleased to share we have upgraded ACS support for DC/OS to version 1.8.4. This new version includes flexible new virtual networking capabilities along with job-scheduling and Marathon-based container orchestration baked right into the DC/OS UI. In addition, GitLab, Artifactory, Confluent Platform, DataStax Enterprise and our own Operations Management Suite are now available for one-click installation from the DC/OS Universe app store.
    • Open Source Azure Container Service Engine: Today, we are releasing the source code for the ACS Engine we use to create Azure Container Service deployments in Azure. This new open source project on GitHub will allow us to share with the community how we deploy DC/OS, Swarm and Kubernetes, and to collaborate on best practices for orchestrating containers on Azure, both in the public cloud and on Azure Stack. Furthermore, with the ACS Engine, you can modify and customize deployments of the service beyond what is possible today. Finally, with your help, we can take contributions from the community and improve the service running in Azure.

    We are seeing organizations of every size move their container-based solutions from dev/test environments to production in the cloud, especially as they discover the business agility opportunities containers make possible. In addition to delivering more choice and flexibility on ACS, we’re also enabling more streamlined agile development and container management through new updates, including these:

    • Azure Container Registry: Available in preview on Nov. 14, the Azure Container Registry is a private repository for hosting container images for use on Azure. Using the Azure Container Registry, you can store Docker-formatted images for all types of container deployments. In addition, the Azure Container Registry integrates well with the orchestrator offered by the Azure Container Service. When you use the Azure Container Registry, you will find it compatible with the open source Docker Registry v2 so you can use the same tools on ACR.
    • VS, VSTS and VS Code integration and deployment to Azure Container Service: Also on Nov. 14, we will release a new experience to enable you to easily set up continuous integration and deployment of multicontainer Linux applications using Visual Studio, Visual Studio Team Services and the open source Visual Studio Code. To continue enabling deployment agility, we expect to invest heavily in excellent dev-to-test-to-prod deployment experiences for container workloads using a choice of development and CI/CD solutions.
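    Because the registry described above is compatible with the open source Docker Registry v2 API, standard Docker tooling works against it unchanged. A minimal sketch, assuming a hypothetical registry name and image (the `myregistry`/`myapp` names and the credentials placeholders are illustrative, not from the announcement):

    ```shell
    # Hypothetical registry; ACR registries are addressed as <name>.azurecr.io.
    REGISTRY=myregistry.azurecr.io

    # Authenticate with stock Docker tooling (ACR speaks Docker Registry v2).
    docker login $REGISTRY -u <username> -p <password>

    # Build locally, tag into the private registry's namespace, and push.
    docker build -t myapp:1.0 .
    docker tag myapp:1.0 $REGISTRY/myapp:1.0
    docker push $REGISTRY/myapp:1.0
    ```

    The same image reference (`myregistry.azurecr.io/myapp:1.0`) can then be used in a DC/OS, Swarm or Kubernetes deployment on ACS.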

    Azure is the only public cloud with a container service that offers a choice of open source orchestration technologies, DC/OS, Docker Swarm and Kubernetes, making it easier for you and your team to adopt containers in the cloud using the tools you love. You can get these agile benefits and more! Go try out DC/OS, Swarm or Kubernetes, on Azure Container Service today! If you want to see more, make sure you watch Microsoft Connect(); next week!

    See ya around,


  2. #2

    Google Wants Kubernetes To Rule The World

    Timothy Prickett Morgan
    November 8, 2016

    At some point, all of the big public cloud providers will have to eat their own dog food, as the parlance goes, and run their applications atop the cloudy version of their infrastructure that they sell to other people, not distinct and sometimes legacy systems that predate the ascent of their clouds. In this regard, none of the cloud providers are any different from any major enterprise or government agency that struggles with any kind of legacy system.

    Search engine and online advertising giant Google wants its Cloud Platform business to compete against Amazon Web Services and Microsoft Azure and stay well ahead of IBM SoftLayer, Rackspace Hosting, and other public cloud wannabes, and it has two strategic advantages in eventually growing Cloud Platform to be as large as that of AWS and perhaps even larger than the rest of its businesses put together. (Amazon believes that AWS can be larger than its online consumer product sales business, which is now about seven times larger than AWS but nowhere near as profitable.) The first is that Google designs its own uber-efficient servers, storage, and networks and the datacenters that wrap around them, as well as the wide area networks that connect them in a global cluster that scales well beyond a million machines. The second advantage is not its vast library of software for storing data and analyzing it, but the open source Kubernetes container orchestration system that gets its inspiration from the company’s Borg cluster controller and its Omega follow-on.

    It has been a little more than a year since Google set Kubernetes free and put it under the control of the Cloud Native Computing Foundation, and the software has gone from the foundational release 1.0 in July 2015 to a much more production-ready release 1.4 at the end of September of this year. The Google Container Engine service – abbreviated GKE so as not to be confused with the Google Compute Engine raw VM service on the public cloud, and why not just call it Google Kubernetes Engine then? – was fired up shortly thereafter. With the KubeCon conference being hosted by the CNCF in the cloud capital of the world – Seattle, Washington, of course, where the big three have their operations – Kubernetes is on a lot of minds right now.

    Including, as it turns out, developers inside of Google who don’t actually work on Kubernetes itself. The funny bit is that Google developers are starting to want to run applications on top of Kubernetes for the same flexibility and portability reasons that Google is using as a sales pitch for choosing its container abstraction layer for on-premises and public clouds.

    “We regularly have conversations with people inside of Google about when are we going to bring some of the capabilities of Kubernetes inside of the Google and let them run some random application on top of Kubernetes, which is a complicated problem,” Tim Hockin, one of the original engineers on Kubernetes, tells The Next Platform. Hockin is acting as one of the voices for Kubernetes now that Craig McLuckie, who was the lead product manager for the Compute Engine service, one of the other founders of the Kubernetes project inside Google, and the related GKE container service, has left Google to run his own startup. Aparna Sinha, senior product manager for Google Container Engine, hopped on the call with Hockin to give us an update on where Kubernetes is at as a project and as a tool that Google itself is using.

    So, the obvious question is when can Google just use Kubernetes instead of Borg? (One might also reasonably ask when Hadoop, which is based on Google’s MapReduce concept, might replace the tool of that name and the Google File System, which was the inspiration of the Hadoop Distributed File System.) But Google has its own kind of legacy software problem which would seem to cut against shifting from its internal software to open source variants inspired by them.

    “What we are seeing is that for new applications, Google developers are looking at Borg or Kubernetes, and many of them are choosing Kubernetes,” says Sinha. “But I don’t think that it is practical to think that Gmail or search can move to Kubernetes.”

    But Google is more than just two workloads, and anything it creates to make its public cloud better can – and we would argue should – be consumed by Google just like any other company. The real test of the cloud is when there simply is no difference between the raw infrastructure that a hyperscaler uses internally and the capacity and functions it sells on its public cloud. All of the real innovation will move up the stack, to the application software.

    “There are a couple of ways to look at that,” says Hockin, who in his previous roles at Google worked on the Borg container management system and the cluster management tool that sits below it. “We have already gotten requests from inside Google for people who want to run atop Kubernetes, and we are working with them to use Kubernetes through the cloud, literally through Google Container Engine. We are just starting down that road, and it is very challenging because Google brings a lot of requirements that we were hoping to ignore for a while. Google is a hard customer to have. The larger question is could we use Kubernetes inside Google instead of Borg or alongside Borg – and that is a much harder question. We have many Borg clusters up and running, we have policies inside of Google that are all or nothing, so we can’t just upgrade one cluster to Kubernetes. Kubernetes is also missing hundreds and hundreds of features that Borg has – and whether they are good features or not is a good question, but these are things that Borg has and that people use. We don’t want to adopt all of those features in Kubernetes. So to bring Kubernetes in instead of Borg is an incredible challenge. That may never happen, or it may be on a five to ten year track, or I can imagine a certain end game where internally Borg has a dozen big customers and everyone else uses Kubernetes on our cloud.”

    All of the big cloud providers face that same set of options and conundrums. It is just funny to think of Borg as being analogous to a mainframe at some point. (Maybe sooner than we think.) But of course, all of the services being created by public clouds will have their own maturation curve and will linger past their prime because change is sometimes as costly as not changing.

    We are always looking for the next platform, as the name of this publication implies, and it is clear to us that Kubernetes is a contender as the centerpiece of a compelling platform based on mostly open source technologies. (Joe Beda, an ex-Googler who worked on Kubernetes beside McLuckie and who helped sell Urs Hölzle, the head of the search engine giant’s massive infrastructure effort, on the idea of creating a more universal Borg and open sourcing it, outlined what the next platform might look like, and we have also discussed the integration of Kubernetes with the Prometheus monitoring system.)

    “The extensibility and the flexibility that Kubernetes offers is what really makes it a platform,” says Sinha. “You can run Kubernetes on virtual machines, on bare metal, on any cloud, and that is the beauty of it. It gives you that choice. You don’t just have a choice of clouds. You have a choice of storage, networks, and schedulers and you can plug those in as well, and this is what makes Kubernetes more applicable to the enterprise because they can tailor it to their environment.”


  3. #3

    Microsoft, which adopted the alternative Mesos management tool as its container control layer on the Azure cloud’s container service and which also supports Docker Swarm, announced this week that it will be supporting Kubernetes, too. To be specific, Kubernetes has been supported on raw VMs on Azure for a while; soon, the orchestration layer on the Azure Container Service will be able to be set up using Kubernetes rather than Mesos (or, more precisely, the commercial-grade Mesos that is known as DC/OS and that is sold by Mesosphere). This provides Azure with a deeper, more native integration with Kubernetes, and interestingly, Microsoft is also opening up the code for its ACS Engine, the go-between that sits between Azure raw infrastructure and the DC/OS, Swarm, or Kubernetes container orchestration tools. This is, in effect, like Microsoft saying it is open sourcing the part of its Borg stack that it didn’t borrow from the open source community. Microsoft clearly also understands the concept of choice, and its embrace of Linux on Azure is but another example.

    But Google and Microsoft are different. Microsoft wants to support everything on Azure, while Google wants Kubernetes everywhere. (In a sense, Microsoft is living up to the Borg name, assimilating all orchestrators, more than Google is.) And quite literally, Kubernetes is how Google is playing up to the on-premises cloud crowd, giving it differentiation from AWS (which won’t sell its infrastructure as a stack with a license, although it says VMware is its private cloud partner) and Microsoft (which still doesn’t have its Azure Stack private cloud out the door).

    “That is exactly the right way to think about it, that is exactly the intent, and that is how companies are using Kubernetes today and cluster federation is meant to build on top of that,” says Sinha. “Our strategy with Kubernetes has always been to provide an open source implementation that companies can use identically on premise as well as in your choice of clouds. So it is not just hybrid on-premises and public cloud, but it is multi-cloud, and there are customers that really take advantage of that. They deploy workloads wherever they want: On AWS, on GCP, or on premise, and federation allows them to build a control plane on top of all that.”

    Cluster federation was introduced in Kubernetes 1.3 earlier this year, which allows multiple Kubernetes clusters to be spread across multiple clouds, public or private, or across different regions of a cloud provider and have a unified control plane that allows for the clusters and their replica sets to be provisioned as if they were on one giant instance of infrastructure. This federation layer obviously helps with resiliency and disaster recovery, but is also intended to allow for policies to be set that can push certain kinds of workloads and data only to specific clusters in specific regions if there are compliance or other restrictions. Just because you can put something anywhere on a cloud doesn’t mean developers should be able to do so.
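    In the federation model described above, a workload is submitted once to the federation control plane, which then spreads it across member clusters. A minimal sketch in the era-appropriate federation v1 style; the cluster names, weights, and the `federation` kubectl context are assumptions for illustration:

    ```shell
    # Sketch: a ReplicaSet given to the federation control plane is divided
    # among member clusters according to an (optional) spread preference.
    cat > federated-rs.yaml <<'EOF'
    apiVersion: extensions/v1beta1
    kind: ReplicaSet
    metadata:
      name: nginx
      annotations:
        # Hypothetical spread policy: rebalance 4 replicas evenly across
        # two member clusters named after their regions.
        federation.kubernetes.io/replica-set-preferences: |
          {"rebalance": true,
           "clusters": {"us-east1": {"weight": 1}, "europe-west1": {"weight": 1}}}
    spec:
      replicas: 4
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
    EOF

    # Submitted against the federation API endpoint, not a single cluster:
    # kubectl --context=federation create -f federated-rs.yaml
    ```

    Policy restrictions of the kind the article mentions (keeping certain data in certain regions) are expressed the same way: by weighting or excluding specific member clusters.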

    The combination of abstraction and central control through federation is a powerful one, and a technique that Google returns to again and again in its own infrastructure.

    “One of the things that the Kubernetes community is absolutely fanatical about is abstraction from cloud providers and their implementations,” Hockin explains. “We do not want to couple ourselves, with any part of the Kubernetes system, to Google Cloud Platform or Amazon Web Services or any on-premises infrastructure. As much pain as that causes us in the implementation, it is really important. This is reflected in our APIs and in our storage, for instance. The Kubernetes system uses abstract storage, but it gets bound on the back-end to concrete storage. So as a developer I just ask for 100 GB of fast disk, and that definition of fast depends on what the cluster administrators have set up. In my on premises Kubernetes clusters that might mean a NetApp appliance, and in GCP it might mean persistent disk and in AWS it might mean Elastic Block Storage, but as a developer I don’t actually know or care.”
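    Hockin’s “100 GB of fast disk” maps directly onto a PersistentVolumeClaim bound to an admin-defined storage class. A minimal sketch using the pre-1.6 annotation syntax of that era; the claim name and the class name “fast” are assumptions, and what “fast” resolves to (Azure disk, GCE PD, EBS, a NetApp appliance) is entirely up to the cluster administrator:

    ```shell
    # The developer states only an abstract request; the binding to concrete
    # storage happens on the back-end, per cluster.
    cat > claim.yaml <<'EOF'
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: app-data
      annotations:
        # Pre-1.6 beta annotation; "fast" is whatever the admin defined.
        volume.beta.kubernetes.io/storage-class: fast
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi
    EOF

    # kubectl create -f claim.yaml   # binds to whatever backs "fast" here
    ```

    The same claim file works unmodified on any cloud or on premises, which is precisely the decoupling the quote describes.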

    Here at The Next Platform, we have a hard time believing that people won’t know or care, but Hockin’s point is taken. They will always want more speed, more capacity, and less latency. And scale. More scale, too. And that is why Google keeps pushing the scale barrier for Kubernetes, which it clearly knows how to do with some Borg clusters spanning more than 50,000 server nodes and lots of clusters spanning more than 10,000 nodes.

    “We have a fairly stringent API latency requirement, and that is how we define the scaling limits, specifically for Google Cloud Platform,” explains Sinha. “With Kubernetes 1.3, we announced support for 2,000 nodes, and we intend to scale that up to 5,000 nodes. Externally, users do push it to their limits, and there is no hard set limit as such. But what we have seen is that large global customers are working with clusters with 1,000 or 2,000 nodes, and then they want to have separate clusters beyond that which work together, which is where federation comes in.”

    Hockin says that Google has internally tested Kubernetes running across dozens of clusters that are federated, and that this federation feature was created specifically so customers could glue together clusters, from a management and workload sharing perspective, in every Google Cloud Platform region to take advantage of all of the geographical diversity that allows.

    “We are definitely shooting for dozens if not low hundreds of clusters in a federation, and each cluster could have from 2,000 to 5,000 nodes and up to 60,000 pods,” says Hockin. “If you take a dozen clusters in a dozen cloud regions times 5,000 nodes each, you have got quite a heap of machines.” (That’s 720,000 nodes if you want to be precise, and that is a lot of iron, even if a node is just a VM. At current densities of maybe 40 VMs per two-socket server, that is still 18,000 physical servers.)

    Google might want to be more careful about enabling future competition. . . . Or not. Kubernetes is indeed a pilot, steering customers ultimately to Cloud Platform. Or so it must be thinking.

  4. #4

    Bringing Kubernetes Support to Azure Container Service

    Brendan Burns | Partner Architect at Microsoft & Kubernetes co-founder
    November 7, 2016

    With more than a thousand people coming to KubeCon in my hometown of Seattle, nearly three years after I helped start the Kubernetes project, it’s amazing and humbling to see what a small group of people and a radical idea have become after three years of hard work from a large and growing community. In July of 2014, scarcely a month after Kubernetes became publicly available, Microsoft announced its initial support for Azure. The release of Kubernetes 1.4 brought support for native Microsoft networking, load-balancer and disk integration.

    Today, Microsoft announced the next step in Kubernetes on Azure: the introduction of Kubernetes as a supported orchestrator in Azure Container Service (ACS). It’s been really exciting for me to join the ACS team and help build this new addition. The integration of Kubernetes into ACS means that with a few clicks in the Azure portal, or by running a single command in the new python-based Azure command line tool, you will be able to create a fully functional Kubernetes cluster that is integrated with the rest of your Azure resources.

    Kubernetes is available in public preview in Azure Container Service today. Community participation has always been an important part of the Kubernetes experience. Over the next few months, I hope you’ll join us and provide your feedback on the experience as we bring it to general availability.

    In the spirit of community, we are also excited to announce a new open source project: ACS Engine. The goal of ACS Engine is to provide an open, community-driven place to develop and share best practices for orchestrating containers on Azure. All of our knowledge of running containers in Azure has been captured in that repository, and we look forward to improving and extending it as we move forward with the community. Going forward, the templates in ACS Engine will be the basis for clusters deployed via the ACS API, and thus community-driven improvements, features and more will have a natural path into the Azure Container Service. We’re excited to invite you to join us in improving ACS.

    Prior to the creation of ACS Engine, customers with unique requirements not supported by the ACS API needed to maintain variations on our templates. While these differences started small, they grew larger over time as the mainline template was improved and users also iterated on their templates. These differences and drift really hamper users’ ability to collaborate, since their templates are all different. Without the ability to share and collaborate, it’s difficult to form a community, since every user is siloed in their own variant.

    To solve this problem, the core of ACS Engine is a template processor, built in Go, that enables you to dynamically combine different pieces of configuration together to form a final template that can be used to build up your cluster. Thus, each user can mix and match the pieces to build the final container cluster that suits their needs. At the same time, each piece can be built and maintained collaboratively by the community. We’ve been beta testing this approach with some customers, and the feedback we’ve gotten so far has been really positive.
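    The input to that template processor is a small cluster-definition file that the engine expands into a full Azure Resource Manager template. A hedged sketch of such a definition, with field names following the acs-engine example files; the sizes, names, and placeholder credentials are illustrative:

    ```shell
    # Hypothetical cluster definition: one master, three Kubernetes agents.
    cat > kubernetes.json <<'EOF'
    {
      "apiVersion": "vlabs",
      "properties": {
        "orchestratorProfile": { "orchestratorType": "Kubernetes" },
        "masterProfile": {
          "count": 1,
          "dnsPrefix": "k8sdemo",
          "vmSize": "Standard_D2_v2"
        },
        "agentPoolProfiles": [
          { "name": "agentpool1", "count": 3, "vmSize": "Standard_D2_v2" }
        ],
        "linuxProfile": {
          "adminUsername": "azureuser",
          "ssh": { "publicKeys": [ { "keyData": "ssh-rsa AAAA-placeholder" } ] }
        },
        "servicePrincipalProfile": {
          "servicePrincipalClientID": "placeholder-client-id",
          "servicePrincipalClientSecret": "placeholder-secret"
        }
      }
    }
    EOF

    # The engine expands the definition into deployable ARM templates,
    # roughly: acs-engine kubernetes.json  ->  _output/<dnsPrefix>/azuredeploy.json
    ```

    Swapping `orchestratorType` between Kubernetes, Swarm and DCOS is what gives one shared template base for all three orchestrators.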

    Beyond services to help you run containers on Azure, I think it’s incredibly important to improve the experience of developing and deploying containerized applications to Kubernetes. To that end, I’ve been doing a bunch of work lately to build a Kubernetes extension for the really excellent, open source, Visual Studio Code. The Kubernetes extension enables you to quickly deploy JSON or YAML files you are editing onto a Kubernetes cluster. Additionally, it enables you to import existing Kubernetes objects into Code for easy editing. Finally, it enables synchronization between your running containers and the source code that you are developing for easy debugging of issues you are facing in production.

    But really, a demo is worth a thousand words, so please have a look at this video:

    Of course, like everything else in Kubernetes it’s released as open source, and I look forward to working on it further with the community. Thanks again, I look forward to seeing everyone at the OpenShift Gathering today, as well as at the Microsoft Azure booth during KubeCon tomorrow and Wednesday. Welcome to Seattle!
