In 2015, when the Open Container Initiative (OCI) was launched to create industry standards around containers, it used Docker’s container runtime and image format as the base. But now a number of companies are undertaking a project that would break the OCI stack away from Docker in favor of Kubernetes, Google’s open source container orchestration engine.
Scott M. Fulton III
22 Sep 2016
The new project is geared toward Kubernetes: it will interface directly with Kubernetes pods, enabling Kubernetes — not Docker — to launch and manage containers at scale.
“What we want is a daemon that can be used by Kubernetes for running container images that are stored on Docker registries,” said Dan Walsh, the long-time SELinux project lead, and consulting engineer with Red Hat, speaking with The New Stack. Red Hat’s and Google’s developers are taking the lead with this project, for now, called simply OCID (OCI daemon). “In order to do that,” Walsh continued, “we wanted to build a series of libraries, to be able to facilitate running these container images.”
The maintainers of the project assert that OCID is not a “Docker fork,” though the project serves as evidence of a split in the container ecosystem, especially as Kubernetes continues to gain its own traction in the container marketplace.
Earlier this year, Docker, Inc. made its own orchestration engine, Docker Swarm, a part of its Docker Engine as of release 1.12. As a result, Docker Engine now allows users to manage complex containerized applications without additional software. The Docker orchestration capabilities are opt-in; they must be activated by the user, though many worry that not opting in may lead to backward compatibility issues down the road.
Docker’s move to add Swarm to Docker Engine added friction to a community already at odds. In late August, the interest in forking Docker surfaced in discussions with vendors and users. Publicly, developers were vocal about Docker’s aggressive release schedule and how it put third-party system providers at odds with their own customer base.
At present, Red Hat’s engineers are taking the lead with the OCID project, which kicked off in June of this year. It’s being developed as part of the Kubernetes Incubator initiative, which supports projects that, in turn, support Kubernetes. Its lead maintainers are Google’s Vishnu Kannan, and Red Hat’s Mrunal Patel.
“We don’t really need much from any container runtime, whether it’s Docker or [CoreOS’] rkt — they need to do very little,” said Kelsey Hightower, Google’s staff developer advocate, “mainly give us an API to the kernel. So this is not just about Linux. We could be on Windows systems… if that’s the direction the community wanted to go in, to support these ideas. Because it’s bigger than Docker Inc., at this point. This is about, how do you run a containerized application?”
Front and Center
If OCID (pronounced O-C-I-D) does become, as some suggest, a reference implementation of a container engine, it will provide a free and open source option for running OCI containers at scale. It will include runc, the container runtime based on libcontainer, which Docker donated to the OCI for use as a free standard. It will also include the code necessary for pushing images to and pulling them from repositories hosted by container registries. It will support the Container Network Interface, built by CoreOS, for modeling plug-ins independently from the engine hosting the containers.
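The Container Network Interface mentioned above describes each network to the runtime through a small JSON configuration file. As a rough illustration (the network name, bridge name, and subnet here are invented for the example), a configuration for the standard `bridge` plug-in looks something like this:

```json
{
  "cniVersion": "0.2.0",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
```

The engine hosting the containers never needs to know how `bridge` or `host-local` work internally; it just invokes whatever plug-in binary the `type` field names, which is what lets networking evolve independently of the engine.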
Yet the component that is OCID’s raison d’être is called oci-runtime. As its project page on GitHub describes it, “This is an implementation of the Kubernetes Container Runtime Interface (CRI) that will allow Kubernetes to directly launch and manage Open Container Initiative (OCI) containers.”
In the Kubernetes environment, there is a component named the kubelet whose job is to manage pods — groups of containers that typically comprise an application. A kubelet is capable of acting as a stand-alone daemon, a kind of local overlord for the containers in its pod. The key project in OCID, oci-runtime, will implement a new class of interface between kubelets and Kubernetes, enabling the orchestrator to effectively manage the entire container lifecycle.
“What we want to do now is refactor the kubelet code base,” remarked Google’s Hightower, “so that the way we integrate with Docker, rkt, and now OCID is much cleaner and consistent. We want to provide a well-defined interface to say, in order for a container runtime engine to be compliant with Kubernetes, we’re going to have a Container Runtime Interface. The CRI will be an API abstraction of what your container runtime needs to support, in order for us to certify it as something we can run underneath Kubernetes.”
If OCID sounds like a wholly new, feature-complete container engine, it is not. Despite how it has been characterized to date, there is one very significant omission.
“Right now, at this point, OCID is not implemented to be able to build an OCI image,” said Red Hat’s Walsh. “You wouldn’t have the ability to create the image in the first place… You still need a tool like Docker for building the OCI image — [which] is basically a Docker image.”
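To make the division of labor concrete: image building stays with Docker's tooling. A trivial, hypothetical Dockerfile (the image and binary names below are invented for the example) is still processed by `docker build`; OCID's role would begin only once the resulting image lands in a registry:

```dockerfile
# Hypothetical example — building remains Docker's job; OCID would only
# pull and run the resulting image.
FROM alpine:3.4
COPY app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

After a `docker build -t myorg/app .` and a push to a registry, a Kubernetes cluster backed by OCID could fetch and execute the image without a Docker daemon on the node.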
“Our goal — and, I think, the industry’s goal — is to have a standard implementation of the container runtime, and a standard format for how container images are built,” remarked Joe Fernandes, Red Hat’s senior director of product management for OpenShift (Red Hat’s commercial orchestration engine, based on Kubernetes). “So that different vendors can build different solutions on top and the ecosystem can grow.”
The Docker image format has established common ground for the development ecosystem, Fernandes explained. He perceives the publication of that format as a standard to be part of the job of OCI.
“Once you have the format, then it comes down to the runtime,” he said. “Some people thought that, when Docker contributed libcontainer — which became runc — that was the runtime. It was a piece of the runtime — the lowest piece. But these other modules also make up the runtime. So what we’re doing here is creating standard implementations for these modules.”
Last December, that limitation appeared to be settled: presentation slides suggested it had been firmly decided that OCI’s focus would be restricted to the container runtime (runc) and the container image format.
But Red Hat’s developers, aided by Google’s developers, are arguably extending the definition of “container runtime.” Fernandes explained it as the foundation tier of a much broader implementation. And Hightower explained it as, by definition, including an interface to an orchestrator component — for now, Kubernetes, but theoretically not restricted to Kubernetes, assuming any other orchestrator wants to pick up the same API and run with it.
“OCID is really the natural progression of OCI,” explained Hightower. “Their goal is to specify some standards around pretty much what Docker started: this idea that we can take our applications, package them into an image format, put them on a repository, and share them with anyone. And once they’re shared, pull them down, extract them, and run them in a very consistent way. ”
In order to do that, OCI not only needs to specify how to run containers, Hightower explained, but also how to download and network them, giving people something they can actually use.
“If they’re going to be a bunch of documents and libraries, let’s give people something that is, I guess in many ways, comparable to Docker itself,” he continued, “that is focused only on those bits that I just mentioned. That way, it’s less of a mystery on what OCI is; now you’ll have a fully specified, integrated thing that you can actually use in some of these projects.”
One of OCID’s principal components is skopeo, a tool for validating and fetching container images from Docker repositories that is today one of Red Hat’s major contributions to Docker itself.
Walsh explained skopeo as originally having been developed as a tool that examined a container image, and produced a JSON file that could be used as its manifest. That tool was later leveraged for pulling and pushing images from a Docker registry. Red Hat then enrolled it as part of its Project Atomic, and the Go library-based derivative is now available on GitHub as containers/image.
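The manifest skopeo produces is ordinary JSON: a Docker schema-2 manifest lists a config blob and an ordered set of layer digests, and extracting those references requires no daemon at all. The snippet below works over a trimmed-down sample manifest (the digests are placeholders, not real images):

```python
import json

# A trimmed-down Docker image manifest (schema version 2) — the kind of
# JSON document skopeo emits when it inspects an image in a registry.
# The digests below are placeholders for illustration only.
manifest_json = """
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "digest": "sha256:aaaa"
  },
  "layers": [
    {"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
     "size": 2101570,
     "digest": "sha256:bbbb"},
    {"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
     "size": 304,
     "digest": "sha256:cccc"}
  ]
}
"""

def layer_digests(manifest: str) -> list:
    """Return the ordered layer digests a client would pull from a registry."""
    doc = json.loads(manifest)
    return [layer["digest"] for layer in doc["layers"]]
```

Because the manifest is self-describing, any tool that can parse JSON and speak the registry's HTTP API can pull an image — which is why this functionality could be carved out of Docker into a shared library.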
In addition to skopeo, Walsh said, the OCID project includes a copy-on-write file system based on a Docker component called the graph driver.
“We started working several months ago,” he told us, “on splitting out the copy-on-write file system, so the other tools besides Docker could share [it]. We actually found it difficult to continue to work underneath Docker to get this as a separate module, so we decided to pull it out and concentrate on getting the module totally correct. Hopefully, at some point, we want to open up a pull request to get this back into Docker.”
In the meantime, he said, the current form of the library also appears on GitHub as containers/storage. While Docker does have its own copy-on-write file system, Walsh pointed out, it runs entirely in its own exclusive memory.
“We’re using the same shared libraries that OCID is using,” he said. “We would love to see some of these libraries going into future versions of rkt, and we would actually like to see them going back into Docker… What we want to do with storage is move the locking of file system storage to file lock, so that multiple applications could share the storage at the same time. We would love to get that back into the Docker daemon at some point.”
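The locking change Walsh describes rests on a standard mechanism: POSIX advisory file locks, which let independent processes coordinate access to shared files without a mediating daemon. Here is a minimal sketch of the idea in Python's stdlib `fcntl` — the class and layout are invented for illustration, not taken from containers/storage:

```python
import fcntl
import os

# Sketch of the idea Walsh describes: instead of one daemon holding the
# storage state in its own memory, every tool takes an advisory file lock
# on a shared lock file before touching the layer store, so Docker, OCID,
# rkt, etc. could coexist on the same storage. Names are illustrative.
class SharedStore:
    def __init__(self, root: str):
        os.makedirs(root, exist_ok=True)
        self.root = root
        self.lock_path = os.path.join(root, "storage.lock")

    def write_layer(self, name: str, data: bytes) -> None:
        with open(self.lock_path, "w") as lock:
            fcntl.flock(lock, fcntl.LOCK_EX)   # exclusive: one writer at a time
            try:
                with open(os.path.join(self.root, name), "wb") as f:
                    f.write(data)
            finally:
                fcntl.flock(lock, fcntl.LOCK_UN)

    def read_layer(self, name: str) -> bytes:
        with open(self.lock_path, "w") as lock:
            fcntl.flock(lock, fcntl.LOCK_SH)   # shared: many concurrent readers
            try:
                with open(os.path.join(self.root, name), "rb") as f:
                    return f.read()
            finally:
                fcntl.flock(lock, fcntl.LOCK_UN)
```

An exclusive lock for writers and a shared lock for readers is the classic shape here; the real library has to do far more (copy-on-write layers, graph-driver plumbing), but this is the coordination primitive that makes "multiple applications sharing the storage" possible.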
As a project in development, OCID may be certain things today that it will cease to be in the future, and vice versa. One of the things OCID is not, at least today — surprisingly — is a mechanism for running OCI containers exclusively. Nor will OCID fall under the scope of the OCI project, said Patel, for certifying containers as OCI compliant. Documentation co-authored by Patel and Kannan explicitly calls for flexibility of container format support, including continuing to support Docker as OCI goes forward, and also supporting everyday TAR files.
As Joe Fernandes reminded us, Docker began its existence as a complete container system, and over the months since (it’s still a very young product) became more modularized — runc being one of those modules. Although runc’s predecessor was designed to be pulled into Docker Engine, runc can just as easily be pulled into CoreOS’ rkt today.
“You can view OCID along those lines,” Fernandes explained. “There’s other modules here that we’d like to carve out and make them standards… We want to work with the Docker community, who could pull these back in, now that they’re standard interfaces — versus having a sort of monolith that has it all blended together. It’s the whole UNIX philosophy of, ‘Do One Thing Well.’ We created the OCID project to house that work — to test out these ideas, and come up with standard implementations that then can be pushed back into OCI, and then ultimately pulled into the different implementations, Docker being the obvious one.”
As Fernandes and Walsh put it to us, the omission of an “engine” component as the creator of container images for OCID is by design. They agreed that it’s the engine where vendors such as Red Hat, Docker, CoreOS, and VMware can deliver their respective “value-adds” and compete with one another on value and service.
Yet at the time of this writing, there remain two very curiously phrased elements of the open source documentation on GitHub (the entirety of which, of course, remains under continual review).
First, there’s this description that appears on the project page of the kubelet interface: “For the first release, oci-runtime will continue to use docker-engine for managing images. The image management functionality will be separated from the runtime functionality so each could have different implementations which could potentially be switched. It will be ideal to support the OCI image specification for images once it reaches v1.0.”
That description may be interpreted as stating that the runtime component will lean on Docker code for support, at least in the initial rounds, but it also sets the stage for OCID to use other engines, decreasing its reliance on the Docker Engine itself.
The second, equally curious, phrase is the hook in the OCID documentation that links OCID to Google: specifically, its explicit self-declaration as a part of a Kubernetes environment. Up front, OCID states its purpose in life is to provide management for the members of Kubernetes’ pods — and that declaration takes precedence over its management of containers.
Google’s Hightower believes that Kubernetes has earned a position in the container community that could perhaps be described as the arbiter of fairness.
“This is more about leveling the playing field,” he said. “Right now, Docker is the most well-supported container runtime, because we at Google and the community have done the majority of the work to make Docker work as a first-class runtime. But now that we see that people want choice, and it has always been our vision to support multiple container runtimes, one step in doing that is creating this Container Runtime Interface. As part of that, we’re going to refactor our current code base to implement the [CRI].”
For the CRI to work as Hightower explained it, however, will require a certain kind of component that strikes at the heart of the very first argument that ever erupted over Docker: specifically, a system container, capable of launching itself. Such a component effectively kicks systemd, Linux’s now preferred application initialization mechanism, back out from under the rug.
“We want to ship everything in the form of containers, so we need a container runtime that doesn’t use orchestration,” explained Red Hat’s Walsh, “because the orchestration needs it. So we have a chicken-and-egg situation. By using system containers, we’re able to pull an image using the containers/image library, we’re able to store it on containers/storage, and then we’re going to execute runc, but in this case, we’ll be running it as part of systemd.”
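One plausible shape for such a "system container" is a unit file that has systemd invoke runc directly, so the runtime is up before any orchestrator exists to schedule it. The sketch below is hypothetical — the paths, bundle layout, and container name are invented, not taken from the OCID project:

```ini
# Hypothetical sketch: a "system container" started by systemd itself, so
# the container comes up before (and independent of) any orchestrator.
# Paths and names are illustrative, not from the OCID project.
[Unit]
Description=etcd system container
After=network.target

[Service]
ExecStart=/usr/bin/runc run --bundle /var/lib/containers/etcd etcd
ExecStop=/usr/bin/runc kill etcd
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Because systemd supervises the runc process like any other service, the cluster's foundational containers (say, etcd or the kubelet itself) can start at boot, resolving the chicken-and-egg problem Walsh describes.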
It’s one way, Walsh believes, that a containerized environment can effectively launch itself — and Kubernetes in turn — in production.
Unquestionably, the scope of OCI has expanded in its first year of operation. There had been some talk at the beginning that, once the container format specification was released, OCI’s work would be done. Now, we see two of its most prominent members pushing for an open copy-on-write file system library, an open registry maintenance library, a networking scheme, and a bootstrap mechanism that relies upon an architectural feature Docker would prefer to remain deprecated. Could a container security library be far behind?
The vital component — the container engine itself — will be a matter left to each data center to resolve for itself. There, Docker might have an early advantage in having created the market in the first place. But it could find itself competing one-on-one with OpenShift, on terms that Red Hat helped create.
“I would say that OCID is going to be used by Kubernetes. Our goal, in an OpenShift environment, is to make Kubernetes use OCID as its container runtime environment. Then OpenShift will use Kubernetes, so we’ll get the advantages of OCID.”
So if you were to use Kubernetes in the future, you’d have OCID, and the decision would be made for you.
Is This the Beginning of the End for OpenStack?
Developers are not spinning virtual machines (VMs) up and down as expected.
Aug 19, 2016
I was struck by a conversation I had earlier this year during the OpenStack conference in Austin with a technical architect from one of the bigger players. He was seeing baffled IT teams who had OpenStack clouds in which the users (developers) were not spinning virtual machines (VMs) up and down as expected. They were just deploying a bunch of VMs and then leaving them running for long periods. When the IT folks investigated, they found the VMs were Docker host VMs and the developers were now deploying everything as containers. There was a lot of dynamic app deployment going on, just not at the VM level.
Then recently Mirantis announced that it would be porting the OpenStack IaaS platform to run as containers, scheduled (orchestrated) by Kubernetes, and making it easy to install Kubernetes using the Fuel provisioning tool. This is a big shift in focus for the OpenStack consulting firm, as it aims to become a top Kubernetes committer.
OpenStack, containers and Kubernetes all exist for a singular purpose: to make it faster and easier to build and deploy cloud-native software. It’s vital to pay attention to the needs of the people who build and deploy software inside enterprises, OpenStack’s sweet spot today.
Questioning OpenStack’s Relevancy
If I put myself in a developer’s shoes, I am not sure why I care about spinning up VMs and, hence, OpenStack. Docker containers came along and made packaging and deploying microservices much easier than deploying into VMs. And there’s now a strong ecosystem around container technology to fill the gaps, extend its capabilities and make the whole thing deployable in production. The result has been phenomenal growth in container usage in a very short amount of time. The remaining operational problem for the average enterprise is deployment of its container stack of choice onto bare metal or its existing hypervisor, which it can do today with tools it already has, such as Puppet/Chef/Salt or, in the future, using Fuel.
Of course, this focuses on the developers working on new stuff or refactoring apps. Container penetration is small relative to the mass of existing systems, as lots of things are not in containers today and will be happily uncontained for years to come. So, there’s obviously still a need for VMs. Is that why OpenStack still matters?
Problem one is that OpenStack initially was a platform to arm service providers to compete with AWS, and when that didn’t pan out, it refocused on being the infrastructure as a service (IaaS) for new apps. There was a time when it was hard to read an article about OpenStack without hearing about “pets vs. cattle,” and OpenStack was designed to herd cattle. That was the reason to deploy it, even if you already had vSphere or Hyper-V with automation. It was tough to migrate existing virtualized apps to OpenStack without changes.
Problem two is that OpenStack itself is a large and complex collection of software to deploy. It has itself become a big, complex pet, which is why Mirantis and others can make a living providing services, software and training. So an OpenStack deployment looks like a non-trivial cost and time investment—not to enable the exciting cloud-native new stuff, but the stuff that is already running just fine elsewhere in the data center. That’s a tough sell.
That’s why I question the future of OpenStack.
This is not to say that organizations with OpenStack somehow made a mistake: Giving their users on-demand cloud app environments is a good call. However, if they were making the same decisions today, those enterprises would need to think very hard about what their developers and DevOps teams would prefer: a dynamic container environment perhaps based on Fuel, Docker and Kubernetes—on-premises or in a public cloud—versus an on-prem private IaaS such as OpenStack.
Tough times ahead.
About the Author: Mathew Lodge
Mathew Lodge is Chief Operating Officer at Weaveworks Inc. and was previously vice president in VMware’s Cloud Services group. Mathew has 20+ years’ diverse experience in cloud computing and product leadership. He has built compilers and distributed systems for projects including the International Space Station, helped connect six countries to the Internet for the first time, and managed a $630 million router product line at Cisco. Prior to VMware, Mathew was senior director at Symantec in its $1 billion-plus information management group.
Red Hat still plans on being The OpenStack company
Other companies -- Canonical, SUSE, and Mirantis -- all plan on being OpenStack powers, but Red Hat shows it's determined to be number one with its latest OpenStack cloud release.
Steven J. Vaughan-Nichols
September 1, 2016
Of course, you think of Red Hat as being The Linux company. Everyone does. But, if Red Hat has its way, you'll think of them as The cloud company. Red Hat is showing once more its intention to be the king of the OpenStack hill with its release of Red Hat OpenStack Platform (RHOP) 9.
This is a highly scalable, open-source Infrastructure-as-a-Service (IaaS) platform. It's designed to deploy, scale, and manage private cloud, public cloud, and Network Functions Virtualization (NFV) environments. It's based on the OpenStack community "Mitaka" release. The focus of this release was to make the notoriously hard-to-install OpenStack easier to deploy.
RHOP 9 is based on Red Hat Enterprise Linux (RHEL) 7.2. This version of Red Hat's flagship operating system was designed to make it more cloud and container friendly than ever. RHOP also features Red Hat Ceph Storage 2 for software-defined storage and Red Hat CloudForms for cloud management and monitoring.
With Red Hat Ceph, RHOP 9 includes 64TB of object and block storage at no charge for customers evaluating a robust, scale-out cloud storage solution.
CloudForms provides inherent discovery, monitoring, and deep inspection of OpenStack resources. This enables policy-based operational and life-cycle management decisions across all infrastructure components and virtualized workloads. Red Hat didn't address how this will fit in with its recent purchase of Ansible, the popular DevOps tool.
The top features in this release are:
Automated updates and upgrades with Red Hat OpenStack Platform Director -- Red Hat enables users to upgrade their OpenStack deployments through the automation and validation mechanisms of the Red Hat OpenStack Platform Director. This, in turn, is based on the upstream community project TripleO (OpenStack on OpenStack). This in-place upgrade tool offers a simplified means to take advantage of the latest OpenStack advancements, while preventing downtime for production environments.
Live migration improvements and selectable CPU pinning from OpenStack Compute (Nova) -- The Compute component now offers a faster, enhanced live migration process, letting system administrators observe its progress and even pause and resume the migration task. A new CPU pinning feature can dynamically change the hypervisor behavior with latency-sensitive workloads such as NFV, enabling more fine-grained performance control.
Tech Preview of Google Cloud Storage backup driver in OpenStack Block Storage (Cinder) -- As part of Red Hat's continued collaboration with Google, new disaster recovery policies in RHOP 9 now extend to the public cloud using integrated drivers created for Google Cloud Storage. This new feature enables more secure backups of critical data across the hybrid cloud.
Besides looking for private and hybrid IaaS cloud customers, Radhesh Balakrishnan, the general manager of Red Hat's OpenStack initiative, is looking to expand to telecoms. "With this release of Red Hat OpenStack Platform 9, we continue to add capabilities to meet the production requirements of enterprises rolling out private clouds and service providers deploying NFV."
Red Hat is also working closely with its partners Dell and Intel to offer corporate customers a complete hardware/software vertical stack for one-stop OpenStack buyers. Will it work? The OpenStack vendor market is still full of competitors, but Red Hat is making it clear that it means to be at the top when all is said and done.
Red Hat grows headcount by 25 percent in first half of fiscal 2017
Red Hat has onboarded 1,000 new hires in the first half of the year, particularly in sales, Chief Finance Officer Frank Calderoni told analysts Wednesday – a 25 percent increase in its workforce.
By the end of the year, the company will scale its hiring plans back to normal, CEO Jim Whitehurst says. And sales – not acquisitions – will be how the Raleigh-based, open-source technology firm plans to reach its goal of $5 billion in revenue within five years.
Lauren K. Ohnesorge
Sep 22, 2016
Speaking after the firm reported $600 million in second quarter revenue, up 19 percent year-over-year, Whitehurst says the $5 billion goal he outlined earlier this year is based on markets the company already plays in – no acquisitions required.
“That said, I do believe we’re going through a shift, not only around hybrid cloud in terms of architecture, but also the application architecture that’s being driven by containers,” he says. “We will continue to look for ways to build our container platform.”
Deals totaling more than $1 million rose more than 60 percent in the quarter, Whitehurst says. Three of those deals were in OpenStack, he says.
Whitehurst attributes the increase to the company “doing an even better job of selling” across its entire portfolio – not just its flagship Red Hat Enterprise Linux. And new sales hires, which he says will temper back to normal for the rest of the fiscal year, are a big part of the strategy.
Calderoni says the focus being imprinted on the sales team is “not just selling an operating system,” but “selling a portfolio.” And that’s how the company hopes to compete down the line in replacement cycles.
“We’re trying to balance driving current results with investing in these new products for the long run,” Whitehurst adds.
And acquisitions could be on the table. Last year, Red Hat announced a deal to buy a firm close to home, Durham-based open-source automation firm Ansible.
“It’s been a leading element of our management portfolio in such a short period of time,” Calderoni says of Ansible, adding that the company has seen “great alignment” when it comes to the business.
Whitehurst says that if Red Hat does look for buyouts, it will look for complementary technologies – such as the automation added with the Ansible buy.
LONDON, U.K., Sept. 27, 2016 — Canonical today launches a distribution of Kubernetes, with enterprise support, across a range of public clouds and private infrastructure. “Companies moving to hyper-elastic container operations have asked for a pure Kubernetes on Ubuntu with enterprise support,” said Dustin Kirkland, who leads Canonical’s platform products. “Our focus is operational simplicity while delivering robust security, elasticity and compatibility with the Kubernetes standard across all public and private infrastructure.”
Hybrid cloud operations are a key goal for institutions using public clouds alongside private infrastructure. Apps running on Canonical’s distribution of Kubernetes run on Google Cloud Platform, Microsoft Azure, Amazon Web Services, and on-premise with OpenStack, VMware or bare metal provisioned by MAAS. Canonical will support deployments on private and public infrastructure equally.
The distribution adds extensive operational and support tooling but is otherwise a perfectly standard Kubernetes experience, tracking upstream releases closely. Rather than create its own PAAS, the company has chosen to offer a standard Kubernetes base as an open and extensible platform for innovation from a growing list of vendors. “The ability to target the standard Kubernetes APIs with consistent behaviour across multiple clouds and private infrastructure makes this distribution ideal for corporate workgroups in a hybrid cloud environment,” said Kirkland.
Canonical’s distribution enables customers to operate and scale enterprise Kubernetes clusters on demand, anywhere. “Model-driven operations under the hood enable reuse and collaboration of operations expertise” said Stefan Johansson, who leads ISV partnerships at Canonical. “Rather than have a dedicated team of ops writing their own automation, our partners and customers share and contribute to open source operations code.”
Canonical’s Kubernetes charms encode the best practices of cluster management, elastic scaling, and platform upgrades, independent of the underlying cloud. “Developing the operational code together with the application code in the open source upstream Kubernetes repository enables devops to track fast-moving K8s requirements and collaborate to deliver enterprise-grade infrastructure automation”, said Mark Shuttleworth, Founder of Canonical.
Canonical’s Kubernetes comes integrated with Prometheus for monitoring, Ceph for storage and a fully integrated Elastic stack including Kibana for analysis and visualisations.
Enterprise support for Kubernetes is an extension of the Ubuntu Advantage support program. Canonical’s distribution of Kubernetes is supported on any Ubuntu machine covered by UA. Additional packages include support for Kubernetes as a standalone offering, or combined with Canonical’s OpenStack. Canonical also offers a fully managed Kubernetes, which it will deploy, operate and then transfer to customers on request.
This product is in public beta; the final GA will coincide with the release of Juju 2.0 in the coming weeks. For more information about the Canonical distribution of Kubernetes, please visit our website.