  1. #1
    WHT-BR Top Member
    Join Date
    Dec 2010

    [EN] Docker Containers are from Mainframe era

    Abdul Jaleel

    Container, a.k.a. LXC (Linux Containers), is an operating-system-level virtualization environment for running multiple isolated application instances on a single Linux OS. Linux containers give each application running on a server its own isolated environment, but they all share the host server's operating system. Remember this to distinguish virtualization from containers: in containerization, a separate operating-system image with dedicated virtual hardware is not launched for each application.

    LXC combines the Linux kernel's cgroups with support for isolated namespaces to provide an isolated environment for applications. The Linux kernel provides cgroups functionality, which allows limitation and prioritization of resources (CPU, memory, block I/O, network, etc.), and namespace isolation functionality, which allows complete isolation of an application's view of the operating environment, including process trees, networking, user IDs, and mounted file systems. The kernel is shared across all containers, and user-mode processes communicate with and use it through the kernel API and system calls.
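The division of labor between the two mechanisms can be sketched in a few lines of Python. This is a conceptual model only, not kernel code; the `Container` class, share values, and process names are all illustrative:

```python
# Conceptual sketch only -- NOT kernel code. Namespaces give each container
# its own *view* of resources (here, a private process list whose PIDs start
# at 1); cgroups decide *how much* of a resource each group may use (here,
# a relative CPU weight).

class Container:
    def __init__(self, name, cpu_shares):
        self.name = name
        self.cpu_shares = cpu_shares   # cgroup-style relative CPU weight
        self.processes = []            # PID-namespace-style private view

    def spawn(self, cmd):
        # Inside its own PID namespace, each container numbers processes
        # from 1, regardless of what other containers are doing.
        pid = len(self.processes) + 1
        self.processes.append((pid, cmd))
        return pid

def cpu_fraction(container, all_containers):
    # cgroups divide CPU proportionally to each group's share.
    total = sum(c.cpu_shares for c in all_containers)
    return container.cpu_shares / total

web = Container("web", cpu_shares=1024)
db = Container("db", cpu_shares=2048)
web.spawn("nginx")
db.spawn("postgres")

# Isolation: each container sees only its own "PID 1".
assert web.processes == [(1, "nginx")]
assert db.processes == [(1, "postgres")]

# Proportional allocation: db's weight is twice web's, so it gets 2/3.
assert cpu_fraction(db, [web, db]) == 2 / 3
```

The point of the sketch is the separation of concerns: the process list models what a namespace lets a container *see*, while `cpu_fraction` models what a cgroup lets it *consume*.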

    Now let us look at mainframe systems. On the mainframe z/OS operating system, user processes run in an address space with multiple tasks (TCBs) providing multitasking support, but they share a common "kernel" through libraries on the system volume. Privileged instructions are executed using supervisor calls (SVCs), which run in supervisor mode, the equivalent of running a process in the kernel. Each address space is thus logically isolated from the others through virtual memory addressing; however, all address spaces and tasks are processed, or "dispatched," on the same z/OS instance. Workload Manager (WLM), the equivalent of cgroups, controls resource allocation priorities based on the goals set for service classes. So it is a single operating system: applications run across multiple address spaces for their various components (database, security, logging, networking, I/O, batch, OLTP, etc.) within one OS, sharing all the underlying resources.

    In short, z/OS address spaces correspond to container namespaces, z/OS WLM corresponds to cgroups in Linux containers, and in both cases everything runs on a single instance of the OS. So today's hottest trend in the data center world is a technology concept from the 1960s?
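The analogy in the paragraphs above can be condensed into a small lookup table. This is purely a summary of the correspondences claimed in the text; they are conceptual parallels, not exact equivalences:

```python
# The z/OS-to-Linux-container analogy, condensed into a lookup table.
# Conceptual correspondences only, not exact equivalences.
ZOS_TO_CONTAINER = {
    "address space":          "namespace (isolated process view)",
    "Workload Manager (WLM)": "cgroups (resource limits and priorities)",
    "SVC / supervisor call":  "system call into the shared kernel",
    "single z/OS instance":   "single Linux kernel shared by all containers",
}

for mainframe_concept, container_concept in ZOS_TO_CONTAINER.items():
    print(f"{mainframe_concept:24} ~ {container_concept}")
```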

  2. #2

    Mainframe era

    VMware ESX/ESXi: Type-1, bare-metal hypervisor


    In their 1974 article, Formal Requirements for Virtualizable Third Generation Architectures, Gerald J. Popek and Robert P. Goldberg classified two types of hypervisor:

    Type-1, native or bare-metal hypervisors

    These hypervisors run directly on the host's hardware to control the hardware and to manage guest operating systems. For this reason, they are sometimes called bare metal hypervisors. The first hypervisors, which IBM developed in the 1960s, were native hypervisors. These included the test software SIMMON and the CP/CMS operating system (the predecessor of IBM's z/VM). Modern equivalents include Xen, Oracle VM Server for SPARC, Oracle VM Server for x86, Microsoft Hyper-V and VMware ESX/ESXi.

    Type-2 or hosted hypervisors

    These hypervisors run on a conventional operating system (OS) just as other computer programs do. A guest operating system runs as a process on the host. Type-2 hypervisors abstract guest operating systems from the host operating system. VMware Workstation, VMware Player, VirtualBox, Parallels Desktop for Mac and QEMU are examples of type-2 hypervisors.

    However, the distinction between these two types is not necessarily clear. Linux's Kernel-based Virtual Machine (KVM) and FreeBSD's bhyve are kernel modules that effectively convert the host operating system to a type-1 hypervisor. At the same time, since Linux distributions and FreeBSD are still general-purpose operating systems, with other applications competing for VM resources, KVM and bhyve can also be categorized as type-2 hypervisors.
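The taxonomy above, including the blurred KVM/bhyve case, can be captured as a small classification table. The entries simply restate the examples from the text:

```python
# The Popek & Goldberg taxonomy, restated from the examples in the text.
# "type-1/type-2" marks the contested cases discussed above.
HYPERVISOR_TYPE = {
    "VMware ESXi":        "type-1",        # runs on bare metal
    "Xen":                "type-1",
    "Microsoft Hyper-V":  "type-1",
    "VMware Workstation": "type-2",        # runs on a host OS
    "VirtualBox":         "type-2",
    "QEMU":               "type-2",
    "KVM":                "type-1/type-2", # kernel module blurs the line
    "bhyve":              "type-1/type-2",
}

bare_metal = sorted(n for n, t in HYPERVISOR_TYPE.items() if t == "type-1")
contested  = sorted(n for n, t in HYPERVISOR_TYPE.items() if "/" in t)
print("bare-metal:", bare_metal)
print("contested: ", contested)
```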

    Mainframe origins

    The first hypervisors providing full virtualization were the test tool SIMMON and IBM's one-off research CP-40 system, which began production use in January 1967, and became the first version of IBM's CP/CMS operating system. CP-40 ran on a S/360-40 that was modified at the IBM Cambridge Scientific Center to support Dynamic Address Translation, a key feature that allowed virtualization. Prior to this time, computer hardware had only been virtualized enough to allow multiple user applications to run concurrently (see CTSS and IBM M44/44X). With CP-40, the hardware's supervisor state was virtualized as well, allowing multiple operating systems to run concurrently in separate virtual machine contexts.

    Programmers soon re-implemented CP-40 (as CP-67) for the IBM System/360-67, the first production computer system capable of full virtualization. IBM first shipped this machine in 1966; it included page-translation-table hardware for virtual memory, and other techniques that allowed full virtualization of all kernel tasks, including I/O and interrupt handling. (Note that its "official" operating system, the ill-fated TSS/360, did not employ full virtualization.) Both CP-40 and CP-67 began production use in 1967. CP/CMS was available to IBM customers from 1968 into the early 1970s, in source code form without support.

    CP/CMS formed part of IBM's attempt to build robust time-sharing systems for its mainframe computers. By running multiple operating systems concurrently, the hypervisor increased system robustness and stability: even if one operating system crashed, the others would continue working without interruption. Indeed, this even allowed beta or experimental versions of operating systems, or even of new hardware, to be deployed and debugged without jeopardizing the stable main production system and without requiring costly additional development systems.

    IBM announced its System/370 series in 1970 without any virtualization features, but added virtual memory support in the August 1972 Advanced Function announcement. Virtualization has been featured in all successor systems (all modern-day IBM mainframes, such as the zSeries line, retain backward compatibility with the 1960s-era IBM S/360 line). The 1972 announcement also included VM/370, a reimplementation of CP/CMS for the S/370. Unlike CP/CMS, IBM provided support for this version (though it was still distributed in source code form for several releases). VM stands for Virtual Machine, emphasizing that all, and not just some, of the hardware interfaces are virtualized. Both VM and CP/CMS enjoyed early acceptance and rapid development by universities, corporate users, and time-sharing vendors, as well as within IBM. Users played an active role in ongoing development, anticipating trends seen in modern open source projects. However, in a series of disputed and bitter battles, time-sharing lost out to batch processing through IBM political infighting, and VM remained IBM's "other" mainframe operating system for decades, losing to MVS. It enjoyed a resurgence of popularity and support from 2000 as the z/VM product, for example as the platform for Linux for zSeries.

    As mentioned above, the VM control program includes a hypervisor-call handler that intercepts DIAG ("Diagnose") instructions used within a virtual machine. This provides fast-path non-virtualized execution of file-system access and other operations (DIAG is a model-dependent privileged instruction, not used in normal programming, and thus is not virtualized. It is therefore available for use as a signal to the "host" operating system). When first implemented in CP/CMS release 3.1, this use of DIAG provided an operating system interface that was analogous to the System/360 Supervisor Call instruction (SVC), but that did not require altering or extending the system's virtualization of SVC.

    In 1985 IBM introduced the PR/SM hypervisor to manage logical partitions (LPAR).

    IBM provides virtualization partition technology (LPAR) on System/390, zSeries, pSeries and iSeries systems. For IBM's Power Systems, the POWER Hypervisor (PHYP) is a native (bare-metal) hypervisor in firmware and provides isolation between LPARs. Processor capacity is provided to LPARs in either a dedicated fashion or on an entitlement basis where unused capacity is harvested and can be re-allocated to busy workloads. Groups of LPARs can have their processor capacity managed as if they were in a "pool" - IBM refers to this capability as Multiple Shared-Processor Pools (MSPPs) and implements it in servers with the POWER6 processor. LPAR and MSPP capacity allocations can be dynamically changed. Memory is allocated to each LPAR (at LPAR initiation or dynamically) and is address-controlled by the POWER Hypervisor. For real-mode addressing by operating systems (AIX, Linux, IBM i), the POWER processors have designed virtualization capabilities where a hardware address-offset is evaluated with the OS address-offset to arrive at the physical memory address. Input/Output (I/O) adapters can be exclusively "owned" by LPARs or shared by LPARs through an appliance partition known as the Virtual I/O Server (VIOS). The Power Hypervisor provides for high levels of reliability, availability and serviceability by facilitating hot add/replace of many parts (model dependent: processors, memory, I/O adapters, blowers, power units, disks, system controllers, etc.)
    Last edited by 5ms; 25-04-2017 at 12:06.

  3. #3

    KVM: Bare-Metal Hypervisor?

    Keith Ward

    I need to turn to you, knowledgeable readers, for help in answering some questions. In a followup to yesterday's announcement by Red Hat about its virtualization roadmap, I asked the company some questions about the new Enterprise Hypervisor.

    Specifically, I wanted some details about how KVM would work as a standalone hypervisor, since my understanding is that it's hosted inside the Linux kernel (i.e., a Type II hypervisor). The response I got from Navin Thadani, senior director, virtualization business at Red Hat, threw me for a bit of a loop. He says KVM is a bare-metal hypervisor (also known as Type I), and even tries to make the case that Xen is a hosted hypervisor. Here's his comment in full:

    It is a myth that KVM is not a Type-1 hypervisor. KVM converts Linux into a Type-1 hypervisor. There is only one kernel that is used (and that is the Linux kernel, which has KVM included). On the flip side, I can make an argument that Xen is not a Type-1 hypervisor, because the CPU and memory is scheduled by the hypervisor, but IO is scheduled by Dom0, which is a guest (so it's not bare metal). In the KVM architecture, the CPU, memory, and IO are scheduled by the Linux kernel with KVM.

    On the other hand, other folks fall into the "KVM is a hosted hypervisor" camp, exemplified by this snippet from Brian Madden:

    Xen folks attack KVM, saying it's like VMware Server (the free one that was called "GSX") or Microsoft Virtual Server because it's really a Type 2 hypervisor that runs on top of another OS, rather than a "real" Type 1 hypervisor. KVM responds "So what? Why should we rewrite an OS from scratch when something like Linux is available? And if you want to use a KVM machine as a dedicated VM host, then fine, just don't install anything else on that box."

    So, is KVM hosted or not? Is Xen hosted or not? Is Red Hat full of hot air, or are they onto something? I'll be honest and say that I just don't know. Thadani upset my hypervisor apple cart with this comment.

  4. #4


    Prediction: Citrix drops open source Xen hypervisor for Hyper-V. The world drops Xen for KVM.

    Brian Madden
    29 Jun 2008

    Today, VMware dominates the virtualization market in the enterprise. The only other real competitor is the open source Xen, whether in the form of Citrix's XenServer commercial product, one of the open source flavors like Sun xVM, or the actual open source project.

    Microsoft's upcoming Hyper-V product is very similar to the open source Xen hypervisor. The two are so similar, in fact, that I'm now convinced that once Hyper-V comes out, Citrix will shift XenServer so it runs on Hyper-V instead of the open source Xen hypervisor. When that happens, Citrix will have no reason to continue to support Xen.

    Meanwhile, there's an upstart open source virtualization engine called KVM. ("KVM" in this context is "Kernel Virtual Machine.") Once Citrix (and Microsoft) shift their focus to Hyper-V, we may see the open source community rally behind KVM instead of Xen.

    The hardware virtualization market of 2009 / 2010 could very well be split into three camps:

    • VMware ESX
    • Microsoft / Citrix Hyper-V
    • Open source KVM

    Let's dig deeper into how we might get there from where we are today.

    Hyper-V and Xen are very similar

    Microsoft was involved in the original development of the Xen project in Cambridge. (Gabe originally wrote about this last August.) Then at VMworld in August 2007, Citrix and Microsoft announced that the virtual machines of XenServer and Hyper-V will be compatible with each other, and the APIs to control them will be compatible.

    Hyper-V and the open source Xen hypervisor are so similar, in fact, that one could plausibly argue that "Hyper-V is the Windows version of Xen."

    Benny Tritsch likes to point out that the whole reason Microsoft built the "Server Core" installation option for Windows Server 2008 is so that they'd have something other than Linux to run in Hyper-V's parent partition. (The parent partition in Hyper-V is analogous to Dom0 in Xen.)

    When Hyper-V comes out, Citrix will shift focus there, away from Xen

    The current market penetration of XenServer is zero. Literally zero. (Sure, some people have bought XenServer, but the percentage of people currently using XenServer is less than the margin of error in all polls asking people what virtualization platform they use. So for all intents and purposes, XenServer's market share is zero.)

    But when Hyper-V comes out, this will change. Hyper-V's market share will not be zero for long. Hyper-V will be free and included in all versions of Server 2008. Microsoft has a long history and does a great job creating products that—while technically inferior to competitors—are just "good enough" for people to use them. Especially when they're built-in to Windows.

    I have no idea what Hyper-V's market share will be six months or a year from now. But I can absolutely 100% guarantee it will be more than XenServer's is today.

    So if today's Citrix XenServer product adds value to a hypervisor that no one's going to use in a year, and if that hypervisor is very similar to a hypervisor that millions of people will use in a year, why wouldn't Citrix make the modifications to XenServer to support Hyper-V-based hypervisors in addition to Xen-based hypervisors?

    In fact, we already have precedent for this relationship between Citrix and Microsoft. In the server-based computing world, Microsoft provides baseline functionality with Terminal Services, and Citrix adds value with XenApp (Presentation Server). This would be no different in the hardware virtualization world: Microsoft provides the baseline functionality via Hyper-V, and Citrix would add value with XenServer.

    From a practical standpoint, I'm sure XenServer won't "automatically" work on Hyper-V. Citrix will certainly have to do some work to make everything functional. But the similarities between Hyper-V and Xen should make this process relatively straightforward.

    Citrix XenServer supporting Hyper-V is a fairly non-controversial prediction that most people agree with. So let's take this one step further. Assuming Citrix ports XenServer to Hyper-V, how long will they continue supporting the open source Xen hypervisor? Or more directly, why should Citrix continue to support the open source Xen?

    From a practical standpoint, maintaining a Xen version and a Hyper-V version of XenServer would just be extra work for Citrix. And will Citrix lose any sales if they just get rid of Xen support? Not likely. (Certainly not enough to outweigh the savings of ditching Xen support altogether.) Chances are that if a customer is "anti-Windows" enough to not want to use Hyper-V, then they're going to be the type of person who would instead prefer to use one of the open source Xen products instead of Citrix's commercial XenServer.

    Furthermore, working with the open source community is not natural for Citrix. They don't care about that community, and their company and business model are in no way set up to deal with open source. The faster Citrix can distance themselves from open source and tie themselves to Redmond, the stronger their enterprise sales will be.

    So when Citrix finally drops open source Xen support from XenServer, who will get upset? The five people who actually bought it already? The open source community who wasn't going to pay for it anyway?

    You know who would not be upset? The millions of people who run Hyper-V, the millions of people who are comfortable paying for software they use to run their companies, and the millions of people who are comfortable paying Citrix for software that adds value to the out-of-the-box capabilities of Microsoft Windows.

    Once Hyper-V is out, there will be no reason for Citrix to continue to support the open source Xen hypervisor.

    The open source community shifts towards KVM, away from Xen

    If Citrix drops support for Xen, what does that mean for the future of Xen? Today there are many companies besides Citrix selling or providing products and solutions based on the open source Xen hypervisor. Some of these products include:

    • Sun xVM
    • Oracle VM
    • Virtual Iron
    • Red Hat (oops, not as of last week!)
    • Novell

    But the open source community is split right now over the best way to do virtualization—Xen, or something called "KVM."

    KVM (remember this is "Kernel Virtual Machine") is an open source project that is a loadable kernel module that snaps-in to any Linux kernel from February 2007 onward (version 2.6.20 or newer).

    "KVM versus Xen" is a religious battle that is best hashed out elsewhere, but I'll try to provide an overview of the two sides here:

    • Xen is a full hypervisor. It's more-or-less its own full operating system, complete with its own hardware compatibility list, that was built from the ground-up to host virtual machines.
    • KVM is not a hypervisor. KVM is not its own operating system. KVM is a "snap-in" to an existing operating system, such as Linux, that lets it run processes in "guest" mode in addition to user mode or kernel mode. This means that KVM runs on anything that Linux runs on (which basically means it runs on anything).
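KVM's "snap-in" nature has a practical consequence: when the kvm module (plus kvm_intel or kvm_amd) is loaded, it exposes `/dev/kvm`, the device node that userspace tools such as QEMU open to create virtual machines. A minimal, Linux-specific availability check; the result naturally depends on the host:

```python
# Check whether this host can use KVM by looking for the device node the
# kernel module exposes. Linux-specific; on other systems this simply
# returns False.
import os

def kvm_usable():
    return os.path.exists("/dev/kvm")

print("KVM available on this host:", kvm_usable())
```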

    Xen folks attack KVM, saying it's like VMware Server (the free one that was called "GSX") or Microsoft Virtual Server because it's really a Type 2 hypervisor that runs on top of another OS, rather than a "real" Type 1 hypervisor. KVM responds "So what? Why should we rewrite an OS from scratch when something like Linux is available? And if you want to use a KVM machine as a dedicated VM host, then fine, just don't install anything else on that box."

    Xen folks also say that Xen offers better performance since it offers paravirtualization, although KVM is working on support for paravirtualized NICs and storage, and it's still unclear whether that even matters.

    Finally, Xen folks say that KVM is too new and unproven. KVM responds by saying, "We've been in the Linux kernel since Feb 2007. Those kernel maintainers aren't dumb, and the fact we're in the kernel shows how solid we are." (Interestingly, Xen has tried several times—but always failed—to get into the kernel.)

    Right now there's no clear winner between Xen and KVM. It's true that KVM's "newness" seems to be its biggest downside, although support for it is growing. In fact, last week Red Hat announced that they're throwing their weight behind KVM instead of Xen. And perhaps the most interesting thing about KVM is that it came out of Qumranet, a company co-founded by Moshe Bar, the same Moshe Bar who co-founded XenSource!

    So if (when?) Citrix switches to Hyper-V, and now that Red Hat is on KVM, who's still going to want to spend the time and effort maintaining Xen?

  5. #5

    A brief history of Xen and XenSource (2007)

    XenSource originally started as a pay-for support option for organizations that wanted to run the Xen hypervisor.

    Gabe Knuth
    16 Aug 2007

    With the recent acquisition of XenSource by Citrix, it seems like a good idea to take a look at the history of Xen and XenSource so we have our bearings once the dust settles.

    First off, XenSource is a commercial company that sells a suite of enhancements and management products that augment an open-source hypervisor called Xen. The lead developer for XenSource, Ian Pratt, is also the chief architect of the Xen open-source project, no small competitive leg up. In fact, XenSource is the key corporation behind the Xen hypervisor, so any enhancements to Xen also enhance XenSource (and anyone else who uses Xen, for that matter).

    XenSource's goal, to put it unofficially, is to provide a solution akin to VMware, but using an open-source hypervisor. This hypervisor has differences from VMware that some call advantages. Some also call them disadvantages, which has led to what appears to be an epic battle akin to NT/Novell circa 1996. More on this battle in future articles, I'm sure. XenSource originally started as a pay-for support option for organizations that wanted to run the Xen hypervisor. Over the last few years, they have created an impressive suite of management solutions for Xen, and have even collaborated with Microsoft.

    Xen, first released to the public in 2003, uses a type of virtualization called "paravirtualization." This is essentially a software method of interfacing virtual machines to the host hardware... sort of an API. Because of this, Linux operating systems have to be modified to run as a guest on a Xen server (called "Xen-enabled"). This is the part that sparked the battle between the VMware guys and the Xen guys. VMware uses a method called binary translation, which rewrites sensitive instructions on the fly to accomplish the same thing (that is, functional virtual machines).

    With the advent of Intel VT and AMD SVM, Xen now supports Windows virtual machines without changes to the guest OS (Windows virtual machines were not supported prior to support for VT and SVM). This has brought the virtualization methods of the two companies a little closer together, but still fundamentally different.

    From 2003 to 2005, in its relative youth, Xen developed into a popular desktop hypervisor. Able to support only one 32-bit processor, it had little enterprise appeal. This is when XenSource was simply a commercial Xen support company. In 2005, XenSource released Xen v3, the first enterprise-class release of Xen (even though it was called version 3, it was really the first). With this release Xen could run on servers with up to 32 processors, and it was the first version with built-in support for Intel's VT technology. AMD hadn't released SVM yet, but it too was eventually supported. In addition to the processor enhancements, Xen v3 also introduced support for Physical Address Extensions (PAE) to support 32-bit host servers with more than 4GB of memory. At this point, Xen still only supported Xen-enabled Linux guest operating systems.

    The v3 release of Xen also resulted in XenSource's first legitimate approach to an enterprise solution: XenOptimizer. XenOptimizer was intended to bridge the gap between Xen and VMware. Remember, Xen is just the hypervisor; XenOptimizer from XenSource (confusing, eh?) was the management interface. With XenOptimizer, admins were now able to manage multiple servers from a common interface for server provisioning and resource control.

    In late 2006, XenSource released its first version of XenEnterprise 3.0, a product meant to directly compete with VMware. Based on v3.03 of Xen, it included a new management and monitoring console built on XenOptimizer and, most importantly, support for Windows guest operating systems. This was largely the result of a July 2006 partnership agreement between XenSource and Microsoft to provide interoperability between XenSource and Microsoft's new hypervisor, codenamed Viridian. Also resulting from this agreement, Xen-enabled Linux guests will be able to run on Viridian, and XenEnterprise will be able to recognize Viridian's VHD (Virtual Hard Drive) files.

    Flash forward to just over a week ago, when XenSource released XenEnterprise v4. This is a landmark release for XenSource, and builds on the foundation laid by XenEnterprise v3. The new version adds features and functionality that are meant to rival VMware, but at less than half the cost (list price, of course). Among the new features are:

    • Integrated storage management software, Veritas Storage Foundation from Symantec
    • XenMotion, the Xen equivalent to VMware's VMotion
    • A new system management console, XenCenter, allowing administrators to manage virtual machines just as a user would do with VMware VirtualCenter.

    XenEnterprise v4 runs on the latest version of Xen, v3.1 (sigh... Citrix, please change XenSource's name to something like Tazwell or Oddibe).

    That brings us up to the present day. It’s been a busy (and profitable) last few weeks for XenSource, and Citrix is so light in the pocketbook now that they’ve had to tie themselves down with all the red tape they’ve gotten stuck in by purchasing the sponsor company of an open-sourced hypervisor. It’s in Citrix’s best interest to continue to develop Xen, since any improvements will undoubtedly help them. The crazy thing is that those same improvements will now also help out their new direct competitor – VirtualIron.

    Some quick thoughts about the future…

    As the dust has settled over the past few days and I’ve had a chance to look at the reactions of many people, both analysts and admins, it’s pretty clear that nobody knows for sure what is going to come of all this. There are obvious correlations between XenEnterprise and Citrix Desktop Server, and Citrix may even be able to use some of the Xen technology to enhance AIE and Citrix Streaming Server. So far, the most interesting thing I’ve heard comes from a guest’s post in our forums. An excerpt from that post reads:

    “It seems to me that Citrix is looking to strike the same deal with Microsoft regarding virtualization that it has regarding Terminal Services: Microsoft provides the underlying infrastructure and Citrix provides the enterprise solution on top of it. If this is true, then this purchase was probably done with Microsoft’s blessing, and positions Microsoft and Citrix as partners against VMware.”

  6. #6

    Solaris Zones and Linux Containers

    Detlef Drewanz and Lenz Grimmer
    January 2013

    Oracle Solaris Zones and Linux Containers are not standalone products; they are features of an operating system. In principle, the two technologies are similar: both virtualize at the application level, "above" the OS kernel, so unlike hypervisor-based virtualization they do not add an additional software layer. With Oracle Solaris Zones and Linux Containers, there is one OS kernel that is shared by many zones or containers.

    To put this into perspective, let's reuse the image from the first article in this series, where we showed the position of Oracle Solaris Zones (see Figure 1). In Figure 1, the position of Linux Containers can roughly be compared to that of Oracle Solaris Zones. The difference between the two technologies is mainly at the implementation level and in the way they are integrated into the OS.

    Oracle Solaris Zones

    Oracle Solaris Zones technology is an Oracle Solaris feature that first showed up in Solaris Express and Oracle Solaris 10 3/05 (March 2005), when it was called Oracle Solaris Containers. With Oracle Solaris 11, the technology is now officially called Oracle Solaris Zones.

    Oracle Solaris Zones technology creates a virtualization layer for applications. We could say a zone is a "sandbox" that provides a playground for an application. The global zone holds the Oracle Solaris kernel, the device drivers and devices, the memory management system, the file system and, in many cases, the network stack. The other zones are called non-global zones and are isolated from each other, but they all share one global zone.

    Figure 2. Oracle Solaris Zones

    The global zone sees all physical resources and provides common access to these resources to the non-global zones. The non-global zones appear to applications like separate Oracle Solaris installations.

    Non-global zones have their own file systems, process namespace, security boundaries, and network addresses. Based on requirements, non-global zones can also have their own network stack with separate network properties. And, yes, there also is a separate administrative login (root) for every non-global zone, but even as a privileged user, there is no way to break into one non-global zone from a neighboring non-global zone. In contrast, looking from the global zone, a non-global zone is just a bunch of processes grouped together by a tag called a zone ID.
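The last observation, that from the global zone a non-global zone is just a group of processes sharing a zone ID, can be sketched as follows. This is a toy model; the PIDs, commands, and zone IDs are invented for illustration:

```python
# Toy model: from the global zone (zone ID 0), non-global zones are just
# processes tagged with a zone ID; from inside a non-global zone, only its
# own processes are visible. All PIDs, commands, and IDs are illustrative.
processes = [
    {"pid": 101, "cmd": "httpd",  "zoneid": 1},  # non-global zone 1
    {"pid": 102, "cmd": "oracle", "zoneid": 2},  # non-global zone 2
    {"pid": 103, "cmd": "sshd",   "zoneid": 0},  # global zone
]

def visible_from(zoneid):
    if zoneid == 0:  # the global zone sees every process on the system
        return processes
    # a non-global zone sees only processes carrying its own zone ID
    return [p for p in processes if p["zoneid"] == zoneid]

assert len(visible_from(0)) == 3
assert [p["cmd"] for p in visible_from(1)] == ["httpd"]
assert [p["cmd"] for p in visible_from(2)] == ["oracle"]
```

The same visibility rule is what makes the isolation boundary described above work in both directions: neighbors cannot see each other, while the global zone retains a complete view.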

    This type of virtualization is often called lightweight virtualization, because there is nearly no overhead for the virtualization layer and the applications running in the non-global zones. Therefore, we get native I/O performance from the OS. Thus, zones are a perfect choice if many applications need to be virtualized and high performance is a requirement.

    Due to the fact that all non-global zones share one global zone, all zones run the same level of OS software—with one exception. Branded zones run non-native application environments. For example, with Oracle Solaris 10, we have the ability to create legacy branded zones such as Oracle Solaris 8 Containers (solaris8) and Oracle Solaris 9 Containers (solaris9), which provide Oracle Solaris 8 and Oracle Solaris 9 runtime environments but still share the Oracle Solaris 10 kernel in the global zone. With Oracle Solaris 11, it is possible to create Oracle Solaris 10 branded zones (solaris10).

    Compared to zones in Oracle Solaris 10, zones in Oracle Solaris 11 are much more deeply integrated with the OS. Oracle Solaris Zones technology is no longer just an additional feature of the OS. Zones are well integrated into the whole lifecycle management process of the OS when it comes to automatic installation or updates of zones. Also, the better integration of zones with kernel security features enables more delegated administration of zones. Better integration into ZFS, consistent use of boot environments, network virtualization features, and Oracle Solaris resource management are additional improvements to zones in Oracle Solaris 11. Zones have always been very easy to set up on the command line and easy to use. If you want to use a graphical tool to configure zones, you can use Oracle Enterprise Manager Ops Center (which we will cover later in this series).

    Linux Containers (LXC)

    Now that we have discussed Oracle Solaris Zones, what are Linux Containers? Is this the same technology as Oracle Solaris Zones and, if not, how do the two technologies differ?

    Here's the definition from the LXC project page:

    Linux Containers take a completely different approach than system virtualization technologies such as KVM and Xen, which started by booting separate virtual systems on emulated hardware and then attempted to lower their overhead via paravirtualization and related mechanisms. Instead of retrofitting efficiency onto full isolation, LXC started out with an efficient mechanism (existing Linux process management) and added isolation, resulting in a system virtualization mechanism as scalable and portable as chroot, capable of simultaneously supporting thousands of emulated systems on a single server while also providing lightweight virtualization options to routers and smart phones.

    The Linux Containers project started around 2006 as an external set of patches to the Linux kernel. It was integrated into mainline Linux starting with kernel 2.6.29 (March 2009). LXC provides resource management through the Linux kernel's control groups ("Cgroups") subsystem, and it provides resource isolation through process namespaces. LXC uses these kernel features to allow the creation of userspace container objects, which provide full resource isolation and resource control for an individual application, an entire operating system, or both.
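    The division of labor between the two kernel features can be pictured with a toy model (plain Python, not a real container runtime): a cgroup-style cap refuses resource use over a limit, while namespace-style isolation gives each container a private view of names such as hostnames and PIDs — every container sees its own PID 1.

```python
class ToyContainer:
    """Toy illustration of cgroup-style limits plus namespace-style isolation.

    Conceptual sketch only: real LXC relies on the kernel's cgroups and
    namespace facilities (clone/unshare syscalls), not Python objects.
    """
    def __init__(self, name: str, mem_limit_mb: int):
        self.name = name
        self.mem_limit_mb = mem_limit_mb   # cgroup-style resource cap
        self.mem_used_mb = 0
        self.hostname = name               # UTS-namespace-style private name
        self.pids = {}                     # PID-namespace-style private tree
        self.next_pid = 1                  # each container starts at PID 1

    def charge(self, mb: int) -> None:
        """cgroup-style accounting: refuse allocations over the limit."""
        if self.mem_used_mb + mb > self.mem_limit_mb:
            raise MemoryError(f"{self.name}: over {self.mem_limit_mb} MB limit")
        self.mem_used_mb += mb

    def spawn(self, cmd: str) -> int:
        """Processes get container-local PIDs, invisible to other containers."""
        pid = self.next_pid
        self.pids[pid] = cmd
        self.next_pid += 1
        return pid

web = ToyContainer("web", mem_limit_mb=256)
db = ToyContainer("db", mem_limit_mb=512)
print(web.spawn("nginx"), db.spawn("postgres"))  # both see PID 1: isolated trees
```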

    Linux Containers offer essentially native performance, and you can efficiently manage resource allocation in real time. A binary running inside a Linux Container is actually running as a normal process directly on the host's kernel, just like any other process. In particular, this means that CPU and I/O scheduling are much more fair and tunable, and you get native disk I/O performance, which you cannot have with real virtualization (even with Xen when using paravirt mode). This means you can even run applications that have heavy disk I/O, such as databases, inside a Linux Container.
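    The "tunable" CPU scheduling mentioned above comes from cgroup share weights. A rough model (a sketch, not the scheduler itself): when every group is busy, each gets time in proportion to its shares, so a container with twice the default weight gets twice the slice.

```python
def cpu_fractions(shares: dict) -> dict:
    """Proportional CPU allocation implied by cgroup-style 'shares' weights.

    Mirrors how the kernel divides CPU time among *contending* cgroups:
    each group gets shares_i / sum(shares). Idle groups would simply cede
    their slice to the others; that case is not modelled here.
    """
    total = sum(shares.values())
    return {name: s / total for name, s in shares.items()}

# Three containers with cgroup-like weights (1024 is the conventional default).
alloc = cpu_fractions({"db": 2048, "web": 1024, "batch": 1024})
print(alloc["db"])  # under contention, the database container gets half the CPU
```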

    Unlike full virtualization solutions and similar to Oracle Solaris Zones, LXC will not let you run any other non-Linux operating systems (such as proprietary operating systems or other types of UNIX). However, you are able to run different Linux distributions on the same host kernel in different containers. For example, you could run an instance of Oracle Linux 5 inside a container hosted on an Oracle Linux 6 system running the Unbreakable Enterprise Kernel release 2.

    So in essence, Linux Containers could be described as chroot environments "on steroids" that can be created at various isolation levels but also shared as an isolated group of processes for the Linux kernel.
    Last edited by 5ms; 25-04-2017 at 19:18.

  7. #7
    WHT-BR Top Member
    Join Date
    Dec 2010

    Virtualization is dead, long live containerization

    Phil Wainewright
    July 2, 2014

    OK, I’ll admit this is a bit of a geeky topic but it’s going to have a huge impact on the cost of cloud computing and on how enterprises develop their cloud applications. In other words, it directly affects some of the most important technology decisions you’ll be involved in over the next year or two. So bear with me on this one.

    The purists will probably object to my headline because containerization is actually just another approach to virtualization. Where it differs, though, is in dispensing with the conventional virtual machine (VM) layer, which is quite a radical departure from the way we’ve thought of cloud computing up until now.

    In a classic infrastructure-as-a-service (IaaS) architecture — think Amazon Web Services, Microsoft Azure, or a VMware-based private cloud — you distribute your computing on virtual machines that run on, but are not tied to, physical servers. Over time, cloud datacenter operators have become very good at automating the provisioning of those VMs to meet demand in a highly elastic fashion — one of the core attributes of cloud computing.

    The trouble with VMs is that they’re an evolution from an original state when every workload had its own physical server. All the baggage that brings with it makes them very wasteful. Each VM runs a full copy of the operating system along with the various libraries required to host an application. The duplication leads to a lot of memory, bandwidth and storage being used up unnecessarily.
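    The duplication cost is easy to put numbers on. The arithmetic below is illustrative (the per-instance figures are invented, not benchmarks): N VMs each carry their own guest OS copy, while N containers share one host OS.

```python
def total_memory_gb(instances: int, app_gb: float, os_gb: float,
                    shared_os: bool) -> float:
    """Rough memory footprint of N application instances.

    VMs: every instance carries its own guest OS copy (shared_os=False).
    Containers: one host OS is shared by all instances (shared_os=True).
    Figures are illustrative only.
    """
    if shared_os:
        return os_gb + instances * app_gb   # one kernel, N apps
    return instances * (os_gb + app_gb)     # N kernels, N apps

vms = total_memory_gb(50, app_gb=0.5, os_gb=1.5, shared_os=False)
ctr = total_memory_gb(50, app_gb=0.5, os_gb=1.5, shared_os=True)
print(vms, ctr)  # the container host needs roughly a quarter of the memory
```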

    Orders of magnitude better

    Containerization eliminates all of the baggage of virtualization by getting rid of the hypervisor and its VMs, as illustrated in the diagram. Each application is deployed in its own container that runs on the ‘bare metal’ of the server plus a single, shared instance of the operating system. One way to think of it is as a form of multi-tenancy at the OS level.

    Containerization as it’s practised today takes IT automation to a whole new level, with containers provisioned (and deprovisioned) in seconds from predefined libraries of resource images. The containers consist of only the resources they need to run the application they’re hosting, resulting in much more efficient use of the underlying resources.

    We are talking improvements by orders of magnitude rather than a few dozen percentage points. People commonly report improvements in application density of 10x to 100x or more per physical server, brought about by a combination of the compactness of the containers and the speed with which they are deployed and removed.

    In one example I encountered recently, UK-based IaaS provider ElasticHosts plans to exploit this differential with a metered offering based on Linux containers that eliminates much of the approximation seen in a traditional VM-based hosting environment. Says CEO Richard Davies:

    You can offer very fine grained on-demand scaling … When resources become available they can be completely scaled down [because] the system is one that is more transparent to the operator.

    The plan is to provision resources that are much more closely matched to demand on a minute-by-minute basis, avoiding slowdowns in performance when demand spikes or overpaying when resources remain idle after demand subsides.

    Into the enterprise mainstream

    The concept of Linux Containers (LXC) is not new. It’s been an important component under the covers of many of the largest web application providers for years. Google is said to have more than 2 billion containers launching in its datacenters every week. Platform-as-a-service vendors such as Heroku, OpenShift, dotCloud and Cloud Foundry have been using Linux containerization since their inception.

    What’s changed is that it used to require a lot of expertise and handcrafted code to do it right. It’s only in the past year or two that the mainstream Linux kernels and associated toolsets have built in more robust support.

    Last month, the release of Docker 1.0 productized containerization for enterprise use. Suddenly the concept burst into the tech media headlines, with Docker support announced by Google AppEngine, Microsoft Azure, IBM SoftLayer, Red Hat and others.

    Encouraging cloud-native development

    Containerization is not coming to the classic enterprise software stack anytime soon. That’s still going to be the preserve of the classic VM — quite rightly, as that’s the use case that the likes of VMware were designed for.

    Where containerization excels is in deploying the kind of microservices-based architecture that is becoming increasingly characteristic of cloud-native web applications. Thus it’s no surprise to see it being combined with a PaaS platform for applications built on Node.js and MongoDB, which last month announced its acquisition by Progress Software. Says VP technology Matt Robinson:

    Node really encourages an API-first design. In line with a container technology, you can have those modular units and have much more fine-grained scalability control over that.

    A technology like Docker is instrumental in automating the principles of devops. Its predefined library images are operationally pretested, allowing developers to just go ahead and deploy. Chris Swan, CTO of CohesiveFT, says that this encourages the practice of rapid testing and ‘fast failing’ while iterating.

  8. #8
    WHT-BR Top Member
    Join Date
    Dec 2010

    ICYMI: Will containers take down OpenStack?

    Developers are not spinning virtual machines (VMs) up and down as expected.

    Mathew Lodge
    Aug 19, 2016

    I was struck by a conversation I had earlier this year during the OpenStack conference in Austin with a technical architect from one of the bigger players. He was seeing baffled IT teams who had OpenStack clouds in which the users (developers) were not spinning virtual machines (VMs) up and down as expected. They were just deploying a bunch of VMs and then leaving them running for long periods. When the IT folks investigated, they found the VMs were Docker host VMs and the developers were now deploying everything as containers. There was a lot of dynamic app deployment going on, just not at the VM level.

    Then recently Mirantis announced that it would be porting the OpenStack IaaS platform to run as containers, scheduled (orchestrated) by Kubernetes, and making it easy to install Kubernetes using the Fuel provisioning tool. This is a big shift in focus for the OpenStack consulting firm, as it aims to become a top Kubernetes committer.

    OpenStack, containers and Kubernetes all exist for a singular purpose: to make it faster and easier to build and deploy cloud-native software. It’s vital to pay attention to the needs of the people who build and deploy software inside enterprises, OpenStack’s sweet spot today.

    Questioning OpenStack’s Relevancy

    If I put myself in a developer’s shoes, I am not sure why I care about spinning up VMs and, hence, OpenStack. Docker containers came along and made packaging and deploying microservices much easier than deploying into VMs. And there’s now a strong ecosystem around container technology to fill the gaps, extend its capabilities and make the whole thing deployable in production. The result has been phenomenal growth in container usage in a very short amount of time. The remaining operational problem for the average enterprise is deployment of its container stack of choice onto bare metal or its existing hypervisor, which it can do today with tools it already has, such as Puppet/Chef/Salt or, in the future, using Fuel.
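    The packaging advantage comes down to how little a container image has to declare. A minimal, hypothetical Dockerfile for one microservice (the base image tag and file names are assumptions for illustration, not from the article) adds only the app’s own layer on top of a shared, cached base image:

```dockerfile
# Hypothetical microservice image: a shared base layer,
# the application file, and its start command — nothing else.
FROM python:3.9-slim
COPY service.py /app/service.py
CMD ["python", "/app/service.py"]
```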

    Of course, this focuses on the developers working on new stuff or refactoring apps. Container penetration is small relative to the mass of existing systems, as lots of things are not in containers today and will be happily uncontained for years to come. So, there’s obviously still a need for VMs. Is that why OpenStack still matters?

    Problem one is that OpenStack initially was a platform to arm service providers to compete with AWS, and when that didn’t pan out, it refocused on being the infrastructure as a service (IaaS) for new apps. There was a time when it was hard to read an article about OpenStack without hearing about “pets vs. cattle,” and OpenStack was designed to herd cattle. That was the reason to deploy it, even if you already had vSphere or Hyper-V with automation. It was tough to migrate existing virtualized apps to OpenStack without changes.

    Problem two is that OpenStack itself is a large and complex collection of software to deploy. It has itself become a big, complex pet, which is why Mirantis and others can make a living providing services, software and training. So an OpenStack deployment looks like a non-trivial cost and time investment—not to enable the exciting cloud-native new stuff, but the stuff that is already running just fine elsewhere in the data center. That’s a tough sell.

    That’s why I question the future of OpenStack.

    This is not to say that organizations with OpenStack somehow made a mistake: Giving their users on-demand cloud app environments is a good call. However, if they were making the same decisions today, those enterprises would need to think very hard about what their developers and DevOps teams would prefer: a dynamic container environment perhaps based on Fuel, Docker and Kubernetes—on-premises or in a public cloud—versus an on-prem private IaaS such as OpenStack.

    Tough times ahead.


  9. #9
    WHT-BR Top Member
    Join Date
    Dec 2010
    With luck, in 20 years we’ll have an MVS (IBM) and in 50 years an MCP (Burroughs)
    Last edited by 5ms; 26-04-2017 at 15:29.
