Results 1 to 8 of 8
  1. #1
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    18,480

    [EN] HyperGrid - Public Cloud On-Premises

    HyperGrid Enterprise Cloud as a Service


    A fully featured public cloud service delivered as a full stack appliance in your data center.




    How HyperGrid Works

    You give us space, power and bandwidth. We do the rest. Pay as you go.


    HyperCloud is a public cloud-like service that is delivered to your own private, secure data center. When you sign up for service, HyperGrid installs a highly available, scalable infrastructure-as-a-service cloud inside your facility.

    We own and operate all equipment and software, leaving your teams free to focus on innovation instead of overhead. Our service comes with a 99.95% uptime guarantee, with 4-hour on-site hardware replacement. We use the latest infrastructure systems from leading vendors, with industry-leading rackmount servers, integrated and fully redundant 40G fiber interconnect switches, and built-in all-flash SSD storage.

    Your teams get a simple single-pane cloud management experience where they can self-service provision virtual infrastructure in seconds, accelerating app development and simplifying IT.

    Keep your data safe. Stay compliant.

    Never become locked in.


    The HyperGrid Experience

    Provision in seconds. Automate deployments.

    HyperCloud Management

    HyperGrid comes with a simple, single-pane cloud management experience where users can self-service provision virtual infrastructure in seconds. With full support for Docker containerization, developers can create templates that fully automate the deployment of multi-tier applications and microservices.

    HyperCloud Portal also supports the management of external public clouds, including Amazon Web Services, Microsoft Azure, and Google Compute Engine. Using HyperCloud Portal, IT teams can centralize the governance and control of cloud resources enterprise-wide.

    http://hypergrid.com/

  2. #2

    'Cloud-In-A-Can'

    A review of HyperCloud by HyperGrid.

    Trevor Pott
    04/27/2017

    I've recently had a chance to do a review of HyperCloud by HyperGrid, and ended up pleasantly surprised. I've been a strong advocate of the creation of Infrastructure Endgame Machines (IEMs) for a few years now, and HyperCloud is dangerously close to actually being one.

    An IEM is essentially a hybrid cloud solution with a recipe-based workload creation mechanism, integrated monitoring and several other basic infrastructure features taken care of in an automated fashion. The whole idea of IEMs is to bring what the public cloud can do not only to customers, but to service providers, and actually make it easy to use.

    Layer on some automation and orchestration of best practices and tools for dealing with workloads at scale, and you have something that is to the client/server model what the client/server model was to mainframes: a revolution in workload provisioning.

    What Is HyperCloud?

    HyperCloud is the solution offered by HyperGrid. HyperGrid is the result of a merger between Hyperconverged Infrastructure (HCI) vendor Gridstore and DCHQ, a company that focused on workload migration (predominantly of Java applications) to the public cloud.

    Prior to the transformation into HyperGrid, one could have been excused for having forgotten Gridstore existed. They were very much a "me too" HCI player, and one that -- for reasons incomprehensible -- focused exclusively on Hyper-V. Yes, there is a hard core of True Believers that want Microsoft on Microsoft with added Microsoft, but Hyper-V was never going to grow much beyond that niche, so Gridstore stalled out and faded away.

    DCHQ made some quick friends as a cloud brokerage solution, especially among organizations with strong internal development teams that had already made the leap to DevOps and embraced continuous integration. Those who understood desired state configuration and knew how to use it quickly adopted DCHQ, but this too was a limited market.

    Put the two together, however, and you have an HCI cloud-in-a-can that can spin up workloads on your local infrastructure, on a service provider or in the public cloud. It can speak Hyper-V (naturally), VMware, OpenStack/KVM and all the major public cloud providers. It can integrate block storage and object storage, and uses a recipe engine with agents to deploy workloads, getting us one step closer to that desired state utopia.

    Separately, Gridstore and DCHQ were also-rans. Together, they're a viable challenger to Microsoft's Azure Stack. That warrants serious consideration.

    Initial Impressions

    HyperCloud's UI reminds me enough of OpenNebula that I had to ask if they had forked the project or if this was a completely custom solution. It turns out that the whole thing is proprietary to HyperGrid, and any similarities are largely accidental. Still, I find the similarities comforting. Working with clouds is hard enough without having to relearn basic interface cues.

    While I largely have nothing but praise for HyperCloud, two downsides stand out to me; one I can blame on HyperGrid and another I must blame on myself. The issue I must blame myself for is that I didn't easily grasp how HyperCloud was going about handling networking.

    I am assured by hard-core networking nerds and those who are very used to working with public clouds that the HyperCloud networking makes sense. Being largely a virtualization administrator, however, I must admit I struggled to understand how to make it go. With luck, there can be a happy middle ground in the future that makes it easier for simpletons like myself.

    The big problem for me -- the one I have to point back to HyperGrid and demand a fix for -- is that HyperCloud doesn't offer proper console access to virtual machines (VMs) or containers, not even for the on-premises Hyper-V, VMware or OpenStack/KVM VMs. They currently have something called the "DCHQ Terminal" which offers a very function-limited pseudo-terminal, but no full sub-operating-system, "tinker with the guts of the VM" console.

    As a virtual administrator that still has "pet" workloads that can't all be scripted, managed with desired state tools and automated to the nines, this is a showstopper. Some workloads just need to be babied. To their credit, however, HyperGrid acknowledged that this is a missing feature in HyperCloud, and said that a proper console is something on their roadmap for inclusion in the next two months.

    Those gripes aside, it's hard to find fault with HyperCloud. It does VMs, it does containers. HyperCloud has a library that comes complete with a wizard to walk you through the creation of apps, VMs and even clusters. The library comes with over 400 recipes (called Blueprints by HyperGrid), and the ability to create your own.

    Available recipes include basic OS environments such as Ubuntu, CentOS and Windows, multi-app platforms like nginx + Tomcat + MySQL, and individual applications such as WordPress. You'll also find some library solutions that feed back into the platform itself. HyperCloud's canonical example is a Minio library that, when deployed, offers up an object storage solution that can be consumed by the HyperCloud UI.
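    The Blueprint format itself is proprietary to HyperGrid, but the core idea of a recipe that automates a multi-tier deployment can be sketched generically. In this illustrative Python snippet (the component names and structure are invented, not HyperGrid's actual schema), each tier declares what it depends on, and a topological sort yields a valid deployment order:

```python
# Hypothetical sketch of a multi-tier deployment recipe. Each component maps
# to the components that must be running before it can start; a topological
# sort then gives an order that satisfies every dependency.

from graphlib import TopologicalSorter  # standard library, Python 3.9+

blueprint = {
    "mysql":  [],            # database tier: no dependencies
    "tomcat": ["mysql"],     # app tier: needs the database first
    "nginx":  ["tomcat"],    # web tier: fronts the app servers
}

def deployment_order(bp):
    """Return components in an order that satisfies every dependency."""
    return list(TopologicalSorter(bp).static_order())

order = deployment_order(blueprint)
print(order)  # mysql, then tomcat, then nginx
```

A real recipe engine layers agents, health checks and parameterization on top of this ordering step, but the dependency resolution is the part that makes "deploy the whole stack in one click" possible.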

    Plugins can be created to enhance the HyperCloud UI, and it includes a reasonably detailed reporting engine. HyperCloud includes modest policy capabilities, though they are admittedly nothing to write home about, and some standard LDAP-backed identity services. All in all, about what you'd expect from a production-ready cloud management solution.

    Practically Speaking

    HyperCloud is very strongly influenced by the features and capabilities of the big public clouds. Their approaches to problems and even their limitations are frequently reflected in HyperCloud's design. This makes HyperCloud very approachable to the new generation of cloud natives, but makes adoption by the world's existing virtualization admins something of a struggle.

    HyperCloud shows a lot of potential. I can certainly see using this to run an individual organization, or a department's infrastructure. HyperCloud commands other infrastructure solutions, and thus can be used as a self-service adjunct to, for example, an on-premises VMware infrastructure, rather than necessitating an all-or-nothing abandonment of existing IT practices.

    From what little I could discern from within the demo environment provided, HyperCloud has a lot of potential to scale up. The existing demo environment didn't consist of that many hosts, but the ability to talk to so many different public clouds, on-premises infrastructures and even to register third-party service providers of your choice indicates to me that HyperGrid envisions some pretty large deployments.

    With that in mind, the lack of nested multitenancy is troubling. Other solutions -- most notably Yottabyte -- offer this feature, and it is a strong enabler for service providers (note: Yottabyte is a client of the author’s). Nested multitenancy is the ability for the root cloud owner (the service provider) to carve up resources into virtual datacenters for customers who then create nested virtual datacenters for their internal customers.

    This would allow a company to, for example, rent a fixed amount of infrastructure that it could break up into sub-clouds based on department. Those sub-clouds would have their own administrators and their own users, and possibly even their own authentication systems, but all of it on one bill and with unified reporting and control by the customer.
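    The nested-multitenancy model described above amounts to a quota tree. This illustrative Python sketch (class and field names are invented; no real product API is implied) shows how a root virtual datacenter's allocation constrains the sub-clouds carved from it:

```python
# Illustrative model of nested multitenancy: a virtual datacenter can be
# carved into nested virtual datacenters, and no child allocation may push
# the total past the parent's quota.

class VirtualDatacenter:
    def __init__(self, name, cpu_quota):
        self.name = name
        self.cpu_quota = cpu_quota
        self.children = []

    def allocated(self):
        """Total vCPUs already handed out to nested datacenters."""
        return sum(c.cpu_quota for c in self.children)

    def carve(self, name, cpu_quota):
        """Create a nested virtual datacenter, enforcing the parent quota."""
        if self.allocated() + cpu_quota > self.cpu_quota:
            raise ValueError(f"{self.name}: quota exceeded")
        child = VirtualDatacenter(name, cpu_quota)
        self.children.append(child)
        return child

# A customer rents 100 vCPUs and splits them by department.
customer = VirtualDatacenter("acme-corp", cpu_quota=100)
customer.carve("engineering", 60)
customer.carve("finance", 30)
# carving a 20-vCPU "marketing" sub-cloud would raise: only 10 vCPUs remain
```

Each sub-cloud here could carry its own administrators and users, while everything still rolls up to one bill and one reporting view at the root, which is exactly the service-provider appeal.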

    HyperCloud isn't at nested multitenancy yet, but I expect it won't be long before they are. What they do offer is a solution which exposes a lot of nerd knobs, logging, timelines, error reporting and -- most importantly -- monitoring. A HyperCloud strength is that its designers were obsessed with making sure that everything was tracked, logged, monitored and integrated into alerts. That's as it should be; it's as all our datacenters should be.

    Solid Product, With Room for Improvement

    HyperCloud works. For the most part, it works quite well. Now the hard work begins: making HyperCloud traditional admin-friendly before the advantage gained over their competitors expires.

    https://virtualizationreview.com/art...form=hootsuite

  3. #3

    Infrastructure Endgame Machines

    Software Defined Infrastructure

    Trevor Pott
    5 Aug 2015

    The end of IT as we know it is upon us. Decades of hype, incremental evolution and bitter disappointment are about to come to an end as the provisioning of IT infrastructure is finally commoditised. By the end of the decade, the majority of new IT purchases will be converged infrastructure solutions that I only semi-jokingly call Infrastructure Endgame Machines (IEMs).

    I've discussed this topic with everyone from coalface systems administrators to the highest-ranking executives of companies that have become household names. Only a few truly see the asteroid headed their way and the collective denial of the entire industry will mean an absolute bloodbath when it hits.

    Back in October, I talked about Software Defined Infrastructure (SDI). I painted a picture of a unicorn-like solution that, in essence, combined hyper-convergence, Software Defined Networking (SDN) and Network Functions Virtualisation (NFV), with orchestration, automation and management software that didn't suck. I thought it was going to be rather a long time before these started showing up.

    Boy, was I wrong.

    The IEM

    An IEM is an SDI Block made manifest, but as more than merely something you can install on your premises. It would include the ability to move workloads between your local SDI block and those of both a public cloud provider and regional hosted providers.

    This gives those seeking to run workloads on someone else's IEM the choice of using a vendor legally beholden to the US of NSA, or one that operates entirely in their jurisdiction. What a magical future that would be. All the promises of the past 15 years of marketing made real.

    The goal of an IEM is that it removes the requirement to ever think about your IT infrastructure beyond some rather high-level data centre architecting. Figuring out cooling and power delivery will probably take more effort than lighting up an entire private cloud that's ready to deliver any kind of "as a Service" you require, including a full self-service portal.

    Put bluntly, IEM is the data centre product of the year 2020. Storage, networking, servers, hypervisors, operating systems, applications, management and so on will all simply be features.

    Today, it would be rare to find a company that goes out and buys deduplication as a product. It's expected that this is a basic feature of modern storage. By 2020, all of modern storage – and a whole lot more – will be expected to be a basic feature of an IEM.

    It is all too easy to slip back into cynicism and think about the dozen reasons this might never happen. SDI blocks would decimate internal IT teams. Entire classes of specialities would become obsolete overnight.

    Hundreds – if not thousands – of IT providers that only deliver one piece of the puzzle are instantly put on life support. Heck, the US government (amongst others) might intervene to stop the creation of IEMs because it would put at risk their ability to spy on everyone, all the time.

    https://www.theregister.co.uk/2015/0...game_machines/

  4. #4

    vSAN - Riding The Virtual SAN Gravy Train

    Timothy Prickett Morgan
    April 24, 2017

    Being the first mover in establishing a new technology in the enterprise is important, but it is not more important than having a vast installed base and a sales force peddling an existing and adjacent product set into which to sell a competing and usually lagging technology.

    VMware can’t be said to have initially been particularly enthusiastic about server-SAN hybrids like those created by upstart Nutanix, with its Acropolis platform, or pioneer Hewlett Packard Enterprise, which bought into the virtual SAN market with its LeftHand Networks acquisition in October 2008 for $360 million and went back to the hyperconverged well again this January with its $650 million deal to acquire SimpliVity. It had a baby Virtual SAN Array, launched in July 2011, that was capped at three nodes and had limited features for several years, but with the Virtual SAN launch in early 2015, VMware got serious because there are billions of dollars of profitable compute and storage revenues at stake in the hyperconverged arena.

    VMware took a long time to come to virtual storage arrays that mashed up compute and storage onto the same clusters – vSAN was in private beta in 2012 and moved to a very successful but very lengthy public beta in August 2013. And VMware has also taken its sweet time scaling up vSAN capacity and performance, no doubt partly because its former parent company, EMC, minted coin selling actual physical SANs. But here in 2017, VMware is aggressively pushing vSAN.

    Back in February 2016, when VMware tweaked the vSAN stack, it had 3,000 customers that had deployed its server-storage hybrid in production, and as of the end of 2016, it had grown that base to over 7,000 customers. About half of VMware’s 500,000 customers using its vSphere server virtualization stack are using its vSphere Enterprise Plus edition, so these are the natural target customers for vSAN. Not all of them need virtual SANs with hundreds of terabytes or petabytes of scale, of course, but there are probably many tens of thousands of customers who do, and this will drive vSAN sales upwards and, perhaps equally importantly, drive software-defined storage sales on X86 servers and take a bite out of the actual SAN market and, in all-flash configurations, also take a slice of the all-flash arrays that are one of the bright spots in enterprise storage these days.



    Hyperconverged infrastructure, or HCI as the cool kids call it, generated about $2 billion in revenues in 2016, and is expected to grow to just shy of $5 billion in 2019, according to statistics from IDC – and every time these projections are updated, they grow bigger. The analysts at Gartner reckon that software-defined storage, in its many guises, accounts for about 5 percent of capacity shipped worldwide today, but will grow to about 30 percent of capacity by 2019. The overall storage market, internal and external disk and flash of all kinds, generates around $60 billion in revenues today, just for comparison’s sake.

    In the fourth quarter of 2016, the latest financial figures we have for VMware at the moment, the vSAN software was selling at an annualized run rate of $300 million, Michael Haag, group product marketing manager for VMware’s storage and availability portfolio, tells The Next Platform, and it has been growing at 150 percent for five consecutive quarters.



    Haag says that the underlying hardware (and we presume related vSphere systems software) has a 3X multiple cost compared to the vSAN licenses and support costs, which is another way of saying that vSAN represents about 25 percent of the total cost. So the run rate for vSAN-based HCI platforms was around $1.2 billion as 2016 was closing out. If you do the math, those incremental 4,000 customers spent $720 million on vSAN systems in 2016, and if the growth rates persist, VMware will add another 10,500 customers in 2017 and they will spend an incremental $1.8 billion on vSAN storage. At that point, assuming VMware can manage 150 percent growth for this year for vSAN licensing, which is possible given the vastness of its customer base, then vSAN licensing and support should be at a run rate of around $750 million as 2017 comes to a close and total spending on vSAN arrays for the year should be around $1.8 billion.
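    The run-rate arithmetic in the paragraph above can be checked directly (figures in millions of US dollars, as quoted in the article):

```python
# Quick check of the vSAN run-rate arithmetic (all figures in $ millions).

vsan_license_run_rate = 300   # Q4 2016 annualized vSAN software run rate
hardware_multiple = 3         # hardware costs ~3x the vSAN license cost

# Total platform run rate: the licenses plus 3x that amount in hardware.
total_run_rate = vsan_license_run_rate * (1 + hardware_multiple)
print(total_run_rate)         # 1200, i.e. ~$1.2 billion

# vSAN's share of the total platform cost.
print(vsan_license_run_rate / total_run_rate)   # 0.25, i.e. 25 percent

# Growing BY 150 percent means ending at 2.5x the starting figure.
print(vsan_license_run_rate * 2.5)  # 750, matching the ~$750M 2017 estimate
```

The 2.5x factor is worth noting: "150 percent growth" in the article consistently means the figure reaches 250 percent of its prior value, which is how $300M of licensing becomes roughly $750M a year later.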

    “vSAN is becoming a significant business for VMware,” says Haag. “If we were a standalone and a startup, we would be doing cartwheels and extremely happy.”

    This is still dwarfed by VMware’s overall business, which grew by 7.9 percent to $7.1 billion for all of 2016; the company pulled $1.2 billion of that down to the bottom line, an increase of 19 percent compared to 2015.

    Riding The VxRAIL

    One of the big drivers for HCI is, not surprisingly, technology. Storage appliances, like the SAN arrays of days gone by (but not quite gone, mind you), tend to lag in the adoption of new CPUs, memory, non-volatile storage, and even disk drives. But because HCI platforms are based on plain vanilla X86 servers, and servers always get the latest technology first, that means by default that HCI platforms get the technology first, too. It is up to the HCI software vendors to embrace this new stuff, and VMware and its peers are getting better about this, and Haag says that VMware’s vSAN will support Intel’s Optane 3D XPoint non-volatile memory on day one when it is available later this year in volume.

    The other big driver for HCI is just the fact that it is getting more mainstream, and surveys of VMware customers show that over 60 percent of enterprises have deployed some mission critical applications on hyperconverged platforms. In the early days, this was mainly virtual desktop infrastructure (VDI) and test and development jobs, but now companies are trusting vSAN for the big jobs. It is because vSAN scales better and incorporates flash for caching and for primary storage as well as adding compression, de-duplication, and erasure coding to drive down the cost of the all-flash variants that customers can even afford to do this. The vSAN 5.5 release in early 2015 got the ball rolling, and the all-flash 6.2 release kicked it into the air. With the 6.6 release that came out this month, VMware is boosting performance, scaling out clusters for capacity and performance, and embedding native security in vSAN.

    The 6.6 update has more than twenty major new features and is being billed as the biggest vSAN release to date. One operational change is that vSAN, which is very tightly coupled to the ESXi hypervisor, has been updated twice a year with the vSphere releases. But going forward, vSAN updates will be available as part of the monthly vSphere patches, significantly speeding up the time it takes to get new features into the field for the server-storage hybrid.

    The vSAN Enterprise edition has two important new features. The first is native encryption, which can be deployed on all-flash or hybrid flash-disk variants of vSAN, and which encrypts data at the cluster level, not at the individual drive level. The self-encrypting drives carry a 20 percent to 30 percent premium over plain flash or disk drives, and each one has to have its own key for encryption and these have to be managed individually, too. Now, you need just one key for the vSAN cluster and the encryption is done across the nodes in the cluster and using the AES-256 encryption acceleration features of Intel’s Xeon processors.
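    A rough way to see the appeal of cluster-level encryption over self-encrypting drives is to count premiums and keys. This sketch uses the 20 to 30 percent SED premium quoted above; the drive count and base drive price are invented for illustration:

```python
# Back-of-the-envelope comparison: self-encrypting drives (SEDs) versus
# cluster-level encryption. Drive prices and counts are hypothetical; the
# 25% SED premium sits inside the 20-30% range cited in the article.

drives_per_node = 10
nodes = 8
plain_drive_cost = 500      # hypothetical cost of a plain flash drive, USD
sed_premium = 0.25          # assumed midpoint of the 20-30% SED premium

total_drives = drives_per_node * nodes

# SEDs: a premium on every drive, and one key to manage per drive.
sed_cost = total_drives * plain_drive_cost * (1 + sed_premium)
sed_keys = total_drives

# Cluster-level encryption: plain drives, one key for the whole cluster.
cluster_cost = total_drives * plain_drive_cost
cluster_keys = 1

print(sed_cost - cluster_cost)  # premium avoided across the cluster, USD
print(sed_keys, cluster_keys)   # 80 keys to manage versus 1
```

The hardware savings scale linearly with drive count, but the operational win is the key count: one cluster key versus one per drive.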

    VMware has been offering stretched clusters for a little while, but they are being enhanced with local protection within the cluster and within a datacenter as well as replication across sites. With this enhanced stretched cluster, there are two vSAN clusters linked over two sites in an active-active configuration, serving up applications running inside of VMs. Data is synchronously replicated across the sites. As data comes out of a VM, it is replicated with RAID 1 mirroring between the two sites, but now within those two sites, customers can fire up RAID 5 or RAID 6 data protection within the nodes of the cluster to add another layer of data protection across the active-active clusters. What this means is that if one of the sites is knocked off line and then nodes within the second cluster start failing, there will be further replication to keep it up and running. This stretched cluster replication, by the way, is enabled on a per-VM basis, so you don’t have to replicate your entire vSAN cluster.
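    The scheme above (RAID 1 mirroring across the two sites, plus RAID 5 parity within each site's nodes) implies a particular capacity overhead, which a quick sketch can estimate; node counts and per-node capacities here are hypothetical:

```python
# Capacity estimate for a stretched cluster: RAID 1 across two sites,
# RAID 5 within each site's nodes. Node counts and sizes are hypothetical.

nodes_per_site = 4
raw_tb_per_node = 10

raw_per_site = nodes_per_site * raw_tb_per_node  # 40 TB raw per site

# RAID 5 across n nodes stores data on n-1 of them, parity on the equivalent
# of one node's worth of capacity.
raid5_usable = raw_per_site * (nodes_per_site - 1) / nodes_per_site

# RAID 1 across sites means each site holds a full copy, so the stretched
# cluster's usable capacity is what one site can store after parity.
usable = raid5_usable
total_raw = 2 * raw_per_site

print(usable)              # usable TB for the whole stretched cluster
print(usable / total_raw)  # fraction of raw capacity that is usable
```

With these assumed numbers, roughly 37.5 percent of raw capacity is usable, which is why per-VM enablement of stretched replication matters: you only pay that overhead for the workloads that need it.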

    On the performance front, on like-for-like hardware, VMware has tuned up vSAN with the 6.6 release to offer about 50 percent better I/O operations per second throughput on all flash setups compared to vSAN 6.5 from last fall. VMware tweaked the underlying algorithms used for de-duplication, checksum, and other features to get this added performance. The upshot of these changes was that latencies for IOPS were also reduced by somewhere between 30 percent and 40 percent, too.

    VMware is supporting larger 1.6 TB flash drives for caching in vSAN clusters, and is looking ahead to supporting Optane SSDs, which it says will increase sequential write performance on vSAN clusters by 250 percent compared to using flash cache.

    In addition to the new features with vSAN 6.6, VMware is also doing a bunch of bundles that prepackage the vSphere and vSAN software for specific use cases. The bundle prices are list prices for the components; there is no discount for buying the vSphere and vSAN tools together. But you should probably argue for one just the same.

    https://www.nextplatform.com/2017/04...n-gravy-train/

  5. #5

  6. #6

    Azure On-Premises

    Azure Stack will have a fourth appliance option when it becomes generally available later this year, but questions about pricing continue to emerge.

    Robert Gates
    24 Feb 2017

    The addition of Cisco Unified Computing System, or UCS, to the limited options of the Azure Stack appliance offering may please a handful of customers, but everybody really just wants to know how much it will cost.

    Customers should get answers very soon about the types of programs and contract terms for Microsoft Azure Stack, the hybrid cloud product that extends the company's public cloud into a user's data center.

    Microsoft may choose a competitive entry price for Azure Stack, said Mike Dorosh, research director at analyst firm Gartner. "What they will really make their money on is all the services from Azure that customers will use," he said.

    That likely won't address many users' concerns about whether they will receive an easy licensing scheme to move to Azure Stack, said Timothy Kinnerup, vice president at QCM Technologies Inc., a systems integrator in Scottsdale, Ariz.

    For example, a company with a few Microsoft servers on premises may want to move to Azure, but the company has paid for SQL enterprise licenses -- money that won't be recouped. It is the same for a company that has recently bought Windows Server 2012 licenses, but is considering a move to Azure.

    "I've already spent it on-prem, and now I want to move it to Azure -- I've already spent money, so that is my barrier to entry," Kinnerup said.

    Azure Stack is expected to become generally available this summer through a handful of OEM partners via an Azure Stack appliance from Dell, Hewlett Packard Enterprise (HPE) and Lenovo. Now, Cisco joins with its Microsoft Azure Stack on Cisco UCS.

    Adding Cisco makes sense, since at least one of the four OEM partners is in almost every data center, Dorosh said. Most Azure Stack buyers will stick with an existing vendor they already know and are confident will support them, and has skills and tools built around the platform, he added.

    The ice cream flavors of Azure Stack

    On the inside, the four OEMs' Azure Stack appliances will be largely the same, with similar servers, storage, RAM and networking equipment -- a reminder that hardware, in general, has become a commodity, Dorosh said. The four partners may have a slightly different architectural philosophy, "but it is chocolate, vanilla or strawberry; the differentiation is on the edge uses, not in the mainstream uses," he said.

    All the appliances promise to eliminate the complexity of setting up Azure Stack.

    "There's no need to iterate through multiple configurations to find the right mix for Azure Stack; that is already done for you," said Liz Centoni, a Cisco senior vice president and general manager, who has worked to put Microsoft Azure Stack on Cisco UCS since last year.

    Making Azure Stack available only through an appliance from one of four vendors ensures each node is set up in the prescriptive way that Microsoft wants it, she added.

    It is still unknown whether existing Cisco UCS nodes -- or any other hardware, for that matter -- can be added. Where that fits on the Azure Stack roadmap is still up for discussion with Microsoft, she said.

    "The first sets of Azure Stack nodes deployed have to be the ones fully validated by Microsoft," she said.

    Within a few months, Dorosh said he expects Microsoft will let users buy generic capacity to match up with the converged hardware, which will remain as the operational stack.

    For customers, the decision between Cisco, Dell, HPE or Lenovo will come down to vendor relationships, support and price, said Carl Brooks, an analyst at 451 Research. Azure Stack sales will likely come from discounts and encouragements for enterprises, he said. Azure Stack can be run on noncertified hardware, but Microsoft will not support it.

    "You are going to see a customer acquisition war here," Brooks said.

    While there will be rough parity in pricing, Brooks said he expects Dell's Azure Stack appliance to be the least expensive and the HPE appliance to be the most expensive. Cisco would only say its price will depend on the number of server nodes and the Azure services a customer uses each month.

    Azure Stack will be overpriced, compared to the cost of running it on cheap commodity hardware and network devices to make a private cloud, but the certified appliances will come with guaranteed support levels and maintenance, and include help with implementation and deployment, Brooks said.

    http://searchdatacenter.techtarget.c...estions-linger

  7. #7

    Oracle Cloud On-Premises



    The agility and innovation of Oracle Cloud while meeting data-residency requirements.


    John Soat

    The cloud is generally perceived as being “out there”—somewhere in the ether chugging away at its computing chores. But what if you could turn cloud computing inside out—bring its speed, flexibility, and ease of use right into your data center?

    Public cloud services are widespread and growing, according to research firm Gartner. But increasingly, organizations want to choose where they run their workloads—whether in public clouds or within their own data centers—in order to meet business, legislative, and regulatory requirements. For instance, some companies and government agencies have strict performance demands, requiring close to zero latency between their applications and the data they must access. Some must keep their application development and data processing behind corporate firewalls in order to guarantee custom security or abide by data governance regulations.

    To serve enterprises looking for the cloud’s agility, automation, extensibility, and portability on premises and under their control, Oracle is providing a new family of offerings, called Oracle Cloud at Customer, that place the same hardware, software, and operational services available in its public cloud directly into companies’ data centers, behind their firewalls. “Oracle Cloud can now run any place the customer wants it to run,” says Tushar Pandit, senior director of product management at Oracle.

    The new Oracle Cloud Machine makes available Oracle Cloud’s infrastructure as a service (IaaS), with compute, storage, and networking, along with its platform as a service (PaaS), including Oracle Java Cloud Service, Oracle Integration Cloud Service, Oracle Database Cloud Service, and others, accessed in a cloud-oriented subscription model that’s priced the same as public cloud services. “We provide customers exactly the same software, the same services, the same functionality, the same operational capabilities—and the same commercial way to buy them” as are available in Oracle’s public cloud, Pandit says.

    Easy access to flexible and extensible public cloud resources has made their use particularly appealing to applications developers. Many app builders find cloud architectures well-suited to constructing and testing corporate or commercial software.

    Overcoming Integration Problems

    Difficulties arise, however, when developers introduce apps written with infrastructure and tools in the public cloud into the on-premises corporate IT environment. Developers may have trouble running those apps on hardware different from what they were developed on, and have a hard time integrating them with core IT applications developed in-house.

    Some organizations have tried to re-create the advantages of public cloud internally by building their own on-premises private clouds using industry-standard components. But those efforts can also suffer integration problems. Making disparate parts from diverse vendors perform well together, and supporting those cobbled-together systems, can be costly and time consuming. When completed, these private clouds aren’t necessarily easy to connect with public clouds, limiting private clouds’ abilities to tap extra resources when needed.

    The same can be true of so-called converged infrastructures—hardware and software from separate vendors packaged together and offered commercially as single units. Converged infrastructures are often plagued by less-than-optimal integration among those disparate elements, and a lack of compatibility with public cloud options.

    Extension of the Cloud

    Oracle Cloud Machine is a tightly integrated service designed from the ground up for developing enterprise applications using the same Oracle IaaS and PaaS tools and services as are available in its public cloud, and running those apps either on-premises or in Oracle Cloud. It is, in fact, an extension of Oracle Cloud, residing completely within an organization’s data center.

    Oracle Cloud Machine gives enterprise IT architectures cloud-based benefits:

    • Offers enterprises the choice to use public cloud or cloud on premises.
    • Enables developers to rapidly build, test, and deploy new applications, leading to faster time to market.
    • Promotes application innovation, as the latest updates and investments made by Oracle in its public cloud are delivered automatically on premises.
    • Relieves support worries, because it is fully managed on the data center floor by Oracle.
    • Provides OpEx cost advantages and portability of IT spending across premises and public cloud through subscription-based pricing.


    By contrast, the IaaS and PaaS components of Oracle Cloud Machine are carefully architected to work together. By adjusting its administrative models, this new platform can be used to support the entire application lifecycle, from development to production, shortening turnaround time. Applications developed on it can run on premises or in Oracle Cloud, and Oracle Cloud Machine can tap additional resources in the public cloud quickly and easily.

    Oracle Cloud Machine is offered via the “as-a-service” model. Oracle is responsible for delivering, installing, and maintaining Oracle Cloud Machine, and customers subscribe to its services or access them under an elastic, metered pricing model. This shifts all maintenance and support to Oracle, while giving customers the financial benefit of reporting Oracle Cloud Machine as an OpEx rather than as a CapEx cost. And pricing is consistent with the same services available in Oracle Cloud.

    Initially, Oracle Cloud Machine will feature Oracle Java Cloud Service and Oracle Integration Cloud Service. Oracle Java Cloud Service provides robust tools for Oracle WebLogic Server, including advanced services that address the complete lifecycle of an application. Oracle Integration Cloud Service provides a platform for line-of-business users to easily integrate applications on premises, in the cloud, or both.

    Oracle and Non-Oracle Workloads Supported

    Those services will be followed by Oracle Database Cloud, which makes available the most up-to-date version of Oracle’s flagship database technology—including advanced cloud tools that simplify database provisioning—and Oracle Application Container Cloud, which will enable customers to run lightweight applications and “multilanguage” workloads on Oracle Cloud Machine. “We realize that everything is not Oracle,” Pandit says. Oracle Application Container Cloud will let developers use open-source frameworks such as Node.js, Ruby on Rails, and Tomcat to create and run applications on Oracle Cloud Machine. “This enables many of the non-Oracle workloads that customers have today to be managed in a single, integrated cloud environment,” he says.
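    To make the “multilanguage workload” idea concrete, here is a minimal Node.js service sketch of the kind such a container platform would host. This is our own hypothetical example, not Oracle sample code: it uses only the Node standard library, no Oracle-specific APIs, and the port handling is an assumption about typical PaaS conventions.

    ```javascript
    // A minimal Node.js HTTP service, illustrative of a lightweight
    // non-Oracle workload. No Oracle-specific APIs are used; all names
    // here are our own.
    const http = require('http');

    // PaaS platforms typically inject the listening port via an
    // environment variable; 8080 is an assumed fallback.
    const port = process.env.PORT || 8080;

    function greet(name) {
      return `Hello from the cloud, ${name}!`;
    }

    const server = http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end(greet('world'));
    });

    // In a real deployment the platform would start the service with:
    // server.listen(port);
    ```

    The point of such a platform is that an app like this deploys unchanged whether the target is Oracle’s public cloud or an Oracle Cloud Machine on premises.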

    The same is true for the “recipes” and best practices Oracle engineers are building—in the DevOps model—to run open-source workloads directly on Oracle’s public cloud infrastructure as a service. Those same tools and best practices “are going to be available on Oracle Cloud Machine running on premises too, given that it has the same IaaS software and APIs [application programming interfaces] as in Oracle Cloud,” Pandit says.

    It’s important to keep in mind that Oracle Cloud Machine is a self-contained service offering. While many organizations will benefit from its tight integration with Oracle’s public cloud, businesses won’t have to go outside the corporate firewall. “They don’t have to move to the public cloud if they don’t want to,” Pandit says. “Oracle gives them the ultimate choice on how they want to make use of Oracle Cloud.”

    Breaking Boundaries

    Oracle Cloud Machine represents a new way for organizations to benefit from cloud computing. It provides the same services available in Oracle Cloud—not similar, but exactly the same—in an on-premises, tightly integrated Oracle engineered system. As with its public cloud services, Oracle is responsible for administering, updating, and maintaining the system, while customers employ its cloud-oriented development tools using the cloud’s pay-as-you-go model.

    Oracle Cloud Machine “removes artificial boundaries of where a cloud should run,” Pandit says. “In fact, it allows users to make their data centers the edge of Oracle Cloud,” he adds. And by doing so it clears away the last limiting factor to every organization’s ability to exploit the cloud.

    https://www.oracle.com/cloud/bringin...-premises.html



    Complete End-To-End Operations

    Cloud Operations for Oracle Cloud Machine support is included with all Oracle Cloud Machine deployments, providing you with end-to-end management services delivered in your cloud and managed by Oracle via Oracle Advanced Secure Gateway. This service enables you to accelerate time to deployment, increase availability, and reduce business risk. You gain faster access to innovation and a better return on your investment.

    Services Provided by Oracle Cloud Operations

    Installation and Configuration: Comprehensive, standard system hardware installation, including site audit, installation and configuration, and validation of hardware, network, and operating system functionality

    Monitoring: Predictive, 24x7 proactive system monitoring; proactive notification of potential issues helps ensure uptime and increases service levels, freeing staff to focus on core business activities

    Cloud Administration: Manage and maintain the Cloud Machine IaaS resources and PaaS infrastructure

    Incident Management and Resolution: ITIL-based processes and technological expertise for system administration and incident resolution

    Tenant Management: Manage and provide tenant level resource allocation and settings

    Change Management: Maintains the integrity of the Cloud Machine environment in a proactive manner by governing all change requests and maintenance records

    Oracle Cloud Support: Management of product support Service Requests (SR) for hardware and software components of the Oracle Cloud Machine

    Backup and Restoration: Regularly scheduled backups of the Oracle Cloud Machine infrastructure

    Upgrades: Management of on-boarding of new Cloud Services and enhancements to existing services

    Patching: Periodic deployment of patches to proactively keep your business-critical infrastructure up to date

    http://www.oracle.com/us/solutions/c...ds-2949541.pdf
