Results 1 to 8 of 8
  1. #1
    WHT-BR Top Member (joined Dec 2010; 18,473 posts)

    [EN] VMware soars as new cloud powerhouse on deals with Amazon, Microsoft, IBM, Google



    VMware's revenue and stock price are surging as it pairs up with four leading public-cloud providers.

    Bob Evans
    Sep 13, 2017

    The great myth of cloud computing is that it would somehow magically make all the agony and cost of integration disappear, and that private clouds and public clouds would somehow interoperate easily and elegantly with each other and also with on-premises systems.

    While that myth remains a total hallucination, the possibility of seamless private/public cloud integration—of highly dependable real-world collaboration between traditional data centers and the cloud—has become much more real with the decision by private-cloud leader VMware to forge powerful partnerships with public-cloud leaders Amazon, Microsoft, IBM and Google.

    Here’s a quick overview of the deals that VMware—#10 on my Cloud Wars Top 10 list—has struck with those four cloud heavyweights. I’ve also included some thoughts on why each of those partnerships will streamline and accelerate corporate customers’ journeys to the cloud via the hybrid approach—some private, some public—that’s the goal of just about every business on the planet.

    • Amazon: VMware Cloud on AWS is a jointly architected solution that allows customers to run VMware’s market-leading compute, storage and network virtualization solutions directly on AWS. With hundreds of thousands of businesses around the world already running big chunks of their operations on VMware’s virtualized systems, this partnership allows them to leverage existing assets, skill sets and processes while also gaining the unique advantages of the cloud: lower operating costs, more flexibility, less infrastructure sprawl to manage. The partnership with AWS was announced about a year ago, and now the jointly developed underlying technology is available.
    • Microsoft: Later this year, VMware’s Desktop-as-a-Service will become available on the Microsoft Azure cloud, and is an outgrowth of VMware’s focus on tying end-user computing into the underlying cloud architecture. In addition, some elements of VMware’s broader Cloud Service will also eventually become available on Azure, including services for management, network security, and automation.
    • IBM: A big portion of the IBM Cloud is built on VMware technology (via IBM’s acquisition of SoftLayer), and this partnership extends the range of compatibility for customers across IBM Cloud and VMware’s offerings.
    • Google: A joint-development project across Google, VMware and Pivotal Software in the red-hot area for “container” services lets enterprises move workloads to the cloud while leveraging existing assets, which VMware says will help business customers accelerate innovation.


    In the meantime, VMware’s financial fortunes have been booming: for its quarter ended July 31, VMware posted revenue of $1.90 billion, up 12.2% for the year, leading CEO Pat Gelsinger to say in the earnings press release, “As we continue our multi-year journey from a compute virtualization company to offer a broad portfolio of products driving efficiency and digital transformation, customers are increasingly turning to VMware to help them run, manage, secure and connect their applications across all clouds and all devices.”



    Investors are, without question, buying into VMware’s strategy: a year ago, on Sept. 13, 2016, VMware’s stock price was $73.67. Today, on Sept. 13, 2017, it’s $109.60, up a whopping 49%.

    VMware COO Sanjay Poonen shared some thoughts about these developments with me via email, emphasizing that every company in every industry will be using both private and public clouds, and that VMware could provide huge benefit to those businesses by helping them orchestrate that essential interplay.

    And while Poonen made it clear that VMware’s bullish about each of its four deals with major public-cloud providers, the centerpiece is the deal with Amazon.

    “Customers across industries want the ability to seamlessly integrate their on-premise data-center environments with AWS, while still using their existing tools and skillsets within a common operating environment on familiar VMware software,” he said.

    “VMware Cloud on AWS delivers on this promise, with a seamlessly integrated hybrid cloud that extends on premise vSphere environments to a VMware Software-Defined Datacenter running on AWS elastic, bare-metal infrastructure.”

    Emphasizing that both Amazon and VMware were keen to provide a solution that would allow businesses to preserve their significant—and sometimes huge—investments in VMware-based applications and processes, Poonen said the partnership doesn’t force customers to choose between either the private-cloud path or the public-cloud path, but instead allows them to pursue both in line with the specific requirements of the business.

    “Ultimately,” he wrote, “it means that VMware customers can easily operate a consistent and seamless hybrid cloud without rewriting their applications or changing their operating model, while taking advantage of AWS’s global footprint and scale, as well as its services in storage, databases, analytics and more.”

    Calling the new VMware Cloud on AWS service “a data center in the cloud,” Poonen said VMware expects this AWS offering to be “the flagship offering in our Cloud portfolio.”

    After VMware’s initial cloud foray a few years ago proved to be a total flop and a near-disaster for the company, this new customer-driven strategy has helped VMware carve out a unique position in the intensely crowded and competitive cloud marketplace.

    “AWS, Azure, IBM and Google are the top four public-cloud players by market-share, and each of these key cloud players is now working with VMware in material but unique ways that are completely customer-centric,” Poonen said.

    And while VMware’s public comments about the deals echoed Poonen’s excitement at being deeply aligned with those four public-cloud leaders, he also made it clear that VMware’s not just some shy violet that can’t believe it got asked to the prom.

    “Every industry is becoming increasingly technology-driven at its core, and our belief is software is changing the world,” Poonen wrote in response to my question about the uniqueness of VMware’s opportunity.

    “And we believe we have the most innovative technology to revolutionize the data-center, modernize the digital workspace and build bridges into the world of hybrid cloud. No other company at our level of scale has the immense base of highly satisfied customers and market leadership in the private cloud, which now allows us a unique opportunity to be relevant and strategic to all the public-cloud vendors, starting with AWS.

    “No other vendor has this level of strategic leverage in the data-center and hybrid cloud.”

    In the Cloud Wars, that’s a very big claim to make, but it looks like, at least for now, VMware’s claim is completely credible.

    Kudos to VMware for charting a bold new path and for making business benefit to customers a top priority.

    https://www.forbes.com/sites/bobevan...nd-google/amp/

  2. #2
    WHT-BR Top Member (joined Dec 2010; 18,473 posts)

    Performance of Enterprise Web Applications in Docker Containers on VMware vSphere 6.5

    Harold Rosenberg
    September 19, 2017

    Docker containers are growing in popularity as a deployment platform for enterprise applications. However, the performance impact of running these applications in Docker containers on virtualized infrastructures is not well understood. A new white paper is available that uses the open source Weathervane performance benchmark to investigate the performance of an enterprise web application running in Docker containers in VMware vSphere 6.5 virtual machines (VMs). The results show that an enterprise web application can run in Docker on a VMware vSphere environment not only with no degradation of performance, but with even better performance than a Docker installation on bare metal.

    Weathervane is used to evaluate the performance of virtualized and cloud infrastructures by deploying an enterprise web application on the infrastructure and then driving a load on the application. The tests discussed in the paper use three different deployment configurations for the Weathervane application.

    • VMs without Docker containers: The application runs directly in the guest operating systems in vSphere 6.5 VMs, with no Docker containers.
    • VMs with Docker containers: The application runs in Docker containers, which run in guest operating systems in vSphere 6.5 VMs.
    • Bare-metal with Docker containers: The application runs in Docker containers, but the containers run in an operating system that is installed on a bare-metal server.


    The figure below shows the peak results achieved when running the Weathervane benchmark in the three configurations. The results using Docker containers include the impact of tuning options that are discussed in detail in the paper.



    Some important things to note in these results:

    • The performance of the application using Docker containers in vSphere 6.5 VMs is almost identical to that of the same application running in VMs without Docker.
    • The application running in Docker containers in VMs outperforms the same application running in Docker containers on bare metal by about 5%. Most of this advantage can be attributed to the sophisticated algorithms employed by the vSphere 6.5 scheduler.


    The results discussed in the paper, along with the results of previous investigations of Docker performance on vSphere, show that vSphere 6.5 is an ideal platform for deploying applications in Docker containers.

    https://blogs.vmware.com/performance...vane-perf.html

  3. #3
    WHT-BR Top Member (joined Dec 2010; 18,473 posts)

    Cisco Intersight rolls up data center management software

    Cisco is trying to move the center of gravity from a vCenter world to an Intersight world

    Robert Gates
    22 Sep 2017

    Cisco is folding its data center management software into a cloud-based product called Cisco Intersight that will harness the collective intelligence of its customers.

    Cisco is throwing a cloud party for all of its data center management tools, and everyone's invited.

    Cisco has pulled together its UCS Manager, UCS Director and other software into a new cloud-based management and automation platform. Cisco Intersight applies lessons from Meraki, Cisco's cloud-based Wi-Fi and routing platform, to systems management: it analyzes telemetry data from all users to come up with policy-driven automation -- and eventually to allow machines to manage machines.

    Cisco Intersight will initially target UCS servers and HyperFlex hyper-converged infrastructure, and eventually extend to converged infrastructure, with connections to Pure Storage, IBM and other vendors.

    "By moving [data center management software] to the cloud, you can now do interesting things," such as crowdsource users' behavior and apply the benefits of artificial intelligence, said Ashish Nadkarni, an analyst at IDC. Plus, in a multi-data center environment, IT pros will not have to worry about disaster recovery for the infrastructure management layer since it is in the cloud with Intersight, he said.

    Cloud-based Intersight -- called Project Starship while in development -- will include features from Cisco Integrated Management Controller (IMC) and UCS Manager at first, and will later add UCS Director tools for orchestration. To start, Intersight also will include a recommendation engine and within six months add a workload optimization tool to predict the effect of proposed system changes. It also will entail continuous updates, versus major releases every six months.

    The technical preview started on Aug. 1, and 8,000 licenses are now in use across a few dozen participants including Cisco's own IT department and several partners, said Ken Spear, senior product marketing manager at Cisco.

    Tom Doll, business development manager at IT engineering and professional service firm CST Corp. in Houston, Texas, got a walkthrough of Intersight, although his company isn't using the data center management software yet. From what he saw, though, he likes its cloud-based operation and the scope of telemetry data it collects to become smarter, he said.

    Cisco Intersight likely will attract organizations with an aggressive cloud strategy, but most organizations are hesitant to abandon their on-premises management, Nadkarni said. The management layer often integrates with many in-house systems such as VMware vCenter and several APIs, and users worry that those connections will break when moved to the cloud. Nevertheless, many enterprises eventually will retire those on-premises tools and move them to the cloud, especially if they can shift to an Opex vs. Capex model.

    "Cisco is almost trying to move the center of gravity from a vCenter world to an Intersight world," Nadkarni said.

    Other products that also collect product usage data include OneView from Hewlett Packard Enterprise (HPE) and VMware Skyline Collector, while CloudPhysics offers predictive analytics capabilities. But the phased release of Intersight and the 2018 release of some of its most useful features mean that there is plenty of time for others to keep pace.

    Intersight still needs to give users better granularity, expanded functionality and a way to manage cloud and non-cloud resources together, Nadkarni said. Other areas for improvement for the data center management software include additional hardware support, including for non-Cisco products, integration with other tools such as VMware vSphere and vCenter, and multi-cloud management. Intersight connections will use an OData RESTful API and Chinook open standards.
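
    As a rough illustration of what an OData-style REST query looks like, the sketch below filters and trims a resource collection. It is purely illustrative: the base URL, resource path, field names, response shape and credential scheme are hypothetical, not taken from Cisco's documentation; only the $filter/$select/$top query options come from the OData specification itself.

    # Illustrative OData-style query (hypothetical endpoint, resource and fields,
    # not Cisco's documented Intersight API). $filter, $select and $top are
    # standard OData query options.
    import requests

    BASE_URL = "https://intersight.example.com/api/v1"   # hypothetical base URL

    params = {
        "$filter": "Model eq 'UCSB-B200-M5'",   # restrict to one server model
        "$select": "Name,Serial",               # trim the response to a few fields
        "$top": 25,                             # page size
    }

    resp = requests.get(
        f"{BASE_URL}/compute/PhysicalSummaries",           # hypothetical resource path
        params=params,
        headers={"Authorization": "Bearer <api-token>"},   # placeholder credential scheme
        timeout=30,
    )
    resp.raise_for_status()
    # The response shape below is assumed for illustration.
    for server in resp.json().get("Results", []):
        print(server.get("Name"), server.get("Serial"))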

    For now, though, the functionality is similar to the current UCS Manager, and many users that rely on the basic functionality of UCS Manager could benefit from the intuitive functions of Intersight, said Chris Gardner, senior analyst at Forrester Research, Inc. Organizations that use UCS Manager and UCS Director for advanced, customized workflows may find the transition more of a challenge, he said.

    Cisco Intersight will roll out in three stages. Base Edition, which is free, will include global health monitoring, a customizable dashboard, HyperFlex Installer, UCS Manager, IMC and HyperFlex Connect element managers. An Essentials Edition adds policy-based configuration with service profiles, firmware management with scheduled updates, hardware compatibility checks and upgrade recommendations. Essentials' list price is $12.48 per physical server, per month, and it will be generally available in November, Spear said. After that, a version called Standard, which will include UCS Director and an "adaptive assist" tool, will be available in the fourth quarter of 2018, and a future version called Advantage with advanced analytics is more than a year away.

    Cisco says it won't force users to move to Intersight, and they can keep current element managers for UCS, IMC and HyperFlex. Cisco's existing tools, including IMC, UCS Director and UCS Manager, will have a "peaceful coexistence" with Intersight at least until early 2019, Spear said -- although he didn't specify what would happen after that.

    http://searchdatacenter.techtarget.c...ement-software

  4. #4
    WHT-BR Top Member (joined Dec 2010; 18,473 posts)

    IBM Strategic Imperatives mark a transition into software and services

    Ed Scannell

    IBM's transition from a dominant server hardware supplier to one focused on new-age software and services has been what you might expect from a 100-year-old company: glacially slow and sometimes painful.

    But lately the 100-year-old seems to have a bit more spring in its step.

    Several years ago, IBM launched its Strategic Imperatives initiative, a mission to drive revenues in emerging markets, including cloud, analytics, mobile, social and security services, to hedge against rapidly falling sales of its legacy server hardware.

    Over much of that time, IBM Strategic Imperatives revenues have grown by double digits, and they now represent over 43% of IBM's overall sales for the past 12 months. The company's cloud revenues reached $15.1 billion over the past 12 months, and as-a-service revenues from the Strategic Imperatives rose to $8.8 billion.

    IBM's transition to a more meaningful cloud strategy has been circuitous. The company had a somewhat aimless focus on cloud until it purchased Dallas-based SoftLayer Technologies, Inc. in 2013 for $2.5 billion. SoftLayer, then the largest privately held infrastructure provider, would connect its public cloud services with IBM's SmartCloud offerings so users could more quickly and easily adopt cloud computing.

    While SoftLayer provided IBM with a platform to deliver its SaaS-based products both in the U.S. and overseas, that platform was actually better suited for hosting than delivering cloud services, said Lydia Leong, an analyst with Gartner.

    "SoftLayer has been both successful and unsuccessful for IBM," she said. "SoftLayer was not a significant cloud provider, but more a hosting provider from the old-school competing with vendors like Internap and Peer 1."

    IBM has done little to enhance SoftLayer over the past three-plus years, with the exception of some minor improvements to its ability to handle storage back in early 2015, according to Leong. It has hardly kept pace with the hundreds of upgrades that competitors, including Microsoft and Amazon Web Services (AWS), have made to their cloud platforms over the past three years, Leong said.

    Over the past couple of years, however, IBM has pieced together a new cloud architecture made up of state-of-the-art hardware and software technologies. The goal of the project, codenamed Genesis, is to accelerate delivery of modern web services and products with what the company calls next-generation infrastructure (NGI).

    "The NGI product is intended to bring IBM into the modern world of infrastructure," Leong said. "It has a hardware design that looks similar to the designs used in AWS or Azure that can then deliver services that look more modern."

    The end result of the NGI project will be a fabric computer, which will incorporate technologies such as 3D Torus, a new method of interconnecting multiple servers, and Single Large Expensive Disk. Together, those will reduce latency to less than 20 milliseconds, according to sources briefed by IBM. The company hopes this speed of web services will provide the edge it needs to compete against Amazon.

    This collection of technologies will not be sold in commercial servers for IT shops; they will only reside in servers inside IBM's 56 data centers, specifically for IBM cloud customers.

    Winnowing down the hardware portfolio

    While the IBM Strategic Imperatives initiative has held up its end of the bargain, IBM's legacy hardware has not. Sales of the company's core server hardware, especially its Power series proprietary server, have taken a severe beating. For every step forward with Strategic Imperative products, server hardware takes two steps back.

    IBM's transition away from server hardware dependence started with the sale of its Intel-based System x server line to Lenovo in early 2014 for $2.3 billion. That business was profitable at the time of the sale but didn't fit with IBM's longer-term focus on higher-margin products, such as its proprietary Power servers and z Systems mainframes.

    The company dumped more unprofitable hardware investments when it sold off its chip manufacturing facilities to GlobalFoundries, also in 2014. IBM was so eager to get the chip plant off its balance sheet that it actually paid GlobalFoundries $1.5 billion to take it off its hands.

    The decision to sell off the System x line proved to be ill-timed, however. Soon after, sales of the Power series began to rapidly decline, losing to a host of competitors selling much less expensive, and steadily more powerful, Intel-based servers.

    While the Power series has taken a beating, sales of the beleaguered servers appear to have bottomed out over the past several fiscal quarters. Some analysts say that growing acceptance of the IBM Strategic Imperatives software will correspondingly boost the fortunes of the Power series.

    Another development that could boost IBM Power sales would be the success of OEMs licensed to resell the product.

    "Products from the OpenPower Foundation are now just starting to come to market," said Charles King, president and principal analyst with Pund-IT, Inc. "If we see the level of adoption [OpenPower] partners believe they will get, we could see incremental revenues generated as a result."

    While IBM's mainframe line has slowed its decline over the past three or four years, revenues are still only about half what they used to be, at roughly $2 billion a year versus $3 billion to $4 billion, according to Bernstein Research. But the release of its z14 Systems mainframe this month should boost mainframe sales in 2018.

    Ironically, the very hardware on which IBM is trying to lessen its dependence will further spur sales of products that are part of the Strategic Imperatives.

    "IBM [will frame] its Strategic Imperatives push with the delivery of the z14 and then later this year the Power9-based servers," said Geoffrey Woollacott, principal analyst with Technology Business Research, Inc. in Hampton, N.H. "The success of the Strategic Imperatives could make those hardware platforms increasingly more relevant to businesses."

    So, even before the Strategic Imperatives initiative completes its mission, IBM's next transition -- a refocusing on its hardware technologies -- is on the horizon.

    "Software will always need to run on something," King said. "I don't imagine IBM will ever get out of the hardware business entirely."

    http://searchdatacenter.techtarget.c...e-and-services

  5. #5
    WHT-BR Top Member (joined Dec 2010; 18,473 posts)

    IBM Migration Tool Ships Data to the Cloud by Air Mail


    IBM packs 120TB into a carry-on bag, for snow-balling cloud uploads

    Jessica Lyons Hardcastle
    September 18, 2017

    IBM rolled out a data migration appliance that allows enterprises to move data from their on-premises servers to IBM Cloud via overnight mail.

    It’s a suitcase-sized device on wheels that can store 120 terabytes of data.

    Mailing data to the cloud “seems retro,” said Michael Fork, distinguished engineer and director, cloud infrastructure, IBM Watson and Cloud Platform. “But in a lot of cases it’s the only option to move large amounts of data to the cloud.”

    Transferring large data sets can take months, depending on a company’s access to high-speed bandwidth. Enterprises face other data transport challenges including high network costs, limited Internet connectivity, and security concerns, Fork added.

    “Our goal is to build the simplest, most affordable way to move large amounts of data to the cloud,” he said.

    The device, called Mass Data Migration, includes AES-256 encryption to ensure data is protected during transport and ingestion. It also uses RAID-6, which protects against two disk failures. “While these are rugged, tamper-proof, and shock-proof cases, we wanted to make sure that even in the event of a drive failure, we can still read your data and complete a successful migration,” Fork said.
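
    For context on the RAID-6 point: double parity gives up two drives' worth of raw capacity in exchange for surviving any two simultaneous drive failures. A small sketch of the arithmetic follows; the drive count and size in it are hypothetical examples, not IBM's published internals.

    # RAID-6 capacity arithmetic: two parity blocks per stripe means two drives'
    # worth of raw capacity is given up, and any two drives may fail without data
    # loss. The drive count/size below are hypothetical, not IBM's actual layout.
    def raid6_usable_tb(num_drives: int, drive_tb: float) -> float:
        if num_drives < 4:
            raise ValueError("RAID-6 needs at least 4 drives")
        return (num_drives - 2) * drive_tb

    # e.g. a hypothetical shelf of 14 x 10 TB drives yields 120 TB usable:
    print(raid6_usable_tb(14, 10))   # -> 120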

    The service is available now in the U.S. IBM expects to offer it in Europe before the end of the year, Fork said.

    Some customers have already used the migration tool in beta. “With the recent VMware announcement, we have been migrating a lot of VMware customers to the IBM Cloud,” Fork said. IBM last week announced a new hybrid cloud service with Vodafone that allows enterprises to move their VMware workloads between Vodafone-hosted private clouds and IBM’s cloud.

    Another use case comes from companies that need to store, manage, and access large video, audio, and image files.

    The device and service costs $395, which includes UPS Next Day Air shipping.

    https://www.sdxcentral.com/articles/...-mail/2017/09/
    Last edited by 5ms; 24-09-2017 at 16:43.

  6. #6
    WHT-BR Top Member (joined Dec 2010; 18,473 posts)

    off-topic: "The device and service costs $395". Não exatamente

    Mass Data Migration: How it works



    Simple process

    IBM sends you a pre-configured device enabling you to simply connect and ingest data. When you are finished, ship the device back to IBM where we offload your data into IBM Cloud Object Storage. Once your offload is complete, enjoy immediate access to your data in the cloud after IBM securely erases all data from the transport device.

    End-to-end protection

    Mass Data Migration devices are designed to maximize security from the inside-out. Devices are housed in rugged, tamper-evident, waterproof, shockproof cases to ensure secure protection during device-handling and transport. The technology offers industry-standard 256-bit encryption, as well as in-line compression, to ensure an efficient and secure data migration.

    Secure erasure

    IBM uses a four-pass DOD-level data wipe to ensure complete and prompt erasure of all customer data from Mass Data Migration devices.

    https://www.ibm.com/cloud-computing/...data-migration

  7. #7
    WHT-BR Top Member (joined Dec 2010; 18,473 posts)

    off-topic: Cloud Data Migration – Data Transfer Using Physical Shipping & Appliances


    AWS Snowmobile


    Chris Evans
    21 July 2017
    Updated 19 September 2017 with details of IBM’s new shipping offering

    One of the most interesting challenges with using the public cloud is how to get data into cloud storage platforms so it can be used with services like analytics or e-discovery. One scenario is to use “analogue” shipping services to send physical devices like hard drives and appliances to the cloud provider.

    Shipping a Drive

    Initially we saw AWS and Azure offer the ability to ship individual drives containing data. These services still exist today. AWS Import/Export allows customers to ship drives that meet a standard set of requirements – basically an eSATA or USB connection with a file system that can be read by Red Hat Linux. Customers use AWS-provided tools to ship their data, which can be EBS or S3-based. Azure also has an Import/Export service. This differs slightly in that only internal drives are accepted. Microsoft uses external connectors and docking stations to access the drive contents, which get put into BLOB storage.

    Naturally there is quite a lot of work required to prepare a drive for shipment. The contents need to be encrypted and the device loaded with a vendor-supplied tool to ensure it can be read at the receiving end. There are also lots of steps to take in order to ensure the drive is identified as belonging to the right customer account. Charging is pretty simple, usually based on a fixed cost per drive unit plus additional charges to ship the drive back to the customer.

    Compare the Network

    With current drive capacities, it’s possible to ship terabytes of unstructured data on physical media. Compare this to a 1Gb/s network connection to the cloud provider, which could shift around 100MB/s (without compression or other data reduction technologies). A standard 10TB drive would take around 30 hours to upload. If you have 100TB of data, or perhaps only a 100Mb/s network connection, then you’re looking at eleven or twelve days to ship the data in. For IT shops that don’t have or can’t afford that level of networking, physical shipment looks like a good deal.
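
    The arithmetic behind those estimates is straightforward; a quick sketch, assuming a flat ~100MB/s of effective throughput and no compression, reproduces both the roughly 30-hour figure for a 10TB drive and the eleven-to-twelve-day figure for 100TB:

    # Rough upload-time arithmetic for the figures quoted above, assuming
    # ~100 MB/s of sustained effective throughput (a 1 Gb/s link).
    TB = 10**12              # bytes per terabyte (decimal, as drive vendors count)
    MB_PER_S = 100 * 10**6   # effective throughput in bytes per second

    def upload_time(capacity_bytes, throughput=MB_PER_S):
        seconds = capacity_bytes / throughput
        return seconds / 3600, seconds / 86400   # (hours, days)

    hours, _ = upload_time(10 * TB)
    print(f"10 TB drive: ~{hours:.0f} hours")     # ~28 hours, i.e. "around 30 hours"

    _, days = upload_time(100 * TB)
    print(f"100 TB data set: ~{days:.1f} days")   # ~11.6 days, "eleven or twelve days"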

    Appliances

    What happens if you have more than tens of terabytes (or are even petabyte scale)?

    Both Google Cloud Platform and Amazon Web Services now have appliances they will ship to you. These are basically hardened servers that are capable of taking greater storage capacities than a single hard drive.

    Google’s Transfer Appliance, launched this week, comes in 100TB and 480TB (raw) capacities. The appliance is a rack-mount server stuffed with drives that can hold up to a petabyte of capacity (assuming 2:1 compression/dedupe). Google claims the appliance can store a petabyte of data, transferred over a maximum of 25 days (before additional charging comes in), with a subsequent upload of up to 25 days at the receiving end. A quick calculation shows that this implies 2x 10Gb/s networking on the appliance, which the customer needs to be able to support on their infrastructure (more on that in a moment).

    AWS now has three solutions: Snowball, Snowball Edge and Snowmobile. Snowball is a suitcase-sized appliance that comes in 50TB and 80TB (raw) capacities (42/72TB usable). Network support is 10GbE, so potentially a device can be filled within 24 hours. Externally, Snowball has shipping details displayed with e-ink. It’s a self-contained device, rather than a rack-mount server, so it could be placed anywhere within the data centre (subject to power/networking connections). Snowball Edge is a standard Snowball with additional compute capacity. Customers can run Lambda code on their data and so can do pre-processing before shipping to AWS.

    Snowmobile

    If you have petabyte storage requirements, then AWS offers the Snowmobile, which is literally a truck-full of storage capacity. Snowmobile is recommended for customers with 10PB of capacity or more and can hold a maximum of 100PB. Rather than simply plug into the network, Snowmobile comes with a rack of equipment for managing data transfer, with up to 1Tb/s of bandwidth provided through multiple 40Gb/s network connections. This is a serious piece of hardware that needs 350KW of local power to support, so not for the faint-hearted.

    Migration Challenges

    Shipping data around introduces some immediate challenges. Excluding the most obvious one around security (which is solved by encryption), the two main problems are data concurrency and physical transfer. By concurrency we mean the ability to keep track of updates to the data being uploaded to the cloud provider. We would expect effectively 100% of the content transferred through physical media to be unstructured files and objects, so much of the content may not change. However, with load/unload and shipping times that may run into weeks or months – do the calculation on how long it would take to fill a Snowmobile – data concurrency becomes an issue.

    Choices have to be made about how data is processed while in the transfer phase. It may be a case of saying that analysis remains onsite until the transfer is complete, but that can be complicated if data changes rapidly or is being added to every day. Looking at AWS uploads, there is a big issue here that could cause customers problems. In the fine print on the limits of using Snowball, we see the following:

    All objects transferred to the Snowball have their metadata changed. The only metadata that remains the same is filename and filesize. All other metadata is set as in the following example:
    -rw-rw-r-- 1 root root [filesize] Dec 31 1969 [path/filename]

    Ouch! All my metadata goes and I can’t track file/object status by date/time or ownership. This may represent a big problem for customers trying to keep their on/off premises copies in sync.
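
    One way to soften this, which is not part of the AWS tooling, is to record a metadata manifest before loading the appliance, so that timestamps and ownership can be reconciled (or re-applied) once the objects arrive in the cloud with their metadata reset. A minimal sketch:

    # Minimal sketch (not part of the Snowball tooling): record file metadata in a
    # manifest before the transfer, so mtime/ownership can be reconciled after the
    # objects land in the cloud with their metadata reset.
    import csv
    import os

    def write_manifest(root_dir, manifest_path="manifest.csv"):
        with open(manifest_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["path", "size_bytes", "mtime_epoch", "uid", "gid"])
            for dirpath, _, filenames in os.walk(root_dir):
                for name in filenames:
                    full = os.path.join(dirpath, name)
                    st = os.stat(full)
                    writer.writerow([
                        os.path.relpath(full, root_dir),  # key to match the object later
                        st.st_size,
                        int(st.st_mtime),
                        st.st_uid,
                        st.st_gid,
                    ])

    # Example: write_manifest("/data/to_ship") before loading the appliance.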

    The second challenge is actually getting data onto the transfer device. Vendors offer tools for data transfer, but these need to be run on relatively high-end machines to be effective. For example, the Snowball transfer software has a recommended minimum of 16GB of RAM and a 16-core processor, with 7GB of RAM required for each data transfer stream. The high amount of RAM required is due to the in-flight encryption process. Today servers are cheap, but some planning is needed to optimise getting data onto the transfer devices as quickly as possible, including having enough local bandwidth to transfer data without impacting local services.

    One final thought: when calculating the difference between network and offline shipping, remember that data movement occurs twice (on and off the transfer device), plus shipping time, plus preparation time. So the breakpoint where offline shipping becomes more practical could actually be further away than customers think.
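
    To make that concrete, here is a rough model of the end-to-end timelines; every rate and turnaround figure in it is an illustrative assumption, not a vendor number:

    # Back-of-the-envelope comparison of direct network upload vs. offline shipping.
    # All rates and turnaround times here are illustrative assumptions.
    TB = 10**12

    def network_days(data_tb, net_mb_per_s=100):
        """Days to push the data set straight up a WAN link."""
        return (data_tb * TB) / (net_mb_per_s * 10**6) / 86400

    def shipping_days(data_tb, local_mb_per_s=500, prep_days=2, transit_days=2):
        """Days for the offline route: load the appliance, ship it, unload at the
        provider (so the data moves twice), plus preparation and transit."""
        copy_days = (data_tb * TB) / (local_mb_per_s * 10**6) / 86400
        return prep_days + copy_days + transit_days + copy_days

    for size_tb in (10, 100, 500):
        print(f"{size_tb:4d} TB  network: {network_days(size_tb):5.1f} d   "
              f"shipping: {shipping_days(size_tb):5.1f} d")
    # With these assumptions, 10 TB is still quicker over the wire; the offline
    # route only wins once the data set grows well beyond that.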

    IBM

    On 19th September 2017 IBM announced the availability of IBM Cloud Mass Data Migration, a 120TB appliance in a suitcase that can be used to ship data to the IBM Cloud. The service uses AES-256 encryption and RAID-6 protection, and includes overnight courier shipping (currently only in the US), all for $395. Is this reasonable? It’s probably not too expensive compared to networking, but it depends on your urgency and network bandwidth.

    From IBM's page:

    Migrating your data has never been easier

    Move data fast

    Using a single Mass Data Migration device, you can migrate up to 120 TB of usable data (at RAID-6) in just days, as opposed to weeks or months using traditional data transfer methods.

    Flexible and Scalable

    Whether you need to migrate a few terabytes or many petabytes of data, you have the flexibility to request one or multiple devices to accommodate your workload.

    Affordable

    Moving large data sets can be expensive and time-consuming. Each Mass Data Migration device is offered at a low, flat rate including roundtrip shipping and 10 days of use at your site.



    • Device Fee: USD 295
    • Shipping: USD 100 (UPS Next-Day Air, flat-rate roundtrip)
    • Use: No charge for the first 10 days; USD 30 per day extension charge
    • Data Import: No charge


    https://www.ibm.com/cloud-computing/...data-migration



    https://blog.architecting.it/cloud-d...ng-appliances/
    Last edited by 5ms; 24-09-2017 at 17:44.

  8. #8
    WHT-BR Top Member (joined Dec 2010; 18,473 posts)

    Announcing the Azure Data Box preview



    Dean Paron
    September 25, 2017

    Every organization we work with is actively looking for ways to use the power of the cloud to turn data into insights and improve IT efficiency. What we’ve heard from customers is that moving data to the cloud over the network isn’t always enough – even fast networks can take months to move data that is measured in terabytes and petabytes. Customers are asking for another option: a secure, human-portable, and easy-to-get Microsoft appliance that enables the offline transfer of large data sets to the Azure cloud.

    Which is why today, we’re thrilled to announce the preview program for the Azure Data Box. The Azure Data Box is a 45-pound, ruggedized, tamper-resistant, and human-manageable appliance that will help organizations overcome the data transfer barriers that can block productivity and slow innovation. We built the Data Box from the ground up to meet the following needs for our customers:

    • Secure – Tamper-resistant appliance which supports 256-bit AES encryption on your data
    • Tough – Built using the toughest materials to handle the inevitable bumps and bruises during transport, without requiring external packaging
    • Easy to use – Plugs right into your network, can store approximately 100 TB, and uses NAS protocols (SMB/CIFS); see the copy sketch after this list
    • Simple – Rent it, fill it up and return it – all tracked in the Azure Portal
    • Partner Supported – Integrated with a global array of industry-leading Azure partners
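
    Because the Data Box presents standard NAS shares, loading it amounts to an ordinary file copy over SMB. A minimal sketch, assuming the appliance's share has already been mounted at a hypothetical local path:

    # Minimal sketch of loading the Data Box over SMB: once the appliance's share
    # is mounted (the mount point below is a hypothetical example), filling it is
    # just an ordinary recursive file copy.
    import shutil
    from pathlib import Path

    SOURCE = Path("/data/backups")                  # local data to migrate (example path)
    DATABOX_MOUNT = Path("/mnt/databox/share")      # hypothetical mount point for the share

    for item in SOURCE.iterdir():
        target = DATABOX_MOUNT / item.name
        if item.is_dir():
            shutil.copytree(item, target, dirs_exist_ok=True)  # copy whole directory trees
        else:
            shutil.copy2(item, target)  # copy2 preserves timestamps where possible

    print("Copy complete; the box can now be returned for ingestion into Azure.")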


    Azure Data Box is already in action!

    For the last several months, we’ve been working directly with customers of all industries and sizes to get their data imported into Azure. One of those customers, Oceaneering International, was an adopter of an early prototype of the Azure Data Box. Learn how they are using it to bring data from the depths of the ocean into Azure in a fraction of the original time. Video: Case Study: Azure Data Box | Oceaneering Intl

    In our evaluation of moving workloads to the cloud, Xerox used Azure Data Box to quickly and efficiently load terabytes of database backups from on premises servers to Azure. The integration into our data center was easy, and within a few minutes we were able to copy data to the Data Box. The entire process, from delivery to return shipping, was simple and saved us considerable time and effort! – Dennis Skrtic, Systems Consultant Principal at Xerox Corporation

    Partner friendly

    We also know that our customers store their data in conjunction with a myriad of applications and solutions. So, we’ve also been working closely with many of our partners to make sure that the Azure Data Box integrates directly with their products to seamlessly leverage offline transport to the cloud.

    [Microsoft is working with partners like Commvault, Veritas, NetApp, Avid, Rubrik and CloudLanes, who will all integrate their services with the Data Box]

    Randy DeMeno, Commvault Chief Technologist - Microsoft Products & Partnership says “As with other Azure integration points, the ‘Data Box’ team has made it simple for Commvault to work with the ‘Data Box’ solution such that we can jointly use Commvault Data Management software to seed large amounts of data to Azure via the easy-to-use Azure ‘Data Box’ hardware. This enables not just heterogeneous backup and data management, but the ability to index this data for archiving, E-Discovery/Search, AI and Analytics using Commvault and Microsoft software.”

    “We are excited to further extend our Veeam capabilities across Microsoft Azure with support for Data Box,” said Paul Mattes, Vice President of Global Cloud Group at Veeam. “This will enable our customers to accelerate data protection in cloud environments by powering efficient transfers of large Veeam data sets to Azure.”

    Sign up today for the preview

    The preview is currently available only in the US, but we will continue to expand to more markets in coming months. We expect appliance supplies will be limited due to the already high demand, but we’re continually expanding and adding more systems to the fleet. If you are interested, we want to hear from you.

    https://azure.microsoft.com/en-us/bl...l-be-unlocked/
