Page 1 of 2 · Results 1 to 10 of 12
  1. #1 · WHT-BR Top Member · Join Date: Dec 2010 · Posts: 15,030

    Amazon and Google: Microsoft was right ...

    A few more years and they will find that IBM, Oracle and Rackspace were right too

    Kevin McLaughlin
    Sep. 21, 2016

    Amazon Web Services and Google are developing storage software for companies to use in their own data centers, a reversal of the cloud providers’ usual efforts to get companies to put all their data in the clouds run by AWS and Google, according to people who have been briefed on the products.

    The targets are companies like banks that aren’t big users of public clouds and instead rely more on their own data centers, known as “private clouds.”

    ...

    https://www.theinformation.com/aws-g...-center-battle


    Matt Weinberger
    Sep. 21, 2016

    ...

    This is the exact opposite of the public cloud model that made Amazon and Google's businesses. It acknowledges that not all customers want to move everything into the cloud. Some customers, because of regulatory and data governance concerns, simply can't allow certain information and processes to live outside their own data centers.

    For Amazon, this is kind of a big deal. As recently as 2015, Amazon CTO Werner Vogels said that the company saw the so-called "hybrid cloud" — the industry term for the model where some customer data resides in the data center, and some on big "public cloud" platforms like Amazon — as a mere stopgap trend as companies increasingly outsource their entire IT infrastructure to AWS.

    And while Amazon Web Services offers some hybrid cloud-enabling products, like the AWS Storage Gateway, they've historically mostly focused on making it easier to move data into the Amazon cloud — rather than software that helps you manage it yourself.

    Meanwhile, Microsoft is working on accelerating its own hybrid cloud strategy with the 2017 launch of Azure Stack, which lets customers install a virtual carbon copy of the Microsoft Azure cloud on certain pre-certified servers, further simplifying the integration between the two.

    If Amazon itself is tasking engineers with building out on-premises software, it means that it's starting to take that market seriously. It's no surprise, with Microsoft rapidly building out both its technology and its sales strategy to try to close its revenue gap with Amazon Web Services.

    http://www.businessinsider.com/amazo...oftware-2016-9

  2. #2 · WHT-BR Top Member · Join Date: Dec 2010 · Posts: 15,030

    Microsoft launches two Azure regions in Germany

    Alice MacGregor
    21 Sep 2016

    Microsoft has announced the general availability of its two new Azure regions in Germany, Germany Northeast (Magdeburg) and Germany Central (Frankfurt am Main).


    While the data centres will immediately deliver computing, networking and storage services, Microsoft has scheduled Office 365 applications to go online for customers from the first quarter of 2017, and Microsoft Dynamics CRM suite in the first half of 2017.

    The tech giant first spoke of its plans to launch the regions in November last year, and began a preview period in March. According to today’s release, a trustee partnership with Deutsche Telekom-owned T-Systems International ensures that the facilities’ cloud offerings comply with strict German data privacy and sovereignty regulations.

    Microsoft noted that new German cloud customers include driveline and chassis manufacturer ZF, which is interested in the Azure IoT Suite to allow for improved connection, control, and management of its transport technologies.

    Data processing firm TELEPORT, digital workplace solutions provider Haufe Group, and the Fraunhofer Institute for Industrial Mathematics, are also among the first German organisations to cooperate with Microsoft’s new regional cloud offerings.

    Microsoft’s move into Germany follows rival Amazon Web Services (AWS), which opened a data centre region in Frankfurt in 2014. Cloud leaders Microsoft, AWS and Google are all pushing to expand their geographic reach to reduce latency issues and meet country-specific data handling laws.

    Azure recently expanded to the UK, with two British regions going live earlier this month. Plans for Korean regions are underway too, as well as two Azure Government locations in the United States specifically for the Department of Defense – catering to the data requirements of the Army, Navy, Air Force, the National Security Agency (NSA) and the Defense Intelligence Agency (DIA), among other federal bodies.

    https://azure.microsoft.com/en-us/bl...ud-for-europe/

  3. #3 · WHT-BR Top Member · Join Date: Dec 2010 · Posts: 15,030

    PR: Microsoft Azure Germany now available via first-of-its-kind cloud for Europe

    Today, Microsoft Azure is generally available from the new Microsoft Cloud Germany, a first-of-its-kind model in Europe developed in response to customer needs. It represents a major accomplishment for our Azure team.

    The Microsoft Cloud Germany provides a differentiated option to the Microsoft Cloud services already available across Europe, creating increased opportunities for innovation and economic growth for highly regulated partners and customers in Germany, the European Union (EU) and the European Free Trade Association (EFTA).

    Customer data in these new datacenters, in Magdeburg and Frankfurt, is managed under the control of a data trustee, T-Systems International, an independent German company and subsidiary of Deutsche Telekom. Microsoft’s commercial cloud services in these datacenters adhere to German data handling regulations and give customers additional choices of how and where data is processed.

    With Azure available in Germany, Microsoft has now announced 34 Azure regions, and Azure is available in 30 regions around the world — more than any other major cloud provider. Our global cloud is backed by billions of dollars invested in building a highly secure, scalable, available and sustainable cloud infrastructure on which customers can rely.

    Built on Microsoft’s Trusted Cloud principles of security, privacy, compliance and transparency, the Microsoft Cloud Germany brings data residency, in transit and at rest in Germany, and data replication across German datacenters for business continuity. Azure Germany offers a comprehensive set of cloud computing solutions providing customers with the ability to transition to the cloud on their terms through services available today.

    • For businesses, including automotive, healthcare and construction, that rely on SAP enterprise applications, SAP HANA is now certified to run in production on Azure, which will simplify infrastructure management, improve time to market and lower costs. Specifically, customers and partners can now take advantage of storing and processing their most sensitive data.
    • Addressing the global scale of IoT while ensuring data resides in-country, Azure IoT Suite enables businesses, including the robust industrial and manufacturing sector in Germany, to adopt the latest cloud and IoT solutions. Azure IoT Suite enables enterprises to quickly get started connecting their devices and assets, uncovering actionable intelligence and ultimately modernizing their business.
    • With Industry 4.0-compatible integration of OPC Unified Architecture into Azure IoT Suite, customers and partners can connect their existing machines to Azure for sending telemetry data for analysis to the cloud and for sending commands to their machines from the cloud (i.e. control them from anywhere in the world) without making any changes to their machines or infrastructure, including firewall settings (a minimal telemetry sketch follows this list).
    • Microsoft, and particularly Azure, has been a significant and growing contributor to open source projects supporting numerous open source programming models, libraries and Linux distributions. Startups, independent software vendors and partners can take advantage of a robust open source ecosystem including Linux environments, Web/LAMP implementations and e-commerce PaaS solutions from partners.
    • Furthermore, with the open source .NET Standard reference stack and sample applications Microsoft has recently contributed to the OPC Foundation’s GitHub, customers and partners can quickly create and save money maintaining cross-platform OPC UA applications, which easily connect to the cloud via the OPC Publisher samples available for .NET, .NET Standard, Java and ANSI-C.
    • Azure ExpressRoute provides enterprise customers with the option of private connectivity to our German cloud. It offers greater reliability, faster speeds, lower latencies and more predictable performance than typical internet connections and is delivered in partnership with a number of the leading network service providers including Colt Telekom, e-Shelter, Equinix, Interxion and T-Systems International.
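
    Picking up the IoT Suite item above, here is a minimal, hypothetical device-to-cloud telemetry sketch in Python. It uses the azure-iot-device SDK (which postdates this announcement); the connection string, node id and field names are placeholders, not values from Azure IoT Suite itself.

    Code:
    import json
    import time
    from azure.iot.device import IoTHubDeviceClient, Message

    # Placeholder connection string; real values come from the IoT Hub device registry.
    CONNECTION_STRING = "HostName=<hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    client.connect()

    # Send one OPC UA-style telemetry reading (node id and fields are illustrative only).
    reading = {"nodeId": "ns=2;s=Machine1.Temperature", "value": 71.3, "ts": time.time()}
    client.send_message(Message(json.dumps(reading)))

    client.disconnect()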


    The Microsoft Cloud Germany is our response to the growing demand for Microsoft cloud services in Germany and across Europe. Customers in the EU and EFTA can continue to use Microsoft cloud options as they do today, or, for those who want the option, they’re able to use the services from German datacenters.

    https://azure.microsoft.com/en-us/bl...ud-for-europe/

  4. #4 · WHT-BR Top Member · Join Date: Dec 2010 · Posts: 15,030

    Azure and AWS Data Centers


  5. #5 · WHT-BR Top Member · Join Date: Dec 2010 · Posts: 15,030

    AWS’ Latest Proposal to Woo Major Banks

    By Kevin McLaughlin
    Sep. 01, 2016

    Amazon Web Services has come up with a new strategy to lure banks to use its cloud computing service. AWS has discussed with financial institutions the idea of hosting their data on isolated “safe zones” across AWS’ data centers that would be walled off from other customers and the public internet, according to two people close to AWS.

    The proposal is the latest in a series of moves AWS has made over the years to sign up global banks. But the firms have been slow to make the leap to cloud providers due to fears of regulatory repercussions if the data they store in servers used by multiple businesses is stolen by hackers.

    ...

    https://www.theinformation.com/aws-l...?shared=e7e82a

  6. #6 · WHT-BR Top Member · Join Date: Dec 2010 · Posts: 15,030

    IBM Builds A Bridge Between Private And Public Power Clouds

    Timothy Prickett Morgan
    September 21, 2016

    Two years ago, when Big Blue put a stake through the heart of its impartial attitude about the X86 server business, it was also putting a stake in the ground for its Power systems business.

    IBM bet that it could make more money selling Power machinery to its existing customer base while expanding it out to hyperscalers through the OpenPower Foundation and, at the same time, gradually building out a companion public cloud offering of Power machinery on its SoftLayer cloud and through partners like Rackspace Hosting. This is a big bet, and not one that will necessarily pay off for the company, but it sure is a lot more interesting than Big Blue selling off the Power server division to Hitachi or Lenovo or someone else.

    For that bet to truly pay off, IBM needs to have hybrid infrastructure that is available in private datacenters that looks and feels like that in public clouds. But as Microsoft’s own experience with the Azure Stack private version of the Azure public cloud shows, it is not so easy to scale down a public cloud to even a size that large enterprises can use economically. And the public cloud has a whole new metaphor and interface for computing, which is different from the barely orchestrated server virtualization that still prevails in the enterprise.

    Like other platform providers that are trying to bridge the gap between public and private, IBM has to build that bridge from two sides at the same time and somehow get them to meet in the middle. This is a tough feat, but ants can pull it off and IBM, Microsoft, and others who control their platforms should be able to as well, given enough time and patience from customers to see how the Power Systems in the private datacenter will align with OpenPower systems in the cloud.

    This week, with a high-end Power8 server lineup that includes machines preconfigured as private clouds and links to Power capacity running on SoftLayer, IBM is taking an important step forward in delivering the true hybrid cloud capability that its enterprise customers crave, helping those customers feel comfortable about staying on Power iron into the future and moving them towards a new way of managing infrastructure at a higher level of abstraction than what they are used to. But don’t be confused. The new Power Systems E-class C models that Big Blue announced at its Edge conference in Las Vegas are not just chips off the Power-based servers that IBM has been adding to the SoftLayer cloud in the past year.

    “This is desirable, and we have been out talking to clients for a little over a month now to get feedback from them and to build this offering in an agile way,” Steve Sibley, director of worldwide product management for IBM’s Power Systems line, tells The Next Platform. “This is one of the primary requirements, but customers have not told us that without that, they won’t start. In fact, over 70 percent of the customers that we talked to about this hybrid offering told us it was relevant to them and it would accelerate their move to Power8 systems. But no question, customers do say that they want to move AIX and IBM i to the cloud and back in addition to Linux. Even though the full software stack is not the same – it is PowerVM on Power Systems and PowerKVM on SoftLayer running only Linux – you can move Linux workloads that you develop out on SoftLayer down onto PowerVM in the datacenter. It is all just Linux. But what we see more often is that customers will develop an application out there on Power Linux – maybe connecting back to a DB2 or Oracle database back in the datacenter – and keep it there.”

    At the moment, IBM’s positioning for Power-based hybrid clouds has customers using midrange and high-end NUMA systems based on the Power8 processors in their own datacenters, which come with elastic capacity that allows customers to add processing capacity and memory on a daily basis as well as leases that provide capacity on a monthly basis. The three new cloud machines include the Power E870C and Power E880C, which are based on IBM’s eight-socket and sixteen-socket NUMA boxes called the Power Systems E870 and Power Systems E880. We detailed these two big iron boxes back in May 2015, and IBM enhanced them with larger memory footprints and more powerful Power8 chips in January this year. In October, IBM is expected to offer a Power E850C variant that is based on its four-socket Power8 system, the Power Systems E850, which debuted in May 2015 and which, if history is any guide, will get a memory and processor bump with the Power E850C since these machines were not enhanced back in January.

    Under normal circumstances, IBM would have debuted a Power8+ processor this year, perhaps with slightly higher clock speeds and performance and definitely with lower prices. Having not done that – even after saying it would – IBM is using the advent of the C-styled Power machines as a good excuse to cut prices. With the lowering of CPU and memory activation costs on the systems as well as lower prices on Linux, AIX, and IBM i licenses on the C-style systems, the C machines cost somewhere between 10 percent and 20 percent less than the regular Power E850, E870, and E880 systems, says Sibley.

    The C at the end of the name of these new systems stands for cloud, obviously, and these boxes come equipped with IBM’s variant of the OpenStack cloud controller, called PowerVC, as well as being supported by the OpenStack implementation that is embedded in Ubuntu Server by Canonical.
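
    Because PowerVC presents OpenStack APIs, a standard OpenStack client should, in principle, be able to drive these cloud-style Power boxes. Below is a rough Python sketch using the openstacksdk library; the endpoint, credentials, image, flavor and network names are all hypothetical, and whether a given PowerVC release accepts this exact flow is an assumption here, not something the article states.

    Code:
    import openstack

    # Hypothetical PowerVC endpoint and credentials (PowerVC exposes OpenStack-compatible APIs).
    conn = openstack.connect(
        auth_url="https://powervc.example.com:5000/v3",
        username="admin",
        password="secret",
        project_name="prod",
        user_domain_name="Default",
        project_domain_name="Default",
    )

    # Look up illustrative resources assumed to exist on the controller.
    image = conn.compute.find_image("aix-7.2-base")
    flavor = conn.compute.find_flavor("4vcpu-32gb")
    network = conn.network.find_network("prod-vlan")

    # Ask the controller to carve a new partition/VM out of the Power8 box.
    server = conn.compute.create_server(
        name="lpar-demo",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)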

    (cont)

  7. #7 · WHT-BR Top Member · Join Date: Dec 2010 · Posts: 15,030

    IBM’s enterprise customers for Power Systems tend to buy the biggest boxes possible and then use the logical partitioning enabled by its homegrown PowerVM hypervisor to carve up the boxes. IBM has special cores on this big iron, called Integrated Facilities for Linux, or IFLs, that are tweaked to only allow Linux to run on them, and they have 75 percent lower CPU core and memory capacity activation costs compared to cores that are allowed to run AIX or IBM i in addition to Linux. This is a big price break, just to show you how serious IBM is about Linux on big Power iron. In addition to this lower-cost capacity, which brings the cost of a NUMA machine a lot closer to the cost of a cluster of Xeon machines, IBM also allows core licenses and memory activations to be migrated around a cluster of machines – what it calls a Power Enterprise Pool. When you layer OpenStack on top of this – IBM’s PowerVC variant to be specific – and its PowerVM hypervisor and its Live Partition Mobility live migration, you get what is in essence a big iron cloud. In this case, it is one that can scale up to 200 nodes with as many as 5,000 VMs, and IBM’s PowerHA clustering tools can do failover disaster recovery for logical partitions on the cloud, too.

    The elastic pricing on capacity that IBM offers is not something you can get with VMware ESXi/vSphere or Microsoft Hyper-V or Red Hat OpenStack or even CloudForms. You can possibly get a full lease on a stack of hardware and software from a server vendor or its banker, but IBM Global Services is a bank in its own right and can offer a full lease on this, and the Power Systems division has the ability to charge metered pricing directly to customers for cores and memory without resorting to a complex lease. Sibley says that IBM has hundreds of customers who are using it across the workloads running on their Power Systems iron “as a matter of course,” with many more who use it for occasional processing spikes. But with the pooling of capacity and the ability to live migrate partitions as well as core and memory capacity around a cluster of machines, more and more customers are looking at elastic hardware pricing.

    This is not something that Intel and its hardware and operating system partners offer.

    Just using capacity on demand pricing for cores and memory (rather than an operational lease on a stack), IBM says that it can deliver a Power8 core running at 4 GHz on a Power E870C system with 16 GB of capacity for $13 a day. Investing in a base system is like buying reserved capacity, and using the elastic capacity on demand pricing for cores normally inactive in the system is like buying spot capacity on a public cloud.

    Customers who buy one of the C-style machines will get a full year of use of a mid-sized Power8 C812L-M single-socket server, which is code-named “Habanero” inside of IBM and which is made by ODM Wistron. (This is sold in the IBM catalog as the Power System S812LC, and it is not one of the new Linux-only machines made by Supermicro that will eventually, says Sibley, make their way onto the SoftLayer cloud.) SoftLayer added the Habanero machines to selected datacenters back in May, and this particular machine that is being given to Power C-style private cloud builders has ten cores running at 3.49 GHz with 256 GB of main memory and two 4 TB disk drives. It costs $1,626 per month to run apps on this bare metal machine, which can be configured with the PowerKVM variant of the KVM hypervisor that IBM has cooked up for the Power8 chips. That works out to a value of just under $20,000, which is cool, but it ain’t much against the cost of one of these big NUMA systems, which can run from hundreds of thousands to millions of dollars fully configured. But there are advantages to NUMA systems and enterprises pay the premium because they can run really big jobs or lots of really small jobs on the same boxes.
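
    A quick check of the two list prices quoted above, in Python (no assumptions beyond calendar math):

    Code:
    elastic_core_per_day = 13      # USD/day: Power8 core at 4 GHz plus 16 GB on a Power E870C
    habanero_per_month = 1626      # USD/month: bare metal S812LC ("Habanero") on SoftLayer

    print(elastic_core_per_day * 365)   # 4745  -> roughly $4,745 for a full core-year of elastic capacity
    print(habanero_per_month * 12)      # 19512 -> the "just under $20,000" value cited above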

    Amazon Web Services, Microsoft Azure, and Google Cloud Platform can’t do that, and the wonder as far as we are concerned is why IBM doesn’t have some of these C-style NUMA machines in the SoftLayer cloud proper. Instead, IBM will back up big iron through its disaster recovery services, which are another part of its Global Services behemoth, distinct from SoftLayer.

    At the moment, IBM is offering geographically dispersed resiliency, or GDR, for customers who want to automate recovery operations with PowerVM logical partitions within their own datacenters. Starting next year, says Sibley, IBM will extend this out to its own disaster recovery centers as well as to third party partners who provide such services. The GDR software works with AIX, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Canonical Ubuntu Server guests, and in the first half of next year will support IBM i. EMC storage (well, Dell storage now) will be supported for this replication, but over time IBM will add support for its own disk arrays.

    The Power E870C and E880C will be available starting September 29, with the Power E850C coming later in October. IBM will eventually offer upgrades from prior generations of Power7+ NUMA machines to these cloudy Power8 boxes, probably in the first quarter of next year but the date has not been set as yet.

    http://www.nextplatform.com/2016/09/...-power-clouds/

  8. #8 · WHT-BR Top Member · Join Date: Dec 2010 · Posts: 15,030

    Microsoft accuses public cloud rivals of overlooking enterprise

    Caroline Donnelly
    27 Sep 2016

    Microsoft has hit out at its public cloud rivals for treating the needs of enterprise IT users as an afterthought by failing to realise their hybrid cloud requirements early enough.

    Scott Guthrie, executive vice-president of the software giant’s cloud and enterprise group, used the opening day keynote at Microsoft’s Ignite conference in Atlanta, Georgia, to talk up the company’s enterprise credentials and how they compare with those of its competitors.

    “No other company offers both the breadth and depth of what the MS cloud delivers and we deliver all this in a global, trusted, hybrid promise that is truly differentiated in the industry,” he said.

    “We have 34 unique Azure regions around the world. To put that into perspective, that’s twice the number of locations and countries that AWS [Amazon Web Services] has today.

    “This enables you to run your applications and services closer to your customers and employees than ever before and compete in even more geographic markets.”

    Both Google and AWS have made no secret of their desire to ramp up the number of enterprise customers that use their cloud infrastructure services, having found solid early success with startups.

    This has seen both firms outline their commitment to helping enterprises address their data sovereignty concerns by expanding their global datacentre footprints, and rolling out a slew of enterprise-friendly cloud features in recent years.

    However, Guthrie said no such learning curve had been necessary for Microsoft because its products were so deeply entrenched within enterprise IT environments.

    “The Microsoft cloud is optimised for organisations,” he said. “For us, enterprises are not an afterthought – they are a critical design point.”

    Julia White, Microsoft cloud platform product marketing executive, also touched on this theme during a pre-event Q&A session, where she hit out at AWS and Google for coming late to the hybrid cloud party.

    “AWS and Google, to some extent, have woken up to the hybrid reality that we have known since the beginning and it is really in our DNA,” she said.

    “Whether it be Office 365 and Dynamics all the way down the [infrastructure] stack to having our own cloud as well as an on-premise capability so that people can run hybrid [environments] and move at their own pace.”

    White cautioned enterprises against cloud suppliers that resorted to “hybrid washing” techniques to market their wares, but stopped short of naming those guilty of doing so.

    “You will see people in the industry using hybrid washing and they mean hybrid [in that] they connect my datacentre with the cloud, and the connectivity is hybrid, but we don’t believe that,” she said.

    Hybrid cloud setup

    Instead, enterprise IT buyers should focus on seeking out suppliers that offered a hybrid cloud setup that provided a consistent user experience across their on-premise, private and public cloud environments, said White.

    “We do consistency by having that same management experience, the same development APIs [application programming interfaces], and having it across your entire enterprise estate,” she added.

    The Ignite show has seen Microsoft roll out updates to Windows Server and System Center in its push to help enterprises move to the hybrid cloud, along with another technical preview of its datacentre-based, cloud services-enabling Azure Stack.

    http://www.computerweekly.com/news/4...ing-enterprise

  9. #9 · WHT-BR Top Member · Join Date: Dec 2010 · Posts: 15,030

    Private S3 Storage

    Timothy Prickett Morgan
    October 11, 2016

    The only companies that want all compute and storage to move to the public cloud are those public clouds that do not have a compelling private cloud story to tell. But the fact remains that for many enterprises, their most sensitive data and workloads cannot – and will not – move to the public cloud.

    This almost demands, as we have discussed before, the creation of private versions of public cloud infrastructure, which interestingly enough, is not as easy as it might seem. Scaling infrastructure down so it is still cost effective and usable by IT shops is as hard as scaling it up so it can span the globe and have millions of customers all sharing the compute and storage utilities as well as higher level services running atop this infrastructure.

    Aside from the scale down issues, which present their own engineering challenges, there is another big problem: The gravity of data. Moving data is much more of a hassle than computing against that data at this point, and the biggest public cloud providers and hyperscalers have spent enormous fortunes creating vast compute and storage utilities that make all compute and storage look local to each other (thanks to some of the biggest networks ever built). Enterprises running private clouds want to have scalable compute and storage too, but the much smaller scale they operate at requires a different kind of architecture.

    At least that is the contention of Kiran Bhageshpur, CEO and co-founder at Igneous Systems, which has just uncloaked from stealth mode after three years of funding and development. Bhageshpur was previously vice president of engineering at the Isilon storage division of Dell EMC and prior to that was senior director of engineering at Isilon Systems, where he was responsible for the development of the OneFS file system and its related clustering software. Bhageshpur started Igneous in the public cloud hotbed of Seattle back in October 2013, bringing together public cloud engineers from Amazon Web Services and Microsoft Azure as well as techies from Isilon, NetApp, and EMC. The company is creating a series of networked appliances that will be able to mimic the compute and storage functions of the big public clouds – meaning support their APIs – but do so on hardware that is radically different from both that used by the cloud providers themselves and the standard rack-based gear used by enterprises. The Igneous hardware is developed to be tightly integrated with its software and to provide a lower cost for a given compute or storage service than the public clouds offer.

    This is a neat trick, if Igneous can pull it off, and it is at the heart of the data-centric computing architecture that Bhageshpur discussed last week here at The Next Platform.

    “Workflows are really much more around and about the data itself and the infrastructure on which the data lives,” Bhageshpur explains. “This is the broad problem that we are here to solve. If you think about it, in a traditional infrastructure from the EMCs and NetApps and HPEs and IBMs of the world, it is all local in a customer’s datacenter, but they are all managed one at a time and acquired as a capital asset upfront well ahead of need. Clearly, the reaction to this was the birth of the public cloud, with Amazon Web Services leading the way, which was compelling because companies did not buy hardware and they don’t install software or monitor or manage the fleet of infrastructure. Instead, they focus on its logical consumption across APIs. We have gone from the world of talking file and block and Oracle databases to talking S3 and Elasticsearch and having Elastic Container Services or higher-level services like AWS Lambda. The reaction to the public cloud has been the birth of the private cloud, which in our opinion is certainly not cloud. You are still buying the hardware, you are still installing software, and you are still monitoring and managing infrastructure, and apart from improvements in orchestration, you are not getting any of the higher-level services that make the cloud services so rich. This is the gap that we are shooting at with Igneous.”

    To develop its first products, Igneous raised $26.7 million in funding from NEA, Madrona Venture Group (an early investor in Amazon), and Redpoint Ventures. The very first of those, which is launching this week, is called the Igneous Data Service, and it is a fabric of disk drives with ARM-based compute attached to each drive, configured to look and smell and taste like the S3 object storage service from Amazon Web Services. This last bit is the key, and it means that applications that are written to the S3 protocol won’t be able to tell they are not using the S3 service out on the AWS cloud even when it is running locally inside of a private datacenter.
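
    Because the service speaks the S3 protocol, an ordinary S3 client only needs to be pointed at a different endpoint. A minimal sketch with Python’s boto3 follows, assuming a hypothetical on-premises endpoint and credentials (Igneous has not published these details).

    Code:
    import boto3

    # Standard S3 client, redirected at a private S3-compatible endpoint instead of AWS.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://igneous.storage.internal",   # hypothetical on-prem endpoint
        aws_access_key_id="LOCAL_KEY",
        aws_secret_access_key="LOCAL_SECRET",
    )

    s3.create_bucket(Bucket="backups")
    s3.put_object(Bucket="backups", Key="db/2016-10-11.dump", Body=b"...")

    for obj in s3.list_objects_v2(Bucket="backups").get("Contents", []):
        print(obj["Key"], obj["Size"])

    The endpoint redirection is the whole trick: applications written against AWS S3 keep working unchanged.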

    Igneous is not limited to only supporting AWS protocols for storage, and Bhageshpur hints at some of the possibilities without giving too much away about the company’s plans. “The way we are thinking about this as we build out is that the back-end is what is more important, and where needed we will use the appropriate APIs from Amazon Web Services or Microsoft Azure or Google Cloud Platform. We do not believe there is any reason to reinvent any APIs for various services, and using these existing APIs with customer data is where the value really is.”

    It is not difficult to imagine that Igneous will come up with local versions of the AWS Elastic Block Storage (EBS) service as well as equivalents to its Elastic Compute Cloud (EC2) service, and then move on to provide the equivalents to the compute and storage services provided by Google and Microsoft on their public clouds. It would be useful to have this all running on the same iron, and it would be even more interesting if Igneous is the one that comes up with truly hybrid compute and storage services that run on the same physical infrastructure that it develops, manufactures, installs in datacenters, and sells as a service instead of as a capital investment to end users.

    This would be yet another type of hybrid cloud, and quite a feat. Bhageshpur is not promising this, mind you, but it is not a stretch to see that it could be done, and be a lot more useful than having something that is incompatible with the public clouds (as OpenStack is) running internally.

    The software for the S3 service running atop the Igneous Data Service is all internally developed and closed source, just as it is at all of the cloud providers, by the way. For hardware to support this private S3 clone, Igneous has developed a compute element based on a Marvell Armada 370 processor, which is a 32-bit processor with two Cortex-A9 cores running at up to 1 GHz. This is not a lot of compute, but it can run a Linux instance and, importantly, it puts the compute right on the 6 TB disk drive itself. This system-on-chip has two 1 Gb/sec Ethernet ports, which is not a lot of bandwidth mind you, but you are creating a mesh fabric of compute and storage, so the aggregates and the topology matter as much as the speed and latency of any port. The whole thing is called a “nanoserver” by Igneous, something that may or may not be trademarked by Microsoft, whose new cut-down version of Windows Server 2016 is called Nano Server.

    “If you think about a traditional storage server, it has not changed in 30 years,” says Bhageshpur. “It consists of a CPU, now based on an Intel Xeon, with a bunch of disks, and this is really an I/O bottleneck for large datasets that are growing. You have a powerful CPU and lots of capacity, but a thin straw in between them. And these are also large fault domains – an enterprise could have a dozen or more of these, but if one of these fails – and it could be up to a half petabyte of capacity today – you have a major outage and you need to figure out how to resolve that. This is antithetical to how you do things at hyperscale. On the pets to cattle spectrum, this is about as pets as pets can be.”

    The architecture of the Igneous Data Service does not have such large fault domains across its distributed cluster of disks, and the idea is to have a consistent ratio of clock cycles, disk capacity, and network bandwidth as the system scales up. A single disk drive becomes the fault domain, and using tweaked Reed-Solomon erasure coding techniques common to object storage systems plus local processing from the ARM processors on adjacent drives, spare compute in the enclosure where a drive fails helps rebuild the lost data quickly (and in parallel) on spare drives in the system.
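
    As a toy illustration of the rebuild idea, here is single-parity XOR in plain Python. Real systems, including the one described here, use Reed-Solomon codes with multiple parity shards and rebuild in parallel across many drives; this sketch only shows the principle of reconstructing a lost drive from its surviving peers.

    Code:
    def xor_blocks(blocks):
        """XOR equal-length byte strings together (a single-parity stand-in for Reed-Solomon)."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    data_drives = [b"AAAA", b"BBBB", b"CCCC"]   # pretend each is one drive's data stripe
    parity = xor_blocks(data_drives)            # stored on a fourth drive

    # Drive 1 fails; the survivors plus the parity drive reconstruct its contents.
    rebuilt = xor_blocks([data_drives[0], data_drives[2], parity])
    assert rebuilt == data_drives[1]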

    (cont)

  10. #10 · WHT-BR Top Member · Join Date: Dec 2010 · Posts: 15,030

    The Igneous Data Service enclosure is a standard 4U rack-based unit that holds 60 of the Igneous ARM-based drives, which all run Linux and the Igneous object storage system that is compatible with S3. This unit is manufactured by the same ODMs that make iron for AWS, Google, and Microsoft (and that probably means it is Quanta Computer), and instead of SAS disk controllers in the chassis it has two embedded Ethernet switches. Multiple enclosures can be daisy-chained together to grow the storage system.

    The private S3 clone that Igneous has created employs whitebox servers based on Intel Xeon processors to work as “data routers” to steer data around the disk drive fabric and to do some intelligent processing and indexing of the data that is spread across the drives. Finding data is a lot harder than storing it, and this is where companies spend a lot of their time.

    All of the management and monitoring of this S3 storage clone is done remotely on a cloud run by Igneous itself.

    The whole shebang is sold at utility pricing that is actually lower than the real S3. (We don’t know how its performance is, but such data is no doubt coming.) With 6 TB drives and those Armada 370 processors and that relatively modest networking, the base Igneous Data Service enclosure has 360 TB of raw capacity and about 212 TB after erasure encoding and spares are taken into account. This unit is available for a subscription price of under $40,000 per year for the service, or around $188 per TB or 1.5 cents per GB per month. On-premises storage can cost on the order of $1,000 per TB, says Bhageshpur, and that does not include the operational headaches and personnel costs associated with managing such storage. AWS S3 storage costs 3 cents per GB per month, not including data access and data transfer charges across the AWS network. This is a pretty compelling price difference in favor of Igneous, and one that is bound to get some attention.
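
    The capacity and pricing figures above hang together, as a quick Python check shows (all inputs are the article's own numbers):

    Code:
    raw_tb = 60 * 6                      # 60 drives of 6 TB each = 360 TB raw per enclosure
    usable_tb = 212                      # after erasure coding and spares
    annual_price = 40_000                # USD/year subscription (quoted upper bound)

    per_tb_year = annual_price / usable_tb               # ~188.7 USD per usable TB per year
    cents_per_gb_month = per_tb_year / 12 / 1000 * 100   # ~1.57 cents per GB per month

    print(raw_tb, round(per_tb_year), round(cents_per_gb_month, 2))
    # -> 360 189 1.57, versus the 3 cents per GB per month quoted for AWS S3 before transfer charges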

    What will be even more interesting is when Igneous has a full suite of iron that can deliver compute and storage services on private clouds that are compatible with the AWS API stack. While the company is using ARM chips on its S3 appliances, it does not necessarily follow that its EC2 clone, when it appears, will be based on ARM.

    “For regular compute tiers, just running compute-heavy workflows, this is still very much Intel Xeon, which has a great price/performance and there is no question about that,” says Bhageshpur. “But when we start looking at the number of chips and putting compute close to data, that is where we believe ARM is the way to go because price/performance and power profiles are unmatched.”

    The question we have is how much the Igneous Data Service hardware looks like the actual iron behind the real S3 service. Amazon bought Annapurna Labs and makes its own ARM chips, after all.

    Applications are not yet running in large numbers on ARM server chips proper – and we do not think of the Marvell Armada 370 chip as a proper server chip – but when and if they do, you can bet that Igneous will be putting together a compute chassis based on some 64-bit ARM nodes as well as what will probably be Xeon D and Xeon compute nodes for an EC2 clone. We shall see.

    http://www.nextplatform.com/2016/10/...te-s3-storage/
