  1. #1

    [EN] AWS: How Many Data Centers Needed World-Wide

    James Hamilton
    April 18, 2017

    Last week Fortune asked Mark Hurd, Oracle co-CEO, how Oracle was going to compete in cloud computing when their capital spending came in at $1.7B whereas the aggregate spending of the three cloud players was $31B. Essentially the question was, if you assume the big three are spending roughly equally, how can $1.7B compete with more than $10B when it comes to serving customers? It’s a pretty good question and Mark’s answer was an interesting one: “If I have two-times faster computers, I don’t need as many data centers. If I can speed up the database, maybe I need one fourth as many data centers.”

    Of course, I don’t believe that Oracle has, or will ever get, servers 2x faster than the big three cloud providers. I also would argue that “speeding up the database” isn’t something Oracle is uniquely positioned to offer. All major cloud providers have deep database investments but, ignoring that, extraordinary database performance won’t change most of the factors that force successful cloud providers to offer a large multi-national data center footprint to serve the world. Still, Hurd’s offhand comment raises the interesting question of how many data centers will be required by successful international cloud service providers.

    I’ll argue the number is considerably bigger than that deployed by even the largest providers today. Yes, this represents a massive cost, given that even a medium-sized data center will likely exceed $200M. All the providers are very focused on cost and none want to open the massive number of facilities I predict, so let’s look deeper at the myriad drivers of large data center counts.

    *N+1 Redundancy: The most efficient number of data centers per region is one. There are some scaling gains in having a single, very large facility. But one facility will have some very serious and difficult-to-avoid full-facility fault modes like flood and, to a lesser extent, fire. It’s absolutely necessary to have two independent facilities per region and it’s actually much more efficient and easy to manage with three. 2+1 redundancy is cheaper than 1+1 and, when there are 3 facilities, a single facility can experience a fault without eliminating all redundancy from the system. Consequently, whenever AWS goes into a new region, it’s usual that three new facilities be opened rather than just one with some racks on different power domains.
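
    As a rough sketch of the arithmetic behind the 1+1 versus 2+1 claim (illustrative only, not an AWS planning model): if a region's load is spread across N active facilities plus one spare, the extra capacity that has to be built shrinks as N grows.

    ```python
    # Illustrative arithmetic only (not an AWS planning model): extra capacity
    # needed to survive the loss of one facility when a region's load is spread
    # across n active facilities plus one redundant one ("n+1").

    def redundancy_overhead(n_active: int) -> float:
        """Fraction of capacity provisioned beyond the load itself."""
        # Each facility is sized to carry load / n_active, and n_active + 1 of
        # them are built so any single facility can fail without losing capacity.
        provisioned = (n_active + 1) / n_active
        return provisioned - 1.0

    for n in (1, 2, 3):
        print(f"{n}+1 design: {redundancy_overhead(n):.0%} extra capacity")
    # 1+1 design: 100% extra capacity
    # 2+1 design: 50% extra capacity
    # 3+1 design: 33% extra capacity
    ```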

    *Too Big to Fail: Even when building three new data centers when opening up a new region, there are some very good reasons to have more than three data centers as a region grows. There is some absolute data center size where the facility becomes “too big to fail.” This line is gray and open to debate but the limiting factor is how big of a facility can an operator lose before the lost resources and the massive network access pattern changes on failure can’t be hidden from customers. AWS can easily build 100-megawatt facilities, but the cost savings from scaling a single facility without bound are logarithmic, whereas the negative impact of blast radius is linear. When facing seriously sub-linear gains for linear risk, it makes sense to cap the maximum facility size. Over time this cap may change as technology evolves but AWS currently elects to build right around 32MW. If we instead built to 100MW and just pocketed the slight gains, it’s unlikely anyone would notice. But there is a slim chance of full-facility fault, so we elect to limit the blast radius in our current builds to around 32MW.
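
    The scaling argument can be captured in a toy model. The cost curve and the 320 MW region size below are invented constants, not AWS figures; the point is only the shape of the curves, with cost gains flattening out while the blast radius keeps growing linearly.

    ```python
    import math

    # Toy model of the trade-off described above; the constants are invented,
    # not AWS figures. Assume cost per megawatt falls roughly logarithmically
    # as a facility grows, while the blast radius (the share of a large region
    # lost if that one facility fails) grows linearly with facility size.

    def relative_cost_per_mw(size_mw: float) -> float:
        return 1.0 / (1.0 + 0.1 * math.log(size_mw))

    def blast_radius(size_mw: float, region_mw: float = 320.0) -> float:
        return size_mw / region_mw

    for size in (16, 32, 64, 100):
        print(f"{size:>3} MW facility: cost/MW index {relative_cost_per_mw(size):.3f}, "
              f"blast radius {blast_radius(size):.1%} of a 320 MW region")
    ```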

    These groupings of multiple data centers in a redundancy group are often referred to as a region. As the region scales, to avoid allowing any of the facilities that make up the region to become too big to fail, the number of data centers can easily escalate to far beyond ten. AWS already has regions scaled far beyond 10 data centers.

    What factors drive a large scale operator to offer more than a single region and how big might this number of regions get for successful international operators? Clearly the most efficient number of regions is one covering the entire planet just as one is the most efficient number of data centers if other factors are ignored. There are some significant scaling cost gains that can be achieved by only deploying a single region.

    *Blast Radius: Just as we discovered that a single facility eventually gets too big to fail, the same thing happens with a very large, mega-region. If an operator were to concentrate their world-wide capacity in a single region it would quickly become too big to fail.

    I’m proud to say that AWS hasn’t had a regional failure in recent history, but the industry still sees them on rare occasions. They have never been common, but they remain within the realm of possibility, so a single-region deployment model doesn’t seem ideal for customers. The mega-region would also suffer from decaying economics where, just as with the single large data center, the gains from scaling become ever smaller while the downside risks continue to climb, so the incremental cost reductions eventually stop justifying the escalating risk.

    The mega-region downside risks can be at least partially mitigated by essentially dividing the region up into smaller independent regions, but this increases costs and further decreases the scaling gains. Eventually it just makes better sense to offer customers alternative regions rather than attempting to scale a single region, and the argument in favor of multiple regions becomes even stronger when other factors are considered.

    (continued)

  2. #2
    *Latency and the Speed of Light: The speed of light remains hard to exceed and the round trip time just across North America is nearly 100 ms (Why are there data centers in NY, Hong Kong, and Tokyo). Low latency is a very important success factor in many industries so, for latency reasons alone, the world will not be well served by a single data center or a single region.

    Actually it turns out that the speed of light in fiber is about 30% less than the speed of light in a vacuum, so it actually is possible to move data faster than fiber carries it (Communicating data beyond the speed of light). But, without a more fundamental solution to the speed of light problem, many regions are the only practical way to effectively serve the entire planet for many workloads.
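
    To make the constraint concrete, a back-of-envelope propagation calculation follows; the fiber speed factor, route stretch, and city-pair distances are assumptions of mine, not numbers from the article.

    ```python
    # Back-of-envelope propagation delay. The fiber speed, route stretch, and
    # distances below are my assumptions, not figures from the article.

    C_VACUUM_KM_PER_MS = 299_792.458 / 1000   # ~300 km per millisecond
    FIBER_FACTOR = 0.68                        # light in fiber is ~30% slower
    ROUTE_STRETCH = 1.5                        # cable paths are not straight lines

    def fiber_rtt_ms(great_circle_km: float) -> float:
        one_way_ms = great_circle_km * ROUTE_STRETCH / (C_VACUUM_KM_PER_MS * FIBER_FACTOR)
        return 2 * one_way_ms

    # Measured RTTs run higher still (indirect routes, equipment and queuing
    # delays), which is how coast-to-coast round trips approach 100 ms.
    print(f"New York - Los Angeles (~3,940 km): {fiber_rtt_ms(3_940):.0f} ms RTT")
    print(f"New York - Tokyo      (~10,850 km): {fiber_rtt_ms(10_850):.0f} ms RTT")
    ```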

    There are many factors beyond latency that will push cloud providers to offer a large number of regions, and I’m going to argue that latency is not even the prime driver of very large region counts. If latency were the only driver, the number of required regions would likely be in the 30 to 100 range. Akamai, the world’s leading Content Distribution Network (CDN), reports more than 1,500 PoPs (Points of Presence), but many experts see that as roughly 10x more than latency alone would strictly require. Another major CDN, Limelight, reports more than 80 PoPs. This number is closer to the one I would come up with for the number of PoPs required if latency were the only concern. However, latency isn’t the only concern, and the upward pressure from other factors appears to dominate latency.

    *Networking Ecosystem Inefficiencies: The world telecom market is a bit of a mess with many regions being served by state sponsored agents, monopolies, or a small number of providers that, for a variety of reasons, don’t compete efficiently. Many regions are underserved by providers that have trouble with the capital investment to roll out the needed capacity. Some providers lack the technical ability to roll out capacity at the needed rate. All these factors conspire to produce more than an order of magnitude difference in cost between the (sort of) competitive US market and some other important world-wide markets.

    Imagine a $20,000 car in one market costing far more than $200,000 in another market. That’s where we are in the network transit world. This is one of the reasons why all the major cloud providers have private world-wide networks. This is a sensible step and certainly does help but it doesn’t fully address the market inefficiencies around last-mile networks. Most users are only served by a single access network and these last-mile network providers often can’t or don’t own the interconnection networks that link different access networks together. Each access network must be reached by all cloud providers and each of these access networks themselves face a challenge with sometimes unreasonable interconnection fees that increase their costs, especially for video content.

    Netflix took an interesting approach to the access network cost problem. Their approach helps Netflix customers and, at the same time, helps access networks serve customers better. Netflix offers to place caching servers (essentially Netflix-specific CDN nodes) in the central offices of access networks. This allows the access network to avoid having to pay the cost to their transit providers to move the bits required to serve their Netflix customers. This also gives the customers of these access networks a potentially higher quality of service (for Netflix content). A further advantage for Netflix is that reducing its dependence on the large transit providers also reduces the control those providers have over Netflix and Netflix customers. This was a brilliant move and it’s another data point on how many points of presence might be required to serve the world. Netflix reports they have close to 1,000 separate locations around the world.
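
    A small, purely illustrative calculation of why an access network welcomes these caches; every figure below is assumed rather than reported by Netflix or any ISP.

    ```python
    # Purely illustrative numbers (assumed, not reported by Netflix or any ISP):
    # every bit served from an in-network cache is a bit the access network does
    # not have to buy from its transit providers.

    peak_gbps         = 100.0   # hypothetical ISP peak traffic
    netflix_share     = 0.30    # assumed fraction of peak that is Netflix video
    cache_hit_rate    = 0.95    # assumed fraction served from local caches
    transit_usd_gbps  = 500.0   # hypothetical transit price, $/Gbps/month

    saved_gbps = peak_gbps * netflix_share * cache_hit_rate
    print(f"Transit capacity avoided: {saved_gbps:.1f} Gbps")
    print(f"Transit spend avoided:    ${saved_gbps * transit_usd_gbps:,.0f} per month")
    ```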

    *Social and Political Factors: We have seen good reason to have on the order of 10^3 regions to deliver the latency required by the most demanding customers. We have also looked at economic anomalies in networking costs requiring O(10^3) regions to fully serve the world economically. What we haven’t talked about yet are the potentially more important social and political factors. Some cloud computing users really want to serve their customers from local data centers and this will impact their cloud provider choices. In addition, some national jurisdictions will put in place legal restrictions that make it difficult to fully serve the market without a local region. Even within a single nation, there will sometimes be local government restrictions that won’t allow certain types of data to be housed outside of their jurisdiction. Even a single region within the same country won’t meet the needs of all customers and political bodies. These social and political drivers again require O(10^3) points of presence and perhaps that many full regions.

    As the percentage of server-side computing hosted in the cloud swings closer to 100%, the above factors will cause the largest of the international cloud providers to have from several hundred to as many as a thousand regions. Each region will require at least three data centers and the largest will run tens of independent facilities. Taking both the number of regions and the number of data centers required in each of these regions into account argues that the total data center count of the world’s largest cloud operators will rise from the current O(10^2) to O(10^5).

    It may be the case that there will be many regional cloud providers rather than a small group of international providers. I can see arguments and factors supporting both outcomes but, whatever the outcome, the number of world-wide cloud data centers will far exceed O(10^5) and these will be medium to large data centers. When a competitor argues that fast computers or databases will save them from this outcome, don’t believe it.

    Oracle is hardly unique in having their own semiconductor team. Amazon does custom ASICs, and Google acquired an ARM team and has done custom ASICs for machine learning. Microsoft has done significant work with FPGAs and is also an ARM licensee. All the big players have major custom hardware investments underway and some are even doing custom ASICs. It’s hard to call which company is delivering the most customer value from these investments, but it certainly doesn’t look like Oracle is ahead.

    We will all work hard to eliminate every penny of unneeded infrastructure investment, but there will be no escaping the massive data center counts outlined here nor the billions these deployments will cost. There is no short cut and the only way to achieve excellent world-wide cloud services is to deploy at massive scale.

    http://perspectives.mvdirona.com/201...ed-world-wide/

  3. #3

    Oracle CEO: We Can Beat Amazon and Microsoft Without as Many Data Centers

    Barb Darrow
    Apr 12, 2017

    Amazon Web Services, Microsoft Azure, and Google Cloud Platform spent roughly $31 billion last year to extend their data center capacity around the world, according to the Wall Street Journal, which tabulated that total from corporate filings.

    By comparison, Oracle, which is making its own public cloud push, spent about $1.7 billion. To most observers, that looks like a stunning mismatch.

    But Mark Hurd, Oracle's co-chief executive, would beg to differ. In his view, there are data centers and then there are data centers. And Oracle's data centers, he said, can be more efficient because they run Oracle hardware and supercharged databases.

    "We try not to get into this capital expenditure discussion. It's an interesting thesis that whoever has the most capex wins," Hurd said in response to a question from Fortune at a Boston event on Tuesday. "If I have two-times faster computers, I don't need as many data centers. If I can speed up the database, maybe I need one fourth as many data centers. I can go on and on about how tech drives this."

    "Our core advantage is what we've said all along, which is that it's about the intellectual property and the software, not about who's got the most real estate," Hurd added. "We have spent billions over the past year, but in isolation, that's a discrete argument that I find interesting, but not fascinating."

    Following up via email, Hurd said: “This isn’t a battle of capex. This is about R&D, about technology, software, innovation and IP; and then the capex to make it work."

    Oracle has said it runs its data centers on Oracle Exadata servers, which are turbocharged machines that differ fundamentally from the bare-bones servers that other public cloud providers deploy by the hundreds of thousands in what is called a scale-out model. The idea is that when a server or two among the thousands fail—as they will—the jobs get routed to still-working machines. It's about designing applications that are easily redeployed.

    Oracle is banking more on what techies call a "scale-up" model in which fewer, but very powerful computers—in Exadata's case each with its own integrated networking and storage—take on big workloads.

    Oracle execs, including executive chairman Larry Ellison, have argued that Oracle's big machines can actually work cheaper and more efficiently than the other public cloud configurations. Many industry analysts have their doubts on that, maintaining Oracle must spend much more to catch up with Amazon. Toward that end, in January, Oracle announced plans to add three new data center farms within six months and more to come.

    There are those who think that Fortune 500 companies relying on Oracle databases and financial applications give Oracle an advantage because they are loath to move those workloads to another cloud provider—despite AWS wooing them with promises of easy migrations and other perks.

    In late March, AWS chief executive Andy Jassy claimed the company had converted 22,000 databases from other vendors to its own database services. AWS does not break out which databases those customers had been using.

    Hurd took up that point as well: "How much database market will Oracle lose to [Amazon] Aurora? My guess is close to zero." (Aurora is one of several database options that AWS offers.)

    "The third largest database in the world is IBM DB2, and it's been going out of business for 20 years," Hurd said in a characterization that IBM would dispute. "If it was so easy to replace databases, DB2 market share would be zero."

    That is because most databases—which companies rely on as the basis for core accounting and financial operations—run custom programming, which is hard to move.

    http://fortune.com/2017/04/12/mark-h...-data-centers/

  4. #4

    How Netflix Works With ISPs Around the Globe

    Ken Florance
    17 March 2016

    Tomorrow we'll release Season 2 of Marvel's Daredevil to 190 countries simultaneously. Netflix members all over the planet will instantly be able to stream the show on any internet-connected device. Even though millions of people around the world will be watching, there will be very little additional traffic on the “internet” because of a decision we made in 2011 to build our own content delivery network, or CDN.

    Since we went global in January, we’ve had increased interest in how we deliver a great Netflix viewing experience to 190 countries simultaneously. We achieve that with Netflix Open Connect, our globally distributed CDN. This map of our network gives you a sense for how much this effort has scaled in the last five years.



    Netflix Open Connect delivers 100% of our video traffic, currently over 125 million hours of viewing per day. This amounts to tens of terabits per second of simultaneous peak traffic, making Netflix Open Connect one of the highest-volume networks in the world.
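
    A quick sanity check on those figures, assuming an average stream bitrate of about 5 Mbps (my assumption, not a Netflix number):

    ```python
    # Sanity check on the scale described above; the average stream bitrate is
    # an assumption, not a Netflix figure.

    viewing_hours_per_day = 125e6
    avg_bitrate_mbps = 5.0                      # assumed blended average

    viewing_seconds = viewing_hours_per_day * 3600
    avg_concurrent_streams = viewing_seconds / 86_400
    avg_traffic_tbps = avg_concurrent_streams * avg_bitrate_mbps / 1e6

    print(f"Average concurrent streams: {avg_concurrent_streams / 1e6:.1f} million")
    print(f"Average traffic: {avg_traffic_tbps:.0f} Tbps (peak runs a few times higher)")
    ```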

    Globally, close to 90% of our traffic is delivered via direct connections between Open Connect and the residential Internet Service Providers (ISPs) our members use to access the internet. Most of these connections are localized to the regional point of interconnection that’s geographically closest to the member who’s watching. Because connections to the Netflix Open Connect network are always free and our traffic delivery is highly localized, thousands of ISPs around the world enthusiastically participate.

    We also give qualifying ISPs the same Open Connect Appliances (OCAs) that we use in our internet interconnection locations. After these appliances are installed in an ISP’s data center, almost all Netflix content is served from the local OCAs rather than “upstream” from the internet. Many ISPs take advantage of this option, in addition to local network interconnection, because it reduces the amount of capacity they need to build to the rest of the internet since Netflix is no longer a significant factor in that capacity. This has the dual benefit of reducing the ISP’s cost of operation and ensuring the best possible Netflix experience for their subscribers.

    We now have Open Connect Appliances in close to 1,000 separate locations around the world: in big cities like New York, Paris, London, Hong Kong, and Tokyo, as well as more remote locations, as far north as Greenland and Tromsø, Norway and as far south as Puerto Montt, Chile, and Hobart, Tasmania. ISPs have even placed OCAs in Macapá and Manaus in the Amazon rainforest. We are on every continent except Antarctica, and on many islands such as Jamaica, Malta, Guam, and Okinawa. This means that most of our members are getting their Netflix audio and video bits from a server that’s either inside of, or directly connected to, their ISP’s network within their local region.

    As our service continues to grow in all of the new global locations we’re reaching, so will our Netflix Open Connect footprint, as ISPs take advantage of the cost savings available to them by participating in our Netflix Open Connect program. That means Netflix quality in places like India, the Middle East, Africa and Asia will continue to see improvements.

    How Does Open Connect Work?

    We shared in a recent blog post that Netflix uses Amazon’s AWS “cloud” for generic, scalable computing. Essentially everything before you hit “play” happens in AWS, including all of the logic of the application interface, the content discovery and selection experience, recommendation algorithms, transcoding, etc.; we use AWS for these applications because the need for this type of computing is not unique to Netflix and we can take advantage of the ease of use and growing commoditization of the “cloud” market.

    Everything after you hit “play” is unique to Netflix, and our growing need for scale in this area presented the opportunity to create greater efficiency for our content delivery and for the internet in general.

    To understand how all of this happens, let’s look a little more deeply at how Open Connect came about, and how it works:

    Netflix Open Connect was originally developed in 2011 (and announced in 2012) as a response to the ever-increasing scale of Netflix streaming. Since the launch of the streaming service in 2007, Netflix had proved to be a significant and increasingly large share of internet traffic in every market in which we operated. Although third-party content delivery networks were doing a great job delivering Netflix content (as well as all kinds of other content on the internet), we realized we could be much more efficient based on our knowledge of how our members use Netflix. Although the number and size of the files that make up our content library can be staggering, we are able to use sophisticated popularity models to make sure the right file is on the right server at the right time. These advanced algorithms share some common approaches, and sometimes common inputs, with our industry-leading content recommendation systems.
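
    The pre-positioning idea can be sketched as a greedy cache fill driven by a popularity prediction. This is a minimal illustration of the concept only, not Netflix's actual placement algorithm, and every title and number in it is invented.

    ```python
    # Minimal sketch of popularity-driven pre-positioning. This is a
    # simplification for illustration, not Netflix's actual placement
    # algorithm; all titles, sizes, and view counts are invented.

    from typing import NamedTuple

    class Title(NamedTuple):
        name: str
        size_gb: float
        predicted_views: float   # output of a regional popularity model

    def fill_cache(titles: list[Title], capacity_gb: float) -> list[str]:
        chosen, used = [], 0.0
        # Greedy by predicted demand per gigabyte of cache consumed.
        for t in sorted(titles, key=lambda x: x.predicted_views / x.size_gb, reverse=True):
            if used + t.size_gb <= capacity_gb:
                chosen.append(t.name)
                used += t.size_gb
        return chosen

    catalog = [
        Title("Daredevil S2", 120, 9_000_000),
        Title("Popular Movie", 30, 2_500_000),
        Title("Niche Documentary", 20, 50_000),
    ]
    print(fill_cache(catalog, capacity_gb=160))
    # ['Popular Movie', 'Daredevil S2']
    ```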

    As we touched on above, pre-positioning content in this way allows us to avoid any significant utilization of internet “backbone” capacity. Take the continent of Australia, for example. All access to internet content that does not originate in Australia comes via a number of undersea cables. Rather than using this expensive undersea capacity to serve Netflix traffic, we copy each file once from our US-based transcoding repository to the storage locations within Australia. This is done during off-peak hours, when we’re not competing with other internet traffic. After each file is on the continent, it is then replicated to dozens of Open Connect servers within each ISP network.
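
    A toy comparison of the undersea-cable traffic under the two approaches; all of the figures are assumed for illustration.

    ```python
    # Toy comparison of undersea-cable traffic under the two approaches; every
    # figure below is assumed for illustration, none comes from Netflix.

    catalog_tb       = 1_000     # assumed regional catalog size, TB
    storage_sites    = 5         # assumed storage locations inside Australia
    daily_view_hours = 5e6       # assumed daily Netflix viewing in Australia
    gb_per_view_hour = 2.5       # roughly a 5 Mbps average stream

    preposition_tb_once = catalog_tb * storage_sites
    streamed_tb_per_day = daily_view_hours * gb_per_view_hour / 1_000

    print(f"Copy catalog once, off-peak:     ~{preposition_tb_once:,} TB (one time)")
    print(f"Serve every view over the cable: ~{streamed_tb_per_day:,.0f} TB per day, at peak")
    ```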




    Beyond the basic concept of pre-positioning content, we were also able to focus on creating a highly efficient combination of hardware and software for our Open Connect Appliances. This specialization and focus on optimization has allowed us to improve OCA efficiency by an order of magnitude since the start of the program. We went from delivering 8 Gbps of throughput from a single server in 2012 to over 90 Gbps from a single server in 2016.

    At the same time, Open Connect Appliances have become smaller and more power efficient. This means each TV show or movie that is watched by a Netflix subscriber requires less energy to power and cool a server that fits into a smaller space. In fact, our entire content serving footprint is carbon neutral, as we recently pointed out in this blog.

    Moving Forward

    This year, we’ve extended our service everywhere in the world, with the exception of China. We’re excited about the role Netflix Open Connect can play in bringing enjoyment to people all over the planet. It feels like the adventure is just beginning!

    https://media.netflix.com/en/company...ing-experience

  5. #5

    FPGAs To Shake Up Stodgy Relational Databases

    Timothy Prickett Morgan
    April 18, 2017

    So you are a system architect, and you want to make the databases behind your applications run a lot faster. There are a lot of different ways to accomplish this, and now, there is another one.

    You can move from disks to flash memory. You can move from a row-based database to a columnar data store that segments and speeds up accesses to data. And for even more of a performance boost, you can pull that columnar data into main memory to be read and manipulated at memory speeds.

    All of the in-memory databases – the “Hekaton” variant of SQL Server from Microsoft, the 12c database from Oracle, the BLU Acceleration add-ons for DB2 from IBM, and the HANA database from SAP are the dominant commercial ones – do the latter. (Spark kind of does the same thing for Hadoop analytics clusters, but the Spark/Hadoop stack is not a relational database, strictly speaking, even if with some overlays it can be taught to speak a dialect of SQL.)

    You could shard your database tables and run queries in parallel across a cluster of machines.

    You can also shift to any number of NewSQL databases and NoSQL data stores, which sacrifice some of the properties of relational databases in terms of data consistency in exchange for speed and scale.

    You can also move to one of the new GPU-accelerated databases, such as MapD or Kinetica.

    Or, you can slip an FPGA into the server node, load up the database acceleration layer from startup Swarm64, and not tell anyone how you made MySQL, MariaDB, or PostgreSQL run an order of magnitude faster – and be able to do so on databases that are an order of magnitude larger. If you wait a bit, and if enough customers ask for it, Swarm64 will even support Oracle database acceleration, and there is even the possibility in the future – provided there is enough customer demand – of getting FPGA acceleration for Microsoft’s SQL Server without any of the limitations that came with the Hekaton in-memory feature.

    The factor of 10X improvement in response time, database size, and data ingestion rates on a system is something that is bound to get the attention of anyone who has a database application that is being hampered by the performance of the database engine, but the fact that it can be done underneath and transparent to popular relational databases in use at enterprises means that Swarm64 has the chance to be extremely disruptive in the part of the server market that is driven by databases.

    This helps explain, in part, why Intel shelled out $16.7 billion for FPGA maker Altera back in June 2015 and also why it is keen on making hybrid CPU-FPGA compute engines, starting with a package that combines a fifteen-core “Broadwell” Xeon with an Arria 10 FPGA on a single package and eventually having a future Xeon chip with the FPGA etched onto the same die as the Xeon cores. The impact relational database acceleration could have on server shipments is enormous.

    Here is how Karsten Rönner, CEO at Swarm64, cases the market. The world consumes somewhere on the order of 12 million servers a year, based on 2015 data that he had on hand. About a quarter of these, or 3 million machines, are used to run relational databases and data warehouse software to do transaction processing and analytics against, and assuming that about half of the workloads are latency sensitive and performance constrained, that is around 1.5 million units of database servers that might be accelerated by FPGAs. And if the company can get a 10X boost in performance and database size with its FPGA acceleration, then this could potentially be a lot of Xeon processors that Intel does not end up selling.

    As we said before, Intel must be pretty afraid of something to have paid so much money for Altera – and to have made such a beeline for it. This is certainly part of it, and Intel clearly wants to control this future rather than be controlled by it.

    Swarm64 has dual headquarters in Oslo, Norway and Berlin, Germany, and was founded in 2012 by Eivind Liland, Thomas Richter, and Alfonso Martinez. The company has raised $8.7 million in two rounds of venture funding, has 19 employees, and is now touting the fact that it is partnering with Intel to push its Scalable Data Accelerator into the broader market. The Swarm64 stack is not tied to Altera FPGAs and can run on systems with Xilinx FPGAs, but clearly having Intel in your corner as you try to replace Xeon compute with FPGA acceleration is a plus rather than a minus.

    (continued)

  6. #6

    Ahead Of The Hyperscalers

    “The challenge of accelerating databases with either FPGAs or GPUs is that you can push enough data through the analytical engine so that the bandwidth through it is as high as you can possibly make it,” Rönner tells The Next Platform. “For FPGA acceleration on a peripheral card, that is obviously limited to the PCI-Express bus speed, which is one limiting factor. The other element of this is to try to touch as little of the data as is possible because the more you touch it, the more bandwidth you need.”

    This is one of the guiding principles in columnar databases, of course, but Swarm64 is not a columnar database. Rather, it is a highly compressed database that keeps active data in the processor main memory and committed data on flash storage. The Lempel-Ziv, or LZ, lossless data compression method is used to scrunch and expand this data, and it is implemented in VHDL on the FPGA for very high bandwidth and very high speed. The FPGAs are also used to accelerate the functioning of the underlying storage engine used in MySQL, MariaDB, and PostgreSQL databases, which is the secret sauce of the Swarm64 stack.
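
    To get a feel for what LZ-family compression does to repetitive, machine-generated rows, here is a small CPU-side illustration using zlib (DEFLATE, an LZ77 derivative). It is not Swarm64's VHDL implementation, and the data is synthetic.

    ```python
    import json
    import zlib

    # Illustration only: Swarm64 implements its LZ compression in VHDL on the
    # FPGA. Here zlib (DEFLATE, an LZ77 derivative) running on the CPU simply
    # shows the kind of ratio repetitive, machine-generated rows can reach.

    rows = [
        {"ts": 1_492_500_000 + i, "device": f"switch-{i % 40:03d}",
         "metric": "if_octets", "value": (i * 7919) % 100_000}
        for i in range(50_000)
    ]
    raw = "\n".join(json.dumps(r, sort_keys=True) for r in rows).encode()
    packed = zlib.compress(raw, level=6)

    print(f"raw:        {len(raw) / 1e6:.1f} MB")
    print(f"compressed: {len(packed) / 1e6:.1f} MB ({len(raw) / len(packed):.1f}x smaller)")
    ```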

    Rönner says that there are two schools of thought when it comes to database acceleration using FPGAs. The first is to literally translate SQL queries into the VHDL language of FPGAs, and people have done this. But this approach has many problems, not the least of which is that you cannot change the queries or the database management systems once you implement it, and the implementation takes a lot of time. The other approach is to process parts of the SQL query on the FPGA – including filtering and SQL query preprocessing – so it reduces the load on the CPU but also reduces the effective bandwidth used to the CPU. The Swarm64 storage engine is not a columnar store, but rather a row store with an indexless structure; however, it only chews on the necessary parts of the data, like a columnar store does. By not having an index, it sheds a lot of the metadata and other overhead associated with a relational database and allows data to be ingested and processed in near real time as it comes in. Swarm64 does store data in its own tables and they are obviously not in the same format as the indexed tables generated by MySQL, MariaDB, and PostgreSQL.
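
    The "filter below the engine" idea can be sketched in a few lines of Python. This is a conceptual stand-in rather than Swarm64's storage engine; the point is only that the layer closest to the data applies the predicate, so the query engine sees, and pays bandwidth for, far fewer rows.

    ```python
    # Conceptual stand-in, not Swarm64 code: the layer closest to the data
    # applies the WHERE-style predicate while scanning, so only qualifying
    # rows are handed up and the bandwidth to the query engine shrinks.

    from typing import Callable, Iterable, Iterator

    Row = dict

    def storage_scan(rows: Iterable[Row], predicate: Callable[[Row], bool]) -> Iterator[Row]:
        """What the accelerator would do near the data: scan and filter early."""
        return (row for row in rows if predicate(row))

    def query_engine(filtered: Iterable[Row]) -> int:
        """The CPU-side engine only aggregates the rows that survived."""
        return sum(row["value"] for row in filtered)

    table = [{"region": "eu" if i % 3 else "us", "value": i} for i in range(1_000_000)]
    total = query_engine(storage_scan(table, lambda r: r["region"] == "eu"))
    print(f"sum over filtered rows: {total}")
    ```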

    The company has been peddling its own FPGA card, based on chips from Xilinx, but is partnering with Intel to sell a hardware stack based on the current Arria 10 outboard FPGA cards and is in position to employ the CPU-FPGA hybrids coming later this year as well as the beefier Stratix 10 FPGAs from Intel. The current setup puts a single Arria 10 FPGA card in a two-socket Xeon server.

    The compression and indexlessness of the Swarm64 database storage engine allows for a database table to be compressed down to a 64 TB footprint; a “real” MySQL, MariaDB, or PostgreSQL database with the same data would take up 640 TB of space, just to give you the comparison. That is the maximum database size supported at the moment, and it is sufficient for most use cases given that most databases are on the order of tens of gigabytes to tens of terabytes in real enterprises. This 640 TB effective capacity limit covers 95 percent of the addressable databases used in the world, according to Rönner.

    To scale out the performance of the Swarm64 database storage engine, customers can put multiple FPGAs or more powerful FPGAs into their nodes, and if they need to horizontally scale, they can shard their databases (as is often done at enterprises and hyperscalers) to push the throughput even further. (This sharding doesn’t help reduce latencies as much, of course.) In the future, says Rönner, Swarm64 will be implementing a more sophisticated horizontal scaling method that has tighter coupling and that will not require sharding of data. Until then, customers can scale up from one, to two, to four, to eight sockets and a larger and larger main memory footprint to accommodate larger database tables.
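
    For readers unfamiliar with sharding, a minimal hash-sharding sketch follows. It shows the generic technique, not Swarm64's planned clustering, and it illustrates why sharding scales throughput while doing little for single-query latency.

    ```python
    import hashlib

    # Minimal hash-sharding sketch (the generic technique, not Swarm64's planned
    # clustering): rows fan out across shards by key, so ingest and scan
    # throughput scale with the shard count, while any single-key lookup still
    # pays the latency of one node. That is why sharding helps throughput far
    # more than it helps latency.

    NUM_SHARDS = 4

    def shard_for(key: str) -> int:
        digest = hashlib.sha1(key.encode()).digest()
        return int.from_bytes(digest[:4], "big") % NUM_SHARDS

    shards = {i: [] for i in range(NUM_SHARDS)}
    for i in range(100_000):
        key = f"customer-{i}"
        shards[shard_for(key)].append(key)

    for shard_id, keys in shards.items():
        print(f"shard {shard_id}: {len(keys):,} rows")
    ```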

    In a test to prove the performance of the Swarm64 engine, the company took a network monitoring data stream and tested how quickly it could ingest data. It took a two-socket Xeon server with 256 GB of main memory and some flash drives and allocated half of the compute and memory to a MariaDB database. Using the default storage engine, this half-node could ingest and query around 100,000 packets per second of data coming in from network devices spewing their telemetry; switching to the Swarm64 engine boosted the performance of this workload – and without any changes to the application or to the MariaDB database management system – to 1.14 million packets per second. Contrast this with the FPGA-accelerated Netezza appliance that IBM acquired a few years back, which is based on a proprietary implementation of PostgreSQL that has had trouble keeping up with the pace of open source PostgreSQL development. And also realize that you could buy a massive NUMA machine with maybe 12 or 16 or 32 sockets, put the same number of FPGAs into the box as sockets, and create a massively accelerated database that would scale up nearly linearly.

    Swarm64 is charging $1,000 per month per FPGA to license its software stack; the Linux driver that lets Swarm64 talk to the FPGA and the databases will eventually be open sourced, so in theory you might be able to roll your own and support yourself. Either way, you have to buy your own hardware, including the FPGA accelerators, and while FPGAs are not cheap, there is no key component of a modern, balanced system that is more expensive than main memory. And adding experts to the IT staff who understand database sharding or in-memory databases and porting applications to these is not cheap, either. The in-memory databases like SAP HANA, reckons Rönner, cost ten times as much per unit of performance as a server using its software and FPGAs – and the in-memory databases are limited to double-digit terabytes of capacity at that, when Swarm64 can scale pretty much as far as customers want if they are willing to tolerate sharding or pay for a big NUMA box. It is fun to contemplate a cluster of fat NUMA machines like maybe the 32-socket SGI UV 300 machines from Hewlett Packard Enterprise, each with 64 TB of physical memory and an effective capacity of 640 TB running the Swarm64 engine. A tidy sixteen nodes would yield a database with an effective capacity of over 1 PB.

    At some point, Swarm64 will be able to take advantage of Optane 3D XPoint SSDs and memory sticks and really expand the effective capacity of a node or a cluster of nodes – and is planning with Intel right now on exactly how best to do this.

    The roadmap also calls for the Swarm64 database storage engine to use Oracle’s Data Cartridge, an analog to a storage engine, to provide support for Oracle databases, and this would mean that many customers would not need to go to the 12c in-memory features, which cost $23,000 per core compared to the $47,500 per core charge of the Oracle 12c Enterprise Edition database. This is a potentially huge savings for Oracle shops. Similarly, Microsoft is said to use something very similar to a storage engine for SQL Server, and Rönner believes that with a little help from Microsoft, it might only take three or four months to slide Swarm64 underneath it.

    If all of that comes to pass, then those who charge per-core for their database licenses will see a revenue downdraft just like Intel will for Xeon processors and server makers will for server footprints. But Intel will make at least some of it back in FPGA sales.

    All of this begs the question: If hyperscalers are so smart, why didn’t they do FPGA acceleration of storage engines before investing so much effort into flash, sharding, and NoSQL? They must really like to have monolithic X86 CPU architectures, but with the Swarm64 driver open source and perhaps opening up the code to the big hyperscalers, Swarm64 might be able to close some big deals up on high, not just in the enterprise datacenter it is targeting. If nothing else, this will prove that it can be done, and if the idea takes off, hyperscalers will probably just invent it themselves. They tend to do that, even if they do reinvent the wheel a lot.

    https://www.nextplatform.com/2017/04...nal-databases/

  7. #7

    Equinix: 250% Growth in AWS Direct Connections

    PR: Demonstrates company's ongoing commitment to bringing AWS Direct Connect to customers worldwide


    Equinix, Inc.
    Apr 18, 2017

    REDWOOD CITY, Calif., April 18, 2017 /PRNewswire/ -- Equinix, Inc. (Nasdaq: EQIX), the global interconnection and data center company, today announced that it has become an Advanced Technology Partner in the AWS Partner Network (APN). The Advanced designation is the highest level an APN Technology Partner can achieve, and it underscores Equinix's ongoing commitment to serving AWS customers by providing direct and secure access inside its global footprint of International Business Exchange (IBX®) data centers. Equinix first began offering AWS Direct Connect in 2011, and has since witnessed substantial growth in connections to the cloud. In fact, since the beginning of 2016, customer connections to AWS via the Equinix Cloud Exchange have grown by more than 250%.

    Enterprises are increasingly incorporating cloud-based solutions as part of their overall IT infrastructure. In fact, a recent IDC cloud survey, CloudView 2016, shows that 58% of all organizations surveyed are embracing cloud, using public or private cloud for more than one or two small applications or workloads, up from 24% just 14 months ago.* By providing direct access to the AWS cloud inside Equinix data centers, Equinix helps its enterprise customers advance their cloud strategies and capitalize on these cloud benefits by seamlessly incorporating cloud services into their existing architectures.

    Highlights / Key Facts

    • In the past several months, Equinix added availability of AWS Direct Connect in three new markets – Chicago, London and Munich. With the addition of these markets, Equinix now offers Direct Connect across 14 metros. The Chicago AWS Direct Connect capability will be part of the AWS US East (Ohio) Region, and the London and Munich access points will serve customers in these critical European markets.
    • To obtain Advanced Technology Partner status, AWS requires partners to meet stringent criteria, including the ability to demonstrate success in providing AWS services to a wide range of customers and use cases. Additionally, partners must complete a technical solution validation by AWS. As part of this esteemed group of partners, Equinix will have access to a wide range of collaborative joint sales and marketing programs, which enables it to deliver AWS Direct Connect to a broader group of potential customers, worldwide, thus strengthening Equinix's position in cloud interconnection services.
    • By directly and securely connecting to AWS via AWS Direct Connect, customers can take advantage of
      • Predictable performance and user experience with dedicated, low-latency, high bandwidth direct connections to AWS
      • Enhanced compliance by connecting privately to AWS and keeping data within region, without going over the public internet

    • With the addition of Chicago, London and Munich, Equinix now offers the AWS Direct Connect service in 14 markets, including Amsterdam, Chicago, Dallas, Frankfurt, Los Angeles, London, Munich, Osaka, Seattle, Silicon Valley, Singapore, Sydney, Tokyo and Washington, D.C./Northern Virginia. Additionally, AWS GovCloud is available in all US AWS Direct Connect locations. Equinix customers in these metros will be able to lower network costs into and out of AWS and take advantage of reduced AWS Direct Connect data transfer rates.
    • Equinix will be showcasing its interconnection capabilities at this week's AWS Summit San Francisco, taking place April 18-19, 2017 at the Moscone Center in San Francisco.


    Quotes

    • Greg Adgate, vice president, global technology partners and alliances, Equinix:
      "As one of the first data center providers to enable direct access to AWS via its AWS Direct Connect service, we have strived to continue to bring access to all our enterprise customers, worldwide. We are honored to have reached Advanced Technology Partner status. Through our collaboration with AWS, we are providing additional ways for our global customers to achieve improved performance of their cloud-based applications."
    • Robert Mahowald, group vice president, applications and cloud business models, IDC:
      "In our recent research, we are seeing a strong trend within enterprise IT to move applications and workloads off premises onto cloud platforms such as AWS, to achieve cost and application performance benefits. Equinix is helping enterprise customers easily make this transition by enabling direct, low-latency, secure connections to cloud services, like AWS Direct Connect, within its global footprint of data centers."


    http://www.prnewswire.com/news-relea...300440770.html

  8. #8

    NTT: Multi-Cloud Connect extends connection to Oracle Cloud

    Press Release -- April 18th, 2017
    Source: NTT Communications

    TOKYO, Japan —April 18th, 2017 — NTT Communications Corporation, the ICT solutions and international communications business within the NTT Group, announced the extension of Multi-Cloud Connect connection to Oracle Cloud, to help multi-national customers take advantage of performance, cost and innovation benefits of the cloud.

    While enterprises understand the promise and many benefits of the cloud, most experience issues such as latency, packet loss and security threats, given that connectivity to cloud services is still heavily dependent on the public Internet. With Multi-Cloud Connect, Oracle Cloud users will be able to leverage NTT Com's secure, reliable, high-performing MPLS network to access their business-critical applications.

    Multi-Cloud Connect will connect directly to Oracle Cloud's platform through Oracle Network Cloud Services - FastConnect, enabling private connections to its broad portfolio of features: platform as a service (PaaS) and infrastructure as a service (IaaS). This includes middleware such as “Oracle Database Cloud Service” and “Oracle Java Cloud Service”, as well as integration and business analytics features. Furthermore, NTT Com and Oracle will enable hybrid deployment of Oracle Cloud and Oracle software hosted on-premises or “Oracle Cloud at Customer”, under one global network.

    ...

    About NTT Communications Corporation

    NTT Communications provides consultancy, architecture, security and cloud services to optimize the information and communications technology (ICT) environments of enterprises. These offerings are backed by the company’s worldwide infrastructure, including the leading global tier-1 IP network, the Arcstar Universal One™ VPN network reaching 196 countries/regions, and over 140 secure data centers worldwide. NTT Communications’ solutions leverage the global resources of NTT Group companies including Dimension Data, NTT DOCOMO and NTT DATA.

    About Oracle PartnerNetwork

    Oracle PartnerNetwork (OPN) is Oracle's partner program that provides partners with a differentiated advantage to develop, sell and implement Oracle solutions. OPN offers resources to train and support specialized knowledge of Oracle's products and solutions and has evolved to recognize Oracle's growing product portfolio, partner base and business opportunity. Key to the latest enhancements to OPN is the ability for partners to be recognized and rewarded for their investment in Oracle Cloud. Partners engaging with Oracle will be able to differentiate their Oracle Cloud expertise and success with customers through the OPN Cloud program – an innovative program that complements existing OPN program levels with tiers of recognition and progressive benefits for partners working with Oracle Cloud.

    http://newswire.telecomramblings.com...-oracle-cloud/

  9. #9

    Comcast Business Now Provides Enterprises with Dedicated Links to IBM Cloud

    PR: Up to 10 Gbps Connections to IBM Cloud for Public, Private or Hybrid Cloud Deployments

    PHILADELPHIA - 13 Apr 2017: Comcast Business today announced that it now provides direct, dedicated links to IBM (NYSE: IBM) Cloud’s global network of data centers, allowing Comcast Business to provide enterprise customers added flexibility with more choices for connections to cloud enablement. Comcast Business customers can receive up to 10 Gigabits-per-second (Gbps) of private network connectivity to IBM Cloud for public, private, or hybrid cloud deployments, as well as bare metal environments, to deliver a wide range of critical business applications.

    “By working with IBM Cloud, Comcast Business gives enterprises more choices for connectivity so they can store data, optimize their workloads, and execute mission-critical applications in the cloud, whether it be on-premise, off-premise or a combination of the two,” said Jeff Lewis, vice president of data services at Comcast Business. “Through dedicated, reliable, secure access with multi-Gigabit performance and low latency, organizations can connect to a cloud system that best fits their needs with the ability to easily scale up in the future as requirements change.”

    Direct cloud connectivity helps businesses achieve better performance, security and availability compared to connecting over the open Internet, and Comcast’s offering is backed by a service level agreement (SLA). With Direct Link, Comcast Business clients will have access to IBM Cloud's expanding global footprint, which currently includes over 50 data centers in 19 countries across six continents. All IBM Cloud data centers connect to advanced networking infrastructure, hardware and software with robust bandwidth and connectivity for high performance and reliability.

    “Cloud platforms are fundamentally changing how the world’s data is processed, stored and delivered while delivering better agility for enterprises to help reduce costs, improve consumer experiences and create new revenue opportunities,” said Steve Canepa, general manager, global telecommunications, media and entertainment industry, IBM. “Today’s announcement represents a continued collaboration with Comcast to deliver robust technologies that drive innovation -- integrating the power of cloud across private cloud solutions, hybrid deployments and public cloud offerings.”

    Comcast Business’ network connects to nearly 500 data centers as well as cloud exchanges for dynamic access to multiple cloud providers.

    About Comcast Business

    Comcast Business offers Ethernet, Internet, Wi-Fi, Voice, TV and Managed Enterprise Solutions to help organizations of all sizes transform their business. Powered by a next-generation, advanced network, and backed by 24/7 technical support, Comcast Business is one of the largest contributors to the growth of Comcast Cable. Comcast Business is the nation’s largest cable provider to small and mid-size businesses and has emerged as a force in the Enterprise market; it has been recognized over the last two years by leading industry associations as one of the fastest-growing providers of Ethernet services.

    http://www-03.ibm.com/press/us/en/pr...ease/52061.wss

  10. #10

    170 PoPs around the world get Microsoft Azure ExpressRoute link

    Equinix, Digital Realty, CoreSite, Telx, Switch Supernap, Interxion and Telehouse are all used as bases for the PoPs in 20 countries.

    João Marques Lima
    30 March, 2017

    Direct connect solutions provider Console Connect has integrated with Microsoft Azure ExpressRoute to expand customers’ capabilities around cloud connectivity.

    The company is now offering Azure ExpressRoute via the Console Platform which is currently available for connections between Console’s 170 PoPs and Azure cloud regions in the US and the UK.

    At the launch stage, the integration enables interconnections to be set up between any Console node, and Azure ExpressRoute locations in Silicon Valley, CA; Los Angeles, CA; Ashburn, VA; Chicago, IL; Dallas, TX; Toronto, Canada; and London, UK.

    Greg Freeman, VP EMEA at Console Connect, told Data Economy: “We have a strong partner ecosystem, and will continue to grow our footprint in order to give our customers the best, most secure way of reaching their Direct Connect targets.

    “At our core is the ability to connect to multiple products – such as Microsoft ExpressRoute – XaaS providers, partner companies and ecosystems. We are constantly growing the depth of service providers on the Console Connect platform to ensure we offer unparalleled diversity in choice so that businesses can connect to one another.”

    Console’s take on Microsoft Azure ExpressRoute expands the company’s cloud interconnection portfolio, which already includes Azure’s competitors AWS and Google Cloud.

    The company’s 170 PoPs are located across regions including EMEA, USA and Canada and APAC, sitting in countries such as Japan, USA, UK, Poland, Sweden, Germany, Luxembourg, Ireland and UAE.

    The company utilises most of the main colocation and wholesale data centre providers including Equinix, Digital Realty, CoreSite, Telx, Switch Supernap, IO, ViaWest, Cologix, zColo, Interxion, Telehouse and LuxConnect.

    Jef Graham, CEO, Console Connect, said: “As CIOs look to secure their cloud environments, the networks on which they connect to the cloud are coming under increasing scrutiny.

    Opening up more opportunities for secure, predictable direct connections will help ensure safer cloud deployments as adoption continues to rise.”

    https://data-economy.com/170-pops-ar...essroute-link/
