  1. #1
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    15,012

    [EN] SPARC T7 and M7 Servers Faster Than Xeon and POWER8

    SPARC M7 delivers 2.3x the sustained memory bandwidth of IBM POWER8 and 2.4x that of Xeon, with over 1 TB/sec on the STREAM benchmark

    The STREAM benchmark measures delivered memory bandwidth on a variety of memory intensive tasks. Delivered memory bandwidth is key to a server delivering high performance on a wide variety of workloads. The STREAM benchmark is typically run where each chip in the system gets its memory requests satisfied from local memory. This report presents performance of Oracle's SPARC M7 processor based servers and compares their performance to x86 and IBM POWER8 servers.

    Bisection bandwidth on a server is a measure of the cross-chip data bandwidth between the processors of a system where no memory access is local to the processor. Systems with large cross-chip penalties show dramatically lower bisection bandwidth. Real-world ad hoc workloads tend to perform better on systems with better bisection bandwidth because their memory usage characteristics tend to be chaotic.

    IBM says the sustained or delivered bandwidth of the IBM POWER8 12-core chip is 230 GB/sec. This number is in fact a peak bandwidth calculation: 230.4 GB/sec = 9.6 GHz * 3 (r+w) * 8 bytes. A similar calculation is used by IBM for the POWER8 dual-chip module (two 6-core chips) to show a sustained or delivered bandwidth of 192 GB/sec (192.0 GB/sec = 8.0 GHz * 3 (r+w) * 8 bytes). Peaks are theoretical limits used for marketing hype; true measured delivered bandwidth is the only useful comparison for understanding the delivered performance of real applications.

    The STREAM benchmark is easy to run and anyone can measure memory bandwidth on a target system (see Key Points and Best Practices section).

    • The SPARC M7-8 server delivers over 1 TB/sec on the STREAM benchmark. This is over 2.4 times the triad bandwidth of an eight-chip x86 E7 v3 server.
    • The SPARC T7-4 delivered 2.2 times the STREAM triad bandwidth of a four-chip x86 E7 v3 server and 1.7 times the triad bandwidth of a four-chip IBM Power System S824 server.
    • The SPARC T7-2 delivered 2.5 times the STREAM triad bandwidth of a two-chip x86 E5 v3 server.
    • The SPARC M7-8 server delivered over 8.5 times the triad bisection bandwidth of an eight-chip x86 E7 v3 server.
    • The SPARC T7-4 server delivered over 2.7 times the triad bisection bandwidth of a four-chip x86 E7 v3 server and 2.3 times the triad bisection bandwidth of a four-chip IBM Power System S824 server.
    • The SPARC T7-2 server delivered over 2.7 times the triad bisection bandwidth of a two-chip x86 E5 v3 server.
    https://blogs.oracle.com/BestPerf/en...stream_sparcm7

  2. #2

    PR: What Customers and Partners are Saying About SPARC M7-Based Systems

    ORACLE OPENWORLD, SAN FRANCISCO—Oct 26, 2015

    B&H Photo Video

    “B&H Photo Video continually evaluates new technology to achieve greater performance and scale for our e-commerce infrastructure with a specific focus on providing a superior customer experience. To accelerate on-line transactions, we analyze massive amounts of information to make better, quicker decisions. We tested the SPARC T7 with Oracle Database 12c in-memory options. With the new M7 Software in Silicon acceleration, queries like ‘How many unique price points do we have on our price list?’ run 83x faster on SPARC T7 versus the same server (without silicon acceleration) using the current industry approach of flash storage,” said Shlome Seidenfeld, CIO & VP E-Commerce, B&H Photo Video. “Using Oracle SPARC T7 servers, we can reach new levels of insight with real-time queries on up-to-date transactional data.”

    Informationsverarbeitung für Versicherungen GmbH (ivv)

    “Oracle’s SPARC T7 servers will dramatically increase the speed of services we provide our insurance customers. The new platform, which runs Oracle Solaris 11.3, delivers even greater security for our customers, so they can be assured their data is safe with ivv,” said Thorsten Mühlmann, lead architect, Unix Systems, ivv.

    University Hospitals Leuven

    “Today, SPARC is the only suitable platform that meets our application needs. We selected SPARC servers over IBM and x86-based solutions because scalability and performance are essential for our mission-critical SAP Adaptive Server Enterprise database infrastructure. With the SPARC M7 servers, we can expand our business and grow at the speed of our customers,” said Jan Demey, Team Leader for IT Infrastructure, Leuven University Hospital.

    BPC

    “BPC Banking Technologies’ long-term relationship with Oracle aims to find the best technology solutions for our clients. We successfully tested SmartVista on Oracle’s SPARC M7 server running Oracle Solaris, and measured the impact of the Oracle Database In-Memory option along with the SPARC M7 processor’s new SQL in Silicon feature,” said Evgeny Kozhin, senior solutions architect, BPC Banking Technologies. “We were excited to see dramatic performance increases for both our online and batch processing tests. SmartVista is highly tuned and traditionally we only see incremental performance gains with new processor generations. No modifications to SmartVista were needed to get these extraordinary results.”

    Capitek

    “Capitek AAA is a carrier-grade access authentication management application for the wireless communication networks across China. In our tests processing log files for each AAA server, Oracle's SPARC M7 systems with Silicon Secured Memory and Oracle Solaris Studio development tools proved to be the only effective method of protection against dangerous programming vulnerabilities,” said Jerry Chen, senior manager, Telecom Software Product Department. “It enabled Capitek AAA to be more secure and highly available with very little impact on overall system performance. Other software based memory checking tools proved to be unusable due to their large overhead.”

    JomaSoft

    “JomaSoft recently completed performance tests on Oracle’s SPARC T7 system running Virtual Datacenter Control Framework (VDCF), our management solution for creating, migrating, patching and monitoring Oracle Solaris environments. Our results showed VDCF to be 1.5x faster core-to-core on SPARC T7 compared to SPARC T5. JomaSoft views Oracle’s powerful SPARC M7 and T7 systems as ideal platforms for customer consolidation and virtualization projects, with technology and value that no other vendor can offer,” said Marcel Hofstetter, CEO at JomaSoft.

    MSC Software

    “MSC Software, a worldwide leader in multidiscipline simulation technology, recently tested our SimManager simulation data and process management system on Oracle’s SPARC M7 system with Oracle Database 12c. Our testing found SPARC M7 to be extremely scalable and able to deliver better core-to-core throughput than an Intel Xeon X5 v3 server running a SimManager workload. Oracle Solaris 11 virtualization also consolidates multiple instances of the MSC SimManager server, providing a simplified method of managing and processing hundreds of thousands of simulations for product design onto a single platform,” said Leo Kilfoy, general manager, Engineering Lifecycle Management Business Unit, MSC Software Corporation.

    SAS

    "Oracle's Software in Silicon technology delivers significant value to both SAS customers and internal development teams. The scalability, performance and extensive memory bandwidth of the Oracle SPARC M7 is well-matched with the highly threaded and memory intensive algorithms of our high performance Business Analytics software – which means customers running SAS on Oracle will see faster analysis of their data so they can make better business decisions,” said Craig Rubendall, vice president, Research & Development, SAS. “In addition, SAS uses a variety of tools to ensure the quality of code that is delivered to our customers. The SPARC M7’s Silicon Secured Memory feature along with the Oracle Solaris Studio Code Analyzer detected difficult to find run-time errors far more quickly than other products we use for this purpose, resulting in faster fixes to common code across all platforms.”

    Siemens PLM Software

    “As a leading global provider of product lifecycle management software and services, Siemens PLM Software, helps thousands of companies realize innovation by optimizing their processes. We continually leverage our strong relationship with Oracle to ensure that our Teamcenter software is tuned to run on Oracle platforms. Teamcenter tests of the new Oracle SPARC M7 servers showed dramatic performance improvements, surpassing any improvements seen with a single generation upgrade of SPARC servers. Software-in-Silicon features of the SPARC M7 processor such as the Silicon Secured Memory and SQL in Silicon offer unique capabilities for performance tuning,” said Chris Brosz, vice president of Technical Operations, Siemens PLM Software.

    Software AG

    “Software AG’s Adabas Database Management System Platform is optimized for large-scale transaction processing and provides high-performance and reliable data processing for enterprise business transactions. We have been collaborating closely with Oracle engineering and we recently tested Adabas version 6.4 SP 1 on Oracle’s SPARC M7 system through their early access program and achieved an amazing 2.8X performance increase over Oracle’s SPARC T5 system,” said Angelika Siffring, VP, Product Management, Software AG. “Software AG’s relationship with Oracle helps us provide the fastest and most secure software solutions to our mutual customers.”


    https://www.oracle.com/corporate/pre...m7-102615.html

  3. #3

    Ellison Pits Oracle Cloud Against AWS, Azure

    October 26, 2015 Timothy Prickett Morgan

    Oracle co-founder and now chief technology officer Larry Ellison may have come late to the term cloud computing, but the database giant that expanded into middleware and applications over his tenure was not – definitely not – late to understanding the transformational aspects of compute utilities hosting application software.

    The message that Ellison conveyed, with his wry humor, during his opening keynote at the OpenWorld extravaganza in San Francisco, was that contrary to what many have said, Oracle was not late to the cloud but rather started out a decade ago to transform itself and its code to be a participant in the software as a service, or SaaS cloud, which was about the same time that online retailer Amazon decided to start from the bottom up and sell raw compute and storage capacity to startups as a side business.

    “We are in the middle – and I really do mean in the middle – of a generational shift in computing that is no less important than our shift to personal computing when mainframes and minicomputers dominated our industry,” Ellison explained. And he knows because he was there. “It seems like early days. The biggest cloud companies are $6 billion in size, they are not $100 billion, in terms of their cloud business.”

    To be sure, the cloud transformation has been going on for fifteen years, and there are some very big players like Amazon, Google, Microsoft, IBM, and Rackspace Hosting that are in the field and some others like Hewlett-Packard and Dell who have left the public cloud and are focusing on selling wares to enterprises to build private clouds. Amazon doesn’t believe in private clouds and says straight up that cloud means public cloud because this is the only way to get scale, and yet it has had to concede that companies will want, at least for some time, a transitional period where they run some of their applications on their own gear in their own datacenters. Google has not let its opinion be known, but we presume it is similar to Amazon’s. Rackspace tries to sell on both sides of the firewall, and so does IBM with its SoftLayer cloud on the public side and now its Power machines and mainframes on the private side. (Most clouds are X86-based, which is problematic for IBM given that it sold off its System x division to Lenovo last October.) HP and Dell partner with the big three cloud providers – and notably they both also sell lots of gear to Microsoft as it builds up Azure – and are also focusing on hybrid scenarios to make their bucks. Dell will inherit yet another public cloud, VMware’s vCloud Air, if its $67 billion acquisition of EMC goes through next year, and it will be interesting to see what Michael Dell does with that. Or doesn’t.

    (If you are looking for Ellison to sweep in and try to buy EMC or Dell or both, don’t hold your breath, although that would be a fun fight to watch from the sidelines.)

    When Oracle started in the cloud business, it was really seeking to blunt the attack of Salesforce.com, the online CRM application founded by Marc Benioff, a former top Oracle executive, and a slew of other companies that were seeking to get into the application service provider, or ASP, business as it was called back then. The difference between ASP and SaaS, in concept, is negligible, but the underpinnings are wildly different and the need for cheaper hardware and software infrastructure than was available during the dot-com boom is why ASPs, with the exception of Salesforce.com, NetSuite, and a few others from that time fifteen years ago, largely failed. Since that time, hyperscale servers and storage, scalable databases, and multitenant capabilities have changed those economics. Having developed and acquired its middleware and application stacks and more than a few databases as well as its flagship relational database, a decade ago Oracle started down the road of rewriting its middleware and applications from scratch in Java, which it referred to as Fusion.

    Oracle did not really intend to get into the PaaS or IaaS markets, Ellison conceded, but the need to deliver SaaS applications made PaaS necessary, and then once you are doing PaaS, you need to create IaaS, too. By this same logic, AWS is moving up the value chain from IaaS to PaaS and we can expect that, at some point, AWS will offer applications of its own too, although you could be generous and count the applications available in the AWS Marketplace right now.

    “In this new world of cloud computing, everything has changed, and almost all of our competitors are new,” Ellison continued. “We now compete with Salesforce.com and Workday in applications. These are the companies we see most frequently when we are selling applications in the marketplace, and we virtually never, ever see SAP. This is a stunning change. The largest application company in the world is still SAP, but we never see them in the cloud – and we sell a lot of application in the cloud. One of the companies that has been an historic competitor of ours is Microsoft, and Microsoft is the only one of our traditional competitors that has crossed the chasm and is now competing aggressively in the cloud business at all three layers – infrastructure, platform, and applications. The same applications they offered on premises they offer on the cloud, and they also offer infrastructure plus database and their programming languages. In infrastructure, again, a stunning change. We compete with Amazon.com, primarily, in infrastructure. We see Google occasionally but not all that often. And we never, ever see IBM. So this is how much our world has changed: The two competitors we watched most closely over the last two decades have been IBM and SAP, and we no longer pay any attention to either one of them. It is quite a shock. I can make the case that IBM was the greatest company in the history of companies. They are nowhere in the cloud. SAP is the largest application company that has ever existed. They are nowhere in the cloud.”

    In fact, the way Ellison sees it, Microsoft and Oracle are the only two companies that have taken on all three layers of the cloud, and you get the sense that he believes that a certain amount of scale is necessary to make that happen – scale that a small SaaS provider running on top of Amazon Web Services or Microsoft Azure or Google Compute Engine won’t be able to match.

    As for the basic infrastructure underlying the Oracle Cloud, Ellison said that the design goal is to create its systems such that it has the lowest acquisition price and the lowest total cost of ownership for its compute and storage infrastructure. And to be specific, the goal is to meet or beat AWS on compute and storage, although it is tough for Oracle to beat AWS on these factors except in archival storage, said Ellison, where Oracle is one of the three major suppliers of tape libraries in the industry. The other goal was to offer the fastest database, middleware, and analytics software on the cloud, and that performance is just another aspect of price because if you can finish a job faster on a cloud, that is a kind of savings, too. (Hence the TCO argument, not just the raw price per unit of compute or storage.) Peak performance is sometimes a requirement anyway, so this is a kind of positive by-product of that need. And like the hyperscalers, Oracle has learned to add more automation to its software stack to not only make software installation easier and quicker, but to eliminate human error – both of which further reduce costs.

    But Oracle has one other big advantage, said Ellison. “As we got into this cloud business and started building SaaS applications, it dawned on us that this is not going to work unless we provide a platform whereby you can extend those applications and integrate those applications with on premises applications and other cloud applications. We have done that, and by the way, Salesforce.com has done that. But most cloud companies are pretty small startups, and it is very difficult for a small startup company to tackle building their SaaS application and a comprehensive platform that they make available to their customers. So this is a huge advantage that Oracle has competing in the SaaS world that we have an underlying platform that makes our applications extensible.”

    [continued]

  4. #4
    [continuation]


    So what precisely is on the Oracle Cloud today? At the SaaS level, Fusion applications for enterprise resource planning, enterprise performance management, supply chain management, marketing and sales, service and support, human capital management, and talent management. There are also special modules aimed at the banking, telecommunications, utilities, pharmaceuticals, retail, and hospitality industry sectors. With the latest update to the Fusion stack this week, Oracle is adding manufacturing and e-commerce to this stack. Ellison said that Oracle has about 1,300 Fusion ERP customers on the cloud, more than ten times as many as Workday has, and over 5,000 HCM customers, about 1,000 of them using Fusion HCM and around 4,000 using the Taleo SaaS tools Oracle acquired. (The 1,000 customers for Fusion HCM were more customers than Workday has for its HCM SaaS tools as well.)

    As for SaaS CRM, Oracle has around 5,000 customers today, making it the number two provider behind Salesforce.com. It will take Oracle a while to catch Salesforce.com, which had $5.37 billion in sales in its fiscal 2015 ended this past February and which has orders of magnitude more customers. But, Ellison said that Salesforce.com had a plan to boost its revenues in CRM by $1 billion in new business this year, and Oracle is on track to do an incremental $1.5 billion, so momentum is shifting Oracle’s way thanks to its large enterprise customers. Salesforce.com has been growing, but it has also been losing hundreds of millions of dollars per year to do so. Oracle now has 2,259 customers using its Database Cloud Service, up from 87 a year ago.

    As part of the OpenWorld conference, Ellison trotted out a bunch of new features and functions for parts of the Oracle stack. In addition to supporting in-memory columnar storage and query processing with Oracle 12c, the database has also been given multitenancy capabilities. The database is split into a container database, or CDB, that contains the data, control, redo log, and other files associated with a database – most of the working parts of the database. On top of the CDB rides what Oracle calls pluggable databases, or PDBs, and this contains data that is relevant to a specific database instance; as far as an application is concerned, a PDB looks like a whole database, even though it is isolated from others around it and sharing certain aspects of the underlying database management system through the CDB. With the Oracle 12c 12.2 update, Oracle now supports 4,096 PDBs per CDB, up from 252 PDBs with Oracle 12c 12.1. The 12.2 update also supports hot cloning and refresh of PDBs and online tenant relocation through movement of PDBs.

    Oracle has not said what iron it is using to build the Oracle Cloud, but it has not been very interested in peddling bare bones machines and has largely removed itself from the HPC market that Sun Microsystems, which it bought in 2010, used to play in. But Ellison did say that it was now parking Exadata parallel database clusters in the cloud so that customers could use them on premises or in the Oracle Cloud. The cloudy Exadata machines scale in increments of 28 to 68 cores, 512 GB of main memory, 19.2 TB of PCI-Express flash, and 42 TB of disk capacity. Pricing was not available at press time.

    All Exadata machines, cloud or on premises, are now also able to store columnar data in the PCI-Express flash cards in the Exadata machines as well as in DRAM main memory, effectively boosting their columnar storage by a factor of 10X to 100X. Ellison said this would allow these machines to store all but the largest databases in the world in memory. (He was using flash plus main memory as “memory,” something we had better get used to as 3D XPoint comes to market next year.) Oracle is now also allowing for Oracle Real Application Clustering to be used across whatever iron it does use in its cloud; presumably this means not just on Exadata iron, but on virtual machine and bare metal instances on Oracle Cloud.

    The other interesting feature is called Multitenant Java Server, which is a pluggable variant of the WebLogic application server that will allow the live migration of applications between on premises and the Oracle Cloud while also offering some of the same multitenancy security that has been added to the 12c database. Importantly, Oracle will be a big user of this feature since it allows for 3X compression of Java instances on servers. As a cloud provider, Oracle has to get its iron to do as much work as it can. Oracle is also simplifying and automating its Coherence Java server replication and caching software to make it easier to create fault tolerant Java applications that can span multiple datacenters, something that Ellison said would take some “hotshot engineers” to set up before all of this automation.

    http://www.theplatform.net/2015/10/2...nst-aws-azure/

  5. #5

    Interview with John Fowler

    At Oracle OpenWorld 2015 in San Francisco, John Fowler, Oracle’s executive vice president for systems, introduced new servers based on the company’s latest microprocessor, SPARC M7. The SPARC M7 is the sixth SPARC microprocessor Oracle has released since it acquired Sun Microsystems in 2010. The new processor is something of an engineering watershed for Oracle because it features security and performance functions directly hard-wired onto the chip using a technique known as “software in silicon.”

    Fowler sat down to talk about the engineering effort that led to the SPARC M7, and the implications of the new processor in the technology market.

    Q: Where does the new SPARC M7 processor fit into Oracle’s overall technology strategy?

    Fowler: Oracle acquired Sun some five-and-a-half years ago. At the time people didn’t understand what we were doing. But now, after generations of engineered systems and innovations, they can see that we’re really about providing very highly integrated and co-engineered products that include hardware and software.

    The exciting thing about SPARC M7 is that this is where we take that whole concept down to the silicon level. We’re able to exploit the opportunity of combining software and hardware engineers and do some things in the microprocessor itself to directly help enterprise applications.

    We’re obviously very excited about this. These kinds of development efforts are extraordinarily expensive and long-lived. Oracle has both the economic capability plus the intellectual property capability to undertake this kind of project.

    We don’t think of the SPARC M7 as a product. We think of it as an expression of our strategy.

    Q: How did the technique known as “software in silicon” come about?

    Fowler: We took hundreds of ideas that the hardware and software co-design teams had and we distilled those into a subset of ideas that was the most interesting. Then we wrote simulation models and then design models for the most promising ideas. We coded those in simulated processors and then ran real benchmarks against them to see if they worked properly, the way we thought they would.

    So what you see coming out in SPARC M7 is not the product of just a few ideas, it’s the product of a very disciplined, empirical, and engineering-driven activity that starts as an idea, goes all the way through simulation modeling and then into product. This has been quite an exciting journey for us.

    Q: Is this an innovation in terms of microprocessor development?

    Fowler: There’s an interesting aspect here if you look at the history of microprocessors. About 20 years ago 64-bit processors became available and they enabled a whole new generation of software. Ten years ago multicore and multithreaded processors became available. That really changed the economics of performance. We’ve been climbing up the core and thread curve ever since.

    Unfortunately, you get an element of diminishing returns by just constantly adding more cores and capabilities. A more thoughtful view is to take a look at what functions you can put on the chip and therefore enable better computing.

    I believe the next decade is going to be about doing more in terms of embedding software functions on the chips. The SPARC M7 is the first in this “capability generation.” We’re very happy to be first out of the gate.

    Q: Where do you see SPARC M7 making the most difference to enterprise computing?

    Fowler: There are three basic areas that we worked on. The first area is around security, where we tried to tackle two important elements.

    The first element is very high-speed encryption. We’ve actually had a history of working on this in chips and we improved it again in SPARC M7. By incorporating very high performance encryption in the chip we’re able to not only do it quickly but also leave a substantial amount of the processor’s resources to do other projects. That’s very important. A software-only encryption scheme typically will consume all of the processing resources of the chip.

    Encryption is a foundational element of enterprise computing, and we believe we should have done this a long time ago. Everyone needs to run their data centers fully encrypted—nothing should be done “in the clear.” That’s the future of computing. Absolutely everything is encrypted, whether it’s stored on a disk, on a wire, on your laptop, in the back-end data center, or on a tape drive. This is the first processor that enables that.

    Q: What other security area was important?

    Fowler: The second thing we did on security was to enable memory protection in a feature known as Silicon Secured Memory. The software team drove this effort initially, because they were developing large in-memory data stores and wanted a way to protect them from corruption.

    We later figured out that this was a more general feature that, if added to the processor and enabled for all applications, would eliminate a pretty broad class of compromises. Take Heartbleed for example. Heartbleed became a brand-name vulnerability because of a programming error, and our hardware memory protects against this kind of error.

    Q: What goes into the decision-making of what to try to put on a chip?

    Fowler: The database team focused in on, how do we improve not just the performance but also the efficiency of the database? This is driven by our vision that everything moves to in-memory computing. I know in the world of hardware today people talk a lot about flash. Frankly, I don’t think it’s that interesting. It’s an intermediate point to putting everything in memory.

    In the case of database integration with the processor, we did two specific things that are closely related. First, we took two portions of SQL processing—the part that scans for particular strings across a large amount of memory and the part that helps you filter and join rows—which are very low levels of how the database operates, and we encapsulated those in co-processors in silicon.

    That means, first of all, if you want to tackle a very, very large problem, we have extraordinary performance. Also, since these are operations that normally use a lot of processor time when executed only in software, we’re giving back a huge amount of efficiency to the customer. That’s because we’re making the database more effective on our processors.

    So that’s an area that took a lot of study. What would we put in a chip that would be straightforward enough to encapsulate in a chip design, but still have a huge benefit to the customer? And since we own the database and the chip, we’re able to deconstruct and find out which functions of the database go best on the chip.

    Q: What other embedded feature contributes significantly to performance?

    Fowler: That query acceleration is paired with another glorious feature: memory decompression. The idea here is—and maybe it’s a little counter-intuitive—we’ve connected up a silicon accelerator for decompressing data together with the SQL stuff that I talked about a moment ago. It lets a customer take a database that’s fairly large in size, say two terabytes, and have the whole thing reside in memory at say, one-eighth or one-tenth of the size that it normally is on disk, and still operate on it with full performance.

    So these two features go together. We offload database processing from the cores, which makes them more efficient and faster. Then we’re also able to put the entire database into a lot less memory, which lets users have either less expensive systems or tackle larger problems.

    Those are the first two pillars of SPARC M7—security and the database. And they’re both very strong upgrades to what you can get in processors today.

    Q: And the third?

    Fowler: The last thing, and it continues to be our hallmark in processor engineering, is that we wanted to make sure that we had the world’s fastest commercial microprocessor. So whether you want to tackle a large problem, or simplify your environment by having a small number of machines—whether you’re running Oracle applications or not—we wanted to have the world’s fastest commercial microprocessor.

    The SPARC M7 is the first server chip in the world with 32 cores and 256 threads. By any measure of memory bandwidth, overall performance, and benchmarking, it’s the fastest commercial processor in the world.

    For us, that’s the bottom line to the bigger SPARC M7 story: Add in security, do some significant upgrades for the database, and then, for good measure, make sure we’re faster than anybody else—even without those tricks.

    Q: Now that servers using SPARC M7 are available, what are you most excited about?

    Fowler: One of the things engineers get excited about is witnessing the ways customers actually use these things. It will be very interesting to see how different customers take advantage of these—Silicon Secured Memory and the new encryption technology, the database acceleration, the memory compression—because they are very rich features.

    https://www.oracle.com/servers/sparc...r-q-and-a.html

  6. #6

    The IBM POWER8 Review: Challenging the Intel Xeon

    by Johan De Gelas on November 6, 2015 8:00 AM EST

    Five years. That is how much time has passed since we have seen an affordable server processor that could keep up with or even beat Intel's best Xeons. These days no less than 95% of the server CPUs shipped are Intel Xeons. A few years ago, it looked like ARM servers were going to shake up the market this year, but to cut a long story short, it looks like the IBM POWER8 chip is probably the only viable alternative for the time being.

    That was also noticeable in our Xeon E7 review, which was much more popular than we ever hoped. One of the reasons was the inclusion of a few IBM POWER8 benchmarks. We admit, however, that the article was incomplete: the POWER8 development machine we tested was a virtual machine with only 1 core, 8 threads, and 2 GB of RAM, which is not enough for any thorough server testing.

    After seeing the reader interest in POWER8 in that previous article, we decided to investigate the matter further. To that end we met with Franz Bourlet, an enthusiastic technical sales engineer at IBM, who made sure we got access to an IBM S822L server. Thanks to Franz and the good people of Arrow Enterprise Computing Solutions, Arrow lent us an S822L for our testing.

    Full article: http://www.anandtech.com/print/9567/...he-intel-xeon-
    Last edited by 5ms; 07-11-2015 at 10:40.
