  1. #1
    WHT-BR Top Member
    Join Date: Dec 2010

    [EN] 14TB hard disk drives

    Pushing Data Center Capacity to New Heights with the Ultrastar Hs14

    Lenny Sharp
    October 3, 2017

    We have been advancing the state of hard disk drive (HDD) technology to keep up with the flood of data being stored in data centers around the world. With data creation increasing at a rate of around 40% per year[2], driving capacity up and the cost of storage down has been of paramount importance as we introduce new technology and products. Today’s announcement of the availability of the Ultrastar® Hs14 drive is a great testament to the value our innovation brings customers.

    Today, there is no better solution for maximizing capacity than a helium-sealed HDD. We were the first to introduce this important breakthrough technology, and we’ve perfected it through four generations of products.

    With a density 1/7th that of air, helium provides a low turbulence environment for the recording head to fly over the disk at nanometer scale. Starting with 6TB helium-sealed drives in 2013, our proprietary sealing technology solved a problem the industry had been grappling with for years. Several award-winning innovations and breakthroughs needed to come together in order to create the first helium-sealed HDDs – which you can read more about in my recent blog.

    Then, through 8TB, 10TB and then 12TB generations, the company continued to innovate and improve the design, adding enhancements to the mechanical and electrical components, and improving the robustness and reliability of the platform. With the fourth generation He12, we added an 8th disk to the design, increasing capacity and even further lowering the cost per TB.

    Now, we’re pleased to introduce an extension of the fourth-generation platform, leveraging SMR (shingled magnetic recording) technology to increase the usable capacity to 14TB. SMR uses a unique “overlapping” writing technique to pack more bits into the same space. This way of storing data is a great match for hyperscale data center software architectures that rely heavily on sequential writes to store vast amounts of data.

    SMR can be implemented in multiple ways. Working closely with key cloud customers, we have focused on the Host Managed (HM) implementation of SMR. Unlike other approaches, HM gives Cloud Service Providers (CSP) the most control over the way the data is written to disk, and ensures consistently high performance with no latency surprises.
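    The host-managed model described above can be sketched in a few lines: the drive exposes zones that must be written strictly sequentially at a per-zone write pointer, and any other write is rejected up front rather than absorbed by drive-side remapping. This is an illustrative model only - the zone size and class names are invented for the sketch, not a real ZBC/ZAC interface:

```python
# Minimal sketch (illustrative, not a real zoned-device driver) of the
# host-managed SMR contract: each zone tracks a write pointer, and the host
# must write exactly at that pointer. Shingled tracks overlap, so in-place
# rewrites are not allowed; a zone is erased as a unit by resetting it.

class Zone:
    def __init__(self, size_mb=256):
        self.size = size_mb
        self.write_pointer = 0  # next writable offset within the zone

    def write(self, offset, length):
        # Host-managed rule: a write must land exactly at the write pointer.
        if offset != self.write_pointer:
            raise ValueError("non-sequential write rejected by HM-SMR zone")
        if self.write_pointer + length > self.size:
            raise ValueError("write exceeds zone capacity")
        self.write_pointer += length

    def reset(self):
        # Zones are erased wholesale by rewinding the write pointer.
        self.write_pointer = 0

zone = Zone()
zone.write(0, 64)    # OK: starts at the write pointer
zone.write(64, 64)   # OK: continues sequentially
try:
    zone.write(0, 8) # rejected: would overwrite shingled tracks
except ValueError as e:
    print(e)
```

    Because every non-sequential write fails immediately at the host, there is no hidden drive-side garbage collection - which is what gives HM-SMR the "no latency surprises" behavior the post mentions.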

    We have also been actively involved with industry standards bodies, as well as the open source community, to make sure that APIs (application programming interfaces), code samples, and technical information are readily available to customers, empowering them to fully utilize the power of SMR.

    Armed with the additional capacity of the Ultrastar Hs14, CSPs will be able to leverage the TCO (total cost of ownership) benefits of the additional capacity, to provide even more value to their customers.

    In a data-centric world, our ability to define the future of storage devices and systems helps businesses, and their customers, take advantage of the value and the possibilities data harbors through both big data and fast data.

    Our heritage in HDD has enabled us to pioneer the helium drives and secure a future of higher capacity drives. The next time you store photos, stream a video, or build a website, your data may be stored on an Ultrastar Hs14!

  2. #2

  3. #3

    Technological limit


    A few years ago, HDD capacity was doubling roughly every year. That's no longer the case, because the technology needed to increase areal density is more and more complicated. Here WD is associating host-managed SMR and helium: the Hs14 is based on the same platform as the former He12, but adds host-managed SMR.

    Since the introduction of the first HDD, the IBM RAMAC, at 3.75MB in 1956, capacity has increased at a 28% CAGR. From 12TB to 14TB, the growth is just 17%.
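    Both growth figures can be checked with a quick back-of-the-envelope calculation, assuming about 3.75MB for the 1956 RAMAC and the 61 years to 2017:

```python
# Sanity-checking the growth figures in the post: the IBM RAMAC stored about
# 3.75MB in 1956; the Ultrastar Hs14 stores 14TB in 2017.
ramac_bytes = 3.75e6
hs14_bytes = 14e12
years = 2017 - 1956  # 61 years

cagr = (hs14_bytes / ramac_bytes) ** (1 / years) - 1
print(f"Long-run capacity CAGR: {cagr:.1%}")          # ~28%, as stated

step = 14 / 12 - 1
print(f"12TB -> 14TB generational step: {step:.1%}")  # ~17%
```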

    Toshiba, the third HDD maker, is only just entering 10TB capacity in the same 3.5-inch form factor.

    Note that the new Ultrastar Hs14, an enterprise HDD at 7,200rpm, can be appreciated for archiving. It is not for all applications: it is aimed specifically at sequential writes (because of SMR) and does not replace other enterprise devices, especially faster 12Gb SAS 2.5-inch HDDs culminating at 900GB for 15,000rpm devices and 2.4TB for units rotating at 10,000rpm. Its specs - a maximum sustained transfer rate of 233MB/s and R/W seek times of 7.7/12ms - are also far from those of much faster and more expensive SSDs, the best of which even surpass 30TB.

    Is it possible to reach 18TB and then 20TB in a 3.5-inch form factor with a fifth generation of HelioSeal and a third generation of host-managed SMR? Either you increase the areal density of each disk - here a maximum of 1,034Gb per square inch - or you add one more platter to the device, the current maximum being 8 platters with 16 heads, as in the new WD unit. In both cases, we are approaching a technological limit. The next technology to come is supposed to be HAMR, but manufacturers have been evoking it for many years and nothing has happened.
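    Putting rough numbers on those two levers, using the Hs14's 14TB across 8 platters as the baseline (purely illustrative arithmetic): 18TB on the same 8 platters would need about 29% more areal density, while at today's density even a 9th platter only yields about 15.75TB, so density would have to rise as well.

```python
# Rough arithmetic on the two scaling levers discussed above, using the
# Hs14's 14TB across 8 platters as the baseline (illustrative only).
capacity_tb = 14.0
platters = 8
per_platter = capacity_tb / platters                 # 1.75 TB per platter

target_tb = 18.0
# Lever 1: same 8 platters, higher areal density
density_gain_needed = target_tb / capacity_tb - 1    # ~29% more bits per platter
# Lever 2: same density, more platters
nine_platter_capacity = 9 * per_platter              # 15.75 TB: still short of 18
platters_for_target = target_tb / per_platter        # ~10.3 platters needed

print(f"density gain for 18TB on 8 platters: {density_gain_needed:.0%}")
print(f"capacity with a 9th platter at today's density: {nine_platter_capacity}TB")
```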

  4. #4

    IBM and Sony cram up to 330 terabytes into tiny tape cartridge

    Note that commercial tape cartridges max out at 15TB—so, less than the theoretical amount enabled by the 2010 breakthrough.

    Sebastian Anthony

    IBM and Sony have developed a new magnetic tape system capable of storing 201 gigabits of data per square inch, for a max theoretical capacity of 330 terabytes in a single palm-sized cartridge.

    For comparison, the world's largest hard drives—which are about twice the physical size of a Sony tape cartridge—are the 60TB Seagate SSD or 12TB HGST helium-filled HDD. The largest commercially available tapes only store 15TB. So, 330TB is quite a lot.

    To achieve such a dramatic increase in areal density, Sony and IBM tackled different parts of the problem: Sony developed a new type of tape that has a higher density of magnetic recording sites, and IBM Research worked on new heads and signal processing tech to actually read and extract data from those nanometre-long patches of magnetism.

    Sony's new tape is underpinned by two novel technologies: an improved built-in lubricant layer, which keeps it running smoothly through the machine, and a new type of magnetic layer. Usually, a tape's magnetic layer is applied in liquid form, kind of like paint—which is one of the reasons that magnetic tape is so cheap and easy to produce in huge quantities. In this case, Sony has instead used sputter deposition, a mature technique that has been used by the semiconductor and hard drive industries for decades to lay down thin films.

    The main upshot of sputtering is that it produces magnetic tape with magnetic grains that are just a few nanometres across, rather than tens or hundreds of nanometres in the case of commercially available tape.

    The new lubrication layer, which we don't know much about, makes sure that the tape streams out of the cartridge and through the machine extremely smoothly. Some of the biggest difficulties of tape recording and playback are managing friction and air resistance, which cause wear and tear and chaotic movements. When you're trying to read a magnetic site that is just 7nm across, with the tape whizzing by at almost 10 metres per second, even the smallest of movements can be massively problematic.

    We know a little more about IBM's new read head, which appears to be a 48nm-wide tunnelling magneto-resistive head that would usually be found in a hard disk drive—which makes sense, given the tape's sputtered medium is very similar to the surface of a hard drive platter. This new head, combined with new servo tech that precisely controls the flow of tape through the system, allows for a positional accuracy of under 7nm. A new signal processing algorithm helps the system make sense of the tiny magnetic fields that are being read by the head.

    [Photo: IBM Research Zurich's Mark Lantz. Modern tape cartridges are small, just four inches across.]

    The new cartridges, when they're eventually commercialised, will be significantly more expensive because of the tape's complex manufacturing process. Likewise, a new tape drive (costing several thousand pounds) would be required. Still, given the massive increase in per-cartridge capacity, the companies that still use tape storage for backups and cold storage will be quite excited.

    IBM And Sony Researchers Cram 330TB Of Uncompressed Data Onto Magnetic Tape Cartridge

    Paul Lilly
    August 02, 2017


    "Tape has traditionally been used for video archives, back-up files, replicas for disaster recovery and retention of information on premise, but the industry is also expanding to off-premises applications in the cloud," IBM fellow Evangelos Eleftheriou said in a statement. "While sputtered tape is expected to cost a little more to manufacture than current commercial tape, the potential for very high capacity will make the cost per terabyte very attractive, making this technology practical for cold storage in the cloud."

    The strength of magnetic storage is that it has proven reliable over long periods of time. On top of that, magnetic tape storage boasts low power consumption and comparatively low costs versus other storage media.

    Advancements like this are helping to keep magnetic tape storage relevant despite its being a storage medium that has been in use for several decades. Just seven years ago, IBM and Fujifilm were bragging about a 35TB magnetic tape storage device. And in the past 11 years, demonstrated tape cartridge capacity has increased from 8TB to 330TB with this new development.

    Our Comments

    Until now, FujiFilm has been the preferred supplier of tape cartridges to IBM.

    Sony revealed in May 2014 a tape technology able to store 185TB in an LTO cartridge. It was followed rapidly by an answer from FujiFilm and IBM, claiming the ability to put a native 154TB into the same volume.

    Remember also that, in 2015, Big Blue, this time in collaboration with FujiFilm, demonstrated recording at an areal density of 123Gb/in2 using a prototype BaFe tape fabricated with low-cost particulate coating technology. It was estimated that this areal density would enable cartridge capacities of up to 220TB uncompressed.

    Consequently, 220TB was the highest capacity revealed for a tape cartridge, but it was never integrated into a commercial product.

    Now the record is 330TB without compression, apparently on a 1,098-meter (more than one kilometer!) magnetic roll from Sony, based on a tape with 103nm track width and 818,000bpi linear density on sputtered - not BaFe - media of 4.7μm thickness, for an areal density of 201 billion bits per square inch. And once more, no availability is announced.
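    The quoted track width and linear density are consistent with the announced areal density, as a two-line calculation shows:

```python
# Cross-checking the tape figures quoted above: areal density is the linear
# density along the tape times the track density across it.
NM_PER_INCH = 25.4e6

bpi = 818_000                         # bits per inch along the tape
tracks_per_inch = NM_PER_INCH / 103   # ~246,600 tracks/inch at 103nm pitch
areal = bpi * tracks_per_inch         # bits per square inch

print(f"{areal / 1e9:.1f} Gb/in^2")   # ~201.7, matching the announced 201 Gb/in^2
```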

    What's actually available is a record native capacity of only 15TB in proprietary cartridges inserted into the IBM TS1155 drive, and 6TB on LTO-7, both of them using half-inch tape. By comparison, HDDs culminate at 12TB.

    Historically, the first tape drive for a computer was the UNISERVO from Remington Rand's Eckert-Mauchly Division. One year later, on May 21, 1952, IBM launched its first unit, the Model 726 Magnetic Tape Reader/Recorder, with an initial capacity of about 2MB per reel, four years before the first HDD, the IBM RAMAC, at about 3.75MB.

    Magnetic tape media and heads are the two key elements in improving the performance of longitudinal magnetic tape. That's why IBM - a manufacturer of tape drives, including the heads and all the associated mechanics, but not producing tape media at all - has been collaborating for many years with Fujifilm and Sony, the tape media specialists, to enhance their common solutions.
    Last edited by 5ms; 05-10-2017 at 14:28.

  5. #5

    2017 Magic Quadrant for Backup and Recovery Solutions

    Commvault, IBM, Dell/EMC, Veritas and Veeam selected as leaders

    This report was published on July 31, 2017 and written by Dave Russell, Pushan Rinnen and Robert Rhame, analysts, Gartner, Inc.


    Enterprise backup is among the most critical tasks for infrastructure and operations professionals. Gartner provides analysis and evaluation of the leading data center backup solution vendors that offer a range of traditional to innovative availability capabilities.

    Strategic Planning Assumptions

    • By 2021, 50% of organizations will augment or replace their current backup application with another solution, compared to what they deployed at the beginning of 2017.
    • By 2022, 20% of storage systems will be self-protecting, obviating the need for backup applications, up from less than 5% today.
    • By 2020, 30% of large enterprises will leverage snapshots and backup for more than just operational recovery (e.g., DR, test/development, DevOps, etc.), up from less than 15% at the beginning of 2017.
    • By 2020, 30% of organizations will have replaced traditional backup applications with storage- or HCIS-native functions for the majority of backup workloads, up from 15% today.
    • By 2020, the number of enterprises using the cloud as a backup target will double, up from 10% at the beginning of 2017.
    • By 2021, over 50% of organizations will supplant backup with archiving for long-term data retention, up from 30% in 2017.
    • By 2019, despite increasing effectiveness of countermeasures, successful ransomware attacks will double in frequency year over year, up from 2 to 3 million in 2016.

    Market Definition/Description

    Gartner defines data center backup and recovery solutions as those solutions focused on providing backup capabilities for the upper-end mid-market and large enterprise environments. Gartner defines the upper-end mid-market as being 500 to 999 employees, and the large enterprise as being 1,000 employees or more. Protected data comprises data center workloads, such as file share, file system, OS, hypervisor, database, email, content management, CRM, ERP and collaboration application data. Today, these workloads are largely on-premises; however, protecting SaaS applications (such as Salesforce and Microsoft Office 365) and infrastructure as a service (IaaS) are becoming increasingly important, as are other, newer 'born in the public, private or hybrid cloud' applications.

    These backup and recovery solutions support writing data to tape, conventional random-access media (such as HDDs or SSDs) or devices that emulate those backup targets (such as VTLs). Data services, such as data reduction (compression, deduplication or single instancing), array- and/or server-based snapshots, heterogeneous replication (from/to dissimilar devices), and near-CDP can also be offered. Exploiting converged data management (also referred to as 'copy data management') has become important; here, backup data is leveraged for additional use cases, such as analytics, DR, test/dev, reporting and so on. In particular, the concept of performing a live mount of the backup data prior to actually restoring it - making it usable nearly instantly, then using tools, such as VMware's Storage vMotion, to move the data from the backup store to primary storage - has become table stakes in the market.

    Additionally, integration and exploitation of the cloud, particularly the public cloud, or on-premises object storage as a backup target or to a co-location facility are becoming more important for backup workloads, despite modest deployment to date.

    As the backup and recovery market has hundreds of vendors, this report narrows the focus to those that have a very strong presence worldwide in the upper-end mid-market and large enterprise environments. Solutions that are predominantly sold as a service (backup as a service, or BaaS) do not meet the market definition for inclusion. Software for a homogeneous environment, such as native tools from Microsoft or VMware primarily for their own platforms, is also excluded, as many midsize and large customers prefer a more heterogeneous, scalable backup product for their environments.

    Provider solutions that primarily address backup and recovery of remote office, small enterprise, individual system and/or an endpoint device data are outside of the scope of this data-center-oriented focus. Some providers may, however, also address these workloads, as well as the larger data center workloads described above. However, those are not the primary use cases for deploying these data center solutions.

