Results 1 to 5 of 5
  1. #1
    WHT-BR Top Member
    Join Date: Dec 2010
    Posts: 14,992

    [EN] Google: Higher rate of problems with SSDs than with HDDs

    Also: SLC drives, which are targeted at the enterprise market and considered to be higher end, are not more reliable than the lower end MLC drives.


    Flash Reliability in Production: The Expected and the Unexpected

    Authors: Bianca Schroeder, University of Toronto; Raghav Lagisetty and Arif Merchant, Google, Inc.

    Abstract:

    As solid state drives based on flash technology are becoming a staple for persistent data storage in data centers, it is important to understand their reliability characteristics. While there is a large body of work based on experiments with individual flash chips in a controlled lab environment under synthetic workloads, there is a dearth of information on their behavior in the field. This paper provides a large-scale field study covering many millions of drive days, ten different drive models, different flash technologies (MLC, eMLC, SLC) over 6 years of production use in Google’s data centers. We study a wide range of reliability characteristics and come to a number of unexpected conclusions. For example, raw bit error rates (RBER) grow at a much slower rate with wear-out than the exponential rate commonly assumed and, more importantly, they are not predictive of uncorrectable errors or other error modes. The widely used metric UBER (uncorrectable bit error rate) is not a meaningful metric, since we see no correlation between the number of reads and the number of uncorrectable errors. We see no evidence that higher-end SLC drives are more reliable than MLC drives within typical drive lifetimes. Comparing with traditional hard disk drives, flash drives have a significantly lower replacement rate in the field, however, they have a higher rate of uncorrectable errors.

    https://www.usenix.org/conference/fa...tion/schroeder
    Last edited by 5ms; 06-03-2016 at 19:24.

  2. #2
    WHT-BR Top Member
    Join Date: Dec 2010
    Posts: 14,992

    PR

    Some of the findings and conclusions might be surprising.

    • Between 20–63% of drives experience at least one uncorrectable error during their first four years in the field, making uncorrectable errors the most common non-transparent error in these drives. Between 2–6 out of 1,000 drive days are affected by them.
    • The majority of drive days experience at least one correctable error, however other types of transparent errors, i.e. errors which the drive can mask from the user, are rare compared to non-transparent errors.
    • We find that RBER (raw bit error rate), the standard metric for drive reliability, is not a good predictor of those failure modes that are the major concern in practice. In particular, higher RBER does not translate to a higher incidence of uncorrectable errors.
    • We find that UBER (uncorrectable bit error rate), the standard metric to measure uncorrectable errors, is not very meaningful. We see no correlation between UEs and the number of reads, so normalizing uncorrectable errors by the number of bits read will artificially inflate the reported error rate for drives with a low read count (see the sketch after this list).
    • Both RBER and the number of uncorrectable errors grow with PE cycles, however the rate of growth is slower than commonly expected, following a linear rather than exponential rate, and there are no sudden spikes once a drive exceeds the vendor’s PE cycle limit, within the PE cycle ranges we observe in the field.
    • While wear-out from usage is often the focus of attention, we note that independently of usage the age of a drive, i.e. the time spent in the field, affects reliability.
    • SLC drives, which are targeted at the enterprise market and considered to be higher end, are not more reliable than the lower end MLC drives.
    • We observe that chips with smaller feature size tend to experience higher RBER, but are not necessarily the ones with the highest incidence of non-transparent errors, such as uncorrectable errors.
    • While flash drives offer lower field replacement rates than hard disk drives, they have a significantly higher rate of problems that can impact the user, such as uncorrectable errors.
    • Previous errors of various types are predictive of later uncorrectable errors. (In fact, we have work in progress showing that standard machine learning techniques can predict uncorrectable errors based on age and prior errors with an interesting accuracy.)
    • Bad blocks and bad chips occur at a significant rate: depending on the model, 30-80% of drives develop at least one bad block and 2-7% develop at least one bad chip during the first four years in the field. The latter emphasizes the importance of mechanisms for mapping out bad chips, as otherwise drives with a bad chip will require repairs or be returned to the vendor.
    • Drives tend to either have less than a handful of bad blocks, or a large number of them, suggesting that impending chip failure could be predicted based on prior number of bad blocks (and maybe other factors). Also, a drive with a large number of factory bad blocks has a higher chance of developing more bad blocks in the field, as well as certain types of errors.
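    A minimal back-of-envelope sketch of the UBER point above (the read volumes and error counts are made up for illustration; nothing here comes from the paper): if uncorrectable errors do not scale with the number of reads, dividing them by bits read makes a lightly read drive look far worse than a heavily read one with identical behavior.

    ```python
    # Illustrative only: why normalizing uncorrectable errors (UEs) by bits read
    # can mislead when UEs are not correlated with read volume (made-up numbers).

    def uber(uncorrectable_errors: int, bits_read: int) -> float:
        """Uncorrectable bit error rate: UEs divided by total bits read."""
        return uncorrectable_errors / bits_read

    TIB_BITS = 8 * 2**40  # bits in one tebibyte

    # Two hypothetical drives that each saw exactly 2 uncorrectable errors,
    # consistent with the observation that UEs do not track read count.
    lightly_read = uber(2, 10 * TIB_BITS)    # drive that served few reads
    heavily_read = uber(2, 1000 * TIB_BITS)  # drive that served 100x more reads

    print(f"lightly read drive UBER: {lightly_read:.2e}")
    print(f"heavily read drive UBER: {heavily_read:.2e}")
    # Same underlying failure behavior, but the lightly read drive reports a
    # 100x higher UBER -- the metric inflates for drives with low read counts.
    ```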
    Last edited by 5ms; 06-03-2016 at 19:32.

  3. #3
    WHT-BR Top Member
    Join Date: Dec 2010
    Posts: 14,992

    Google seeks new disks for data centers

    Google wants new HDDs: higher capacity and more I/O operations per second, even if less reliable.

    Eric Brewer, VP Infrastructure, Google
    Tuesday, February 23, 2016


    Today, during my keynote at the 2016 USENIX conference on File and Storage Technologies (FAST 2016), I’ll be talking about our goal to work with industry and academia to develop new lines of disks that are a better fit for data centers supporting cloud-based storage services. We're also releasing a white paper on the evolution of disk drives that we hope will help continue the decades of remarkable innovation achieved by the industry to date.

    But why now? It's a fun but apocryphal story that the width of Roman chariots drove the spacing of modern train tracks. However, it is true that the modern disk drive owes its dimensions to the 3½” floppy disk used in PCs. It's very unlikely that's the optimal design, and now that we're firmly in the era of cloud-based storage, it's time to reevaluate broadly the design of modern disk drives.

    The rise of cloud-based storage means that most (spinning) hard disks will be deployed primarily as part of large storage services housed in data centers. Such services are already the fastest growing market for disks and will be the majority market in the near future. For example, for YouTube alone, users upload over 400 hours of video every minute, which at one gigabyte per hour requires more than one petabyte (1M GB) of new storage every day or about 100x the Library of Congress. As shown in the graph, this continues to grow exponentially, with a 10x increase every five years.
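    A rough back-of-envelope on the ingest figure above, as a sketch only: at exactly 1 GB per hour of video, a single copy of the uploads comes to a bit over half a petabyte per day; the quoted "more than one petabyte" presumably also counts transcoded versions and replicas, which the sketch below folds into an assumed overhead factor.

    ```python
    # Rough back-of-envelope for the upload figure quoted above.
    # The overhead factor is an assumption for illustration, not a Google number.

    HOURS_UPLOADED_PER_MINUTE = 400   # "over 400 hours of video every minute"
    GB_PER_HOUR_OF_VIDEO = 1          # "at one gigabyte per hour"
    MINUTES_PER_DAY = 24 * 60

    raw_gb_per_day = HOURS_UPLOADED_PER_MINUTE * MINUTES_PER_DAY * GB_PER_HOUR_OF_VIDEO
    print(f"single-copy ingest: {raw_gb_per_day / 1e6:.2f} PB/day")  # ~0.58 PB/day

    # Stored bytes exceed the raw upload once multiple encodings and replicas are
    # kept; an assumed factor of 2 already pushes the total past 1 PB (1M GB) per day.
    ASSUMED_OVERHEAD_FACTOR = 2
    print(f"with overhead: {raw_gb_per_day * ASSUMED_OVERHEAD_FACTOR / 1e6:.2f} PB/day")
    ```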



    At the heart of the paper is the idea that we need to optimize the collection of disks, rather than a single disk in a server. This shift has a range of interesting consequences including the counter-intuitive goal of having disks that are actually a little more likely to lose data, as we already have to have that data somewhere else anyway. It’s not that we want the disk to lose data, but rather that we can better focus the cost and effort spent trying to avoid data loss for other gains such as capacity or system performance.
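    A simplified way to see the collection-level trade-off described above, as a sketch under an independence assumption (the failure rates and replica counts below are hypothetical, not Google's): with data replicated on independent disks, the chance of losing every copy falls off geometrically with the replica count, so a modest increase in per-disk failure rate costs little durability.

    ```python
    # Simplified illustration of the "collection view": durability comes from
    # replication across disks, not from any single disk. The failure rates and
    # the independence assumption are illustrative, not Google's numbers.

    def prob_all_replicas_lost(annual_failure_rate: float, replicas: int) -> float:
        """Probability that every replica's disk fails within a year, assuming
        independent failures and no re-replication (a pessimistic worst case)."""
        return annual_failure_rate ** replicas

    for afr in (0.02, 0.04):  # hypothetical 2% vs "less reliable" 4% AFR
        for r in (2, 3):
            print(f"AFR={afr:.0%}, {r} replicas -> "
                  f"loss probability {prob_all_replicas_lost(afr, r):.1e}")
    # Doubling the per-disk failure rate raises the 3-replica loss odds from
    # ~8e-6 to ~6.4e-5 -- still tiny, which is why capacity and IOPS can be
    # traded against per-disk reliability at the collection level.
    ```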

    We explore physical changes, such as taller drives and grouping of disks, as well as a range of shorter-term firmware-only changes. Our goals include higher capacity and more I/O operations per second, in addition to a better overall total cost of ownership. We hope this is the beginning of both a new chapter for disks and a broad and healthy discussion, including vendors, academia and other customers, about what “data center” disks should be in the era of cloud.


    http://googlecloudplatform.blogspot....a-centers.html

  4. #4
    WHT-BR Top Member
    Join Date: Dec 2010
    Posts: 14,992
    Disks for Data Centers
    White paper for FAST 2016

    Authors: Eric Brewer, Lawrence Ying, Lawrence Greenfield, Robert Cypher, and Theodore Ts'o
    Google, Inc.
    February 29, 2016

    Abstract

    Disks form the central element of Cloud-based storage, whose demand far outpaces the considerable rate of innovation in disks. Exponential growth in demand, already in progress for 15+ years, implies that most future disks will be in data centers and thus part of a large collection of disks. We describe the “collection view” of disks and how it and the focus on tail latency, driven by live services, place new and different requirements on disks. Beyond defining key metrics for data-center disks, we explore a range of new physical design options and changes to firmware that could improve these metrics.

    We hope this is the beginning of a new era of “data center” disks and a new broad and open discussion about how to evolve disks for data centers. The ideas presented here provide some guidance and some options, but we believe the best solutions will come from the combined efforts of industry, academia and other large customers.




    PDF (16p): https://static.googleusercontent.com...hive/44830.pdf

  5. #5
    WHT-BR Top Member
    Join Date: Dec 2010
    Posts: 14,992
    "While wear-out from usage is often the focus of attention, we note that independently of usage the age of a drive, i.e. the time spent in the field, affects reliability."


    Worth remembering that some of Google's data centers, among others, operate at elevated temperatures:


    Industrial Temperature and NAND Flash in SSD Products | EEWeb

    Conclusion

    NAND is subject to two competing factors relative to temperature. At high temperature, programming and erasing a NAND cell is relatively less stressful to its structure, but data retention of a NAND cell suffers. At low temperature, data retention of the NAND cell is enhanced but the relative stress to the cell structure due to program and erase operations increases.
    Last edited by 5ms; 06-03-2016 at 20:23.
