  1. #1
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    17,203

    [EN] C14 with Rclone

    Sync Files And Directories To C14 From Anywhere

    Edouard Bonlieu
    24th of March 2017

    Rclone is a command line interface that lets you synchronize files and directories to and from anywhere. It aims to help you save and copy your data to and from any cloud storage provider or local filesystem in a simple, fast and scalable way. In its latest version, 1.36, Rclone supports the SFTP protocol and can be used with C14 to manage the files and directories you want to archive.

    Thanks to @njcw, who has worked on Rclone's SFTP support over the last few weeks, you can now add remote files from AWS S3, Backblaze B2, Dropbox, Google Cloud Storage and many more to your C14 archives with ease.

    This blog post shows how easy it is to use Rclone with C14 to add and manage remote files in your archives.

    https://blog.online.net/2017/03/24/c...from-anywhere/
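
    For reference, a minimal sketch of the workflow the post describes, assuming you have created a C14 archive and been given temporary SFTP credentials for its storage space (the remote name "c14", the host, the user and all paths below are placeholders, not real values):

      # 1. Create an SFTP remote named "c14" via the interactive wizard,
      #    using the temporary credentials shown in the C14 console
      rclone config

      # The resulting section of rclone.conf looks roughly like this
      # (host and user are placeholders):
      #   [c14]
      #   type = sftp
      #   host = <your-c14-sftp-host>
      #   user = <your-c14-user>
      #   port = 22

      # 2. Check that the remote is reachable
      rclone lsd c14:

      # 3. One-way sync of a local directory into the archive space
      rclone sync /home/user/documents c14:documents

      # 4. Or copy straight from another configured remote, e.g. an S3 bucket
      rclone copy s3:my-bucket c14:my-bucket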

  2. #2
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    17,203
    Rclone is a command line program to sync files and directories to and from

    • Google Drive
    • Amazon S3
    • Openstack Swift / Rackspace cloud files / Memset Memstore
    • Dropbox
    • Google Cloud Storage
    • Amazon Drive
    • Microsoft OneDrive
    • Hubic
    • Backblaze B2
    • Yandex Disk
    • SFTP
    • The local filesystem

    Features

    • MD5/SHA1 hashes checked at all times for file integrity
    • Timestamps preserved on files
    • Partial syncs supported on a whole file basis
    • Copy mode to just copy new/changed files
    • Sync (one way) mode to make a directory identical
    • Check mode to check for file hash equality
    • Can sync to and from network, e.g. two different cloud accounts
    • Optional encryption (Crypt)
    • Optional FUSE mount (rclone mount)
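
    As a rough illustration of the copy, sync, check, crypt and mount features listed above ("remote", "secret" and the paths are placeholder names for remotes and directories you would configure yourself):

      # copy: transfer new/changed files only; never deletes at the destination
      rclone copy /data remote:backup

      # sync (one way): make remote:backup identical to /data
      rclone sync /data remote:backup

      # check: compare file hashes between source and destination
      rclone check /data remote:backup

      # crypt: a "crypt" remote, set up with rclone config, encrypts
      # everything written through it to an underlying remote
      rclone sync /data secret:backup

      # mount: expose a remote as a local FUSE filesystem (experimental)
      rclone mount remote:backup /mnt/backup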

    Links

    • Rclone forum
    • GitHub project
    • Google+ page
    • Donate
    • @njcw





  3. #3
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    17,203

    The End of Backup

    Andres Rodriguez
    March 22, 2017

    No one loves their backup, and when I was CTO of the New York Times, I was no exception. Traditional data protection has only survived this long because the alternative is worse: losing data is one of those job-ending, if not career-ending, events in IT. Backup is like an insurance policy. It provides protection against an exceptional and unwanted event.

    But like insurance policies, backups are expensive, and they don’t add any additional functionality. Your car doesn’t go any faster because you’re insured and your production system doesn’t run any better with backup. As many IT professionals have discovered too late, backups are also unreliable, a situation made even worse by the fact that bad backups typically aren’t discovered until there’s a need to restore. At that point, IT is really out of luck.

    Fortunately, backup as we have known it is ending. Significant improvements in virtualization, synchronization and replication have converged to deliver production systems that incorporate point-in-time recovery and data protection as an integral component. These new data protection technologies are no longer engaged only when a system fails. Instead, they run constantly within live production systems.

    With technology as old and entrenched as backup, it helps to identify its value, and then ask whether we can get the same result in a better way. Backup accomplishes two distinct and crucial jobs. First, it captures a point-in-time version or snapshot of a data set. Second, it writes a copy of that point-in-time version of the data to a different system, a different location or, preferably, both. Then, when IT needs to restore, it must find the right version and copy the data back to a fresh production system. When backup works, it protects us against point-in-time corruptions such as accidentally deleted files, ransomware attacks or complete system meltdowns.

    Ensuring that backups restore as advertised, in my experience, requires periodic testing and a meticulous duplication of failures without affecting production systems. Many IT teams lack the resources or the cycles to make sure their backups are really functioning, and backups can fail in ways that are hard to detect. These failures may have no impact on a production system until something goes wrong. And when the backup fails to restore, everything goes wrong.

    Modern data protection relies on a technology trifecta: virtualization, synchronization and replication. Together, they address the critical flaws in traditional backups.

    • Virtualization separates live, changing data from stable versions of that data.
    • Synchronization efficiently moves the changes from one stable version to the next into a replication core.
    • Replication then spreads identical copies of those versions across target servers distributed across multiple geographic locations.



    Essentially, this describes the modern “cloud,” but these technologies have been around and evolving for decades.
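
    One generic way to picture this trifecta, purely as an illustration (ZFS snapshots and ssh here are stand-ins, not any particular vendor's mechanism):

      # Virtualization: a snapshot separates a stable, read-only version
      # of the data from the live, changing dataset
      zfs snapshot tank/data@v42

      # Synchronization: send only the changes between the previous stable
      # version and the new one to a remote site that already holds @v41
      zfs send -i tank/data@v41 tank/data@v42 | ssh site-b zfs receive tank/data

      # Replication: repeat the same incremental stream to other locations,
      # so identical point-in-time versions exist at every site
      zfs send -i tank/data@v41 tank/data@v42 | ssh site-c zfs receive tank/data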

    Because this approach merges live data with protected versions of the data, it dramatically increases the utility and the overall reliability of the system. Operators can slice into the version stream of the data in order to fork a DevTest instance of their databases. They can create multiple live instances of the production front-end to synchronize files across geographies and provide instant recoverability to previous versions of a file. Most importantly, because modern data protection does not need to copy the data as a separate process, it eliminates the risk of this process failing inadvertently and silently. In this way, data protection becomes an integrated component of a healthy production system.

    Two distinct approaches to this new breed of data protection have emerged: SAN and NAS. The SAN world relies on block-level virtualization and is dominated by companies like Actifio, Delphix and Zerto. With blocks, the name of the game is speed. The workloads tend to be databases and VMs, and the production system’s performance cannot be handicapped in any significant way. The SAN vendors can create point-in-time images of volumes at the block level and copy them to other locations. Besides providing data protection and recovery functionality, these technologies make it much easier to develop, upgrade and test databases before launching them into production. And while this ability to clone block volumes is compelling for DevTest, in the world of files, it changes everything.

    The NAS world relies on file-level virtualization to accomplish the same goal. Files in the enterprise depend on the ability to scale, so it’s critical to support file systems that can exceed the physical storage footprint of a device. Vendors have used caching and virtualization in order to scale file systems beyond the limitations of any one hardware device. NAS contenders capture file-level versions and synchronize them against cloud storage (a.k.a. object storage) backends. Essentially, these are powerful file replication backends running across multiple geographic locations, operating as a service backed by the likes of Amazon, Microsoft and Google. The benefit of these systems is not only their unlimited scale but the fact that the files are being protected automatically. The file versions are synchronized into the cloud storage core. The cloud, in this sense, takes the place of the inexpensive but ultimately unreliable traditional media for storing backups. This shift to cloud is not only more reliable, but it can dramatically reduce RPO (recovery point objective) from hours to minutes.

    And there is more. Unlike SAN volumes, NAS file volumes can be active in more than one location at the same time. Think file sync & share, but at the scale of the datacenter and branch offices. This approach is already being used by media, engineering and architecture firms to collaborate on large projects across geographies. It can also simplify disaster recovery operations from one site to another, as any active-passive configuration is a more restricted, more basic version of these active-active NAS deployments.

    We are entering a new era for data protection. Simply put, backup is ending, and, based on my experience as a CTO and the many conversations I’ve had with our customers, that is a good thing. We are moving away from a process that makes data restores a failure mode scenario to one where data is protected continuously. In doing so, we are not only taking the risk out of data protection. We are launching exciting new capabilities that make organizations more productive.

    http://www.datacenterknowledge.com/a...end-of-backup/
