
Topic: About IDC19

  1. #1
    Aspiring Evangelist
    Join Date
    Nov 2010
    Location
    São Paulo
    Posts
    386

    About IDC19

    Good afternoon everyone, I want to share my impressions of the datacenter idc19.com.br here.
    For over a month we have been using a Linux CentOS 6 machine with cPanel, on KVM virtualization.
    100% uptime.
    For anyone without much traffic, the KVM machines are very stable and I highly recommend them. Here is my test configuration:

    Linux CentOS 6.0.x 64-bit
    cPanel
    1.5 GB RAM
    72 GB disk space
    Traffic: 6 Mbps for 200 GB
    VNC enabled (reinstall the operating system, repartition, etc.)

    Installed scripts:

    CXS - ConfigServer eXploit Scanner
    CSF - ConfigServer Firewall
    CSE - ConfigServer Explorer
    CMM - ConfigServer Mail Manage
    CMQ - ConfigServer Mail Queues
    ASSP Deluxe - anti-spam
    Linux Malware Detect

    It currently runs 12 clients with roughly 4 MySQL databases each.

    An excellent tip for anyone looking for a KVM-virtualized server here in Brazil.

    Regards to all,
    Carlos Nunes
    Systems Analyst
    Web Solutions Development
    Criarnaweb E-Solutions
    www.criarnaweb.com.br
    https://br.linkedin.com/in/nunescarlos

  2. #2
    Moderator
    Join Date
    Oct 2010
    Location
    Rio de Janeiro
    Posts
    2,679
    I picked up an entry-level OpenVZ yesterday to test (and to use as an inbound MX to avoid the US), but the speed is disappointing:

    --2014-03-15 00:04:26-- http://linuxbuilds.icewarp.com:32000...NTU1204.tar.gz
    Resolving linuxbuilds.icewarp.com (linuxbuilds.icewarp.com)... 82.113.48.147
    Connecting to linuxbuilds.icewarp.com (linuxbuilds.icewarp.com)|82.113.48.147|:32000... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 256024321 (244M) [application/octet-stream]
    Saving to: `IceWarpServer-11.0.1.0_x64_20140312_UBUNTU1204.tar.gz'

    100%[===================================================================================>] 256,024,321 49.8K/s in 55m 18s

    2014-03-15 00:59:48 (75.3 KB/s) - `IceWarpServer-11.0.1.0_x64_20140312_UBUNTU1204.tar.gz' saved [256024321/256024321]

    FINISHED --2014-03-15 00:59:48--
    Total wall clock time: 55m 22s
    Downloaded: 1 files, 244M in 55m 18s (75.3 KB/s)
    and

    ~# wget freevps.us/downloads/bench.sh -O - -o /dev/null|bash
    CPU model : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
    Number of cores : 4
    CPU frequency : 2399.971 MHz
    Total amount of ram : 256 MB
    Total amount of swap : 0 MB
    System uptime : 1:09,

    Download speed from CacheFly: 352KB/s
    Download speed from Coloat, Atlanta GA: 267KB/s
    Download speed from Softlayer, Dallas, TX: 354KB/s
    Download speed from Linode, Tokyo, JP: 339KB/s
    Download speed from i3d.net, Rotterdam, NL: 352KB/s
    Download speed from Leaseweb, Haarlem, NL: 354KB/s
    Download speed from Softlayer, Singapore: 352KB/s
    Download speed from Softlayer, Seattle, WA: 352KB/s
    Download speed from Softlayer, San Jose, CA: 354KB/s
    Download speed from Softlayer, Washington, DC: 355KB/s
    I/O speed : 4.5 MB/s
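
    For reference, the "I/O speed" figure such scripts report is usually just a sequential dd write test. A minimal sketch of that kind of measurement (the exact block size and count used by bench.sh may differ):

    # Write 1 GiB sequentially and force a sync so the rate reflects the disk, not the page cache
    dd if=/dev/zero of=io_test bs=64k count=16k conv=fdatasync
    # Clean up the test file afterwards
    rm -f io_test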

  3. #3
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    15,023
    Download speed from CacheFly: 352KB/s
    Download speed from Softlayer, Dallas, TX: 354KB/s
    Download speed from i3d.net, Rotterdam, NL: 352KB/s
    Download speed from Leaseweb, Haarlem, NL: 354KB/s
    Download speed from Softlayer, Singapore: 352KB/s
    Download speed from Softlayer, Seattle, WA: 352KB/s
    Download speed from Softlayer, San Jose, CA: 354KB/s
    Download speed from Softlayer, Washington, DC: 355KB/s

    Maybe the 6 Mbps is 3 Mbps inbound + 3 Mbps outbound. (At ~354 KB/s, those downloads work out to roughly 2.8 Mbps, right around a 3 Mbps cap.)

    But your OpenVZ plan could be 3 Mbps ...
    Last edited by 5ms; 14-03-2014 at 22:00.

  4. #4
    Web Hosting Master
    Join Date
    Apr 2012
    Posts
    667
    I/O speed : 4.5 MB/s
    Wow, is this a shared USB flash drive?!?

  5. #5
    Moderator
    Join Date
    Oct 2010
    Location
    Rio de Janeiro
    Posts
    2,679
    For 10 bucks I wasn't expecting much more than that...

  6. #6
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    15,023
    Quote Originally Posted by cresci View Post
    For 10 bucks I wasn't expecting much more than that...
    In the long run, a reserved Amazon micro instance costs little and is much more than that.

  7. #7
    Moderator
    Join Date
    Oct 2010
    Location
    Rio de Janeiro
    Posts
    2,679
    Quote Originally Posted by 5ms View Post
    In the long run, a reserved Amazon micro instance costs little and is much more than that.
    My problem with them is reliability. Using remote storage/EBS is a pain: it is slow and comes with no service guarantee at all. And they can even "lose" your EBS volume, and tough luck for you. Using local storage is asking to lose everything on the first hypervisor crash. Not to mention the hassle of having to reconfigure everything afterwards.

  8. #8
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    15,023
    Local storage is scratch space. You cannot trust it for a single nanosecond. Amazon's S3 (and Glacier) storage is reliable, but EBS, ironically, not so much.


    Amazon EBS Availability and Durability

    Amazon EBS volumes are designed to be highly available and reliable. At no additional charge to you, Amazon EBS volume data is replicated across multiple servers in an Availability Zone to prevent the loss of data from the failure of any single component. For more details, see the Amazon EC2 and EBS Service Level Agreement.

    The durability of your volume depends both on the size of your volume and the percentage of the data that has changed since your last snapshot. As an example, volumes that operate with 20 GB or less of modified data since their most recent Amazon EBS Snapshot can expect an annual failure rate (AFR) of between 0.1% – 0.5%, where failure refers to a complete loss of the volume. This compares with commodity hard disks that typically fail with an AFR of around 4%, making EBS volumes 10 times more reliable than typical commodity disk drives.
    EBS volumes have redundancy built-in, which means that they will not fail if an individual drive fails or some other single failure occurs. But they are not as redundant as S3 storage which replicates data into multiple availability zones: an EBS volume lives entirely in one availability zone. This means that making snapshot backups, which are stored in S3, is important for long-term data safeguarding.
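
    Since the page above stresses that EBS snapshots (stored in S3) are what actually protects the data long term, a minimal sketch of taking one with the AWS CLI follows; the volume ID and description are placeholders:

    # Create a point-in-time snapshot of an EBS volume (the snapshot data lands in S3)
    aws ec2 create-snapshot --volume-id vol-12345678 --description "nightly backup"
    # Check the snapshot's status until it shows "completed"
    aws ec2 describe-snapshots --filters Name=volume-id,Values=vol-12345678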


    S3 - Data Durability and Reliability

    Amazon S3 provides a highly durable storage infrastructure designed for mission-critical and primary data storage. The service redundantly stores data in multiple facilities and on multiple devices within each facility. To increase durability, Amazon S3 synchronously stores your data across multiple facilities before returning SUCCESS. In addition, Amazon S3 calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data. Unlike traditional systems which can require laborious data verification and manual repair, Amazon S3 performs regular, systematic data integrity checks and is built to be automatically self-healing.

    Amazon S3 provides further protection via Versioning. You can use Versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. This allows you to easily recover from both unintended user actions and application failures. By default, requests will retrieve the most recently written version. Older versions of an object can be retrieved by specifying a version in the request. Storage rates apply for every version stored.
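
    As a hedged illustration of the versioning behaviour described above, fetching an older version with the AWS CLI looks roughly like this; the bucket, key and version ID are placeholders:

    # List all stored versions of an object
    aws s3api list-object-versions --bucket my-bucket --prefix backups/db.sql.gz
    # Download a specific older version instead of the latest one
    aws s3api get-object --bucket my-bucket --key backups/db.sql.gz --version-id EXAMPLE_VERSION_ID db.sql.gz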

    Amazon S3’s standard storage is:

    Backed with the Amazon S3 Service Level Agreement.
    Designed for 99.999999999% durability and 99.99% availability of objects over a given year.
    Designed to sustain the concurrent loss of data in two facilities.

    Reduced Redundancy Storage (RRS)

    Reduced Redundancy Storage (RRS) is a storage option within Amazon S3 that enables customers to reduce their costs by storing non-critical, reproducible data at lower levels of redundancy than Amazon S3’s standard storage. It provides a cost-effective, highly available solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails, transcoded media, or other processed data that can be easily reproduced. The RRS option stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but does not replicate objects as many times as standard Amazon S3 storage, and thus is even more cost effective.
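
    Opting into RRS is just a storage-class choice at upload time. A minimal sketch with the AWS CLI (bucket and key are placeholders; omit the flag to get standard storage):

    # Store easily reproducible data (e.g. a thumbnail) at reduced redundancy to cut cost
    aws s3 cp thumb.jpg s3://my-bucket/thumbs/thumb.jpg --storage-class REDUCED_REDUNDANCY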

    Reduced Redundancy Storage is:

    • Backed with the Amazon S3 Service Level Agreement.

    • Designed to provide 99.99% durability and 99.99% availability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.01% of objects.

    • Designed to sustain the loss of data in a single facility.

    Amazon Glacier

    Amazon S3 enables you to utilize Amazon Glacier’s extremely low-cost storage service as a storage option for data archival. Amazon Glacier stores data for as little as $0.01 per gigabyte per month, and is optimized for data that is infrequently accessed and for which retrieval times of several hours are suitable. Examples include digital media archives, financial and healthcare records, raw genomic sequence data, long-term database backups, and data that must be retained for regulatory compliance.

    Like Amazon S3’s other storage options (Standard or Reduced Redundancy Storage), objects stored in Amazon Glacier using Amazon S3’s APIs or Management Console have an associated user-defined name. You can get a real-time list of all of your Amazon S3 object names, including those stored using the Amazon Glacier option, using the Amazon S3 LIST API. Objects stored directly in Amazon Glacier using Amazon Glacier’s APIs cannot be listed in real-time, and have a system-generated identifier rather than a user-defined name. Because Amazon S3 maintains the mapping between your user-defined object name and the Amazon Glacier system-defined identifier, Amazon S3 objects that are stored using the Amazon Glacier option are only accessible through Amazon S3’s APIs or the Amazon S3 Management Console. To restore Amazon S3 data that was stored in Amazon Glacier via the Amazon S3 APIs or Management Console, you first initiate a restore job using the Amazon S3 APIs or Management Console. Restore jobs typically complete in 3 to 5 hours. Once the job is complete, you can access your data through an Amazon S3 GET request.
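
    The restore workflow described above maps to something like the following with the AWS CLI; the bucket, key and retention period are placeholders:

    # Ask S3 to restore the Glacier-archived object and keep the temporary copy for 7 days
    aws s3api restore-object --bucket my-bucket --key archive/2013-logs.tar.gz --restore-request Days=7
    # Some hours later, once the restore job has finished, a normal GET works again
    aws s3api get-object --bucket my-bucket --key archive/2013-logs.tar.gz 2013-logs.tar.gz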

    The Amazon Glacier storage option is:

    • Backed with the Amazon S3 Service Level Agreement.

    • Designed for 99.999999999% durability and 99.99% availability of objects over a given year.

    • Designed to sustain the concurrent loss of data in two facilities.

    For these and other reasons, when it comes to VMs it is MS Azure for me and nobody else.
    Last edited by 5ms; 15-03-2014 at 15:40.
