Results 1 to 9 of 9
  1. #1
Join Date
    Nov 2010
Location
    Rio de Janeiro
    Posts
    596

Biggest cyberattack in History

Source: 'Biggest cyberattack in History' hits the internet around the world

According to the report, it was due to Cyberbunker - or, I believe, to clients it hosts.

'Biggest cyberattack in History' hits the internet around the world
Experts fear the attack could cause problems for banks and email services

    BBC | 27/03/2013 12:35:01

The internet slowed down around the world this Wednesday because of what security experts called the biggest cyberattack in history.

A fight between a group that battles the spread of spam and a company that hosts websites set off cyberattacks that hit the core infrastructure of the network.

The episode affected services such as Netflix - and experts fear it could cause problems for banks and email services. Five national cybercrime police forces are investigating the attacks.

The Spamhaus group, based in London and Geneva, is a non-profit organization that tries to help email providers filter spam and other unwanted content.

To achieve this, the group maintains a list of addresses that should be blocked - a database of servers known to be used for malicious purposes on the internet.

Recently, Spamhaus blocked servers maintained by Cyberbunker, a Dutch company that hosts websites of any kind, with any content - except pornography or material related to terrorism.

Sven Olaf Kamphuis, who says he is a spokesman for Cyberbunker, said in a message that Spamhaus was abusing its power and should not be allowed to decide "what does and does not happen on the internet".

Spamhaus accuses Cyberbunker of being behind the attacks, in cooperation with "criminal gangs" from Eastern Europe and Russia.

Cyberbunker did not respond when contacted directly by the BBC.

'An immense job'

Steve Linford, chief executive of Spamhaus, told the BBC that the scale of the attack was unprecedented.

"We have been under this cyberattack for at least a week," he said. "But we are up and running - they haven't managed to take us down. Our engineers are doing an immense job keeping us on our feet. This kind of attack would take down practically anything else."

Linford told the BBC that the attack was being investigated by five cybercrime police forces around the world, but said he could not give further details because the forces involved fear becoming targets of attacks themselves.

The attackers used a tactic known as a Distributed Denial of Service (DDoS) attack, which floods the target with enormous amounts of traffic in an attempt to make it unreachable.

Spamhaus's servers were chosen as the target.

Linford also said the attack was powerful enough to take down government internet infrastructure.

  2. #2
    WHT-BR Top Member
Join Date
Dec 2010
Location
/sc/rionegrinho
Posts
1,036
Could that be why I had about 30 minutes of slowness this week?
    Alexandre Silva Hostert

    Veezon
Server Management


    http://veezon.com.br
    http://br.linkedin.com/in/alexandreveezon

  3. #3
    WHT-BR Top Member
Join Date
Dec 2010
Posts
15,002
That text must have been written by the author of the BBC's horoscope column.

  4. #4
    WHT-BR Top Member
Join Date
Dec 2010
Posts
15,002
The Biggest Cyberattack in History has hit WHT-BR

The following errors occurred with your submission

This forum requires that you wait 30 seconds between posts. Please try again in 3088 seconds.

  5. #5
Crazy about WHT Brasil
Join Date
Mar 2012
Posts
163
A single company suffers a denial of service and the internet gets slow... these reporters.
    Gabriel Santos - hasore

  6. #6
    WHT-BR Top Member
Join Date
    Dec 2010
    Posts
    15,002
    A veteran Reuters reporter related a piece of advice given by his editor: "It's not just what you print that makes you an authoritative and trusted source for news, but what you don't print."

    He wasn't talking about censorship, he was talking about what separates journalism from stenography and propaganda: sceptical scrutiny. The professionalism of the craft isn't simply learning to write or broadcast what other people tell you. Crucially it is the ability to delve, interrogate and challenge, and checking out stories you've discovered through your own curiosity, or robustly testing what other people tell you is true.

    Scepticism was in short supply this week when breathless claims about the collapse of the internet were published in such reputable publications as the New York Times, the BBC and even technical journal Ars Technica, all falling prey to the hyped-up drama of a DDoS attack against Spamhaus, a group that tracks spammers, and their alleged attacker Cyberbunker, a Dutch hosting company Spamhaus had blacklisted.

Ars Technica described the attack as at "a scale that's threatening to clog up the internet's core infrastructure and make access to the rest of the internet slow or impossible". "If a Tier 1 provider fails, that risks breaking the entire internet," it continued.

There is risk everywhere. Being alive carries the risk of death. It's no good just saying what might happen (that's the role of a screenwriter or novelist), what matters is the likelihood of it happening. The "risk" of the entire internet breaking from such an attack is very small. That should have killed off the worst of the scaremongering headlines and alerted the sceptical reporter that something was afoot.

    A lot of people have a lot to gain from peddling scare stories about cyber "warfare". As with any type of politics it's important to know precisely who is making the claims and what their interests are.

In whose interest is it to hype up the collapse of the internet from a DDoS attack? Why, the people who provide cyber security services of course. And looking at the reporting, almost all the sources are directly involved and have a vested interest. The claims about the scale of the attack are from CloudFlare, the anti-DDoS firm hired by Spamhaus to ward off the attack. Eschewing subtlety, they blogged about the event: "The DDoS That Almost Broke the Internet".

    As soon as you have a source with a direct involvement, scepticism should be your guide. Sadly, reporters don't always have the time or space for scepticism, and increasingly they are judged only on their ability to fill space at speed. In this environment there is no incentive to challenge a good yarn.

    While the infrastructure of the internet might not be easy for reporters to understand, simply juxtaposing quotes from opposing sides isn't all there is to journalism. Yes, this was a big attack in terms of traffic directed against one website (approx 300Gbps), but the internet seemed to cope just fine.

    Even if you knew nothing about technology, you could have done what Sam Biddle did at Gizmodo and simply asked some challenging, sceptical questions such as:

    • Why wasn't my internet slow?
    • Why didn't anyone notice this over the course of the past week, when it began?
    • Why isn't anyone without a financial stake in the attack saying the attack was this much of a disaster?
    • Why haven't there been any reports of Netflix outages, as the New York Times and BBC reported?
    • Why do firms that do nothing but monitor the health of the web, like Internet Traffic Report, show zero evidence of this Dutch conflict spilling over into our online backyards?

This story wasn't just a failure to understand technology. It was a failure of basic journalism practice. To be willing to not write the story if it didn't stack up.

This is the danger of the "dark age of journalism", as it has been called. The training of the old Reuters reporter is replaced by one of political and corporate collusion. The separation between newsrooms and public relations agencies grows ever thinner as reporters rush to fill space at all costs, regardless of truth.

    Even after she'd written the piece in the New York Times, tech reporter Nicole Perlroth tweeted how she was still getting targeted by corporate PRs to cover the "story": "Hi Nicole, News is just breaking on the biggest cyber-attack in history. Are you planning on covering?"

    The collapse of journalism combined with complex, fast-changing technology offers a wealth of opportunity for propagandists. In the soil of ignorance, fear can easily be sown. So it is with cyberwarfare.
    How a cyberwar was spun by shoddy journalism | Heather Brooke
Last edited by 5ms; 31-03-2013 at 14:14.

  7. #7
    WHT-BR Top Member
Join Date
    Dec 2010
    Posts
    15,002

    Overhyped Spamhaus DDoS attack based on old flaw

    This article was authored by John C. Tanner, and was originally posted on telecomasia.net.

Last week, news broke that the world’s largest DDoS (distributed denial of service) attack in history had taken place, almost crippling the internet. Only it might not have been quite as large as reports made it out to be. And it was something that could have been stopped by a fix that’s been around for over a decade.

    Anti-spam organization Spamhaus found itself subject to a DDoS attack that actually started March 18, but hit unprecedented scale last week when the attackers generated over 300 Gbps worth of traffic, making it the largest such attack ever recorded, according to CloudFlare, which helped mitigate the attack.

    CloudFlare billed it as “The DDoS That Almost Broke the Internet”, and numerous media reports described it in similar fashion. The New York Times, for example, said the attack was “causing widespread congestion and jamming crucial infrastructure around the world”.

    However, with the Internet essentially failing to collapse or come anywhere close to it for most users, now there’s disagreement over just how “widespread” the impact was. Take this quote from Technology Review:

    “Just the production costs of CNN discussing this were probably higher than the damage this thing might have caused,” says Radu Sion, a computer scientist at Stony Brook University.

    An open email from Richard Steenbergen, chief technology officer of nLayer Communications (one of the network providers used by CloudFlare) sent to gadget blog Gizmodo, agreed that the scale of the attack was somewhat exaggerated:

    I wouldn’t call it “record smashing” or “game changing” in any special way. It’s just another large attack, maybe 10-15% larger than other similar ones we’ve seen in the past.

    However, Steenbergen pointed out that a DDoS attack at a scale of 300 Gbps is a big deal simply because no single network has that much lit capacity to handle it.

    Also, he pointed out that the scale of the attack was achieved by going after CloudFlare’s bandwidth providers (including nLayer), which led them to public internet exchange points (IXPs). While that enabled them to generate huge amounts of traffic, it did so using IXPs that represent “more of the ‘long tail’ of networks”, rather than the private point-to-point links that carry most internet traffic:

    So, what you actually saw here was an attack affecting a large number of smaller networks, with something which was really a completely unrelated and unintended side-effect of the original attack.

    Meanwhile, the real issue seems to be that the exploit used by the attackers – open DNS resolvers – has been known for over a decade. As CloudFlare explained in October 2012:


    The best practice, if you’re running a recursive DNS resolver is to ensure that it only responds to queries from authorized clients. In other words, if you’re running a recursive DNS server for your company and your company’s IP space is 5.5.5.0/24 (i.e., 5.5.5.0 – 5.5.5.255) then it should only respond to queries from that range. If a query arrives from 9.9.9.9 then it should not respond.


    The problem is, many people running DNS resolvers leave them open and willing to respond to any IP address that queries them.
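As a rough illustration of the "authorized clients only" rule in the quoted CloudFlare post, here is a minimal Python sketch (not from the article) that models the check using the 5.5.5.0/24 example range; the function name and the sample client addresses are invented for the demonstration.

```python
import ipaddress

# Example from the quoted CloudFlare post: a resolver for a company whose
# IP space is 5.5.5.0/24 should answer recursive queries only from that range.
AUTHORIZED_CLIENTS = [ipaddress.ip_network("5.5.5.0/24")]

def should_answer_recursive_query(client_ip: str) -> bool:
    """Return True only if the querying client falls inside an authorized range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in AUTHORIZED_CLIENTS)

print(should_answer_recursive_query("5.5.5.42"))  # True: inside the company's range
print(should_answer_recursive_query("9.9.9.9"))   # False: the outside address from the quote is refused
```

Refusing the outside query is what keeps such a resolver from being usable as an amplifier against third parties.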

    The Internet Engineering Task Force spelled out a technical method to fix this issue back in 2000, but many web companies have never implemented it, TR reports:

    “Misconfigurations are rampant across the Internet,” says Mike Smith, director of the computer-security response team at Akamai, the Web optimization company based in Cambridge, Massachusetts. “There are tools and configuration guides and best practices for ISPs. But people need to use them and know that this is a problem.”

    Last week, the Open Resolver Project publicly released the full list of the 21.7 million open resolvers online in an effort to shut them down. Matthew Prince of CloudFlare said in a blog post that the Spamhaus attack “made clear that the bad guys have the list of open resolvers and they are getting increasingly brazen in the attacks they are willing to launch.”
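For readers curious what such a check looks like, below is a small, illustrative Python sketch (not the Open Resolver Project's code) that sends a recursive A query to a server from an outside vantage point and reports whether it answers; the address in the final comment is a placeholder, and you should only probe resolvers you operate yourself.

```python
import socket
import struct

def is_open_resolver(server_ip: str, qname: str = "example.com", timeout: float = 3.0) -> bool:
    """Ask server_ip to recurse for us and report whether it does."""
    # DNS header: fixed ID, RD (recursion desired) flag set, one question.
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    question = b"".join(bytes([len(label)]) + label.encode() for label in qname.split(".")) + b"\x00"
    question += struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(header + question, (server_ip, 53))
        data, _ = sock.recvfrom(512)
    except socket.timeout:
        return False  # no reply at all: it is not answering outsiders
    finally:
        sock.close()
    flags = struct.unpack("!H", data[2:4])[0]
    ancount = struct.unpack("!H", data[6:8])[0]
    recursion_available = bool(flags & 0x0080)
    rcode = flags & 0x000F
    # "Open" here means it offers recursion to strangers and returns an answer.
    return recursion_available and rcode == 0 and ancount > 0

# Usage (placeholder address): is_open_resolver("192.0.2.53")
```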

    Overhyped Spamhaus DDoS attack based on old flaw | Telecom Ramblings

  8. #8
    WHT-BR Top Member
Join Date
    Dec 2010
    Posts
    15,002
    Re: That Internet War Apocalypse Is a Lie

    Hi Sam,

    My company is one of the primary providers for Cloudflare, and was one
    of the first to be attacked over the current Spamhaus/Cyberbunker
    debacle. Your latest piece is interesting, and while a lot of the hype
    and fear over these attacks IS IMHO unjustified, there are a few major
    details that you're missing.

    First off I can confirm a few basic facts, namely that we really did
    receive a ~300 Gbps attack directed at Cloudflare, and later
    specifically targeted at pieces of our core infrastructure. This is
    definitely on the large end of the scale as far as DoS attacks go, but
    I wouldn't call it "record smashing" or "game changing" in any special
    way. It's just another large attack, maybe 10-15% larger than other
    similar ones we've seen in the past, and I'm certain we will continue
    to see even larger ones in the future as global traffic levels
    increase. What made this particular attack notable is where it was
    targeted, which greatly increased the number of people who noticed it.

    In defense of the claims in other articles, there is a huge difference
    between "taking down the entire Internet" and "causing impact to
    notable portions of the Internet". My company, most other large
    Internet carriers, and even the largest Internet exchange points, all
    deliver traffic at multi-terabits-per-second rates, so in the grand
    scheme of things 300 Gbps is certainly not going to destroy the
    Internet, wipe anybody off the map, or even show up as more than a blip
    on the charts of global traffic levels. That said, there is absolutely
    NO network on this planet who maintains 300 Gbps of active/lit but
    unused capacity to every point in their network. This would be
    incredibly expensive and wasteful, and most of us are trying to run
    for-profit commercial networks, so when 300 Gbps of NEW traffic
    suddenly shows up and all wants to go to ONE location, someone is going
    to have a bad day.
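To put those two figures side by side, here is a short back-of-the-envelope sketch in Python; the 4 Tbps aggregate used for the comparison is only an illustration of the "multi-terabits-per-second" order of magnitude mentioned above, not a measured number.

```python
# Rough arithmetic behind "a blip on global charts, but a bad day for one destination".
attack_gbps = 300                 # attack size reported for the Spamhaus/CloudFlare incident
carrier_aggregate_gbps = 4_000    # assumed multi-Tbps carrier/IXP aggregate (illustrative)
backbone_port_gbps = 10           # a common single 10GE backbone port in 2013

# Spread across a large carrier's whole network the attack is a small fraction...
print(f"{attack_gbps / carrier_aggregate_gbps:.1%} of an assumed 4 Tbps aggregate")  # 7.5%
# ...but aimed at one location it would fill dozens of 10GE ports, far more spare
# lit capacity than any network keeps toward a single point.
print(f"{attack_gbps // backbone_port_gbps} x 10GE ports' worth of traffic")         # 30
```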

    But, having a bad day on the Internet is nothing new. These are the
    types of events we deal with on a regular basis, and most large network
    operators are very good at responding quickly to deal with situations
    like this. In our case, we worked with Cloudflare to quickly identify
    the attack profile, rolled out global filters on our network to limit
    the attack traffic without adversely impacting legitimate users, and
    worked with our other partner networks (like NTT) to do the same. If
    the attacks had stopped here, nobody in the "mainstream media" would
    have noticed, and it would have been just another fun day for a few
    geeks on the Internet.

    The next part is where things got interesting, and is the part that
    nobody outside of extremely technical circles has actually bothered to
    try and understand yet. After attacking Cloudflare and their upstream
    Internet providers directly stopped having the desired effect, the
    attackers turned to any other interconnection point they could find,
    and stumbled upon Internet Exchange Points like LINX (in London),
    AMS-IX (in Amsterdam), and DE-CIX (in Frankfurt), three of the largest
    IXPs in the world.

    An IXP is an "interconnection fabric", essentially just a large
    switched LAN, which acts as a common meeting point for different
    networks to connect and exchange traffic with each other. Every member
    connects a router, and is given a single IP address out of a common IP
    block to facilitate the interconnection. For example, one of LINX's
    main blocks is a single /22, and every member has an IP within that
    block. When two networks want to connect with each other, they set up a
    BGP session between their IPs, and the traffic is switched across the
    LAN just like it would be in any other switched network.
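A quick sketch of the addressing arithmetic in that description, using Python's ipaddress module: the /22 size comes from the LINX example above, while the specific prefix and member addresses are made up for illustration.

```python
import ipaddress

# A /22 peering LAN like the LINX block described above (the prefix itself is invented).
peering_lan = ipaddress.ip_network("198.51.100.0/22")

# Every member router gets one address out of the shared block, so the fabric
# can accommodate roughly this many peers:
print(peering_lan.num_addresses - 2)  # 1022 usable member addresses

# Two members peer by running BGP between their LAN addresses; both sit in the block.
member_a = ipaddress.ip_address("198.51.100.7")
member_b = ipaddress.ip_address("198.51.101.12")
print(member_a in peering_lan and member_b in peering_lan)  # True
```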

    The downside of this architecture is that these IP blocks are real,
    routable IPs, which can sometimes be reached from the outside world.
    It's usually against the rules of the individual IXPs to redistribute
    those blocks into the global table, but it's a common misconfiguration
    that still happens all the time, meaning anyone on the Internet can
    send traffic to those router IPs. When one of these IP addresses shows
    up in traceroute and attackers target it, it results in a large amount
    of traffic being unexpectedly dumped into this IXP LAN. The "quick fix"
    for this is for the IXP operators to chase down everyone who is
    redistributing the IXP block to the global table.
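The "chase down everyone redistributing the block" step amounts to spotting announcements that overlap the peering LAN. The sketch below only illustrates that idea with placeholder prefixes; a real check would walk a full BGP table or a route-collector feed.

```python
import ipaddress

# Peering LAN prefixes that should never show up in the global routing table (placeholders).
IXP_PEERING_LANS = [ipaddress.ip_network("198.51.100.0/22")]

# Prefixes seen announced to the wider internet (placeholders).
announced_prefixes = ["203.0.113.0/24", "198.51.100.0/23"]

for prefix in announced_prefixes:
    net = ipaddress.ip_network(prefix)
    # Any overlap exposes the IXP router addresses to the whole internet,
    # which is exactly the misconfiguration the attackers took advantage of.
    if any(net.overlaps(lan) for lan in IXP_PEERING_LANS):
        print(f"possible leak: {prefix} overlaps an IXP peering LAN")
```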

    Note that the vast majority of global Internet traffic does NOT travel
    over these types of public IXPs, but rather goes via direct private
    interconnections between specific networks. Typically IXP traffic
    represents more of the "long tail" of networks who are peering with
    each other, i.e. they're used by a large number of generally smaller
    networks, or by larger networks who are looking to offload some of
    their "lower speed" interconnections. Collectively it still adds up to
    a lot of traffic, but the really "big" pipes that carry most of the
    Internet traffic are all private point-to-point links (called PNIs).

    So, what you actually saw here was an attack affecting a large number
    of smaller networks, with something which was really a completely
    unrelated and unintended side-effect of the original attack. It's not
    going to take down the Internet, but it's certainly a recipe for having
    a lot of people talking about it.

    Hopefully that clears up a bit of the situation.

    -Richard A Steenbergen

    P.S. You're all lucky I didn't change this URL into a redirect to
    goatse. Revised a tiny bit to provide more detail on a few points
    that were requested.

    Some misc resources for those interested in learning more about
    Internet interconnection:

    http://www.nanog.org/meetings/nanog5...ng-nanog51.pdf
    http://www.nanog.org/meetings/nanog4...s_N47_Tues.pdf
    http://www.nanog.org/meetings/nanog4...te_N47_Sun.pdf
Last edited by 5ms; 02-04-2013 at 12:27.

  9. #9
Crazy about WHT Brasil
Join Date
    Mar 2012
    Posts
    163

    Critical denial-of-service flaw in BIND software puts DNS servers at risk

    The BIND software maintainers encourage server administrators to disable regular expression support or install patches as soon as possible

A flaw in the widely used BIND DNS (Domain Name System) software can be exploited by remote attackers to crash DNS servers and affect the operation of other programs running on the same machines.

The flaw stems from the way regular expressions are processed by the libdns library that's part of the BIND software distribution. BIND versions 9.7.x, 9.8.0 up to 9.8.5b1 and 9.9.0 up to 9.9.3b1 for UNIX-like systems are vulnerable, according to a security advisory published Tuesday by the Internet Systems Consortium (ISC), a nonprofit corporation that develops and maintains the software. The Windows versions of BIND are not affected.
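As a quick way to see whether a given server falls inside the version ranges listed above, here is a minimal Python triage sketch (not from the article); it encodes only the affected branches and patched releases named here, so later patched releases would need to be added by hand and the ISC advisory remains the authoritative reference.

```python
# Affected per the advisory quoted above: 9.7.x, 9.8.0 up to 9.8.5b1, 9.9.0 up to 9.9.3b1.
# Patched releases named in the article: 9.8.4-P2 and 9.9.2-P2.
AFFECTED_BRANCHES = {"9.7", "9.8", "9.9"}
FIXED_RELEASES = {"9.8.4-P2", "9.9.2-P2"}

def looks_vulnerable(named_version: str) -> bool:
    """Rough check of a 'named -v' style version string such as '9.8.4-P1'.
    Deliberately errs on the side of flagging anything in an affected branch."""
    if named_version in FIXED_RELEASES:
        return False
    branch = ".".join(named_version.split(".")[:2])
    return branch in AFFECTED_BRANCHES

for version in ("9.7.3", "9.8.4-P1", "9.8.4-P2", "9.9.3b1", "9.6.1"):
    verdict = "review against the ISC advisory" if looks_vulnerable(version) else "outside the quoted ranges"
    print(version, "->", verdict)
```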
BIND is by far the most widely used DNS server software on the Internet. It is the de facto standard DNS software for many UNIX-like systems, including Linux, Solaris, various BSD variants and Mac OS X.

The vulnerability can be exploited by sending specifically crafted requests to vulnerable installations of BIND that would cause the DNS server process -- the name daemon, known as "named" -- to consume excessive memory resources. This can result in the DNS server process crashing and the operation of other programs being severely affected.

"Intentional exploitation of this condition can cause denial of service in all authoritative and recursive nameservers running affected versions," the ISC said. The organization rates the vulnerability as critical.

One workaround suggested by the ISC is to compile BIND without support for regular expressions, which involves manually editing the "config.h" file using instructions provided in the advisory. The impact of doing this is explained in a separate ISC article that also answers other frequently asked questions about the vulnerability.

The organization also released BIND versions 9.8.4-P2 and 9.9.2-P2, which have regular expression support disabled by default. BIND 9.7.x is no longer supported and won't receive an update.

"BIND 10 is not affected by this vulnerability," the ISC said. "However, at the time of this advisory, BIND 10 is not 'feature complete,' and depending on your deployment needs, may not be a suitable replacement for BIND 9."
According to the ISC, there are no known active exploits at the moment. However, that might soon change.

"It took me approximately ten minutes of work to go from reading the ISC advisory for the first time to developing a working exploit," a user named Daniel Franke said in a message sent to the Full Disclosure security mailing list on Wednesday. "I didn't even have to write any code to do it, unless you count regexes [regular expressions] or BIND zone files as code. It probably will not be long before someone else takes the same steps and this bug starts getting exploited in the wild."

Franke noted that the bug affects BIND servers that "accept zone transfers from untrusted sources." However, that is just one possible exploitation scenario, said Jeff Wright, manager of quality assurance at the ISC, Thursday in a reply to Franke's message.

"ISC would like to point out that the vector identified by Mr. Franke is not the only one possible, and that operators of *ANY* recursive *OR* authoritative nameservers running an unpatched installation of an affected version of BIND should consider themselves vulnerable to this security issue," Wright said. "We wish, however, to express agreement with the main point of Mr. Franke's comment, which is that the required complexity of the exploit for this vulnerability is not high, and immediate action is recommended to ensure your nameservers are not at risk."

This bug could be a serious threat considering the widespread use of BIND 9, according to Dan Holden, director of the security engineering and response team at DDoS mitigation vendor Arbor Networks. Attackers might start targeting the flaw given the media attention surrounding DNS in recent days and the low complexity of such an attack, he said Friday via email.

Several security companies said earlier this week that a recent distributed denial-of-service (DDoS) attack targeting an anti-spam organization was the largest in history and affected critical Internet infrastructure. The attackers made use of poorly configured DNS servers to amplify the attack.

"There is a fine line between targeting DNS servers and using them to perform attacks such as DNS amplification," Holden said. "Many network operators feel that their DNS infrastructure is fragile and often they go through additional measures to protect this infrastructure, some of which exacerbate some of these problems. One such example is deploying inline IPS devices in front of DNS infrastructure. Designing appropriate filters to mitigate these attacks with stateless inspection is near impossible."

"If operators are relying on inline detection and mitigation, very few security research organizations are proactive about developing their own proof-of-concept code on which to base a mitigation upon," Holden said. "Thus, these types of devices will very rarely get protection until we see semi-public working code. This gives attackers a window of opportunity that they may very well seize."

Also, historically DNS operators have been slow to patch and this may definitely come into play if we see movement with this vulnerability, Holden said.


    Read more: Critical denial-of-service flaw in BIND software puts DNS servers at risk - PC Advisor
Last edited by hasore; 02-04-2013 at 20:32.
    Gabriel Santos - hasore
