Results 1 to 3 of 3
  1. #1
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    18,473

    [EN] UN: Robots could destabilize the planet with wars, unemployment, organized crime

    Daniel Boffey
    27 September 2017



    The UN has warned that robots could destabilise the world ahead of the opening of a headquarters in The Hague to monitor developments in artificial intelligence.

    From the risk of mass unemployment to the deployment of autonomous robotics by criminal organisations or rogue states, the new Centre for Artificial Intelligence and Robotics has been set the goal of second-guessing the possible threats.

    The consultancy firm PwC estimates that 30% of jobs in Britain are potentially under threat from breakthroughs in artificial intelligence. In some sectors half the jobs could go. A recent study by the International Bar Association claimed robotics could force governments to legislate for quotas of human workers.

    Meanwhile, nations seeking to develop autonomous weapons technology, with the capability to determine courses of action independently without the need for human control, include the US, China, Russia and Israel.

    Irakli Beridze, senior strategic adviser at the United Nations Interregional Crime and Justice Research Institute, said the new team based in the Netherlands would also seek to come up with ideas as to how advances in the field could be exploited to help achieve the UN’s targets. He also said there were great risks associated with developments in the technology that needed to be addressed.

    “If societies do not adapt quickly enough, this can cause instability,” Beridze told the Dutch newspaper de Telegraaf. “One of our most important tasks is to set up a network of experts from business, knowledge institutes, civil society organisations and governments. We certainly do not want to plead for a ban or a brake on technologies. We will also explore how new technology can contribute to the sustainable development goals of the UN. For this we want to start concrete projects. We will not be a talking club.”

    In August more than 100 robotics and artificial intelligence leaders, including the billionaire head of Tesla, Elon Musk, urged the UN to take action against the dangers of the use of artificial intelligence in weaponry, sometimes referred to as “killer robots”.

    They wrote: “Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

    Last year Prof Stephen Hawking warned that powerful artificial intelligence would prove to be “either the best or the worst thing ever to happen to humanity”.

    An agreement was sealed with the Dutch government earlier this year for the UN office, which will have a small staff in its early stages, to be based in The Hague.

    Beridze said: “Various UN organisations have projects on robotic and artificial intelligence research, such as the expert group on autonomous military robots of the convention on conventional weapons. These are temporary initiatives.

    “Our centre is the first permanent UN office for this theme. We look at both the risks and the benefits.”

    https://www.theguardian.com/technolo...nemployment-un

  2. #2
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    18,473

    ICYMI: Elon Musk leads 116 experts calling for outright ban of killer robots

    Unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now.

    Samuel Gibbs
    20 August 2017

    Some of the world’s leading robotics and artificial intelligence pioneers are calling on the United Nations to ban the development and use of killer robots.

    Tesla’s Elon Musk is leading a group of 116 specialists from across 26 countries who are calling for a ban on autonomous weapons.

    The UN recently voted to begin formal discussions on such weapons, which include drones, tanks and automated machine guns. Ahead of this, the group of founders of AI and robotics companies has sent an open letter to the UN calling for it to prevent the arms race that is currently under way for killer robots.

    In their letter, the founders warn the review conference of the convention on conventional weapons that this arms race threatens to usher in the “third revolution in warfare” after gunpowder and nuclear arms.

    The founders wrote: “Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.

    “We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

    Experts have previously warned that AI technology has reached a point where the deployment of autonomous weapons is feasible within years, rather than decades. While AI can be used to make the battlefield a safer place for military personnel, experts fear that offensive weapons that operate on their own would lower the threshold of going to battle and result in greater loss of human life.

    The letter, launching at the opening of the International Joint Conference on Artificial Intelligence (IJCAI) in Melbourne on Monday, has the backing of high-profile figures in the robotics field and strongly stresses the need for urgent action, after the UN was forced to delay a meeting that was due to start Monday to review the issue.

    The founders call for “morally wrong” lethal autonomous weapons systems to be added to the list of weapons banned under the UN’s convention on certain conventional weapons (CCW), brought into force in 1983, which includes intentionally blinding laser weapons.

    Toby Walsh, Scientia professor of artificial intelligence at the University of New South Wales in Sydney, said: “Nearly every technology can be used for good and bad, and artificial intelligence is no different. It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis.

    “However, the same technology can also be used in autonomous weapons to industrialise war. We need to make decisions today choosing which of these futures we want.”

    Musk, one of the signatories of the open letter, has repeatedly warned of the need for proactive regulation of AI, calling it humanity’s biggest existential threat; but while AI’s destructive potential is considered by some to be vast, it is also thought to be distant.

    Ryan Gariepy, the founder of Clearpath Robotics, said: “Unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability.”

    This is not the first time the IJCAI, one of the world’s leading AI conferences, has been used as a platform to discuss lethal autonomous weapons systems. Two years ago the conference was used to launch an open letter signed by thousands of AI and robotics researchers including Musk and Stephen Hawking similarly calling for a ban, which helped push the UN into formal talks on the technologies.

    The UK government opposed such a ban on lethal autonomous weapons in 2015, with the Foreign Office stating that “international humanitarian law already provides sufficient regulation for this area”. It said that the UK was not developing lethal autonomous weapons and that all weapons employed by UK armed forces would be “under human oversight and control”.

    Science fiction or science fact?

    While the suggestion of killer robots conjures images from science fiction such as the Terminator’s T-800 or Robocop’s ED-209, lethal autonomous weapons are already in use. Samsung’s SGR-A1 sentry gun, which is reportedly capable of firing autonomously, though it is disputed whether it is deployed that way, is in use along the South Korean side of the 2.5-mile-wide Korean Demilitarized Zone.

    The fixed-place sentry gun, developed on behalf of the South Korean government, was the first of its kind with an autonomous system capable of performing surveillance, voice recognition, tracking and firing with a mounted machine gun or grenade launcher. But it is not the only autonomous weapon system in development, with prototypes available for land, air and sea combat.

    The UK’s Taranis drone, in development by BAE Systems, is intended to be capable of carrying air-to-air and air-to-ground ordnance intercontinentally and of incorporating full autonomy. The unmanned combat aerial vehicle, about the size of a BAE Hawk (the plane used by the Red Arrows), had its first test flight in 2013 and is expected to be operational some time after 2030 as part of the Royal Air Force’s Future Offensive Air System, destined to replace the human-piloted Tornado GR4 warplanes.

    Russia, the US and other countries are currently developing robotic tanks that can either be remote controlled or operate autonomously. These projects range from autonomous versions of the Russian Uran-9 unmanned combat ground vehicle, to conventional tanks retrofitted with autonomous systems.

    The US’s autonomous warship, the Sea Hunter, built by Vigor Industrial, was launched in 2016 and, while still in development, is intended to have offensive capabilities including anti-submarine ordnance. Under the surface, Boeing’s autonomous submarine systems, built on the Echo Voyager platform, are also being considered for long-range deep-sea military use.

    https://www.theguardian.com/technolo...us-weapons-war

    The Guardian view on robots as weapons: the human factor
    13 April 2015

    The future is already here, said William Gibson. It’s just not evenly distributed.

    One area where this is obviously true is the field of lethal autonomous weapon systems, as they are known to specialists – killer robots to the rest of us. Such machines could roam a battlefield, on the ground or in the air, picking their own targets and then shredding them with cannon fire, or blowing them up with missiles, without any human intervention. And if they were not deployed on a battlefield, they could turn wherever they were in fact deployed into a battlefield, or a place of slaughter.

    A conference in Geneva, under the auspices of the UN, is meeting this week to consider ways in which these machines can be brought under legal and ethical control. Optimists reckon that the technology is 20 to 30 years away from completion, but campaigners want it banned well before it is ready for deployment. The obvious question is whether it is not already too late. A report by Human Rights Watch in 2012 listed a frightening number of almost autonomous and wholly lethal weapons systems deployed around the world, from a German automated system for defending bases in Afghanistan by detecting and firing back at incoming ordnance, through to a robot deployed by South Korea in the demilitarised zone, which uses sensing equipment to detect humans as far as two miles away as it patrols the frontier and can then kill them from a very safe distance.

    All those systems rely on a human approving the computer’s actions, but at a speed which excludes the possibility of consideration: often there is as little as half a second in which to press or not to press the lethal button. Half a second is – just – inside the norm of reaction times, but military aircraft are routinely built to be so manoeuvrable that the human nervous system cannot react quickly enough to make the constant corrections necessary to keep them in the air. If the computers go down, so does the plane. The killer cyborg future is already present in such machines.

    In some ways, this is an ethical advantage. Machines cannot feel hate, and they cannot lie about the causes of their actions. A programmer might in theory reconstruct the precise sequence of inputs and processes that led a drone to act wrongly and then correct the program. A human war criminal will lie to himself as well as to his interrogators. Humans cannot be programmed out of evil.

    Although the slope to killer robots is a slippery one, there is one point we have not reached. No one has yet built weapons systems sufficiently complex that they make their own decisions about when they should be deployed. This may never happen, but it would be unwise to bet that way. In the financial markets we already see the use of autonomous computer programs whose speed and power can overwhelm a whole economy in minutes. The markets, in that sense, are already amoral. Robots may be autonomous, but they cannot be morally responsible as humans must be. The ambition to control them is as profoundly human as it is right.

    https://www.theguardian.com/commenti...eapons-systems

  3. #3
    WHT-BR Top Member
    Join Date
    Dec 2010
    Posts
    18,473

    'I've seen what the world will look like in five to 10 years'


    Antonio García Martinez, a former Facebook product manager, became terrified of technology and quit his plush executive job to live a reclusive life with a bucket toilet and an assault weapon in the woodlands north of Seattle, where he says Canada is 'just a swim or a kayak away'



    Sage Lazzaro
    4 August 2017

    The rise of robots and artificial intelligence has sent a prominent ex-Facebook executive fleeing to the remote woods to hide out in the belief that society is nearing collapse.

    Antonio García Martinez, a former Facebook product manager and Silicon Valley author, became terrified of technology and quit his job to live as a recluse with a bucket toilet and an assault weapon in the woodlands north of Seattle.

    'You may not believe it but it's coming, and it's coming in the form of a self-driving truck that's going to run you over,' he says in the upcoming two-part BBC documentary 'Secrets of Silicon Valley'.

    The show explores whether the technology being developed in San Francisco will lead to a utopian or dystopian future.

    Martinez says he is certain that robots are coming for humans' jobs.

    'Within 30 years, half of humanity won't have a job,' he said.

    'It could get ugly - there could be a revolution.'

    Martinez claims this will lead to revolt, mass chaos and armed conflict, adding that bullets will become the currency of America.

    'You don't realize it but we're in a race between technology and politics, and technologists are winning.'

    'They're way ahead.'

    'They will destroy jobs and disrupt economies before we even react to them and we really should be thinking about that.'

    Martinez has insight into Silicon Valley and the technological revolution matched only by the industry's biggest players.

    In the last six years alone, he's been an advisor to Twitter, the CEO/founder of AdGrok (a venture-backed startup acquired by Twitter) and a product manager for Facebook, where he ran its targeted ads program.

    Before that, Martinez was a strategist at Goldman Sachs.

    In 2016, he published the New York Times best-seller Chaos Monkeys, an autobiographical exposé of life inside the Silicon Valley tech bubble.

    'I've seen what the world will look like in five to 10 years,' Martinez said.


    'It could get ugly - there could be a revolution,' Martinez said. 'There are 300 million guns in this country, one for every man, woman and child, and they're mostly in the hands of those who are getting economically displaced.'


    In the documentary, which airs August 6 at 8pm, he says other former Silicon Valley insiders have also fled civilization to live off the land, fearing a future doomed by technology.

    It also features artificial intelligence pioneer Jeremy Howard, who said: 'People aren't scared enough.'

    'They're saying, "Don't worry about it, there will always be more jobs,"' he said.

    'It's founded on this purely historical thing of there has been a revolution before, it was called the Industrial Revolution, and after it there were still enough jobs, therefore this new, totally different, totally unrelated revolution will also have enough jobs.'

    'It's a ludicrously short-sighted, meaningless argument which incredibly smart people are making.'

    He also said that if this course continues, it could lead to 'massive social unrest' in which a 'tiny class of society' would own 'all of the capital and all of the data' and despise everyone else for being worthless.

    The program's host, Jamie Bartlett of the Centre for the Analysis of Social Media, examines the so-called disruptors, such as Airbnb and Uber, both of which have found themselves the target of massive regulatory battles and citizen protests because of the way they've changed the economics of their industries.

    'I want to discover what the reality is behind Silicon Valley’s utopian vision,' he said.

    'The tech gods are selling us all a better future, but Silicon Valley's promise to build a better world relies on tearing up the world as it is - they call it "disruption".

    'The mantra of Silicon Valley is that disruption is always good, and through smartphones and digital technology we can create more efficient, more convenient, faster services and everyone wins from that, but behind that beautifully designed app or that slick platform there's a quite brutal form of capitalism unfolding and it's leaving some of the poorest people in society behind.'

    He says Silicon Valley's promise to build a better world could lead to detrimental consequences and 'inflict a nightmare future on millions of us.'

    'The big secret in Silicon Valley is that the next wave of disruption could tear apart the way capitalism works, and as a result the way we live our lives could be utterly transformed,' he said.

    The impending doom associated with technology, and whether or not robots will conquer humans, has been a hot topic of late, with two of the tech world's most powerful players going head-to-head over the question.

    On one side, you have Tesla, SpaceX and PayPal founder Elon Musk, who has admitted he's 'terrified' of AI, while on the other you have Facebook founder Mark Zuckerberg, with a more optimistic view.

    Both have earned billions of dollars from the digital revolution, but they hold diametrically opposed views on where it's leading.

    Recently, the two clashed on social media, with Zuckerberg calling Musk's rhetoric on the topic 'irresponsible' and Musk saying Zuckerberg has 'limited' knowledge.

    It started after Zuckerberg, the world's fifth richest person, rebuked what he called 'naysayers' who drum up 'doomsday scenarios' about AI.

    Speaking in a live online broadcast from his garden in California as he cooked a barbecue, Zuckerberg was asked about Musk's views following his recent warning to U.S. state governors that AI 'poses a fundamental risk to the existence of human civilization' and must be regulated.

    Zuckerberg replied: 'It's really negative, and in some ways, I think it is pretty irresponsible.'

    He went on to say he was an 'optimist', adding: 'In the next five to ten years, AI is going to deliver so many improvements in the quality of our lives . . . if you're arguing against AI, then you're arguing against safer cars that aren't going to have accidents and you're arguing against being better able to diagnose people when they are sick.'

    Musk hit back on Twitter, saying: 'I've talked to Mark about this. His understanding of the subject is limited.'

    Zuckerberg leapt back onto Facebook to defend himself, flagging up a study by his own research team to justify AI's potential 'to make the world better.'

    At the center of their dispute is artificial intelligence (AI), the term that describes the development of computer systems able to perform tasks that normally require human intelligence.

    These tasks include skills such as visual perception, speech recognition and translation between languages.

    The application of AI - dubbed the 'march of the machines' - is already entrenched in our daily lives, be it in mobile phones, Amazon's 'virtual home assistant' Alexa (which can respond to verbal commands), battlefield robots, delivery drones or driverless cars.

    And that's just the start - AI has the potential to do so much more, from automating jobs that once required human input and decision-making to helping doctors spot cancer at its earliest stages from photographs and scans.

    However, Musk and many of the world's most respected scientists and computer engineers - including brilliant British minds such as Professor Stephen Hawking and Lord Rees, the former president of the Royal Society - believe there may be a terrible price to pay if we let machines think for us.

    They fear a digital-led Armageddon in which super-intelligent computers soon out-think humans.

    Science fiction could become science fact when machines that have no concept of human values autonomously decide that our presence is a barrier to their own development and that human beings should be got rid of.

    Musk has, for some time now, warned against this so-called 'Robocalypse'.

    On the other hand, Zuckerberg has loudly proclaimed his zealous belief in the power and reliability of AI.

    In recent years, with the availability of ever more powerful computer hardware and microchips, tech companies have invested billions in AI development.

    This has led to what is known as 'deep learning', a process by which computer systems 'teach' themselves by crunching vast amounts of data available online, rather than having to be guided by a human.
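
    As a rough illustration of what "teaching itself from data" means, here is a minimal sketch, assuming Python with NumPy (none of this code is part of the article): a tiny two-layer neural network that learns the XOR function purely from examples, with no hand-written rules.

    import numpy as np

    # Minimal sketch of "learning from data": a tiny two-layer network
    # teaches itself XOR from four examples via gradient descent.
    # Illustrative only; real deep-learning systems are vastly larger.
    rng = np.random.default_rng(0)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)  # input -> hidden
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)  # hidden -> output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(5000):
        h = sigmoid(X @ W1 + b1)    # forward pass: hidden activations
        out = sigmoid(h @ W2 + b2)  # the network's current guesses

        # Backward pass: nudge weights to reduce the squared error.
        grad_out = (out - y) * out * (1 - out)
        grad_h = (grad_out @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum(axis=0)
        W1 -= lr * (X.T @ grad_h);   b1 -= lr * grad_h.sum(axis=0)

    print(np.round(out, 2))  # should end up near [[0], [1], [1], [0]]

    Nothing in the code states what XOR is; after training, the function is encoded entirely in the learned weights, which is the sense in which such systems 'teach' themselves rather than being guided by a human.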

    As a result, computers are increasingly able to 'think' independently, more and more like a human brain.

    Such technology is already being used to detect spam emails and credit card fraud, to recognize voice commands spoken into phones and to secure access to online bank accounts.

    Now there is a growing division of opinion about whether the dawn of intelligent machines is a blessing for us - or a curse.



    http://www.dailymail.co.uk/sciencete...-30-years.html
