  #1
    WHT-BR Top Member | Join Date: Dec 2010 | Posts: 18,005

    [EN] Artificial intelligence is not going to save the Internet from fake news

    In the fight against fake news, artificial intelligence is waging a battle it cannot win

    Dave Gershgorn
    November 22, 2016

    It’s become clear that the algorithms Facebook and Google designed to deliver news to their users have failed. But while fake news is a headache for those tech giants right now, the underlying research question—whether and how machines tell truth from lies on the internet—is one that will persist as long as the world wide web stays an open forum.

    Facebook and Google’s sizable machine learning divisions have created algorithms that effectively surface information that users want to see. But they’ve been unable to actually understand or vet that info—and in fact, experts across the tech industry say it’s unrealistic to expect any AI or machine learning algorithm to do this task well.

    State-of-the-art language processing today

    All our best efforts so far are built on research in natural language processing, which teaches AI to read a piece of text, understand the concepts within, and provide insight about its meaning. “Modern machine learning for natural language processing is able to do things like translate from one language to another, because everything it needs to know is in the sentence it’s processing,” says Ian Goodfellow, a researcher at OpenAI. On the other hand, identifying claims, tracing information through potentially hundreds of sources, and making a judgment on how truthful a claim could be based on a diversity of ideas—all that relies on a holistic understanding of the world, the ability to bridge concepts that aren’t connected by exact words or semantic meaning.

    For now, AIs that can simply succeed at question-and-answer games are considered state of the art. As recently as 2014, it was bleeding edge when Facebook’s AI could read a short passage about the plot of the Lord of the Rings, and tell if Frodo had the Ring or not.

    The Stanford Question Answering Dataset, or SQuAD, is a new benchmarking competition that measures how good AIs are at this sort of task. To a human, the test would seem pretty simple: read the Wikipedia page about Super Bowl 50, and then answer questions like “How many appearances have the Denver Broncos made in the Super Bowl?” (The answer is 8.)
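    To get a feel for the task, here is a deliberately crude sketch of an extractive question-answering baseline—nothing like the neural models behind real SQuAD entries, and the passage is paraphrased purely for illustration:

    [code]
    # Toy extractive QA baseline: pick the passage sentence that shares the
    # most words with the question, then pull out a plausible answer token.
    # Real SQuAD systems use trained neural networks; this only illustrates
    # the task itself (read a passage, answer a factual question).
    import re

    def answer(passage: str, question: str) -> str:
        q_words = set(re.findall(r"\w+", question.lower()))
        best_sentence, best_overlap = "", -1
        for sentence in re.split(r"(?<=[.!?])\s+", passage):
            overlap = len(q_words & set(re.findall(r"\w+", sentence.lower())))
            if overlap > best_overlap:
                best_sentence, best_overlap = sentence, overlap
        # "How many" questions: return the first number in the best sentence.
        if question.lower().startswith("how many"):
            numbers = re.findall(r"\d+", best_sentence)
            return numbers[0] if numbers else best_sentence
        return best_sentence

    passage = ("The Denver Broncos have made 8 appearances in the Super Bowl. "
               "Super Bowl 50 was played at Levi's Stadium.")
    print(answer(passage, "How many appearances have the Denver Broncos "
                          "made in the Super Bowl?"))  # -> 8
    [/code]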

    The top SQuAD prize this year was won by a team from Salesforce’s recently opened AI research center: their AI could accurately answer factual questions posed from Wikipedia articles about 80% of the time. (The win was by a slim margin—it was about 2% more accurate than its competitors from Microsoft and the Allen Institute for AI.)

    But parsing a few paragraphs of text for factuality is nowhere near the complex fact-checking machines AI designers are after. “It is incredibly hard to know the whole state of the world to identify whether a fact is true or not,” says Richard Socher, head of Salesforce Research. “Even if we had a perfect way to encompass and encode all the knowledge of the world, the whole point of news is that we’re adding to that knowledge.”

    The novelty of news stories, Socher says, means the information needed to verify something newly published as fact might not be available online yet. A small but credible source could publish something true that the AI marks as false simply because there is no other corroboration on the internet—even if that AI is powerful enough to constantly read and understand all the information ever published.

    How does a human check facts?

    Humans have always been the gold standard when it comes to fact checking. Carolyn Ryan, senior politics editor for the New York Times, calls the act “the greatest reader service that we do.” Rigorous interrogation of truth is the primary function of any news source seeking to gain the public’s trust, but the internet’s open platform has brought a torrent of websites that don’t subscribe to journalistic ideals. So, millions of readers trying to separate truth from lies visit dedicated fact-checking sites like Snopes.com and PolitiFact, which employ researchers and writers to debunk falsehoods on the internet.

    Whatever the fact, the checking process typically starts the same, says Kim LaCapria, content manager for Snopes. The first order of business is combing through a story to see if its sources actually support the claim of the article. For example, the story of a purported link between Hillary Clinton and the suicide of an FBI agent was backed by false, misleading evidence, including fake links and a made-up address. Even if the sources aren’t entirely fictitious, sometimes there’s a piece of information that’s being misrepresented or taken out of context.

    “There’s information and then there’s how it’s presented, and those two aren’t always the same,” LaCapria says. “So the claim might not match the information.”

    Snopes researchers scour the internet to collect as much contextual information as they can, like when the events in the story supposedly happened, who was involved, and when it started to gain traffic online. They also hunt down original sources, and try to call or email the person who first made the claim in question. After all that information is in, they make a judgment call. The process can happen quickly or take days. “A seemingly simple claim can take hours, and a seemingly complex one 15 minutes, and there’s no real pattern to that,” says LaCapria.

    (cont)

  #2
    “When 8/10 people say it’s fake news, is it fake news?”

    Ilya Sutskever, research director for OpenAI, says a fact-checking AI could be designed to tackle the job the same way humans do now. “One possible approach is to have a system which would have a sophisticated and detailed understanding of the meaning of text—which is something that cannot be done yet today—then it would read many different articles from many different news sources and would look for inconsistencies, the same way a human whose full-time job is determining whether something is fake or not,” says Sutskever.

    But even humans have differing opinions about which facts are true, says Socher. “When 8/10 people say it’s fake news, is it fake news? What about 7/10?”
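    The aggregation step at the end of the pipeline Sutskever describes is the easy part to write down. A toy sketch, which assumes the genuinely hard step—extracting each outlet’s stance toward a claim—has already been solved (here the stances are simply hand-labeled):

    [code]
    # Toy cross-source consistency check in the spirit of Sutskever's
    # description. The hard NLP step -- extracting each source's stance
    # toward a claim -- is assumed solved; stances below are hand-labeled.
    from collections import Counter

    def consistency_verdict(stances: list[str], threshold: float = 0.8) -> str:
        """stances: per-source labels, each 'supports', 'refutes', or 'neutral'."""
        counts = Counter(s for s in stances if s != "neutral")
        total = sum(counts.values())
        if total == 0:
            return "unverifiable"  # Socher's point: novel news may lack corroboration
        top_label, top_count = counts.most_common(1)[0]
        if top_count / total >= threshold:
            return "likely true" if top_label == "supports" else "likely false"
        return "disputed"  # and who picks the threshold? 8/10? 7/10?

    print(consistency_verdict(["supports"] * 8 + ["refutes"] * 2))  # likely true
    print(consistency_verdict(["supports"] * 7 + ["refutes"] * 3))  # disputed
    [/code]

    Note that the threshold is exactly Socher’s 8/10-versus-7/10 question: the code makes the arbitrariness explicit rather than resolving it.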

    And no matter how good the algorithm, Socher says, if it is trained by humans, it will inevitably take on these same biases.


    Current AI projects trying to weed out fake news

    One proposal is to program a sort of shortcut: teach an algorithm how to trust some things and not others, so it doesn’t have to read the entire internet every time it’s fact-checking a story.

    Ozlo, a personal assistant startup, thinks it could do just that by training its algorithm to parse the trustworthiness of restaurant reviews.

    The company’s personal assistant (also named Ozlo) reads reviews and ratings of restaurants across sites—from the professionals like Zagat to the sea of armchair pundits on Yelp. Then, through Ozlo’s chatbot interface, it tracks which sites and reviewers users find trustworthy, and even figures out elements of language that indicate a reliable review. Its goal is to cut through all the reviewer opinion and identify real facts—so it can show the right restaurants to the right people, like if one restaurant has good fish but terrible salads, or dirty bathrooms but good food.
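    Ozlo hasn’t published how its ranking works, but the general idea of weighting sources by learned trust is easy to sketch. Everything below—the source names, trust weights, and ratings—is invented purely for illustration:

    [code]
    # Minimal trust-weighted aggregation sketch. Ozlo's actual model is not
    # public; source names, trust weights, and ratings are all invented.
    def weighted_verdict(reviews: dict[str, float], trust: dict[str, float]) -> float:
        """reviews: source -> rating in [0, 1]; trust: source -> learned weight."""
        num = sum(trust.get(src, 0.1) * score for src, score in reviews.items())
        den = sum(trust.get(src, 0.1) for src in reviews)
        return num / den  # high-trust sources dominate the blended rating

    trust = {"zagat": 0.9, "yelp_power_user": 0.6, "anonymous": 0.2}   # hypothetical
    reviews = {"zagat": 0.8, "yelp_power_user": 0.7, "anonymous": 0.1}
    print(f"{weighted_verdict(reviews, trust):.2f}")  # -> 0.68
    [/code]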

    Ozlo doesn’t work entirely without human input: the system reports back to a team of engineers and human trainers who can verify the information it’s learned. “I think we’ll always use trainers, and we’ll always use data from people. I think that’s the way you keep it from turning into a Nazi, right?” Ozlo CEO Charles Jolley says, referring to Microsoft’s Tay. The Microsoft AI experiment earlier this year learned by interacting with English speakers on Twitter, Kik, and GroupMe—and within 24 hours of going live it tweeted “Hitler was right.” The Chinese and Japanese versions of Tay are still running without a hitch.

    Ozlo is now starting to expand outside its little world of restaurants. By feeding tons of news stories from different sources through the already-established framework, Jolley hopes that the bot will figure out how to rank news sites and articles the same way it does restaurants and reviews. Of course, news is very different from restaurant or movie reviews—and Ozlo has vastly fewer users than Facebook; if Ozlo has a 1% error rate when identifying fake news, the mistakes would impact far fewer people and be much easier to identify and snuff out.

    Meanwhile, the scourge of fake news distributed on social media has triggered numerous other projects by hackers and third-party coders trying to solve the problem.

    One of these tools won a Princeton University hackathon this fall, sponsored by Google. The software, called FiB, checks articles posted on Facebook against websites known to be reputable; it also runs the site hosting the article in question through databases of websites known for malware or phishing. If the post has photos and tweets, the tool uses AI to convert any words from the images into text. The software is free on the Chrome Web Store for your browser and also open-source, so developer types can tinker with it themselves.
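    FiB is open source, but the sketch below is not its code—just the general shape of the domain-list check that tools like FiB and Fake News Alert perform. The list contents are placeholders; real tools query maintained databases rather than hard-coded sets:

    [code]
    # Sketch of a domain-list check in the style of FiB / Fake News Alert.
    # Not their actual code; list contents are placeholders.
    from urllib.parse import urlparse

    REPUTABLE = {"nytimes.com", "bbc.com", "reuters.com"}          # placeholder
    FLAGGED = {"example-fake-news.com", "malware-host.example"}    # placeholder

    def classify_link(url: str) -> str:
        host = urlparse(url).hostname or ""
        domain = ".".join(host.split(".")[-2:])  # crude eTLD+1; real tools do better
        if domain in FLAGGED:
            return "warn"      # pop up an alert on click
        if domain in REPUTABLE:
            return "verified"  # treated as trustworthy
        return "unknown"       # fall through to deeper checks (image OCR, etc.)

    print(classify_link("https://www.example-fake-news.com/story"))  # -> warn
    [/code]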

    Another piece of software, called Fake News Alert and built by New York Magazine’s Brian Feldman, aims to limit Facebook’s fake news problem by simply checking whether a site is on a curated list of fake news sites. If you have the software installed (also available as a Google Chrome browser extension), an alert pops up after you click a link that directs to any site on the list.

    These sorts of approaches might work to limit fake news originating from outside of the mainstream media. But, LaCapria says, more and more often, major news outlets like CNN and BBC will hop on a story that’s trending but not necessarily true. These stories would technically be “verified,” and would get through the filters.

    On Nov. 17, US president-elect Trump appeared to claim credit for convincing the chairman of Ford not to outsource jobs to Mexico. In reality, the company had never intended to do so in the first place. But initially, Trump’s claim was reported by mainstream and trusted media sources as a real story, and though most of these outlets followed up by debunking it, it was too late: the fake news had been picked up and spread across right-wing media outlets and across social media platforms. By every measure we have today, AI would have similarly failed that test.

    But LaCapria says she thinks tools like FiB are still on the right track. Publications like Buzzfeed News and the New York Times break news more often than small blogs do—meaning they’re the ones adding new, trustworthy information to the internet. However, they don’t have a monopoly on authenticity. A tool that can understand the full spectrum of reliability online would get us much of the way to weeding out lies.

    “It’s almost like everything gets weighed when it comes to the truth,” LaCapria says.

    Code that can reason like a human

    Any fact-checking system must be judged on the scale of the internet, where even a 1% error rate can mean hundreds of millions of people misled. With that metric in mind, it’s hard to imagine any software doing the job short of the AI community’s golden goose: a general intelligence, code capable of reasoning like a human.
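    The arithmetic behind that claim is worth making concrete. A back-of-the-envelope check, using rough, assumed 2016-era numbers rather than anything measured:

    [code]
    # Back-of-the-envelope scale check for the "1% error rate" claim.
    # User and story counts are rough 2016-era assumptions, not measurements.
    monthly_users = 1.8e9    # roughly Facebook's monthly actives in late 2016
    stories_per_user = 100   # assumed stories seen per user per month
    error_rate = 0.01        # a "good" classifier by most machine learning standards

    misclassified = monthly_users * stories_per_user * error_rate
    print(f"{misclassified:,.0f} misclassified story impressions per month")
    # -> 1,800,000,000 misclassified story impressions per month
    [/code]

    Even with generous assumptions, a 99%-accurate filter makes billions of wrong calls a month at Facebook’s scale.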

    Like many tough problems in AI research, general artificial intelligence has seemed within human grasp since the field’s inception in the 1950s. But the fact remains that we’re not much closer to solving the problem. Eric Horvitz, Microsoft Research’s managing director, has joked that many of the same questions thought to be easily answered at the 1956 Dartmouth College conference that launched the field of AI research could win grants today in 2016.

    Facebook and Google are now trying to solve their problem with financial sticks, both enacting changes to prevent fake sites from making money through their respective ad networks. But treating fake news as an economic problem won’t work for others without huge advertising networks to manipulate.

    For now, however, it seems the onus still falls to the truth’s last line of defense: the reader.

    http://qz.com/843110/can-artificial-...-news-problem/

  #3

    Microsoft, Google and Facebook Are Remaking Themselves Around AI

    Cade Metz
    11.21.16

    Fei-Fei Li is a big deal in the world of AI. As the director of the Artificial Intelligence and Vision labs at Stanford University, she oversaw the creation of ImageNet, a vast database of images designed to accelerate the development of AI that can “see.” And, well, it worked, helping to drive the creation of deep learning systems that can recognize objects, animals, people, and even entire scenes in photos—technology that has become commonplace on the world’s biggest photo-sharing sites. Now, Fei-Fei will help run a brand new AI group inside Google, a move that reflects just how aggressively the world’s biggest tech companies are remaking themselves around this breed of artificial intelligence.

    Alongside a former Stanford researcher—Jia Li, who more recently ran research for the social networking service Snapchat—the China-born Fei-Fei will lead a team inside Google’s cloud computing operation, building online services that any coder or company can use to build their own AI. This new Cloud Machine Learning Group is the latest example of AI not only re-shaping the technology that Google uses, but also changing how the company organizes and operates its business.

    Google is not alone in this rapid re-orientation. Amazon is building a similar cloud computing group for AI. Facebook and Twitter have created internal groups akin to Google Brain, the team responsible for infusing the search giant’s own tech with AI. And in recent weeks, Microsoft reorganized much of its operation around its existing machine learning work, creating a new AI and research group under executive vice president Harry Shum, who began his career as a computer vision researcher.

    Oren Etzioni, CEO of the not-for-profit Allen Institute for Artificial Intelligence, says that these changes are partly about marketing—efforts to ride the AI hype wave. Google, for example, is focusing public attention on Fei-Fei’s new group because that’s good for the company’s cloud computing business. But Etzioni says this is also part of a very real shift inside these companies, with AI poised to play an increasingly large role in our future. “This isn’t just window dressing,” he says.

    The New Cloud

    Fei-Fei’s group is an effort to solidify Google’s position on a new front in the AI wars. The company is challenging rivals like Amazon, Microsoft, and IBM in building cloud computing services specifically designed for artificial intelligence work. This includes services not just for image recognition, but speech recognition, machine-driven translation, natural language understanding, and more.

    Cloud computing doesn’t always get the same attention as consumer apps and phones, but it could come to dominate the balance sheet at these giant companies. Even Amazon and Google, known for their consumer-oriented services, believe that cloud computing could eventually become their primary source of revenue. And in the years to come, AI services will play right into the trend, providing tools that allow a world of businesses to build machine learning services they couldn’t build on their own. Iddo Gino, CEO of RapidAPI, a company that helps businesses use such services, says they have already reached thousands of developers, with image recognition services leading the way.
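    In practice, “using such a service” means little more than an authenticated HTTP call. A sketch of the shape of that call—the endpoint, credential, and response format below are hypothetical, since each vendor defines its own:

    [code]
    # Sketch of how a business might consume a cloud image-recognition API.
    # Endpoint, credential, request, and response shapes are all hypothetical;
    # each vendor (Google, Amazon, Microsoft, IBM) defines its own.
    import base64
    import requests

    API_URL = "https://vision.example-cloud.com/v1/annotate"  # hypothetical endpoint
    API_KEY = "YOUR_API_KEY"                                  # hypothetical credential

    def label_image(path: str) -> list:
        with open(path, "rb") as f:
            payload = {"image": base64.b64encode(f.read()).decode("ascii")}
        resp = requests.post(API_URL, json=payload, params={"key": API_KEY}, timeout=30)
        resp.raise_for_status()
        return resp.json().get("labels", [])  # e.g. [{"name": "cat", "score": 0.98}]

    # print(label_image("photo.jpg"))
    [/code]

    The appeal for Google and its rivals is that the deep learning expertise, training data, and GPU farms all stay on the vendor’s side of that call.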

    When it announced Fei-Fei’s appointment last week, Google unveiled new versions of cloud services that offer image and speech recognition as well as machine-driven translation. And the company said it will soon offer a service that allows others access to vast farms of GPU processors, the chips that are essential to running deep neural networks. This came just weeks after Amazon hired a notable Carnegie Mellon researcher to run its own cloud computing group for AI—and just a day after Microsoft formally unveiled new services for building “chatbots” and announced a deal to provide GPU services to OpenAI, the AI lab established by Tesla founder Elon Musk and Y Combinator president Sam Altman.

    The New Microsoft

    Even as they move to provide AI to others, these big internet players are looking to significantly accelerate the progress of artificial intelligence across their own organizations. In late September, Microsoft announced the formation of a new group under Shum called the Microsoft AI and Research Group. Shum will oversee more than 5,000 computer scientists and engineers focused on efforts to push AI into the company’s products, including the Bing search engine, the Cortana digital assistant, and Microsoft’s forays into robotics.

    The company had already reorganized its research group to move new technologies into products more quickly. With AI, Shum says, the company aims to move even quicker. In recent months, Microsoft pushed its chatbot work out of research and into live products—though not quite successfully. Still, it’s the path from research to product the company hopes to accelerate in the years to come.

    “With AI, we don’t really know what the customer expectation is,” Shum says. By moving research closer to the team that actually builds the products, the company believes it can develop a better understanding of how AI can do things customers truly want.

    The New Brains

    In similar fashion, Google, Facebook, and Twitter have already formed central AI teams designed to spread artificial intelligence throughout their companies. The Google Brain team began as a project inside the Google X lab under another former Stanford computer science professor, Andrew Ng, now chief scientist at Baidu. The team provides well-known services such as image recognition for Google Photos and speech recognition for Android. But it also works with potentially any group at Google, such as the company’s security teams, which are looking for ways to identify security bugs and malware through machine learning.

    Facebook, meanwhile, runs its own AI research lab as well as a Brain-like team known as the Applied Machine Learning Group. Its mission is to push AI across the entire family of Facebook products, and according to chief technology officer Mike Schroepfer, it’s already working: one in five Facebook engineers now makes use of machine learning. Schroepfer calls the tools built by Facebook’s Applied ML group “a big flywheel that has changed everything” inside the company. “When they build a new model or build a new technique, it immediately gets used by thousands of people working on products that serve billions of people,” he says. Twitter has built a similar team, called Cortex, after acquiring several AI startups.

    The New Education

    The trouble for all of these companies is that finding the talent needed to drive all this AI work can be difficult. Given that deep neural networking has only recently entered the mainstream, only so many Fei-Fei Lis exist to go around. Everyday coders won’t do. Deep neural networking is a very different way of building computer services. Rather than coding software to behave a certain way, engineers coax results from vast amounts of data—more like a coach than a player.
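    The difference between the two ways of working fits in a few lines. A toy contrast, with a made-up spam-filtering task standing in for any real product problem (requires scikit-learn):

    [code]
    # Toy contrast between coding behavior directly and learning it from data.
    # The "spam" task and the training examples are invented for illustration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # 1. The traditional way: the engineer writes the rule.
    def rule_based_is_spam(text: str) -> bool:
        return "free money" in text.lower()

    # 2. The machine learning way: the engineer supplies labeled examples and
    #    an objective; the algorithm finds the rule itself -- coach, not player.
    texts = ["free money now", "claim your free money",
             "meeting at noon", "lunch tomorrow?"]
    labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam (hand-labeled)

    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    print(rule_based_is_spam("FREE MONEY inside"))      # True: the hand-written rule fires
    print(model.predict(["free cash money today"])[0])  # 1: the learned model generalizes
    [/code]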

    As a result, these big companies are also working to retrain their employees in this new way of doing things. As it revealed last spring, Google is now running internal classes in the art of deep learning, and Facebook offers machine learning instruction to all engineers inside the company alongside a formal program that allows employees to become full-time AI researchers.

    Yes, artificial intelligence is all the buzz in the tech industry right now, which can make it feel like a passing fad. But inside Google and Microsoft and Amazon, it’s certainly not. And these companies are intent on pushing it across the rest of the tech world too.

    Update: This story has been updated to clarify Fei-Fei Li’s move to Google. She will remain on the faculty at Stanford after joining Google.

    https://www.wired.com/2016/11/google...ing-around-ai/
