Results 1 to 8 of 8
  1. #1
    WHT-BR Top Member
    Join Date
    Dec 2010

    [EN] Apple puts brakes on self-driving car project

    Company follows Google’s lead in pivoting from full ‘Apple car’ to manufacturing tech to automate already existing vehicles

    The company has been working on its automotive technology since at least 2014, and once intended to build its own vehicle from start to finish, creating a true “Apple Car”.

    Five people familiar with Apple’s car project discussed with The New York Times the missteps that led the tech giant to move from creating a self-driving Apple car to creating technology for a car that someone else builds.

    August 22, 2017

    As new employees were brought into Apple’s secret effort to create a self-driving car a few years ago, managers told them that they were working on the company’s next big thing: a product that would take on Detroit and disrupt the automobile industry.

    These days, Apple’s automotive ambitions are more modest. The company has put off any notion of an Apple-branded autonomous vehicle and is instead working on the underlying technology that allows a car to drive itself. Timothy D. Cook, the company’s chief executive, said in an interview with Bloomberg in June that Apple is “focusing on autonomous systems.”

    A notable symbol of that retrenchment is a self-driving shuttle service that ferries employees from one Apple building to another. The shuttle, which has never been reported before, will likely be a commercial vehicle from an automaker, and Apple will use it to test the autonomous driving technology that it develops.

    Five people familiar with Apple’s car project, code-named “Titan,” discussed with The New York Times the missteps that led the tech giant to move — at least for now — from creating a self-driving Apple car to creating technology for a car that someone else builds. They spoke on the condition of anonymity because they were not authorized to talk publicly about Apple’s plans.

    The project’s reduced scale aligns Apple more closely with other tech companies that are working on autonomous driving technology but are steering clear of building cars. Even Waymo, the Google self-driving spinoff that is probably furthest along among Silicon Valley companies, has said repeatedly that it does not plan to produce its own vehicles.

    Apple’s testing vehicles will carry employees between its various Silicon Valley offices. The new effort is called PAIL, short for Palo Alto to Infinite Loop, the address of the company’s main office in Cupertino, Calif., and a few miles down the road from Palo Alto, Calif.

    Apple’s in-house shuttle service, which isn’t operational yet, follows Waymo, Uber and a number of car companies that have been testing driverless cars on city streets around the world.

    Apple has a history of tinkering with a technology until its engineers figure out what to do with it. The company worked on touch screens for years, for example, before that technology became an essential part of the iPhone.

    But the initial scale of Apple’s driverless ambitions went beyond tinkering or building underlying technology. The Titan project started in 2014, and it was staffed by many Apple veterans. The company also hired engineers with expertise in building cars, and not just the software that would run an autonomous vehicle.

    It was a do-it-all approach typical of Apple, which prefers to control every aspect of a product, from the software that runs it to the look and feel of the hardware.

    From the beginning, the employees dedicated to Project Titan looked at a wide range of details. That included motorized doors that opened and closed silently. They also studied ways to redesign a car interior without a steering wheel or gas pedals, and they worked on adding virtual or augmented reality into interior displays.

    The team also worked on a new light detection and ranging sensor, also known as lidar. Lidar sensors normally protrude from the top of a car like a spinning cone and are essential in driverless cars. Apple, as always focused on clean designs, wanted to do away with the awkward cone.

    Apple even looked into reinventing the wheel. A team within Titan investigated the possibility of using spherical wheels, round like a globe, instead of traditional ones, because spherical wheels could allow the car better lateral movement.

    But the car project ran into trouble, said the five people familiar with it, dogged by its size and by the lack of a clearly defined vision of what Apple wanted in a vehicle. Team members complained of shifting priorities and arbitrary or unrealistic deadlines.

    There was disagreement about whether Apple should develop a fully autonomous vehicle or a semiautonomous car that could drive itself for stretches but allow the driver to retake control.

    Steve Zadesky, an Apple executive who was initially in charge of Titan, wanted to pursue the semiautonomous option. But people within the industrial design team including Jonathan Ive, Apple’s chief designer, believed that a fully driverless car would allow the company to reimagine the automobile experience, according to the five people.

    A similar debate raged inside Google’s self-driving car effort for years. There, the fully autonomous vehicle won out, mainly because researchers worried drivers couldn’t be trusted to retake control in an emergency.

    Even though Apple had not ironed out many of the basics, like how the autonomous systems would work, a team had already started working on operating system software called CarOS. There was fierce debate about whether it should be programmed in Swift, Apple’s own programming language, or the industry standard, C++.

    Mr. Zadesky, who worked on the iPod and iPhone, eventually left Titan and took a leave of absence from the company for personal reasons in 2016. He is still at Apple, although he is no longer involved in the project. Mr. Zadesky could not be reached for comment.

    Last year, Apple started to rein in the project. The company tapped Bob Mansfield, a longtime executive who over the years had led hardware engineering for some of Apple’s most successful products, to oversee Titan.

    Mr. Mansfield shelved plans to build a car and focused the project on the underlying self-driving technology. He also laid off some hardware staff, though the exact number of employees dedicated to working on car technology was unclear.

    More recently, the team has grown again, adding personnel with expertise in autonomous systems, rather than car production.

    Apple’s headlong foray into autonomous vehicles underscores one of the biggest challenges facing the company: finding the next breakthrough product. As Apple celebrates the iPhone’s 10th anniversary, the company remains heavily dependent on smartphone sales for growth. It has introduced new products like the Apple Watch and expanded revenue from services, but the iPhone still accounts for more than half of its sales.

    In April, the California Department of Motor Vehicles granted Apple a test permit to allow the company to test autonomous driving technology in three 2015 Lexus RX 450h sport utility vehicles. There will be a safety driver monitoring the car during testing.

    While many companies are pursuing driverless technology and see it as a game changer for car ownership and transportation, no one has figured out how to cash in yet.

    With expectations reset and the team more focused, people on the Titan project said morale has improved under Mr. Mansfield. Still, one of the biggest challenges is holding onto talented engineers because self-driving technology is one of the hottest things in Silicon Valley, and Apple is hardly the only company working on it.

  2. #2

    If the age of self-driving cars is upon us, what's keeping them off the roads?

    Uber puts the brakes on driverless cars after accident

    Alex Hern
    22 August 2016

    Sitting in the passenger seat of Google’s self-driving car is a less bizarre experience than sitting in the driving seat, but it’s still unsettling. In the streets of Mountain View, outside the headquarters of X, I got the chance to do just that.

    It’s partly unsettling because it’s hard not to feel a flicker of anxiety when you look over and notice that the person driving the car hasn’t got their hands on the wheel, even as you head towards a red light on a corner with a huge truck bearing down on you.

    It’s partly because the software that drives the car isn’t exactly ready for production yet, so every now and again something weird happens – a jerky overtake, a slight hesitation to squeeze through into an adjacent lane, or, as happened once, the car declaring for no obvious reason that “a slight hiccup” had occurred and that it was going to pull over.

    And it’s partly because the future has come a lot sooner than anyone really thought. Even if Google takes far longer to start selling cars than it thinks it will (and senior figures in X tell me that they’re confident something will hit the market before 2020), this technology is going to hit the real world somewhere soon, and it’s going to change everything.

    Uber agrees. The taxi company on Thursday announced the latest phase of its own self-driving tests, putting its prototype cars on the roads of Pittsburgh for real riders to hail for the first time. They aren’t quite self-driving – they still have a human driver for backup – but they’re the next step in the company’s drive to replace its “driver-partners” (Uber is notoriously reluctant to grant Uber drivers full employment rights) with a fully automated fleet.

    Until a month ago, though, you could be forgiven for thinking the self-driving revolution had already hit. Tesla Motors, the upstart electric car company headed by the charismatic serial entrepreneur Elon Musk, launched its heavily promoted “autopilot” feature to owners of its Model S cars in October 2015.

    The feature was labelled a “public beta”, and users were warned to always keep their hands on the steering wheel; but those messages were counteracted by bluster from Musk, who declared in March that year that “We’re now almost able to travel all the way from San Francisco to Seattle without the driver touching any controls at all”. And, of course, the name Autopilot itself does little to suggest to the average user that the car does not, in fact, drive itself.

    Those mixed messages led to tragedy in May, when a Tesla driver, Joshua Brown, died in a crash which happened while Autopilot was in charge of the car. As Tesla put it, the crash happened when “Neither Autopilot nor the driver noticed” a tractor trailer crossing the highway in front of the car; the following day, it emerged that Brown may have been watching a movie as his car drove itself.
    The problem of semantics

    But the question of whether or not Brown had been paying attention to the road misses the more important point: he didn’t think he needed to. It’s a point Tesla itself tacitly admitted in China this week, when it changed the name of its Autopilot system from a phrase that loosely translates to “self-driving” to one that more closely resembles “driver assist”. “We want to highlight to non-English speaking consumers that Autopilot is a driver-assist function,” a Tesla spokesperson told the Wall Street Journal.

    Other car companies have similar technology, but don’t quite sell it in the same way – or with the same bluster. Nissan rolled out its ProPilot technology in Japan this July, for instance, selling it as “autonomous driving” and “intelligent driving”, while BMW’s Driver Assistance systems in its 4 Series can follow the car in front or warn the driver if they veer out of lane. The semantics of naming are an important consideration for the companies: is their language encouraging drivers to think that their attention no longer needs to be focused on the road ahead?

    But in X’s experience, modulating the tone of your advertising just isn’t enough. The very existence of almost-but-not-quite-perfect autonomous driving introduces whole new dangers. Nathaniel Fairfield, the principal engineer with X’s self-driving car team who “drove” me round Mountain View, said that people just don’t pay attention to the road, no matter what you tell them.

    “You can tell them it’s a bundle of self-driving assist systems, but when the sucker drives them for the next three hours just dandy, they rely on their short term experience with it, and if it’s been doing well, they’ll just relax.

    “You can say whatever you want to say, and people are going to interpret it however they interpret it, and at the end of the day you end up with whatever happens.”

    X has had its own experience with that fact. In the early days of its self-driving car experiments, it loaned the modified Lexus SUVs which formed the basis of its first cars to employees, to use on their commutes. Even though they had been told to keep focused on the road, and their hands near the wheel – and even though they were in a car owned by their employer, and knew they were being monitored by some of the most all-pervasive telemetry you can put in a vehicle – they still rapidly ended up goofing off in the cars.

    To a certain extent, that too can be approached as a simple technology problem. It’s not hard to imagine a driver assist function paired with simple sensors to ensure that the driver’s attention really is focused on the road, just as cars today emit ear-splitting alerts if you try to drive them without wearing your seatbelt. But that’s an engineering problem that Fairfield and the rest of the X team aren’t interested in tackling.
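    The attention monitor imagined above can be sketched as a simple state machine: the longer the driver's gaze stays off the road, the more the alert escalates. Everything here is an illustrative assumption (the thresholds, the alert names, the 10 Hz gaze feed), not any vendor's actual system:

```python
# Hypothetical sketch of a driver-attention watchdog: escalate alerts the
# longer the driver's gaze stays off the road. Thresholds are invented for
# illustration; a real system would tune them against human-factors data.

def alert_level(off_road_seconds: float) -> str:
    """Map continuous gaze-off-road time to an alert level."""
    if off_road_seconds < 2.0:
        return "none"        # brief glances (mirrors, instruments) are normal
    if off_road_seconds < 5.0:
        return "chime"       # audible reminder, like a seatbelt alert
    return "disengage"       # hand control back / slow the car safely

def watch(gaze_samples, hz=10):
    """gaze_samples: iterable of booleans, True = eyes on road, sampled at `hz`."""
    off = 0.0
    levels = []
    for on_road in gaze_samples:
        off = 0.0 if on_road else off + 1.0 / hz
        levels.append(alert_level(off))
    return levels

# Three seconds of continuous inattention at 10 Hz ends in a "chime".
print(watch([False] * 30)[-1])
```

    The point Fairfield makes next is that this is exactly the product X chose not to build: the watchdog only "succeeds" by nagging the driver.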

    “You’re defining success as pissing off a customer enough that they have to perpetually [pay attention],” he said. “People don’t want to do that! People have better things to do with their time in cars these days” than sit and watch the road, and the ultimate goal of the self-driving car project is to let people actually do that.

    Andrew Chatham, another principal engineer who had acted as Fairfield’s bug tracker during the ride, jumped in: “I don’t think we’d even claim that it’s impossible to solve this problem, but it’s not the problem that we want to be working on.”

    Of course, the counterpoint is that it’s still much better to be an irritated driver, being forced to keep your eyes on the road while a driver-assist system ensures that you don’t accidentally rear-end the car in front, than it is to be dead. The technology X has today is capable of feats beyond the wildest dreams of automotive safety technicians even a decade ago: even in my 10-minute jaunt round Mountain View, the car clocked a police cruiser by the lights on its roof, navigated a junction governed only by a stop sign, and carried out a tricky lane-merge in the queue for the lights. Those features could be saving lives today, rather than being held for an indeterminate future.

    “That’s entirely true,” said Chatham, “and I don’t think we want to call off anyone from what they’re doing. Our intent is not to slag them [off], but the system we have built is aimed at full autonomy, and it is therefore much more complicated than a lot of these other systems. This is not the engineeringly efficient or cost-effective way to build something that just helps you stay in your lane.”

    Fairfield, though, added a note of caution to the idea that such systems are even a desirable stepping stone. “To be clear, there’s a very complicated calculus: what are people willing to buy? How’s that going to work out? How much safety do you get? How much is that true safety, or how much is that just lulling people into a false sense of security?

    “Or maybe you’re very clear about it, but how are they going to take that or internalise it or interpret it, or how are they going to use it. And there’s a degree of uncertainty, and definitely room for people of good principle to have disagreements.”
    ‘It’s imperative that a human be behind the wheel’


  3. #3

    Other disagreements pose more existential questions for the whole project, though. John Simpson, a US consumer watchdog, has been one of the loudest voices calling on Alphabet to clarify its policy on self-driving cars as a matter of urgency, and particularly to open up about how its system works, and doesn’t work. When one of its test vehicles swiped a bus in February, for instance, the company declined to release the telemetry from inside the car, even as it was otherwise very open about the circumstances of the accident.

    Those questions bear down on Alphabet, but are ultimately a call for canny regulators to work with the company in negotiating rules for the new normal. A wild west where self-driving car companies set the rules of engagement – even in response to successful campaigning for openness – isn’t a desirable state of affairs either for the companies, who prefer to operate in a realm of certainty, or for drivers and passengers, who deserve more in the case of accidents than the obfuscatory statements released by Tesla in the wake of its first fatal crash.

    Simpson is also vehemently against the idea of a fully automatic car, taking the exact opposite stance to X. “It’s imperative that a human be behind the wheel capable of taking control when necessary. Self-driving robot cars simply aren’t ready to safely manage too many routine traffic situations without human intervention,” he said. “What the disengagement reports show is that there are many everyday routine traffic situations with which the self-driving robot cars simply can’t cope.” Which is, in a way, obviously true, and why X’s car remains a research project rather than something you can buy today.

    The question is how long that will remain true for. “The cars are really, really capable,” says Fairfield, “and the rate at which they’re getting better is actually increasing.”

    When will it be good enough that they, at least, are happy with it hitting the streets without a fallback? “Not too long.”

    Tesla Auto Pilot Car Crash

    Last edited by 5ms; 23-08-2017 at 13:41.

  4. #4

    ICYMI: A Self-driving car can be easily hacked by just putting stickers on road signs

    A team of experts showed that a simple sticker attached to a sign board can confuse a self-driving car and potentially lead to an accident.

    Pierluigi Paganini
    August 10, 2017

    We have discussed car hacking many times; it is a scary reality, and the numerous hacks devised by security experts demonstrate that it is possible to compromise modern connected cars.

    The latest hack demonstrated by a team of experts is very simple and effective: a sticker attached to a sign board can confuse a self-driving car and potentially lead to an accident.

    The hack was devised by a group of researchers from the University of Washington, who explained that an attacker can print stickers and attach them to a few road signs to deceive “most” autonomous cars into misinterpreting the altered signs.

    The sign alterations in the tests performed by the researchers were very small; although they can go unnoticed by humans, the algorithm used by the camera’s software interpreted the road signs in the wrong way.

    The problem affects the image recognition system used by most self-driving cars, as explained in a research paper titled “Robust Physical-World Attacks on Machine Learning Models.”

    “Given these real world challenges, an attacker should be able to account for the above changes in physical conditions while computing perturbations, in order to successfully physically attack existing road sign classifiers. In our evaluation methodology, we focus on three major components that impact how a road sign is classified by, say, a self-driving car.” reads the paper.

    The experts demonstrated different tricks to interfere with the mechanisms implemented in modern self-driving cars to read and classify road signs, just using a color printer and a camera.

    In the Camouflage Graffiti Attack, the experts simply added stickers with the words “Love” and “Hate” onto a “STOP” sign. The autonomous car’s image-detecting algorithms failed to read the sign correctly and interpreted it as a Speed Limit 45 sign in 100 percent of test cases.

    A similar camouflage was tested on a RIGHT TURN sign, and the cars wrongly classified it as a STOP sign in 66 percent of cases.

    The researchers also tried a Camouflage Abstract Art Attack by applying smaller stickers onto a STOP road sign. In this way, the camouflage interfered with the car systems, which interpreted the road sign as street art 100 percent of the time.

    “Our attack reports a 100% success rate for misclassification with 66.67% of the images classified as a Stop sign and 33.7% of the images classified as an Added Lane sign. It is interesting to note that in only 1 of the test cases was the Turn Right in the top two classes.” reads the paper. “In most other cases, a different warning sign was present. We hypothesize that given the similar appearance of warning signs, small perturbations are sufficient to confuse the classifier. In future work, we plan to explore this hypothesis with targeted classification attacks on other warning signs.”

    The experts did not reveal the manufacturer of the self-driving car they used in their tests; in any case, their research demonstrates the importance of improving the safety and security of such vehicles.
    Last edited by 5ms; 23-08-2017 at 14:09.

  5. #5

    Slight Street Sign Modifications Can Completely Fool Machine Learning Algorithms

    Evan Ackerman
    4 Aug 2017

    It's very difficult, if not impossible, for us humans to understand how robots see the world. Their cameras work like our eyes do, but the space between the image that a camera captures and actionable information about that image is filled with a black box of machine learning algorithms that are trying to translate patterns of features into something that they're familiar with. Training these algorithms usually involves showing them a set of different pictures of something (like a stop sign), and then seeing if they can extract enough common features from those pictures to reliably identify stop signs that aren’t in their training set.

    This works pretty well, but the common features that machine learning algorithms come up with generally are not “red octagons with the letters S-T-O-P on them.” Rather, they're looking for features that all stop signs share but that would not be in the least bit comprehensible to a human looking at them. If this seems hard to visualize, that's because it reflects a fundamental disconnect between the way our brains and artificial neural networks interpret the world.

    The upshot here is that slight alterations to an image that are invisible to humans can result in wildly different (and sometimes bizarre) interpretations from a machine learning algorithm. These "adversarial images" have generally required relatively complex analysis and image manipulation, but a group of researchers from the University of Washington, the University of Michigan, Stony Brook University, and the University of California Berkeley have just published a paper showing that it's also possible to trick visual classification algorithms by making slight alterations in the physical world. A little bit of spray paint or some stickers on a stop sign were able to fool a deep neural network-based classifier into thinking it was looking at a speed limit sign 100 percent of the time.

    Here's an example of the kind of adversarial image we're used to seeing:

    An image of a panda, when combined with an adversarial input, can convince a classifier that it’s looking at a gibbon.

    Obviously, it's totally, uh, obvious to us that both images feature a panda. The differences between the first and third images are invisible to us, and even when the alterations are shown explicitly, there's nothing in there that looks all that much like a gibbon. But to a neural network-based classifier, the first image is probably a panda while the third image is almost definitely a gibbon. This kind of thing also works with street signs, causing signs that look like one thing to us to look like something completely different to the vision system of an autonomous car, which could be very dangerous for obvious reasons.
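    The panda/gibbon effect can be illustrated with a toy numpy sketch of the underlying "fast gradient sign" idea: for a linear classifier, nudging every pixel by a tiny, sign-aligned amount flips the predicted class even though no pixel changes much. The classifier weights and "image" below are synthetic, not anything from the paper:

```python
import numpy as np

# Toy fast-gradient-sign demo on a synthetic linear classifier.
# A tiny per-pixel perturbation (at most eps) flips the predicted class,
# mirroring how an invisible change turns a "panda" into a "gibbon".

rng = np.random.default_rng(0)
w = rng.normal(size=1024)           # linear classifier: score = w . x
x = rng.normal(size=1024)           # synthetic "clean image"
x -= w * (w @ x) / (w @ w)          # project the image near the boundary...
x += 0.1 * w / np.linalg.norm(w)    # ...then step just onto the positive side

eps = 0.2                           # max per-pixel change (visually negligible)
x_adv = x - eps * np.sign(w)        # move each pixel against the class score

print((w @ x) > 0, (w @ x_adv) > 0)   # the prediction flips
print(np.max(np.abs(x_adv - x)))      # yet no pixel moved more than eps
```

    Real attacks do the same thing with the gradient of a deep network's loss instead of fixed linear weights, which is why the perturbation looks like structured noise rather than anything gibbon-like.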

    Top row shows legitimate sample images, while the bottom row shows adversarial sample images, along with the output of a deep neural network classifier below each image.

    Adversarial attacks like these, while effective, are much harder to do in practice, because you usually don't have direct digital access to the inputs of the neural network you're trying to mess with. Also, in the context of something like an autonomous car, the neural network has the opportunity to analyze a whole bunch of images of a sign at different distances and angles as it approaches. And lastly, adversarial images tend to include introduced features over the entire image (both the sign and the background), which doesn't work in real life.

    What's novel about this new technique is that it's based on physical adversarial perturbations: altering road signs in the real world in such a way that they reliably screw up neural network classifiers from multiple distances and angles while remaining discreet enough to be undetectable to casual observers. The researchers came up with several techniques for doing this, including subtle fading, camouflage graffiti, and camouflage art. Here's how the perturbed signs look when printed out as posters and stuck onto real signs:

    Subtle perturbations cause a neural network to misclassify stop signs as speed limit 45 signs, and right turn signs as stop signs.

    And here are two attacks that are easier to manage on a real-world sign, since they're stickers rather than posters:

    Camouflage graffiti and art stickers cause a neural network to misclassify stop signs as speed limit 45 signs or yield signs.

    Because the stickers have a much smaller area to work with than the posters, the perturbations they create have to be more significant, but it's certainly not obvious that they're not just some random graffiti. And they work almost as well. According to the researchers:

    The Stop sign is misclassified into our target class of Speed Limit 45 in 100% of the images taken according to our evaluation methodology. For the Right Turn sign… Our attack reports a 100% success rate for misclassification with 66.67% of the images classified as a Stop sign and 33.7% of the images classified as an Added Lane sign. [The camouflage graffiti] attack succeeds in causing 73.33% of the images to be misclassified. In [the camouflage abstract art attack], we achieve a 100% misclassification rate into our target class.

    In order to develop these attacks, the researchers trained their own road sign classifier in TensorFlow using a publicly available, labeled dataset of road signs. They assumed that an attacker would have “black box” access to the classifier, meaning that they can't mess with its training or its guts, but that they can feed things in and see what comes out — like if you owned an autonomous car, and could show it whatever signs you wanted and see if it recognized them or not, a reasonable assumption to make. Even if you can't hack directly into the classifier itself, you could still use this feedback to create a reasonably accurate model of how it classifies things. Finally, the researchers take the image of the sign you want to attack and feed it, plus their classifier, into an attack algorithm that outputs the adversarial image for you. Mischief managed.
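    That pipeline (query the target, fit a surrogate from its answers, then optimize a perturbation confined to a sticker-shaped mask) can be sketched in miniature. Everything below is a toy stand-in for the paper's TensorFlow setup: the "car's classifier" is a hidden linear model we can only query, and the "sticker" is just a pixel mask:

```python
import numpy as np

# Toy sketch of the attack pipeline: (1) fit a surrogate model from
# query-only access to a hidden classifier, (2) craft a perturbation that
# only touches a sticker-shaped region. All models and sizes are invented.

rng = np.random.default_rng(1)
D = 256
secret_w = rng.normal(size=D)                # hidden classifier weights

def target_classifier(x):
    """Query-only access: we see the label, never the weights."""
    return int(secret_w @ x > 0)

# 1) Fit a surrogate by querying the target on random probe images.
probes = rng.normal(size=(2000, D))
labels = 2.0 * np.array([target_classifier(p) for p in probes]) - 1.0
w_hat, *_ = np.linalg.lstsq(probes, labels, rcond=None)   # surrogate weights

# 2) Craft a perturbation confined to a sticker-shaped mask.
mask = np.zeros(D)
mask[:32] = 1.0                              # the "sticker" covers 32 pixels
sign_img = 0.05 * secret_w                   # a sign the target reads correctly
delta = -5.0 * np.sign(w_hat) * mask         # push the surrogate score down

print(target_classifier(sign_img), target_classifier(sign_img + delta))
```

    The key point the sketch preserves: the attacker never reads the target's weights, only its outputs, and the perturbation is zero everywhere outside the sticker region — yet the surrogate's gradient direction transfers well enough to flip the hidden model's label.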

    It's probably safe to assume that the classifiers used by autonomous cars will be somewhat more sophisticated and robust than the one that these researchers managed to fool so successfully. (It used only about 4,500 signs as training input.) It's probably not safe to assume that attacks like these won't ever work, though, because even the most sophisticated deep neural network-based algorithms can be really, really dumb at times for reasons that aren't always obvious. The best defense is probably for autonomous cars to use a multi-modal system for road sign detection, for the same reason that they use multi-modal systems for obstacle detection: it's dangerous to rely on just one sensor (whether it's radar, lidar, or cameras), so you use them all at once and hope that they cover for each other's specific vulnerabilities. Got a visual classifier? Great, make sure to couple it with some GPS locations of signs. Or maybe add in something like a dedicated red-octagon detection system. My advice, though, would be just to do away with signs altogether, at the same time that you do away with human drivers and just give the roads over completely to robots. Problem solved.
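    The multi-modal cross-check suggested above amounts to a simple fusion rule: trust the camera only when an independent source agrees. Here is a minimal sketch, assuming a hypothetical GPS map of known sign locations and a fail-safe vote (the coordinates, tolerance, and labels are all invented):

```python
# Illustrative sketch of multi-modal sign detection: cross-check the vision
# classifier against an independent source (a hypothetical GPS sign map)
# before acting on its label. Map contents and tolerance are invented.

SIGN_MAP = {(37.4220, -122.0841): "stop"}   # mapped sign: (lat, lon) -> label

def fuse(camera_label: str, gps_fix, sign_map=SIGN_MAP, tol=1e-3) -> str:
    """Return the fused decision; disagreements fall back to caution."""
    for (lat, lon), mapped in sign_map.items():
        if abs(lat - gps_fix[0]) < tol and abs(lon - gps_fix[1]) < tol:
            if mapped == camera_label:
                return camera_label          # both sources agree
            return "stop"                    # conflict: fail safe, assume stop
    return camera_label                      # no map data: trust the camera

# A stickered stop sign misread as "speed limit 45" is overridden by the map.
print(fuse("speed limit 45", (37.4220, -122.0841)))
```

    The design choice worth noting is the conflict branch: when the sensors disagree, the system defaults to the most conservative action rather than averaging, which is exactly why a sticker that fools only the camera stops being dangerous.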

    Robust Physical-World Attacks on Machine Learning Models, by Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, and Dawn Song from the University of Washington, the University of Michigan Ann Arbor, Stony Brook University, and the University of California Berkeley, can be found on arXiv.

  6. #6

    Inside Waymo's Secret World for Training Self-Driving Cars

    Waymo's secret testing and simulation facilities. An exclusive look at how Alphabet understands its most ambitious artificial intelligence project

    Alexis C. Madrigal

  7. #7

    Uber: Tesla is lying about millions of miles without incident

    Travis Kalanick’s text messages with former employee Anthony Levandowski reveal an obsession with Tesla and Google

    Johana Bhuiyan
    Aug 15, 2017

    Alphabet’s lawsuit against Uber has been revealing, to say the least. Now Uber has filed a series of texts between former CEO Travis Kalanick and the company’s former head of self-driving, Anthony Levandowski, that shed some light on the duo’s dynamic as well as their priorities.

    After reading the texts, it’s hard not to come away with one big thing: Kalanick and Levandowski were very interested — if not obsessed — with what their potential competitors in the self-driving industry were doing.

    Specifically, Levandowski and Kalanick spent a good chunk of their time discussing Google as well as Tesla.

    Remember, Alphabet is suing Uber for trade secret misappropriation, alleging that Levandowski downloaded 14,000 files from Alphabet and brought them to Uber. While Kalanick conceded in his deposition that Levandowski admitted to downloading those files to use when he worked from home, Uber says the files never made it to the company’s servers.

    It’s not exactly a shock that these two people — known to be hypercompetitive — or anyone in the self-driving world would be keeping a close eye on potential rivals. But it is interesting to see some of the tactics Levandowski suggested they employ to, for instance, counter Tesla.

    In one text dated Sept. 22, 2016, Levandowski suggested that they create a Twitter account called “@FakeTesla” to dispute things Tesla CEO Elon Musk said about autonomous technology.

    Yo! I’m back at 80%, super pumped ... we’ve got to start calling Elon on his shit. I’m not on social media but let’s start “faketesla” and start giving physics lessons about stupid shit Elon says like this: “we do not anticipate using lidar. Just to make it clear, lidar essentially is active photon generation in the visible spectrum — radar is active photon generation in essentially the radio spectrum. But lidar doesn’t penetrate intrusions so it does not penetrate rain, fog, dust and snow, whereas a radar does. Radar also bounces and lidar doesn’t bounce very well. You can’t do the “look in front of the car in front of you” thing. So I think the obvious thing is to use radar and not use lidar.”

    The photons stop acting like photons at 77Ghz we at least need the geeks on our side and start calling the BS out. Any objections?

    It’s unclear what Kalanick’s response was, as it doesn’t appear that all the texts have been included. Tesla declined to comment.

    Just days before that, Levandowski claimed Musk was lying about how many miles Tesla had driven without incidents and shared a link with Kalanick.

    Watch first 45seconds ... Tesla crash in January which implies Elon is lying about millions of miles without incident. We should have LDP on tesla just to catch all the crashes that are going on. Got this from ford who's debating call him out on his shit

    Levandowski in particular mentions Tesla a number of times, saying last October that he was meeting with Tesla people to “get more info.” That was on Oct. 19, when Tesla announced that all its cars would be produced with hardware that will eventually enable fully self-driving capabilities.

    “I'm still with the tesla guys and will try to get more info,” Levandowski texted Kalanick.

    (Levandowski was at a dinner with 80 or so industry representatives and regulators, including Tesla’s former head of Autopilot, Sterling Anderson. Other attendees included Paul Hemmersbaugh from General Motors; Stefan Heck, CEO of auto-tech startup Nauto; and James Kuffner of the Toyota Research Institute.)

    The preoccupation with Tesla persisted throughout the conversation. Levandowski shared a number of articles on Tesla with Kalanick. One detailed the company’s plans to build self-driving trucks, which would directly compete with Levandowski’s company, Otto. Another was about a Tesla crashing.

    But Levandowski and Kalanick’s obsession didn’t end there. Naturally, both were keen on keeping tabs on Levandowski’s former employer, Google (which is also — adding more complexity to the situation — an Uber investor).

    In one text, Levandowski expresses his concern over Google integrating its mapping service Waze into all cars with Android Auto.

    “This scares the shit out of me,” he wrote, linking to an article.

    Separately, Kalanick and Levandowski talked about someone they refer to as “JK.” Kalanick said JK agreed to meet him, and Levandowski said to ask him about Waze and self-driving cars. Uber could not provide clarity on who JK is, but based on context, it appears they are referring to John Krafcik, the CEO of Alphabet’s self-driving arm, Waymo.

    The conversation occurred a few months before Uber announced it was acquiring Otto in August 2016, but three months after Levandowski left Alphabet and founded Otto.

    The conversation, as is included in the filing, reads:

    5/20/2016: Kalanick: FYI, jk agreed to meet
    5/20/2016: Kalanick: Super quick response
    5/20/2016: Kalanick: Which is saying something
    5/25/2016: Levandowski: Ask him about waze and self driving cars
    5/25/2016: Levandowski: You should get a demo too

    The company as a whole was keeping close tabs on Google.

    At the end of October, Uber’s head of global expansion Austin Geidt texted Kalanick, saying she had heard that a local Phoenix news organization was talking to Google about something related to autonomous developments.

    “The station is hearing google out today on what they’re pitching but the person now said doesn’t think it’s next week thank god but coming up,” Geidt wrote. “After they meet will try and get more on timing which is everything, but sound like next week isn’t right so we might have some time.”

    As Recode first reported, Uber was preparing to launch its own self-driving pilot in Phoenix around the same time it rolled out its first set of semi-autonomous cars in Pittsburgh. Uber eventually launched in Phoenix after it was forced out of San Francisco due to regulatory hurdles.

    Winning against Google was a theme that came up a number of times. Quoting from a newspaper review of self-driving cars, Levandowski texted, “best quote so far.”

    “In some ways, Uber's self-driving car works better than Google’s. Having now tested out both, I can say firsthand that Uber's car is better at accelerating and braking like a real human being."

    There were, of course, mentions of other competitors — General Motors, self-driving startup NuTonomy and Lyft.

    Interestingly, in December 2016, Levandowski introduced Kalanick to David Estrada. It appears that Estrada was the former vice president of government relations at Lyft, and before that was a legal director at Google X — where Alphabet’s self-driving arm operated until recently, before it spun out as Waymo.

    The conversation included in the texts is fairly innocuous, as Estrada and Kalanick continued their discussion over the phone.

    We’ve reached out to Estrada for clarity.

  8. #8

    Alphabet could use Benchmark’s lawsuit against Uber in its own lawsuit against Uber

    Benchmark partner and former Uber board member Bill Gurley is scheduled to be deposed by Alphabet attorneys later this month.

    Johana Bhuiyan
    Aug 10, 2017

    Benchmark Capital may have just given another Uber investor some legal ammo by filing a bombshell complaint against the company and its former CEO Travis Kalanick. Central to Benchmark’s allegations that Kalanick committed fraud and breach of fiduciary duty is Alphabet’s self-driving lawsuit against Uber.

    According to the complaint, Kalanick did not disclose to the board what he knew about Alphabet’s allegations of trade secret misappropriation before the board signed off on Uber’s acquisition of self-driving startup Otto.

    “In sum, the Waymo lawsuit presents significant legal, financial, and reputational risks to Uber — risks that could have been reduced or avoided if Kalanick had disclosed crucial facts about his own apparent knowledge at the time of the Otto acquisition,” the complaint reads. “Instead, as noted above, Kalanick repeatedly emphasized to [Bill] Gurley and others at the time that Uber’s acquisition of Otto and employment of Levandowski — who appears to have taken information from Waymo — would be transformative for Uber’s business.”

    The timing of Benchmark’s complaint may prove to be material for Alphabet and its case as the company is scheduled to depose Benchmark partner and former Uber board member Bill Gurley at the end of the month. In deposing Gurley, as well as fellow board member Arianna Huffington, Alphabet is attempting to find out what the board knew about former Uber engineer Anthony Levandowski’s alleged theft of important files.

    Alphabet is claiming Levandowski stole 14,000 files from Alphabet before starting Otto, which Uber later acquired.

    The complaint lays out in part what Gurley and the board knew and when, so it stands to reason that Alphabet will use it in its questioning of Gurley. The complaint further alleges that Kalanick tried to block the termination of Levandowski before he left, even after Alphabet sued the company. Alphabet has previously argued that Levandowski’s continued employment at Uber signaled the company was okay with his alleged infractions.

    Benchmark’s complaint also brings up a document that has become a major point of contention in the Alphabet suit.

    Before Uber acquired Otto, the company commissioned security firm Stroz Friedberg to conduct a due diligence report to assess, among other things, whether any of the employees took files from Alphabet. Benchmark claims Kalanick did not disclose the findings of that report to the board or to Gurley.

    Alphabet has asked the court to compel Uber to produce the Stroz document as part of discovery. Uber has refused. Since Levandowski asserted his Fifth Amendment rights in the case, his as well as Uber’s attorneys have argued that the document is privileged and should not be turned over to Alphabet.

    The board has since seen the document, and it’s clear Benchmark, at least, thinks it would have made a material difference in some of its decisions. Specifically, the complaint is seeking to reverse a 2016 decision that allowed Kalanick to create three additional seats on the board.

    “Upon information and belief, if the contents of Stroz’s interim findings and the Stroz Report had been disclosed to Benchmark at the time, they would have had a material impact on Benchmark’s decision to authorize the creation of the three new Board seats and grant control over them to Kalanick,” the complaint reads.

    The two companies expect to hash out whether Uber has to produce the Stroz report in court again tomorrow. It’s likely Alphabet will use Benchmark’s complaint to bolster its argument to obtain the document.

    “As we have long said, there is significant and direct evidence that Uber is using stolen Waymo trade secrets,” a Waymo spokesperson said in a statement. “There is also significant evidence that Uber leadership knew about Levandowski's misconduct and, rather than do the right thing, tried to conceal it.”

    Uber declined to comment.
