Uber's Self-Driving Car Accident: Did AI Fail Us?
On March 12, MIT Technology Review ran a story that started like this: "It is the year 2023, and self-driving cars are finally navigating our city streets. For the first time, one of them has hit and killed a pedestrian, with huge media coverage. A high-profile lawsuit is likely, but what laws should apply?"
Everything about the prediction was correct, except for the date. Exactly one week after the article was published, a self-driving Uber hit and killed a pedestrian in Tempe, Arizona, while operating in autonomous mode.
Though the incident is still being investigated, the commotion that ensued is an indication of how far we are from successfully integrating artificial intelligence into our critical tasks and decisions.
In many cases, the trouble isn't with AI but with our expectations and understanding of it. According to Wired, nearly 40,000 people died in road incidents last year in the United States alone, 6,000 of whom were pedestrians. But very few (if any) made headlines the way the Uber incident did.
One of the reasons the Uber crash caused such a commotion is that we generally have high expectations of new technologies, even when they're still in development. Under the illusion that pure mathematics drives AI algorithms, we tend to trust their decisions and are shocked when they make mistakes.
Even the safety drivers behind the wheel of self-driving cars let their guard down. Footage from the Uber incident showed the driver to be distracted, looking down seconds before the crash happened.
In 2016, the driver of a Tesla Model S operating in Autopilot mode died after the vehicle crashed into a truck. An investigation found the driver may have been watching a Harry Potter movie at the time of the collision.
Expectations of perfection are high, and disappointments are powerful. Critics were quick to bring Uber's entire self-driving car project into question after the incident; the company has temporarily suspended self-driving car testing in the aftermath.
AI Isn't Human
Among the criticisms that followed the crash was that a human driver would have easily avoided the incident.
"[The pedestrian] wasn't jumping out of the bushes. She had been making clear progress across multiple lanes of traffic, which should have been in [Uber's] organisation purview to option up," ane expert told CNN.
She's right. An experienced human driver likely would have spotted her. But AI algorithms aren't human.
Deep learning algorithms found in self-driving cars use numerous examples to "learn" the rules of their domain. As they spend time on the road, they classify the data they gather and learn to handle different situations. But this doesn't necessarily mean they use the same decision-making process as human drivers. That's why they might perform better than humans in some situations and fail in those that seem trivial to humans.
A perfect example is the image-classification algorithm, which learns to recognize images by analyzing millions of labeled photos. Over the years, image classification has become super-efficient and outperforms humans in many settings. This doesn't mean the algorithms understand the context of images the same way that humans do, though.
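To make that concrete, here is a minimal sketch of how such a classifier is typically queried, assuming Python with the PyTorch and torchvision libraries. The model is a standard ImageNet example and the input file name is hypothetical; this is an illustration, not anything from Uber's actual stack.

```python
# Minimal sketch: querying a pretrained image classifier.
# Assumes PyTorch and torchvision are installed; "white_dog.jpg" is a
# hypothetical input, and the model is a standard ImageNet example.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)  # trained on ~1.3M labeled photos
model.eval()

preprocess = weights.transforms()  # resize/crop/normalize as in training
image = Image.open("white_dog.jpg")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

confidence, class_id = probs.max(dim=1)
label = weights.meta["categories"][class_id.item()]
print(f"'{label}' with confidence {confidence.item():.2f}")
```

The key point is the last line: the model always returns some label with some confidence score, whether or not it has truly "understood" the scene in front of it.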
For instance, research by experts at Microsoft and Stanford University found that a deep learning algorithm trained with images of white cats believed, with a high degree of confidence, that a photo of a white dog represented a cat, a mistake a human child could easily avoid. And in an infamous case, Google's image classification algorithm mistakenly classified people of dark skin color as gorillas.
These are called "edge cases," situations that AI algorithms haven't been trained to handle, usually because of a lack of data. The Uber accident is still under investigation, but some AI experts suggest it could be another edge case.
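If edge cases stem from gaps in the training data, one common (and imperfect) mitigation is to treat low-confidence predictions as a signal to fall back to safer behavior. A minimal sketch follows; the 0.9 threshold and the fallback behavior are illustrative assumptions, not a description of any production self-driving system.

```python
# Minimal sketch: flagging potential edge cases via confidence thresholding.
# The 0.9 threshold and the deferral behavior are illustrative assumptions.
import torch

CONFIDENCE_THRESHOLD = 0.9

def classify_or_defer(model, batch, threshold=CONFIDENCE_THRESHOLD):
    """Return (class_id, confidence), or (None, confidence) for a possible edge case."""
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    confidence, class_id = probs.max(dim=1)
    if confidence.item() < threshold:
        # Low confidence can indicate an input unlike the training data:
        # defer to safer behavior (slow down, alert the safety driver).
        return None, confidence.item()
    return class_id.item(), confidence.item()
```

The limitation is worth noting: thresholding only catches uncertainty the model admits to. The white-dog-as-cat example above was a confident mistake, which is exactly what makes edge cases so hard to guard against.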
Deep learning has many challenges to overcome before it can be applied in critical situations. But its failures shouldn't deter us. We must adjust our perceptions and expectations and embrace the reality that every great technology fails during its development. AI is no different.
Source: https://sea.pcmag.com/opinion/20316/ubers-self-driving-car-accident-did-ai-fail-us