Who's responsible when an AI kills someone?
With the recent news that an autonomous Uber vehicle killed a woman crossing the street in Tempe, Arizona, this ethical question is especially timely. Here is one perspective, published in the MIT Technology Review:
Criminal liability usually requires an action and a mental intent (in legalese, an actus reus and mens rea). Kingston says Hallevy explores three scenarios that could apply to AI systems.
The first, known as perpetrator-via-another, applies when an offense has been committed by a mentally deficient person or animal, who is therefore deemed to be innocent. But anybody who has instructed that person or animal can be held criminally liable: for example, a dog owner who instructs the animal to attack another individual.
The whole article is interesting, as it delves deeper into all the possible scenarios.
A video game-playing AI beat Q*bert in a way no one’s ever seen before
Whatever the case, this doesn’t seem to be an exploit that any human has discovered before. If the AI agent could think, it would probably be wondering why it’s supposed to bother jumping on all these boxes when it’s found a much more efficient way to score points.
[Source: The Verge]
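It is worth spelling out what "scoring points" means mechanically. In reinforcement learning setups like this, the agent's only objective is the reward the emulator reports, so any glitch that yields points is, from the agent's perspective, a perfectly good strategy. Below is a minimal sketch of that score-driven loop, assuming the classic OpenAI Gym Atari environment ("Qbert-v0"); it is not the agent from the research, and the random policy is purely illustrative.

```python
# A minimal sketch of score-driven play on Q*bert, assuming the classic
# OpenAI Gym Atari environment ("Qbert-v0"). This is NOT the agent from
# the research; it only illustrates that the agent's objective is the raw
# game score, so any bug that yields points "counts".
import gym

env = gym.make("Qbert-v0")

def run_episode(policy, max_steps=10_000):
    """Play one episode and return the total score the emulator reports."""
    obs = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy(obs)               # the agent only sees pixels...
        obs, reward, done, _info = env.step(action)
        total_reward += reward             # ...and is judged only on score
        if done:
            break
    return total_reward

# A trivial random policy: even this occasionally stumbles onto
# behaviours the game's designers never intended.
random_policy = lambda obs: env.action_space.sample()
print(run_episode(random_policy))
```

Nothing in this loop encodes "jump on the boxes": the reward signal is the only feedback, which is exactly why an exploit that pays out points looks identical to intended play from the agent's point of view.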
Google’s new AI algorithm predicts heart disease by looking at your eyes
Scientists from Google and its health-tech subsidiary Verily have discovered a new way to assess a person’s risk of heart disease using machine learning. By analyzing scans of the back of a patient’s eye, the company’s software is able to accurately deduce data, including an individual’s age, blood pressure, and whether or not they smoke. This can then be used to predict their risk of suffering a major cardiac event — such as a heart attack — with roughly the same accuracy as current leading methods.
Just like the first two technological revolutions (steam and electricity), the third one (software), which we are living through now, has only just begun.
[Source: Google’s new AI algorithm predicts heart disease by looking at your eyes]
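As a rough illustration of the approach (and only that: this is a sketch, not Google and Verily's actual model, and the input size, prediction heads, and losses below are assumptions), a single deep network can be trained to map a retinal fundus photograph to several risk factors at once:

```python
# A minimal multi-task sketch: a convolutional backbone reads a retinal
# fundus photograph and several small heads predict risk factors.
# Illustrative only; not Google/Verily's model. Input size, head names,
# and training setup are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = tf.keras.Input(shape=(299, 299, 3))          # fundus image
backbone = tf.keras.applications.InceptionV3(
    include_top=False, weights=None, pooling="avg")(inputs)

# One head per risk factor mentioned in the article.
age = layers.Dense(1, name="age")(backbone)                      # regression
blood_pressure = layers.Dense(1, name="systolic_bp")(backbone)   # regression
smoker = layers.Dense(1, activation="sigmoid", name="smoker")(backbone)

model = Model(inputs, [age, blood_pressure, smoker])
model.compile(
    optimizer="adam",
    loss={"age": "mse", "systolic_bp": "mse",
          "smoker": "binary_crossentropy"},
)
model.summary()
```

The interesting design choice is the multi-task setup: one shared image backbone feeds several small heads, so the same retinal features can support predictions of age, blood pressure, and smoking status simultaneously, and those factors in turn feed a cardiac-risk estimate.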
End of the road for journalists? Robot reporter Dreamwriter from China’s Tencent churns out perfect 1,000-word news story - in 60 seconds
It seems we have managed to reveal a glimmer of self-awareness in robots.
Here’s Quartz on the matter:
Bringsjord programmed three robots to think that two of them had been given a special "dumbing pill" that would not allow them to speak. Their task was to identify which robots received the pill. When the Nao robot on the right tried to speak, it heard its voice, and its voice alone. That's when it waved its hand and said: "I know now. I was able to prove that I was not given a dumbing pill."
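The deduction itself is simple enough to sketch in a few lines. The real robots ran a formal reasoning system; the toy Python below (all names hypothetical) only captures the logical step: hearing your own voice refutes the hypothesis that you were silenced.

```python
# A toy model of the deduction in the "dumbing pill" test. The real Nao
# robots used a formal reasoning system; this sketch only captures the
# inference: "I heard myself speak, therefore I was not given the pill."
def try_to_speak(robot_was_dumbed: bool) -> bool:
    """Attempt to answer aloud and report whether a voice was heard."""
    return not robot_was_dumbed  # a dumbed robot produces no sound

def deduce(heard_own_voice: bool) -> str:
    if heard_own_voice:
        # Hearing its own voice rules out the hypothesis "I got the pill".
        return "I know now. I was not given a dumbing pill."
    # Silence is ambiguous: the robot still cannot tell which case holds.
    return "I don't know."

# The robot on the right was not dumbed, so its attempt is audible:
print(deduce(try_to_speak(robot_was_dumbed=False)))
```

The "glimmer of self-awareness" is that the robot must treat its own utterance as evidence about itself, distinguishing "I spoke" from "someone spoke", before it can update its answer.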