(I wanted my 1400th post to be on a geeky science subject, so here it is!)
How do you program regret, guilt, or remorse? How do you build a conscience? Read this article in Wired: "Robo-Ethicists Want to Revamp Asimov's 3 Laws"
Two years ago, a military robot used by the South African army killed nine soldiers after a malfunction. Earlier this year, a Swedish factory was fined after a robot machine injured one of the workers (though part of the blame was assigned to the worker). Robots have been found guilty of other, smaller offenses, such as incorrectly responding to a request. So how do you prevent problems like this from happening? Stop making psychopathic robots, say robot experts.
"If you build artificial intelligence but don’t think about its moral sense or create a conscious sense that feels regret for doing something wrong, then technically it is a psychopath," says Josh Hall, a scientist who wrote the book "Beyond AI: Creating the Conscience of a Machine".
For reference, Asimov's Three Laws of Robotics:

- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The story "Liar!" introduced a subtler definition of harm, with a telepathic robot unable to tell people anything except what they wanted to hear, for fear of harming them. Later stories introduced a Zeroth Law: a robot may not harm humanity, or, by inaction, allow humanity to come to harm. The Zeroth Law supersedes the First Law, allowing a robot to sacrifice individual humans for the good of humanity. That creates a horrible grey area where a robot could decide to turn humans into mindless sheep for the continuation of the species. The research paper linked in the article is an interesting read, and part 5 deals with Asimov's Three Laws.
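That strict precedence, with the Zeroth Law trumping the First and so on down the chain, can be sketched as an ordered veto list. This is purely an illustrative toy (all the names are invented, and the genuinely hard problem of deciding what counts as "harm" is waved away into pre-labeled booleans), but it shows why the ordering itself is the grey area:

```python
# Hypothetical sketch: Asimov's Laws as a strict priority ordering.
# Each law is a predicate that can veto a proposed action; earlier
# entries supersede later ones. The real difficulty -- recognizing
# harm at all -- is reduced here to flags already attached to the action.

LAWS = [
    ("Zeroth", lambda a: a["harms_humanity"]),   # may not harm humanity
    ("First",  lambda a: a["harms_human"]),      # may not injure a human being
    ("Second", lambda a: a["disobeys_order"]),   # must obey human orders
    ("Third",  lambda a: a["endangers_self"]),   # must protect its own existence
]

def first_violation(action):
    """Return the name of the highest-priority law the action violates,
    or None if the action is permitted under all four laws."""
    for name, violates in LAWS:
        if violates(action):
            return name
    return None

# A robot weighing an action that injures one human but not humanity:
shove = {"harms_humanity": False, "harms_human": True,
         "disobeys_order": False, "endangers_self": False}
print(first_violation(shove))  # First
```

Note that because the check stops at the first violated law, an action flagged as saving humanity would never even be tested against the First Law: exactly the loophole that lets a Zeroth-Law robot justify sacrificing individuals.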