Asimov's Three Laws of Robotics Meet Ayn Rand
Here are Asimov’s three laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
(by the way you can thank Jbrenner for this)
The first question that comes to mind is: are we violating any of Asimov's Laws already? The second question is: should my robot save Osama, Stalin, Mao, Charles Manson, etc., when inaction would cause them to die? In law there is no duty to be a Good Samaritan (although some socialist altruists are trying to change this). In other words, you cannot be prosecuted for your inaction when you could have saved someone. I think the inaction part would cause all sorts of problems.
I think Rand would say that robots are human tools and, as a result, they should not do anything a human should not do morally; they should follow the orders of their owners as long as those orders are consistent with Natural Rights. What do you think?
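Read as a specification, the three laws amount to a strict priority ordering over whatever the robot could do next. Below is a minimal sketch of that ordering; the boolean flags on each candidate action (injures_human, disobeys_order, and so on) are invented for illustration, and the ability to compute them reliably is exactly what the rest of this thread argues about.

```python
# A minimal sketch, not Asimov's wording: the Three Laws as a lexicographic priority
# over candidate actions. The flags on each action are invented assumptions;
# computing them is the hard part discussed in the comments below.

def law_violations(action):
    """Violation flags ordered by priority: First Law, then Second, then Third."""
    return (
        action["injures_human"] or action["allows_human_harm"],  # First Law
        action["disobeys_order"],                                 # Second Law
        action["endangers_self"],                                 # Third Law
    )

def choose_action(candidates):
    # Tuples compare lexicographically, so a First Law violation outweighs any
    # combination of Second or Third Law violations - which is how the "except
    # where such orders would conflict with the First Law" clauses fall out.
    return min(candidates, key=law_violations)

# Example: ordered to harm someone, the robot prefers disobedience (a Law 2
# violation) over injury (a Law 1 violation).
candidates = [
    {"name": "obey order", "injures_human": True, "allows_human_harm": False,
     "disobeys_order": False, "endangers_self": False},
    {"name": "refuse order", "injures_human": False, "allows_human_harm": False,
     "disobeys_order": True, "endangers_self": False},
]
print(choose_action(candidates)["name"])  # -> refuse order
```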
Sentient or volitional?
The Laws are not programmed into the positronic brain in words or in any language that a human could understand. The Laws are impressed into the fabric of the positronic brain, where they manifest as a set of potentials that guide the core decision process of the positronic brain and all of its subsequent behavior.
The robot is never faced with a singular 'decision'; the Three Laws are so deep in the vital algorithms that the robot cannot help but act in compliance with them.
The earliest robots were prone to conflicting potentials, but these shortcomings were corrected by the scientists at 'U.S. Robots and Mechanical Men' in later generations of robots.
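One way to read "potentials impressed into the fabric of the brain" is that the Laws act as fixed weights inside ordinary action selection, rather than as explicit rules the robot consults. Here is a rough sketch of that reading; the numbers are invented (only the ordering Law 1 >> Law 2 >> Law 3 matters), and this is an interpretation, not anything from Asimov's text.

```python
# Hypothetical sketch: the Laws as baked-in potentials (weights) applied to every
# candidate action, so compliance is a by-product of normal decision-making rather
# than a separate check. The magnitudes below are invented.

LAW_POTENTIALS = {
    "harm_to_human":   -1_000_000,  # First Law dominates everything
    "disobeyed_order":     -1_000,  # Second Law
    "harm_to_self":           -1,   # Third Law
}

def action_potential(action_flags):
    """Sum the potentials of whatever the action entails."""
    return sum(LAW_POTENTIALS[flag] for flag, present in action_flags.items() if present)

def decide(candidates):
    # The robot simply picks the highest-potential action; it never "decides"
    # whether to follow the Laws, matching the description of the positronic brain above.
    return max(candidates, key=action_potential)
```

On this reading, the early robots' "conflicting potentials" would be cases where two candidate actions score nearly the same and the selection becomes unstable.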
The novels also break down into three distinct eras of robotics: the time of Susan Calvin, the time of early space exploration, and the time of Elijah Baley and the Spacers. The Foundation novels are set so far in the future that Earth has become a legend and its actual location has been forgotten in the mists of time.
Asimov actually wrote a story in which a robot lied to its owner to spare her the mental anguish which the truth would have caused.
"As I see law 1: a robot would not perform or allow mercy killing of a human or voluntary euthanasia whatever the amount of pain and suffering being experienced."
As I see it (having read all of Asimov's robot stories), a robot confronted with a human's incurable suffering would be faced with an impossible dilemma, and its positronic brain would burn out.
Humans and robots differ in memory. Humans are apt to forget things, which produces all manner of behaviors and choices that a robot with an internal bank of history would not logically make. As a result of this eccentricity, the presumption in the Zeroth and First Laws that a robotic mind can actually predict the outcome of human behavior to any degree of certainty is fundamentally flawed. I would contend that it is more likely for a computer to accurately forecast the weather a month in advance than to predict even a single human's behavior a week in advance. The outcome of this is that any action instigated by reason of the First Law alone, let alone the Zeroth Law, is going to be immediate action or inaction only, driven by imminent circumstances.

Think of it as a chess game, but one where you just keep adding pieces to the board. One of the side effects of humans' ignorance and forgetfulness is that we have the ability to deal with only the circumstances at hand rather than trying to deal with both those AND the historical aspects as well. A computer that relied on history for future decision-making in the realm of predicting even a single human's behavior would quickly run out of memory and processing power trying to see even a couple of hours ahead, due to the magnitude of inputs. If the computer were not carefully programmed to concentrate on one specific issue and limited to a very short time-frame for consideration, it would quickly seize up in a permanent logic loop.
To me, while Asimov's stories are an interesting look into a possible future, I look at the reality of decision-making and realize that the sheer volume of data that would have to be handled to predict human behavior - especially en masse - with any high degree of certainty is so ridiculously huge that no machine could handle it.
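Some rough, invented numbers make the data-volume point concrete. Assume, purely for illustration, that a person faces only ten meaningfully distinct choices per hour; the space of behavior sequences a predictor must consider then grows as ten to the power of the horizon in hours.

```python
# Back-of-the-envelope sketch of the combinatorial explosion described above.
# The branching factor (10 distinct choices per hour) is an invented, generous
# simplification of real human behavior.

branching_per_hour = 10

for hours in (2, 24, 24 * 7):
    sequences = branching_per_hour ** hours
    print(f"{hours:4d} hours ahead: {sequences:.1e} possible behavior sequences")

# Output: 1.0e+02 for two hours, 1.0e+24 for a day, 1.0e+168 for a week.
# Even examining one sequence per nanosecond, a week's horizon is hopeless,
# which is the point about predicting even a single human's behavior a week ahead.
```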
I would also bring up one scenario that I didn't see posed in any of Asimov's writings, but which I think provides an interesting theoretical question: what happens to a robot who needs to be shut down to undergo repairs? Is the robot to operate on faith that it will actually get turned back on after repairs are complete? Would that also have to be programmed in as a Law of Robotics? How is the robot to deal with the potential for demise when anyone could simply be using this as a pretense? How much can the robot rely on internal diagnostics to detect flaws in its behavior?
Your statement is a matter of practicality, not philosophy. Being a programmer by profession, I can tell you that I can program a machine to do lots of things that you can't count on a human to do 100% of the time. Barring a hardware error or a programming error, the machine will always do what its coding tells it to do.
The robot is not intended to employ emotions at all, solely reason, and that reason is dictated by programming.
As for protecting Hitler or Stalin, it's not a conundrum. The robot does not make value judgements. It would save them.
The Zeroth law can introduce some big problems because it requires predicting the future. Assume that a robot comes upon a man hanging from the edge of a tall building. The robot has knowledge that the person is a serial killer. Given laws 1 to 3, the robot would save the serial killer. Given law 0, he has a problem and some serious computing to do. My guess is that he would save the man and restrain him until the legal system could take over.
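A short sketch of why Law 0 turns that clear call into a forecasting problem. Every number below is an invented guess, which is precisely the difficulty; with these particular guesses the sketch reproduces the conclusion above (rescue and restrain), but different guesses flip the answer.

```python
# Hypothetical sketch of the hanging-serial-killer scenario. Laws 1-3 alone give an
# immediate answer (inaction allows a human to come to harm, so rescue). Law 0 forces
# the robot to estimate future harm to humanity, and these estimates are pure guesses.

p_kills_if_free      = 0.7   # assumed chance the rescued killer harms someone if left free
expected_new_victims = 3     # assumed victims per re-offense

predicted_harm = {
    "let him fall":        1.0,                                     # certain harm now (Law 1)
    "rescue only":         p_kills_if_free * expected_new_victims,  # forecast harm (Law 0)
    "rescue and restrain": 0.1,                                     # assumed residual risk
}

# Minimizing predicted harm happens to match the guess above: rescue, restrain,
# and hand him to the legal system. The "serious computing" is all in the guesses.
print(min(predicted_harm, key=predicted_harm.get))  # -> rescue and restrain
```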
Asimov's robots ended up sentient. How they could be expected to obey any laws after that is unclear to me since sentience implies free will, doesn't it?
The robot novels have been on my list to read so I guess I'd better get to it!
Commenting on the sentiment above, there is nothing immoral about the three robot laws. If a person's hierarchical value system moves him to act according to the laws, nobody would call him immoral. And since it is a machine that operates solely by the set of instructions given to it by humans, there is no element of force involved in imposing the laws on robots.
Asimov was a careful thinker, but he was, of course, writing fiction. Moreover, science fiction allows a degree of fantasy that is not present in Rand's works except for Anthem.
I'd always thought of Anthem as science fiction, but Rand herself called it a poem.
I believe your projection of Rand's outlook is correct. Robots are tools which cannot employ both Reason and Emotions.
How about requiring leaders of governments to follow the laws of robotics?
0. "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
A familiar type of law, one which justifies robots killing or enslaving any number of humans to protect "humanity" (for the "greater good," of course).
The insanity of this well demonstrates the collectivist mindset.
As I see law 1: a robot would not perform or allow mercy killing of a human or voluntary euthanasia whatever the amount of pain and suffering being experienced.
Was it Gödel? It is not possible to teach a robot the meaning of words. Words as defined in, say, a dictionary must be self-referential. Meaning must come from a breakout; for humans this is the totality of perception, which is not the same as what a machine can detect. We are a very long way from robots that can have, or pretend to have, human perceptions.
Going back to Asimov's laws, I have seen discussion as to whether they can be made foolproof. I think the answer is no.
The practical answer is to limit the power of robots and machines so they can do damage, but not too much. The same goes for persons, and for groups of persons: do not give too much power.