Asimov’s Three Laws of Robotics Meets Ayn Rand

Posted by dbhalling 9 years, 9 months ago to Culture

Here are Asimov’s three laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
(By the way, you can thank Jbrenner for this.)

The first question that comes to mind is: are we already violating any of Asimov’s laws? The second question is: should my robot save Osama, Stalin, Mao, Charles Manson, etc., when inaction would let them die? In law there is no duty to be a Good Samaritan (although some socialist altruists are trying to change this); in other words, you cannot be prosecuted for inaction even when you could have saved someone. I think the inaction clause would cause all sorts of problems, as the sketch below illustrates.
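The inaction clause is where the laws stop being prose and become a precedence problem, so here is a minimal Python sketch of the three laws as a priority-ordered filter over candidate actions. This is purely my own illustration, not anything from the article; the Action fields and the drowning-man example are assumptions I made up for the demo.

    # Hypothetical sketch of Asimov's laws as prioritized rules.
    # All names and fields here are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool          # would this action injure a human?
        prevents_human_harm: bool  # would it save a human from harm?
        ordered_by_human: bool     # was it commanded by a human?
        endangers_robot: bool      # does it risk the robot's existence?

    def permitted(action, alternatives):
        # First Law: never act so as to harm a human, and never choose
        # inaction when some alternative would prevent human harm.
        if action.harms_human:
            return False
        if not action.prevents_human_harm and any(
            a.prevents_human_harm and not a.harms_human for a in alternatives
        ):
            return False  # "through inaction, allow a human ... to come to harm"
        return True

    def choose(actions):
        # Second Law: prefer human orders; Third Law: prefer self-preservation.
        legal = [a for a in actions if permitted(a, actions)]
        legal.sort(key=lambda a: (not a.ordered_by_human, a.endangers_robot))
        return legal[0] if legal else None

    actions = [
        Action("stand by", False, False, True, False),
        Action("rescue the drowning man", False, True, False, True),
    ]
    print(choose(actions).name)  # -> "rescue the drowning man"

Notice what the inaction clause forces: even though "stand by" was ordered by the owner and is safer for the robot, the First Law strikes it from the legal set, so the robot must rescue whoever is drowning, whether it is a child or a Stalin. That is exactly the problem raised above.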
I think Rand would say that robots are human tools, and as a result they should not do anything a human should not do morally; they should follow the orders of their owners as long as those orders are consistent with Natural Rights. What do you think?
SOURCE URL: http://www.dailymail.co.uk/sciencetech/article-2731768/Robots-need-learn-value-human-life-dont-kill-Future-droids-murder-kindness-engineer-claims.html

