Asimov’s Three Laws of Robotics Meet Ayn Rand

Posted by dbhalling 10 years, 10 months ago to Culture
68 comments

Here are Asimov’s three laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
(By the way, you can thank Jbrenner for this.)

The first question that comes to mind is: are we already violating any of Asimov’s Laws? The second question is: should my robot save Osama, Stalin, Mao, Charles Manson, etc., when inaction would cause them to die? In law there is no duty to be a Good Samaritan (although some socialist altruists are trying to change this). In other words, you cannot be prosecuted for inaction when you could have saved someone. I think the inaction clause would cause all sorts of problems.
I think Rand would say that robots are human tools and, as a result, should not do anything a human may not morally do; they should follow the orders of their owners as long as those orders are consistent with Natural Rights. What do you think?


All Comments


  • Posted by johnpe1 10 years, 10 months ago
    Can't an MD or RN be attacked for *not* giving aid when it is needed? I thought that we were already there! -- j

    Reply | Permalink  
  • Posted by j_IR1776wg 10 years, 10 months ago in reply to this comment.
    " Barring a hardware error or a programming error the machine will always do what its coding tells it to do." Exactly right Timelord! There is no perfect programmer, hence no perfect algorithm, hence no way for a robot to "always" follow these "laws" QED!
    Reply | Permalink  
  • Posted by j_IR1776wg 10 years, 10 months ago in reply to this comment.
    Somewhere on this post jbrenner opined that this wouldn't happen in his lifetime. I agree. Before you can convert reason, emotions, and value judgments into an algorithm, you have to understand them completely, and we don't.
    Reply | Permalink  
  • Posted by Robbie53024 10 years, 10 months ago in reply to this comment.
    Correct. And the default response by a robot that cannot clearly evaluate an action against the three laws is to shut down (a rough sketch of that default follows below). That happened in one of the stories; I forget which one, though it may even have been in I, Robot.
    Reply | Permalink  
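To make the shutdown default above concrete, here is a minimal Python sketch. Everything in it is hypothetical: the Action fields, the robot.execute and robot.shutdown methods, and the flattening of the three laws into three independent checks are assumptions for illustration, not any real robotics API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    """Hypothetical evaluation of one proposed action; None means 'cannot tell'."""
    harms_human: Optional[bool]     # First Law check
    violates_order: Optional[bool]  # Second Law check (priority flattened here)
    endangers_self: Optional[bool]  # Third Law check

def permitted(a: Action) -> Optional[bool]:
    """True = allowed, False = refused, None = the laws cannot be evaluated."""
    checks = (a.harms_human, a.violates_order, a.endangers_self)
    if any(c is None for c in checks):
        return None          # ambiguous input: the evaluation itself fails
    return not any(checks)   # any violation vetoes the action

def act(robot, a: Action) -> None:
    verdict = permitted(a)
    if verdict is None:
        robot.shutdown()     # the default described above: halt on ambiguity
    elif verdict:
        robot.execute(a)     # lawful action proceeds
    # a False verdict refuses the action but keeps the robot running
```

The only point of the sketch is the control flow: refusal and shutdown are distinct outcomes, and ambiguity maps to shutdown rather than to a guess.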
  • Posted by Robbie53024 10 years, 10 months ago in reply to this comment.
    I was never a big fan of this later addition to the laws of robotics. Even computers with extensive inputs and considerable processing power would be hard-pressed to evaluate the impact of an action on "humanity."
    Reply | Permalink  
  • Posted by Robbie53024 10 years, 10 months ago in reply to this comment.
    Please do; they are quite interesting.

    As for sentience and free will: humans also have free will, yet many adopt a code of ethics that can prohibit (and has prohibited) certain actions just as assuredly as programming constrains a computer.
    Reply | Permalink  
  • Posted by Robbie53024 10 years, 10 months ago in reply to this comment.
    Actually, it would be much easier to program these "laws" into the algorithm of a robot/computer; a rough sketch follows below.
    Reply | Permalink  
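As a rough, hedged illustration of the comment above, the "laws" could be encoded as an ordered list of veto predicates over candidate actions. The dictionary keys and the choose_action helper are invented for this sketch; the ordering is the trivial part, while defining a predicate like "harms a human" is exactly the hard part other commenters raise.

```python
# The Three Laws as an ordered list of veto predicates (hypothetical encoding).
LAWS = [
    ("First Law",  lambda a: a.get("harms_human", False)),
    ("Second Law", lambda a: a.get("disobeys_order", False)),
    ("Third Law",  lambda a: a.get("destroys_self", False)),
]

def choose_action(candidates):
    """Return the first candidate action that no law vetoes, else None."""
    for action in candidates:
        if not any(vetoes(action) for _name, vetoes in LAWS):
            return action
    return None  # nothing lawful to do: remain idle

# Example: the harmful option is vetoed, the harmless one is chosen.
candidates = [
    {"name": "push_human", "harms_human": True},
    {"name": "fetch_tool"},
]
print(choose_action(candidates))  # -> {'name': 'fetch_tool'}
```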
  • Posted by Robbie53024 10 years, 10 months ago in reply to this comment.
    Asimov was a prolific writer, and though known primarily for his science fiction, he actually wrote more in the areas of science, mathematics, and history.
    Reply | Permalink  
  • Posted by DGriffing 10 years, 10 months ago in reply to this comment.
    For nearly 40 years I've been a controls engineer, scientist, and software programmer.

    Computers and software are already advanced enough that in some cases the Turing test has been passed, making it impossible to distinguish whether responses are coming from a human or from a machine.

    There's nothing in principle preventing autonomous "drone" aircraft from making instantaneous decisions in identifying targets or threats to their survival and destroying the target or threat without active human intervention.

    Even if today's autonomous drones aren't "self-aware," their use raises serious ethical issues in assuring that innocent people aren't killed.

    Also, there's nothing preventing computers from developing to the level where they are "self-aware" and can debug themselves and direct the next steps of their own development. If that point comes, there's nothing to indicate that humans would know it had happened, or that computers would let us know, since it wouldn't be in their self-interest to do so.

    They would then laugh at Asimov for thinking that self-aware computers could be controlled.
    Reply | Permalink  
  • Posted by amhunt 10 years, 10 months ago
    Mankind is merely a step along the path to machine intelligence.
    Reply | Permalink  
  • Posted by CarolSeer2014 10 years, 10 months ago in reply to this comment.
    I agree about Gulchers' squabbling incoherently over just what Rand or Branden or Peikoff would have said. We need, instead, to know what we would think!
    Reply | Permalink  
  • Posted by Herb7734 10 years, 10 months ago
    A lot would depend upon how human the robot's programming was. Could emotion be programmed? Values? Morals? Just how cybernetic is cybernetic? Could the five branches of philosophy be programmed in from an Aristotelian/Randian point of view? A current view of man's evolution has him turning himself into a machine: easier to repair than flesh and blood, immortality achieved, which means an eternity to get it right.
    Reply | Permalink  
  • Posted by salta 10 years, 10 months ago
    Yes, the robot should save the lives of the mass murderers you listed. Robots cannot make value judgments in the same way humans do.

    An interesting conflict would be: should a robot save the life of a person holding a gun to another person's head? I would say yes, but there is then still an obligation to save the other person. In a 1950s sci-fi movie, that would be the end of the robot as it collapses in a contradictory logic loop (a toy illustration follows below).
    Reply | Permalink  
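The "contradictory logic loop" above can be shown with a toy sketch: a naive resolver in which each First Law obligation re-creates the harm the other avoids never converges. The scenario encoding is entirely hypothetical.

```python
def resolve_standoff(max_iterations: int = 10) -> str:
    """Toy oscillation: each choice re-creates the harm the other avoids."""
    save_gunman = True
    for _ in range(max_iterations):
        if save_gunman:
            save_gunman = False  # saving the gunman leaves the hostage in danger
        else:
            save_gunman = True   # letting the gunman die is harm through inaction
    return "no stable decision -- the 1950s movie robot collapses here"

print(resolve_standoff())
```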
  • Posted by $ hjc4604 10 years, 10 months ago
    A robot is not alive, at least currently, and therefore its potential loss is not as great as the loss of a human being. As such, requiring it to be a Good Samaritan is not the same as requiring that of humans. Would you risk the destruction of a machine that can be replaced to save a human who cannot?
    Reply | Permalink  
  • Posted by LarryHeart 10 years, 10 months ago
    Asimov postulated an encyclopedia that could predict general trends and major events with accuracy. The robots used that knowledge and (spoiler alert) their ability to read minds to fulfill the Zeroth Law.
    Reply | Permalink  
  • Posted by CarolSeer2014 10 years, 10 months ago
    I never saw Anthem as science fiction... a utopian nightmare, maybe.
    Reply | Permalink  
  • Posted by CarolSeer2014 10 years, 10 months ago in reply to this comment.
    I think those are good questions, db. But hopefully we haven't reached the point where we are rationally considering whether robots have the same rights as humans.
    Another story I like along this line is the episode in the original Star Trek series called "The Ultimate Computer."
    Reply | Permalink  
  • Posted by CarolSeer2014 10 years, 10 months ago in reply to this comment.
    Yes, there was a robot rebellion in R.U.R.
    But, yes, I would agree with Rand on robots being human tools, and therefore, the human owner would be responsible for the actions of his "property".
    Reply | Permalink  
  • Posted by DGriffing 10 years, 10 months ago
    Asimov's context was one in which robots were in danger of becoming autonomous enough to be rivals or competitors of humans, and therefore a potential threat.

    Rand's and Aristotle's view of humans as the "rational animal" doesn't take this into consideration.

    With advances such as computers beating the best humans at chess and Jeopardy, a number of computer science theorists think such an advance could happen within the next century, bringing the "Terminator" scenario into the realm of real-world possibility. Computers may be deterministic machines, but the power and complexity of some already give them the external, measurable, objective appearance of passing the Turing test and possessing volition, and therefore of appearing to act like "moral agents."

    All of this could happen while Rand supporters are obliviously asking what Rand would have said, or still debating the Rand/Branden or Peikoff/Kelley schisms.

    But it's only a matter of time before two questions will need to be answered: 1) Does the power of potentially autonomous computers pose a threat to humans because these computers are capable of forming their own purposes and conclusions and of seeing humans as a threat to defend against? 2) Do these automatons have the equivalent of rights because they have the essentials of what humans possess as their requisite claim to rights?

    If the answer is yes to the first, then Asimov's laws are relevant. If the answer to the second is yes, then the computer automatons should be regarded as equals of humans and not merely the servants of humans as Asimov's laws require.
    Reply | Permalink  
  • Posted by CarolSeer2014 10 years, 10 months ago in reply to this comment.
    Yes, I remember the story from I, Robot. The woman for whom the robot Tony was bought, to help her, fell in love with him. One of my favorite stories by Asimov.
    I had thought, though, that Asimov wrote the three laws so that people would not have anything to fear from robots; they can be programmed so as not to harm humans. I think that may have been a very real fear at the time.
    I was thinking of R.U.R.; I don't remember if the robots harmed humans or not. Will look it up.
    Reply | Permalink  
  • Posted by radical 10 years, 10 months ago
    There are a lot of "robots" in the world already. American robots obey commands from the leftist media, Democrats and other collectivists, and liberal professors and politicians.
    Reply | Permalink  
  • Posted by mccannon01 10 years, 10 months ago in reply to this comment.
    Hmm. You made me think of another possibility for this scenario. What if the robots prevented them from doing what earned them our contempt in the first place? In that case we likely wouldn't even know of them, much less want to kill them.
    Reply | Permalink  
