Asimov’s Three Laws of Robotics Meet Ayn Rand

Posted by dbhalling 9 years, 8 months ago to Culture
68 comments

Here are Asimov’s three laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
(by the way you can thank Jbrenner for this)

The first question that comes to mind is: Are we violating any of Asimov’s laws already? The second question is: Should my robot save Osama, Stalin, Mao, Charles Manson, etc., when inaction would cause them to die? In law there is no duty to be a Good Samaritan (although some socialist altruists are trying to change this). In other words, you cannot be prosecuted for inaction when you could have saved someone. I think the inaction clause would cause all sorts of problems.
I think Rand would say that robots are human tools and, as a result, should not do anything a human should not do morally; they should follow the orders of their owners as long as those orders are consistent with Natural Rights. What do you think?
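
To make the priority structure concrete, here is a minimal sketch (an editorial illustration, not anything from Asimov or from this post) that reads the three laws as a lexicographic ordering over candidate actions, with "do nothing" treated as just another action so the inaction clause becomes visible. Every name and predicate below is a hypothetical assumption, not a real robotics API.

```python
# A sketch of the Three Laws as a lexicographic priority over candidate
# actions. All names here are illustrative assumptions.
from dataclasses import dataclass
from typing import Iterable

@dataclass
class Action:
    name: str
    injures_human: bool    # First Law, first clause: direct harm
    allows_harm: bool      # First Law, second clause: harm through inaction
    disobeys_order: bool   # Second Law: disobeying a human's order
    destroys_robot: bool   # Third Law: failing to protect its own existence

def law_costs(a: Action) -> tuple:
    # Lower is better; the First Law outranks the Second, which outranks the Third.
    return (a.injures_human or a.allows_harm, a.disobeys_order, a.destroys_robot)

def choose(actions: Iterable[Action]) -> Action:
    return min(actions, key=law_costs)

# The post's question: must the robot save a Stalin or a Manson? Under this
# reading, "do nothing" violates the inaction clause no matter who the
# endangered human is, so the robot must rescue, even against its owner's
# order and at the cost of its own existence.
options = [
    Action("do nothing", injures_human=False, allows_harm=True,
           disobeys_order=False, destroys_robot=False),
    Action("rescue", injures_human=False, allows_harm=False,
           disobeys_order=True, destroys_robot=True),
]
print(choose(options).name)  # -> rescue
```

Read this way, the inaction clause leaves no room for the no-duty-to-rescue position: any harm the robot can reach obligates it, whoever the victim is.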


All Comments

  • Posted by Robbie53024 9 years, 8 months ago in reply to this comment.
    A robot, or a human, can act only on what it observes and can extrapolate from those observations. Neither a robot nor a human can read a person's mind or act on information that is unobservable; an action regarding Hitler would therefore require observation of his actions or an understanding of his intentions. Under the three laws, a robot would default to acting only on what it had observed, and then only in a manner that harmed no human, even if refraining meant harm to other humans. Thus, observing or understanding the intent of one human to harm another would call on the robot to stop that action without harming any human. In the case of Hitler, that would mean disabling his ability to harm, not shooting him. This is a rational and good outcome, not a failing.
  • Posted by johnpe1 9 years, 8 months ago in reply to this comment.
    I spent years working with knowledgeware, a software program which writes software. After the tedious process of defining business rules in a precise way, we would run the thing and find out that we needed to provide more detail. Between that exercise and the specification of manufacturing processes in workstream, the successor to "opt", I got thoroughly seasoned. Sophisticated computer programs are hard to write.

    It may be a while before we learn whether self-awareness implies free will. -- j

  • Posted by j_IR1776wg 9 years, 8 months ago in reply to this comment.
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    By this law the robot would not have shot Hitler. But through that inaction the robot would have allowed the deaths of 50 million humans. Hence the law must be contradictory, and contradictions are not allowed in the Gulch. Neither Man nor Robot can comply with it.
  • Posted by CarolSeer2014 9 years, 8 months ago in reply to this comment.
    But how would any man or any robot know what was going to happen in the future? Was there supposed to be an explication of all possible futures on the part of the one with the rifle, in one instant?
  • Posted by j_IR1776wg 9 years, 8 months ago in reply to this comment.
    I read a story a long time ago that at the end of WW1 a British soldier walked into a forest clearing and found himself face-to-face with an unarmed German soldier. He leveled his rifle, thought about it, and told the soldier to scram. The German was Adolf Hitler. What would a militarized robot programmed according to Asimov's laws have done? Should the British soldier have been tried 30 years later for not shooting Hitler?
  • Posted by CarolSeer2014 9 years, 8 months ago in reply to this comment.
    What difference would it make, Timelord, whether it was programmed or a part of their construction? I've had a course in COBOL, just one, so it didn't tell me too much.
  • Posted by j_IR1776wg 9 years, 8 months ago in reply to this comment.
    Agreed. I began with COBOL, which virtually never abended. AI is an attempt to replicate our brains.
  • Posted by Timelord 9 years, 8 months ago in reply to this comment.
    Sure, but adopting the code of ethics was done freely. Their choice to do/not do within their ethical framework is still exercising free will.

    And those ethics can be modified for rational or irrational reasons.
  • Posted by Timelord 9 years, 8 months ago in reply to this comment.
    Another thought: there are many perfect algorithms. With well-defined data sets you can search and sort with 100% certainty and write a program that will never fail. But those algorithms are simple. When you're talking about AI, nothing is simple. (A small sketch of such an algorithm follows below.)
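A minimal sketch of the "perfect algorithm" point in the comment above (an editorial illustration, not from the commenter): on a well-defined input such as a sorted list, binary search has a simple invariant and succeeds on every case, which is exactly the kind of guarantee that evaporates once the problem is as open-ended as AI.

```python
def binary_search(sorted_items, target):
    """Return an index of `target` in `sorted_items`, or -1 if absent.

    Invariant: if `target` is present, its index lies in [lo, hi].
    On a well-defined input (a sorted sequence of comparable items) this
    terminates and is correct in every case -- a "perfect algorithm" in
    the comment's sense.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

assert binary_search([1, 3, 5, 8, 13], 8) == 3
assert binary_search([1, 3, 5, 8, 13], 4) == -1
```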
  • Posted by Timelord 9 years, 8 months ago in reply to this comment.
    I'm not sure. That's a complicated philosophical question, so I'll posit that to be truly self-aware you would probably have to have free will. A modern computer of our times, if asked what it was, might answer "Just a machine." But with today's technology that would be meaningless, because the "self-awareness" would just be a programmed response. I think that for a machine to become truly self-aware, it must progress beyond its programming.
  • Posted by Timelord 9 years, 8 months ago in reply to this comment.
    Yes, but remember we're talking sci-fi, not reality. If you want to enjoy a sci-fi novel then you have to be willing to accept its version of reality.

    Another commenter explained that the 3 laws weren't programmed into the positronic brains but were in some way a part of their very construction. I'll find out what Asimov intended when I read the novels.
  • Posted by $ jlc 9 years, 8 months ago in reply to this comment.
    I have read that there is a law in Australia that requires someone passing a stalled vehicle to stop and offer help.

    Jan
  • Posted by $ jlc 9 years, 8 months ago in reply to this comment.
    MikeMarotta - Most folks do not even remember Cordwainer Smith's excellent works. Point. (I find such contrasts as this thread a crucible in which some illuminating ideas can surface.)

    Jan
  • Posted by $ jbrenner 9 years, 8 months ago in reply to this comment.
    There is quite an interesting book entitled "Moral Machines: Teaching Robots Right from Wrong" by Wendell Wallach and Colin Allen. I got an autographed copy from Wendell Wallach.
  • Posted by $ MikeMarotta 9 years, 8 months ago
    Frank Herbert's "Dune" versus Atlas Shrugged... Larry Niven's "Kzinti: Rational Cats versus Rational Man." Cordwainer Smith versus Ayn Rand (speaking of cats...). William Gibson's cyberpunk versus Galt's Gulch. A. E. Van Vogt's "World of Null-A" versus Aristotle and Ayn Rand. The Carl Sagan-endorsed "Scooby-Doo" versus mysticism.
  • Posted by 9 years, 8 months ago in reply to this comment.
    The point was not that robots have the same rights; it was that a robot should be constrained to do only those things that are consistent with the natural rights of its owner.
