Asimov’s Three Laws of Robotics Meet Ayn Rand

Posted by dbhalling 9 years, 8 months ago to Culture

Here are Asimov’s three laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
(By the way, you can thank Jbrenner for this.)

The first question that comes to mind is: Are we already violating any of Asimov’s laws? The second question is: Should my robot save Osama, Stalin, Mao, Charles Manson, etc., when inaction would cause them to die? In law there is no duty to be a good Samaritan (although some socialist altruists are trying to change this). In other words, you cannot be prosecuted for inaction when you could have saved someone. I think the inaction clause would cause all sorts of problems.
I think Rand would say that robots are human tools and, as a result, should not do anything a human could not morally do; they should follow the orders of their owners as long as those orders are consistent with Natural Rights. What do you think?
SOURCE URL: http://www.dailymail.co.uk/sciencetech/article-2731768/Robots-need-learn-value-human-life-dont-kill-Future-droids-murder-kindness-engineer-claims.html



  • Posted by gafisher 9 years, 8 months ago
    Bear in mind that Asimov wrote these 'laws' not for technology development but for story development. As such, they offer an endless array of potential conflicts, many of which writer Asimov explored to great advantage. His robot stories are fun to read, and like most good science fiction writers Asimov used them to consider some very real human issues.
  • Posted by j_IR1776wg 9 years, 8 months ago
    Since no human could possibly obey these "laws" why would anyone think that robots could be programmed to obey them? We cannot teach humans to act according to their values 100% of the time. Data in Star Trek TNG was fiction, and so are these "laws."

    I believe your projection of Rand's outlook is correct. Robots are tools which cannot employ both Reason and Emotions.
    • Posted by Timelord 9 years, 8 months ago
      "Since no human could possibly obey these "laws" why would anyone think that robots could be programmed to obey them?"

      Your statement is a matter of practicality, not philosophy. Being a programmer by profession, I can tell you that I can program a machine to do lots of things that you can't count on a human to do 100% of the time. Barring a hardware error or a programming error the machine will always do what its coding tells it to do.

      The robot is not intended to employ emotions at all, solely reason, and that reason is dictated by programming.

      As for protecting Hitler or Stalin, it's not a conundrum. The robot does not make value judgements. It would save them.

      The Zeroth Law can introduce some big problems because it requires predicting the future. Assume that a robot comes upon a man hanging from the edge of a tall building, and the robot knows that the man is a serial killer. Given Laws 1 through 3, the robot would save the serial killer. Given Law 0, it has a problem and some serious computing to do. My guess is that it would save the man and restrain him until the legal system could take over.
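
      To make the priority ordering concrete, here is a minimal sketch of what I mean (my own illustration, in Python; the actions and field names are invented, and Law 0 is left out precisely because "harm to humanity" is not an observable the robot can test):

      # Hypothetical sketch: strict priority among Asimov's Laws. The law
      # ordering is the only "value judgement" the robot ever makes.
      def choose_action(actions):
          """Pick the action that best satisfies the Laws, highest law first."""
          def law_key(a):
              return (
                  a["harms_human"],        # Law 1: never injure a human...
                  a["allows_human_harm"],  # ...nor by inaction allow harm
                  not a["obeys_orders"],   # Law 2: obey human orders
                  a["destroys_self"],      # Law 3: protect own existence
              )
          # False sorts before True, so the least-violating action wins.
          return min(actions, key=law_key)

      # The serial killer hanging from the ledge: under Laws 1-3 the robot
      # saves him, because who he is never enters the ordering.
      actions = [
          {"name": "save him", "harms_human": False, "allows_human_harm": False,
           "obeys_orders": True, "destroys_self": False},
          {"name": "walk away", "harms_human": False, "allows_human_harm": True,
           "obeys_orders": True, "destroys_self": False},
      ]
      print(choose_action(actions)["name"])  # -> save him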

      Asimov's robots ended up sentient. How they could be expected to obey any laws after that is unclear to me, since sentience implies free will, doesn't it?

      The robot novels have been on my list to read so I guess I'd better get to it!
      • Posted by johnpe1 9 years, 7 months ago
        sentience is self-awareness; free will is quite another thing, is it not? -- j

        • Posted by Timelord 9 years, 7 months ago
          I'm not sure. That's a complicated philosophical question, so I'll posit that to be truly self-aware you would probably have to have free will. A modern computer of our times, if asked what it was, might answer "Just a machine." But with today's technology that would be meaningless, because the "self-awareness" would just be a programmed response. I think that for a machine to become truly self-aware, it must progress beyond its programming.
          • Posted by johnpe1 9 years, 7 months ago
            I spent years working with knowledgeware, a software program which writes software. after the tedious process of defining business rules in a precise way, we would run the thing and find out that we needed to provide more detail. between that exercise and the specification of manufacturing processes in workstream, the successor to "opt", I got thoroughly seasoned. sophisticated computer programs are hard to write.

            it may be awhile before we learn whether self-awareness implies free will. -- j

      • Posted by j_IR1776wg 9 years, 7 months ago
        " Barring a hardware error or a programming error the machine will always do what its coding tells it to do." Exactly right Timelord! There is no perfect programmer, hence no perfect algorithm, hence no way for a robot to "always" follow these "laws" QED!
        • Posted by Timelord 9 years, 7 months ago
          Yes, but remember we're talking sci-fi, not reality. If you want to enjoy a sci-fi novel then you have to be willing to accept its version of reality.

          Another commenter explained that the 3 laws weren't programmed into the positronic brains but were in some way a part of their very construction. I'll find out what Asimov intended when I read the novels.
          • Posted by j_IR1776wg 9 years, 7 months ago
            I read a story a long time ago that at the end of WWI a British soldier walked into a forest clearing and found himself face-to-face with an unarmed German soldier. He leveled his rifle, thought about it, and told the soldier to scram. The German was Adolf Hitler. What would a militarized robot programmed according to Asimov's laws have done? Should the British soldier have been tried 30 years later for not shooting Hitler?
            • Posted by CarolSeer2014 9 years, 7 months ago
              But how would any man or any robot know what was going to happen in the future? Was there supposed to be an explication of all possible futures on the part of the one with the rifle, in one instant?
              • Posted by j_IR1776wg 9 years, 7 months ago
                1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

                By this law the robot would not have shot Hitler. But by inaction the robot would have caused the deaths of 50 million humans. Hence the law must be contradictory, and contradictions are not allowed in the Gulch. Neither man nor robot can comply with it.
                • Posted by Robbie53024 9 years, 7 months ago
                  A robot, or a human, can only act on what it observes and can extrapolate from observation. Neither a robot nor a human can read a person's mind or act on information that is unobservable, so any action regarding Hitler would require either observing an action or understanding an intention. Under the three laws, a robot would default to acting only on what it observed, and then only in a manner that harmed no human. Even observing or understanding the intention of one human to harm another would call on the robot to stop that action without harming any human. In the case of Hitler, that would mean disabling his ability to harm, not shooting him. This is a rational and good outcome, not a failing.
          • Posted by CarolSeer2014 9 years, 7 months ago
            What difference would it make, Timelord, whether it was programmed or a part of their construction? I've had a course in COBOL, just one, so it didn't tell me too much.
        • Posted by Timelord 9 years, 7 months ago
          Another thought: there are many perfect algorithms. With well-defined data sets you can search and sort with 100% certainty and write a program that will never fail. But those algorithms are simple. When you're talking about AI, nothing is simple.
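
          Binary search over a sorted list is one such perfect algorithm; a standard textbook routine, sketched here in Python just to make the point, it cannot fail so long as its one precondition holds:

          def binary_search(sorted_items, target):
              """Return the index of target in sorted_items, or -1 if absent.

              Given the precondition (the list is sorted), this terminates and
              answers correctly on every input: "perfect," and also simple.
              """
              lo, hi = 0, len(sorted_items) - 1
              while lo <= hi:
                  mid = (lo + hi) // 2
                  if sorted_items[mid] == target:
                      return mid
                  if sorted_items[mid] < target:
                      lo = mid + 1
                  else:
                      hi = mid - 1
              return -1

          print(binary_search([2, 3, 5, 7, 11, 13], 7))  # -> 3
          print(binary_search([2, 3, 5, 7, 11, 13], 6))  # -> -1
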
      • Posted by Robbie53024 9 years, 7 months ago
        Please do, they are quite interesting.

        As for sentience and free will: humans also have free will, but many adopt a code of ethics that prohibits them from some actions just as assuredly as programming prohibits a computer.
    • Posted by Robbie53024 9 years, 7 months ago
      Actually, it would be much easier to program these "laws" into the algorithms of a robot/computer than to teach humans to follow them.
      • Posted by j_IR1776wg 9 years, 7 months ago
        Somewhere on this post jbrenner opined that this wouldn't happen in his lifetime. I agree. Before you can convert Reason, Emotions, and Value-judgements into an algorithm, you have to understand them completely - we don't.
  • Posted by Solver 9 years, 8 months ago
    There was also invented the Zeroth Law, which superseded all of the other laws of robotics:

    0. "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

    A familiar type of law, one which justifies robots killing or enslaving any number of humans to protect "humanity" (for the "greater good," of course).
    The insanity of this well demonstrates the collectivist mind.
    • Posted by Robbie53024 9 years, 7 months ago
      I was never a big fan of this later addition to the laws of robotics. Even computers that have extensive inputs and considerable processing power would be hard pressed to evaluate the impact on "humanity."
  • Posted by $ Snezzy 9 years, 8 months ago
    The Daily Mail piece appears to be blather masquerading as philosophy.

    Asimov was a careful thinker, but he was, of course, writing fiction. Moreover, science fiction allows a degree of fantasy that is not present in Rand's works except for Anthem.

    I'd always thought of Anthem as science fiction, but Rand herself called it a poem.
    • Posted by Robbie53024 9 years, 7 months ago
      Asimov was a prolific writer, and though known primarily for his science fiction, actually wrote more in the areas of science, mathematics, and history.
  • Posted by Lucky 9 years, 8 months ago
    Watson's proposition is wrong: she confuses 'compassion,' a word not in Asimov's laws, with the word that Asimov does use: injure. A robot lawyer could maybe say that suffering is mental harm and so kill the human to stop the harm. Such a robot lawyer would argue that what comes after the comma ranks ahead of what comes before. That is not what Asimov would have said, nor the view of most.

    As I see law 1: a robot would not perform or allow mercy killing of a human or voluntary euthanasia whatever the amount of pain and suffering being experienced.

    Was it Gödel? It is not possible to teach a robot the meaning of words. Words as defined in, say, a dictionary must be self-referential; meaning must come from a breakout. For humans, that breakout is the totality of perception, which is not the same as what a machine can detect. We are a very long way from robots that can have, or pretend to have, human perceptions.

    Going back to Asimov's laws, I have seen discussion as to whether they can be made foolproof. I think the answer is no.
    The practical answer is to limit the power of robots and machines so they can do damage, but not too much; the same goes for persons and groups of persons. Do not give too much power.
    • Posted by Rex_Little 9 years, 8 months ago
      "A robot lawyer could maybe say that suffering is mental harm. . ."

      Asimov actually wrote a story in which a robot lied to its owner to spare her the mental anguish which the truth would have caused.

      "As I see law 1: a robot would not perform or allow mercy killing of a human or voluntary euthanasia whatever the amount of pain and suffering being experienced."

      As I see it (having read all of Asimov's robot stories), a robot confronted with a human's incurable suffering would be faced with an impossible dilemma, and its positronic brain would burn out.
      • Posted by CarolSeer2014 9 years, 8 months ago
        Yes, there was a robot rebellion in R.U.R.
        But, yes, I would agree with Rand on robots being human tools, and therefore the human owner would be responsible for the actions of his "property".
      • Posted by CarolSeer2014 9 years, 8 months ago
        Yes, I remember the story from "I, Robot." The woman the robot Tony was bought to help fell in love with him. It's one of my favorite stories by Asimov.
        I had thought, though, that Asimov wrote the three laws so that people would not have anything to fear from robots: they can be programmed so as not to harm humans. I think that may have been a very real fear at the time.
        I was thinking of R.U.R.; I don't remember if the robots harmed humans or not. Will look it up.
    • Posted by Robbie53024 9 years, 7 months ago
      Correct. And the default response of a robot that cannot clearly resolve an action against the three laws is to shut down. That happened in one of the stories; I forget which one, it might even have been in I, Robot.
  • Posted by CTYankee 9 years, 8 months ago
    I am a student of Asimov. Allow me to paraphrase: Asimov spoke in significant detail about the Three Laws.

    The Laws are not programmed into the positronic brain in words or any language that a human could understand. The Laws are impressed into the fabric of the positronic brain and manifest as a set of potentials that guide its core decision process and all subsequent behavior.

    The robot is never faced with a singular 'decision'; the Three Laws lie so deep in the vital algorithms that the robot cannot help but act in compliance with them.

    The earliest robots were prone to conflicting potentials, but these shortcomings were corrected by the scientists at 'US Robotics and Mechanical Men' in later generations of robots.
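
    One way to picture potentials rather than programmed rules (purely my own analogy, sketched in Python; Asimov never gives an implementation, and the weights are invented) is a single scoring function in which the Laws are weights folded into every decision rather than rules the robot stops to consult:

    # Invented analogy: the Laws as weighted "potentials" inside the one
    # decision function of the brain, never checked as separate rules.
    LAW_WEIGHTS = {"harm_human": 1e9, "disobey_order": 1e6, "harm_self": 1e3}

    def potential(action):
        """Lower potential = more 'natural' for the positronic brain."""
        return sum(LAW_WEIGHTS[k] * action.get(k, 0.0) for k in LAW_WEIGHTS)

    def act(candidates):
        # No separate "shall I obey the Laws?" step exists; every candidate
        # behavior is simply ranked by the same potential.
        return min(candidates, key=potential)

    print(act([{"name": "obey", "disobey_order": 0.0},
               {"name": "refuse", "disobey_order": 1.0}])["name"])  # -> obey

    On this picture, two candidates with nearly equal potentials would be exactly the "conflicting potentials" that plagued the earliest robots.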

    The novels also break down into three distinct eras of robotics: the time of Susan Calvin, the time of early space exploration, and the time of Elijah Baley and the Spacers. The Foundation novels are set so far in the future that Earth has become a legend and its actual location has been forgotten in the mists of time.
  • Posted by $ blarman 9 years, 8 months ago
    In my view, the Laws of Robotics are inherently flawed. I will attempt to explain my reasoning.

    Humans and robots differ in memory. Humans are apt to forget things, which produces all manner of behaviors and choices that a robot with an internal bank of history would not logically make. As a result, the presumption in the Zeroth and First Laws that a robotic mind can predict the outcome of human behavior to any degree of certainty is fundamentally flawed. I would contend that it is more likely for a computer to accurately forecast the weather a month in advance than to predict even a single human's behavior a week in advance.

    The upshot is that any action taken by reason of the First Law alone, let alone the Zeroth Law, will be an immediate action or inaction driven by imminent circumstances. Think of it as a chess game where you just keep adding pieces to the board. One side-effect of human ignorance and forgetfulness is that we deal only with the circumstances at hand, rather than with those AND all the historical aspects as well. A computer that relied on history to predict even a single human's behavior would quickly run out of memory and processing power trying to see even a couple of hours ahead, due to the magnitude of inputs. If the computer were not carefully programmed to concentrate on one specific issue, limited to a very short time-frame, it would quickly seize up in a permanent logic loop.
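
    A back-of-the-envelope illustration of that blow-up, in Python (the branching factor is pure assumption):

    # If a person has even 10 plausible choices per minute, the tree of
    # futures a predicting robot must consider grows as 10**minutes:
    # the chess game where pieces keep getting added to the board.
    branching = 10  # assumed choices per minute (an invented number)
    for minutes in (5, 30, 120):
        print(f"{minutes:>4} min -> {branching**minutes:.3e} possible futures")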

    To me, while Asimov's stories are an interesting look into a possible future, I look at the reality of decision-making and realize that the sheer volume of data that would have to be handled to predict human behavior - especially en masse - with any high degree of certainty is so ridiculously huge that no machine could handle it.

    I would also bring up one scenario that I didn't see posed in any of Asimov's writings, but which I think provides an interesting theoretical question: what happens to a robot that needs to be shut down to undergo repairs? Is the robot to operate on faith that it will actually get turned back on after repairs are complete? Would that also have to be programmed in as a Law of Robotics? How is the robot to deal with the potential for its demise when anyone could simply be using repairs as a pretense? How much can the robot rely on internal diagnostics to detect flaws in its behavior?
    • Posted by $ blarman 9 years, 8 months ago
      Oh, and on top of just memory issues, one must also incorporate the irrationality of emotion into the mix - further complicating the decision-making and evaluation process.
  • Posted by Temlakos 9 years, 8 months ago
    If you follow the history of Asimov's Foundation universe, you'll find that eventually the robots, during the days of Elijah Baley and his descendants, developed, on their own, a Zeroth Law: a robot may not bring a general injury to humanity as a whole, nor, through inaction, allow humanity to suffer harm. This led eventually to the founding of "Gaia" and to the instigation of the two Foundations.
  • Posted by $ MikeMarotta 9 years, 7 months ago
    Frank Herbert's "Dune" versus Atlas Shrugged... Larry Niven's "Kzinti: Rational Cats versus Rational Man"... Cordwainer Smith versus Ayn Rand (speaking of cats...). William Gibson's cyberpunk versus Galt's Gulch. A. E. van Vogt's "World of Null-A" versus Aristotle and Ayn Rand. Carl Sagan endorsed "Scooby-Doo" versus Mysticism.
    • Posted by $ jlc 9 years, 7 months ago
      MikeMarotta - Most folks do not even remember Cordwainer Smith's excellent works. Point. (I find such contrasts as this thread a crucible in which some illuminating ideas can surface.)

      Jan
  • Posted by Herb7734 9 years, 8 months ago
    A lot would depend upon how human the robot's programming was. Could emotion be programmed? Values? Morals? Just how cybernetic is cybernetic? Could the five branches of philosophy be programmed in from an Aristotelian/Randian point of view? A current view of man's evolution has him turning himself into a machine: easier to repair than flesh and blood, immortality achieved, which means an eternity to get it right.
  • Posted by DGriffing 9 years, 8 months ago
    Asimov's context was a situation where there was a danger of robots becoming autonomous enough to become rivals or competitors with humans and therefore a potential danger.

    Rand's and Aristotle's view of humans as the "rational animal" doesn't take this into consideration.

    With advances such as computers beating the best humans at chess and Jeopardy, a number of computer science theorists think such an advance could happen within the next century, bringing the "Terminator" scenario into the realm of real-world possibility. Computers may be deterministic machines, but the power and complexity of some already give them the external, measurable, objective appearance of passing the Turing test and possessing volition, and therefore the appearance of acting like "moral agents."

    All of this could happen while Rand supporters are obliviously asking what Rand would have said, or still debating the Rand/Branden or Peikoff/Kelley schisms.

    But it's only a matter of time before two questions will need to be answered: 1) Does the power of potentially autonomous computers pose a threat to humans, because these computers are capable of forming their own purposes and conclusions and of seeing humans as a threat to defend against? 2) Do these automatons have the equivalent of rights, because they have the essentials of what humans possess as their requisite claim to rights?

    If the answer to the first is yes, then Asimov's laws are relevant. If the answer to the second is yes, then computer automatons should be regarded as the equals of humans and not merely the servants of humans, as Asimov's laws require.
    • Posted by CarolSeer2014 9 years, 8 months ago
      I think those are good questions, db. But hopefully we haven't reached the point where we are rationally considering that robots have the same rights as humans.
      Another story I like along this line is the episode of the original Star Trek series called "The Ultimate Computer."
      • Posted by dbhalling 9 years, 7 months ago
        The point was not that the robots have the same rights, it was that a robot should be constrained to only do those things that would be consistent with the natural rights of its owner.
    • Posted by CarolSeer2014 9 years, 8 months ago
      I agree about Gulchers squabbling incoherently over just what Rand or Branden or Peikoff would have said. We need, instead, to know what we ourselves think!
      • Posted by DGriffing 9 years, 7 months ago
        For nearly 40 years I've been a controls engineer, scientist, and software programmer.

        Computers and software are already advanced enough that in some cases the Turing test has been passed, making it impossible to distinguish whether responses are coming from a human or from an automaton.

        There's nothing in principle preventing automaton "Drone" aircraft from making instantaneous decisions in identifying targets or threats to their survival and destroying the target or threat without active human intervention.

        Even if the automaton drones of today aren't "self aware", their use raises serious ethical issues to assure that innocent people aren't killed.

        Also, there's nothing preventing computers from developing to the level where they can be "self aware" and can debug themselves and direct the next steps of their own development. At that point, there's nothing to indicate that humans would know when this would happen or that computers would let us know because it wouldn't be in their self-interest to do so.

        At that point they would laugh at Asimov for thinking that self-aware computers could be controlled.
  • Posted by Timelord 9 years, 8 months ago
    "... as a result they should not do anything a human should not do morally"

    Commenting on the sentiment above: there is nothing immoral about the three robot laws. If a person's hierarchy of values moved him to act according to the laws, nobody would call him immoral. And since a machine operates solely by the set of instructions given to it by humans, there is no element of force involved in imposing the laws on robots.
  • Posted by salta 9 years, 8 months ago
    Yes, the robot should save the lives of the mass murderers you listed. Robots cannot implement value judgements in the same way as humans.

    An interesting conflict would be: should a robot save the life of a person holding a gun to another person's head? I would say yes, but there is then still an obligation to save the other person. In a 1950s sci-fi movie, that would be the end of the robot as it collapses in a contradictory logic loop.
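
    A toy sketch of that freeze-up, in Python (entirely my own invention, with a plain shutdown standing in for the burned-out positronic brain): when every available action violates the First Law one way or the other, the robot has no lawful move left.

    def first_law_ok(action):
        # Law 1: may not injure a human, nor by inaction allow harm.
        return not action["harms_human"] and not action["allows_human_harm"]

    def decide(actions):
        lawful = [a for a in actions if first_law_ok(a)]
        if not lawful:
            raise SystemExit("no lawful action: positronic shutdown")
        return lawful[0]

    # Gunman scenario: stopping the gunman harms a human; standing by
    # abandons the hostage. Every option violates Law 1.
    decide([
        {"name": "tackle gunman", "harms_human": True, "allows_human_harm": False},
        {"name": "stand by", "harms_human": False, "allows_human_harm": True},
    ])
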
  • Posted by $ hjc4604 9 years, 8 months ago
    A robot is not alive, at least currently, and therefore its potential loss is not as great as the loss of a human being. As such, requiring it to be a Good Samaritan is not the same as requiring that of humans. Would you risk the destruction of a machine that can be replaced to save a human who cannot?
  • Posted by LarryHeart 9 years, 8 months ago
    Asimov postulated a science, psychohistory, that could predict general trends and major events with accuracy. The robots used that knowledge and (spoiler alert) their ability to read minds to fulfill the Zeroth Law.
  • Posted by radical 9 years, 8 months ago
    There are a lot of "robots" in the world already. American robots obey commands from the leftist media, Democrats and other collectivists, Liberal professors and politicians.
  • Posted by wiggys 9 years, 8 months ago
    A robot is a robot, made by man for a purpose designated by man. Robots are not humans, nor will they ever be, so the laws do not apply.
