Now The Military Is Going To Build Robots That Have Morals

Posted by Zenphamy 10 years, 11 months ago to Philosophy
61 comments

So, can this be done, and if so, where does Objectivist philosophy fit into the determinations to be made? Who's going to determine which ethical and moral principles form the base of such programming?

From the article: "Ronald Arkin, an AI expert from Georgia Tech and author of the book Governing Lethal Behavior in Autonomous Robots, is a proponent of giving machines a moral compass. “It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of,” Arkin wrote in a 2007 research paper (PDF). Part of the reason for that, he said, is that robots are capable of following rules of engagement to the letter, whereas humans are more inconsistent."
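To make Arkin's point concrete, here is a minimal sketch in Python of what "following rules of engagement to the letter" amounts to. This is purely illustrative and is not Arkin's actual "ethical governor" design; the `Engagement` fields and the individual rules are hypothetical stand-ins. The idea is simply that every proposed engagement is vetted against the same explicit list of constraints, every time, and a single failed check vetoes it.

```python
from dataclasses import dataclass

# Hypothetical, highly simplified model of a proposed engagement.
@dataclass
class Engagement:
    target_is_combatant: bool      # positively identified as a combatant
    target_is_surrendering: bool   # hors de combat / attempting to surrender
    expected_civilian_harm: int    # estimated incidental civilian harm
    proportionality_limit: int     # maximum harm judged proportional to the objective

# Each rule returns True only if the proposed engagement is permissible under it.
RULES_OF_ENGAGEMENT = [
    lambda e: e.target_is_combatant,                                # distinction
    lambda e: not e.target_is_surrendering,                         # no attacking the surrendering
    lambda e: e.expected_civilian_harm <= e.proportionality_limit,  # proportionality
]

def engagement_permitted(e: Engagement) -> bool:
    """Apply every rule, every time; a single failed check vetoes the engagement."""
    return all(rule(e) for rule in RULES_OF_ENGAGEMENT)

if __name__ == "__main__":
    proposed = Engagement(target_is_combatant=True,
                          target_is_surrendering=False,
                          expected_civilian_harm=2,
                          proportionality_limit=0)
    print(engagement_permitted(proposed))  # False: fails the proportionality check
```

The consistency Arkin describes comes from the checks being explicit and unconditional; the hard part, which this sketch ignores entirely, is whether a machine can ever fill in those boolean fields reliably.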


All Comments


  • Posted by Maphesdus 10 years, 11 months ago
    Objectivist philosophy as a combat routine: "Do not fire unless fired upon."
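Taken literally, that rule is about as simple as a combat routine gets. A toy sketch (the function name is made up, and no real system would reduce to this): fire is permitted only in response to being fired upon, never first.

```python
def weapons_free(has_been_fired_upon: bool) -> bool:
    # "Do not fire unless fired upon": return-fire only, never first use.
    return has_been_fired_upon

assert weapons_free(False) is False   # hold fire: we have not been engaged
assert weapons_free(True) is True     # return fire is permitted
```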
  • Posted by Hiraghm 10 years, 11 months ago in reply to this comment.
    "Little Robot Lost" I think it was called. it was a conflict between the 1st and 2nd laws.
  • Posted by $ AJAshinoff 10 years, 11 months ago in reply to this comment.
    Robots should never be used to take a life under any circumstances, and AI should never be used to make that type of decision. He had a book (I can't recall the name) where a robot on Mercury had to make a choice between the laws in order to save a life. Very interesting.
  • Posted by Robbie53024 10 years, 11 months ago
    More than half a century ago, Isaac Asimov tackled this issue and developed the Three Laws of Robotics. To wit:
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    These would be hard-coded into all microprocessors and could not be violated without the robot self-destructing.
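If the laws really were hard-coded the way Robbie53024 describes, the priority ordering is the part that maps most directly to code. Here is a rough, purely illustrative sketch (hypothetical names throughout, and it dodges the genuinely hard problem of how a robot would ever judge "harm" reliably): the First Law acts as an absolute veto, and the remaining laws break ties lexicographically, so obeying an order outranks self-preservation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    harms_human: bool      # would violate the First Law
    violates_order: bool   # would violate the Second Law
    endangers_self: bool   # would violate the Third Law

def choose_action(candidates: list[Action]) -> Optional[Action]:
    """Pick the candidate that best satisfies the Three Laws in priority order.

    First Law violations are an absolute veto; among the remaining candidates,
    the laws act as a lexicographic ordering (obeying orders outranks
    self-preservation). Returns None if every candidate would harm a human.
    """
    permissible = [a for a in candidates if not a.harms_human]
    if not permissible:
        return None  # the hypothetical hard-coded check refuses to act at all
    return min(permissible, key=lambda a: (a.violates_order, a.endangers_self))

if __name__ == "__main__":
    options = [
        Action("fire on intruder", harms_human=True,  violates_order=False, endangers_self=False),
        Action("stand down",       harms_human=False, violates_order=True,  endangers_self=False),
        Action("shield operator",  harms_human=False, violates_order=False, endangers_self=True),
    ]
    print(choose_action(options).name)  # "shield operator": Second Law outranks Third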
