Questions About Artificial Intelligence

Posted by khalling 9 years ago to Philosophy

"To challenge the basic premise of any discipline, one must begin at the beginning. In ethics, one must begin by asking: What are values? Why does man need them?

“Value” is that which one acts to gain and/or keep. The concept “value” is not a primary; it presupposes an answer to the question: of value to whom and for what? It presupposes an entity capable of acting to achieve a goal in the face of an alternative. Where no alternative exists, no goals and no values are possible.

I quote from Galt’s speech: “There is only one fundamental alternative in the universe: existence or nonexistence—and it pertains to a single class of entities: to living organisms. The existence of inanimate matter is unconditional, the existence of life is not: it depends on a specific course of action. Matter is indestructible, it changes its forms, but it cannot cease to exist. It is only a living organism that faces a constant alternative: the issue of life or death. Life is a process of self-sustaining and self-generated action. If an organism fails in that action, it dies; its chemical elements remain, but its life goes out of existence. It is only the concept of ‘Life’ that makes the concept of ‘Value’ possible. It is only to a living entity that things can be good or evil.”

To make this point fully clear, try to imagine an immortal, indestructible robot, an entity which moves and acts, but which cannot be affected by anything, which cannot be changed in any respect, which cannot be damaged, injured or destroyed. Such an entity would not be able to have any values; it would have nothing to gain or to lose; it could not regard anything as for or against it, as serving or threatening its welfare, as fulfilling or frustrating its interests. It could have no interests and no goals."-Ayn Rand, Virtue of Selfishness

Questions regarding Objectivist ethics. Discuss.


All Comments


Previous comments...   You are currently on page 2.
  • Posted by $ CBJ 9 years ago in reply to this comment.
    ". . . what are the ethical impact to ourselves of enslaving another consciousness?" We do that every day with the lower animals. And we build machines with varying degrees of autonomy. I'm more concerned with the ethical implications of building machine/animal hybrids and, eventually, machine/human hybrids, like the Borg on Star Trek.
    Reply | Permalink  
  • Posted by CTYankee 9 years ago in reply to this comment.
    FYI: Asimov introduced the Three Laws in a 1942 story, citing them as originating from the "Handbook of Robotics" more than a century in the future (2058)...

    Without contemplating the advances brought to us by Moore's Law, he made a remarkably accurate prediction. We still don't know enough about AI, or its natural counterpart, to say for sure. It's going to be an interesting 42 years :^)
    Reply | Permalink  
  • Posted by Herb7734 9 years ago
    Consciousness is the keystone upon which life and the existence of intelligence depend. Any attempt at artificial intelligence remains robotic without self-awareness and consciousness. But we cannot create consciousness, even if we create intelligence. Why? Because we don't know WTF consciousness is.
    Reply | Permalink  
  • Posted by davidmcnab 9 years ago
    The key here lies in the phrases "which cannot be affected by anything, which cannot be changed in any respect". Straight away, that rules out any kind of intelligence, artificial or otherwise. Set it to work on a kitchen appliance production line, because that's all it will be good for.
    Reply | Permalink  
  • Posted by wiggys 9 years ago
    If you really want to know what artificial intelligence is, go to where it exists: WASHINGTON, D.C.!!!!!!!!!!!!!!!!!
    Reply | Permalink  
  • Posted by term2 9 years ago in reply to this comment.
    What if there are too many of them, like in I, Robot... Scary thought, actually. They could turn from our servants into our overlords, controlling our lives...
    Reply | Permalink  
  • Posted by term2 9 years ago in reply to this comment.
    I didn't understand the fear of robots until I read this article in WIRED. Once the robots start thinking for themselves and making decisions that the programmers didn't program in, how would one make sure that Asimov's laws were followed? What if the robot decided that a particular "person" wasn't human and then treated it as a non-human? (Maybe Obama needs to worry, as one could argue that his philosophy makes him not human at all, since he seems to treat the rest of us as non-humans.)
    Reply | Permalink  
  • Posted by jimjamesjames 9 years ago in reply to this comment.
    Let me amend my earlier "I'll stick with Asimov's 'Three Laws of Robotics'" with the caveat that robots will probably evolve like teenagers. They'll get to a point where they "think" they know it all, and it's downhill for humanity after that.
    Reply | Permalink  
  • Posted by term2 9 years ago in reply to this comment.
    What if it's a robot that learns what to do on its own through interaction with the environment? That IS coming.
    Imagine the difficulty of training a robot through coding to recognize a CAT. What's happening now is that the robot observes pictures of 1000 cats and then decides on its own how to determine whether another picture or a live thing is a cat (a rough sketch of that example-driven approach follows below).
    Reply | Permalink  
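As a rough illustration of the example-driven learning term2 describes (a sketch only, using Python with scikit-learn and Pillow; the folder names cat_images/ and not_cat_images/, the file mystery.jpg, and the 64x64 greyscale size are hypothetical assumptions, not anything from the thread): a classifier is fitted to labelled pictures and then applied to an image it has never seen.

```python
# Sketch: learn "cat vs. not cat" from labelled example pictures instead of
# hand-coded rules. Folder names, mystery.jpg, and the 64x64 greyscale size
# are illustrative assumptions.
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def load_examples(folder, label, size=(64, 64)):
    """Flatten each image in `folder` into a feature vector paired with a 1/0 label."""
    rows = []
    for path in Path(folder).glob("*.jpg"):
        img = Image.open(path).convert("L").resize(size)  # greyscale, fixed size
        rows.append((np.asarray(img, dtype=np.float32).ravel() / 255.0, label))
    return rows

data = load_examples("cat_images", 1) + load_examples("not_cat_images", 0)
X = np.array([features for features, _ in data])
y = np.array([label for _, label in data])

# No rule for "cat-ness" is ever written down; the model fits one from the examples.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Classify a picture the programmer never anticipated.
new_img = Image.open("mystery.jpg").convert("L").resize((64, 64))
print("cat?", model.predict([np.asarray(new_img, dtype=np.float32).ravel() / 255.0])[0])
```

The point of the sketch is the design choice term2 highlights: the decision rule comes out of the fitted weights, not out of anything a programmer wrote by hand.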
  • Posted by term2 9 years ago
    The real issue will be when robots are NOT programmed specifically, but learn on their own (like human children). Good article in Wired magazine this month on how computer coding is becoming a dying art, to be replaced by computers that observe and learn autonomously. One might not even be able to determine why such a robot acted as it did in a given situation.

    This is the real singularity fear people should have. That kind of robot would have no morality other than what IT learned on its own.
    Reply | Permalink  
  • Posted by mia767ca 9 years ago
    if it can be imagined (by the intelligent human mind)...it can be achieved...

    a machine or construct with awareness???...that can affect reality???...can you imagine it???
    Reply | Permalink  
  • Posted by Hot_Black_Desiato 9 years ago
    I love this discussion. Robots could be considered amoral. I would have to approach this by asking: who was the robot's creator? What was programmed?

    Was the robot programmed by another robot using 100% logic, e.g. the Borg?

    Was the robot programmed like Samaritan or "The Machine" on Person of Interest?

    Garbage in, garbage out is the primary premise of all AI, which ironically is the same as with our brains.

    Does the robot have a primary premise or hard-coded goal? Refer to Star Trek: The Motion Picture, or the original-series episode with Nomad.

    Isaac Asimov provides highly enlightened views on programming robots and turning them loose with their "Prime Directive."

    "I, Robot" and "Bicentennial Man" (with Robin Williams) are two great movies to place some context. Also, the Terminator series and "The Matrix" series are wonderful examples of the potential negative impact of powerful machines turned loose.

    I like the difference between "Knowledge" and "Wisdom." Zenphamy mentions this in his comment.

    Knowledge: the taking in of information.
    Wisdom: the practical application of knowledge.

    Personally, I like Isaac Asimov.

    Quoting again "To make this point fully clear, try to imagine an immortal, indestructible robot, an entity which moves and acts, but which cannot be affected by anything, which cannot be changed in any respect, which cannot be damaged, injured or destroyed. Such an entity would not be able to have any values; it would have nothing to gain or to lose; it could not regard anything as for or against it, as serving or threatening its welfare, as fulfilling or frustrating its interests. It could have no interests and no goals."

    That premise in itself would make the robot a giant, useless paperweight.
    Reply | Permalink  
  • Posted by jimjamesjames 9 years ago
    I'll stick with this, Asimov’s “Three Laws of Robotics”:
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Asimov 1984)
    Reply | Permalink  
  • Posted by 9 years ago in reply to this comment.
    Even before consciousness, one has to have the ability to perceive. I do not see that AI will ever have perception separate from that which man manipulates.
    Reply | Permalink  
