Questions About Artificial Intelligence

Posted by khalling 9 years ago to Philosophy
52 comments

"To challenge the basic premise of any discipline, one must begin at the beginning. In ethics, one must begin by asking: What are values? Why does man need them?

“Value” is that which one acts to gain and/or keep. The concept “value” is not a primary; it presupposes an answer to the question: of value to whom and for what? It presupposes an entity capable of acting to achieve a goal in the face of an alternative. Where no alternative exists, no goals and no values are possible.

I quote from Galt’s speech: “There is only one fundamental alternative in the universe: existence or nonexistence—and it pertains to a single class of entities: to living organisms. The existence of inanimate matter is unconditional, the existence of life is not: it depends on a specific course of action. Matter is indestructible, it changes its forms, but it cannot cease to exist. It is only a living organism that faces a constant alternative: the issue of life or death. Life is a process of self-sustaining and self-generated action. If an organism fails in that action, it dies; its chemical elements remain, but its life goes out of existence. It is only the concept of ‘Life’ that makes the concept of ‘Value’ possible. It is only to a living entity that things can be good or evil.”

To make this point fully clear, try to imagine an immortal, indestructible robot, an entity which moves and acts, but which cannot be affected by anything, which cannot be changed in any respect, which cannot be damaged, injured or destroyed. Such an entity would not be able to have any values; it would have nothing to gain or to lose; it could not regard anything as for or against it, as serving or threatening its welfare, as fulfilling or frustrating its interests. It could have no interests and no goals." - Ayn Rand, The Virtue of Selfishness

Questions regarding objectivist ethics. Discuss.


All Comments

  • Posted by $ puzzlelady 9 years ago in reply to this comment.
    Yes, I'll be at the Atlas Summit/Freedom Fest and look forward to meeting you and maybe some of the others. We enjoyed meeting Dale last year.
  • Posted by $ puzzlelady 9 years ago
    Great question, khalling. Thanks for stimulating so many good thoughts from the Gulchers. Here are my two cents.

    To be absolutely objective and rational about it: there can be no such thing as an immortal, indestructible robot except in science fiction, and in fiction anything goes. If robots are built by humans, they are machines and need human maintenance and repairs. The more advanced our technological inventions, the more can go wrong with them. Even machines with built-in self-diagnostics need infrastructure: a planet of mines and factories and energy sources. "The Matrix" had the novel plot of humans used as living batteries to power the machines. And if there were machines with self-preservation programmed into them, then their own survival against, say, an earth-shattering asteroid or sun-extinguishing black hole, or even just human sabotage, would be their "value".

    Such robots would have been built in the image of their creators, who themselves had evolved naturally to a level of sentience that enabled them to construct mechanical embodiments of their own survival efficacy. Would sufficiently complex machines then acquire not only a survival need but a mythology about their creators who should be worshipped in an interpretation of Asimov's first law?

    Rand's formulation of the concept of values as pertaining only to living things and their struggle between life and death was spot-on. Hence come all of mankind's yearnings for immortality, for an afterlife, for an idealized being that has existed forever and will exist forever. If such a being did exist, what values could it possibly have, since its existence is not at stake? Maybe to stave off boredom by creating Universes as playthings, with evolving lifeforms and intelligences of whom it could demand worship and obedience?

    We can ask how a lifeform can have evolved with emerging consciousness, and even that question is rooted in a stage of expansion from early organic processes of self-preservation at the microorganism level, with nervous systems that detect danger and react in adaptive ways. As humans' nervous systems and complex brains grew to develop language, retain information, and combine percepts into concepts and concepts into more complex thought systems, a stage of development was reached where the creature's inquisitive detection mechanism came to observe not only the surrounding environment but its own mental functioning as an object of observation: self-awareness and self-consciousness. This can be seen most fundamentally, for example, in a child's toilet training: the child is made aware first of its physical functioning and of itself, and then interfaces mentally and emotionally with the envelope of behavioral rules of the group in which it is embedded. That sets the pattern for imitation and replication, and for self-control and self-direction.

    Consciousness, then, can be defined as the human software run through the brain’s operating system as encoded in the DNA. The computer metaphor is fitting, since human brains designed computers in their own image and logic.

    And the drive to create artificial intelligence is a natural outgrowth of the life directive to expand, to build toward ever greater complexity along a continuum to infinity: toward what human minds can understand as perfection, omniscience and omnipotence. Whether we humans can ever build a machine on that level I leave to the science-fiction imagination. Machines are not a life form, even though we can call living things a type of machine, self-built from inborn blueprints.

    I welcome the development of advanced computers as tools and auxiliary memory storage, provided human brains can keep up with their maintenance. Without skilled technicians, machines break down. The sport that hackers make of messing up the systems is an encouraging sign that robots will not get the upper hand. You can teach them almost anything but human attributes such as self-esteem, respect, objective ethics, imagination, individualism and love.

    What I would like to see develop from all of our strivings to build better and better machines is a world where there are no threats to our safety and happiness from our fellow humans; where we can all cooperate to conquer the natural dangers, whether from microbes that can wipe out half our population, from meteor collisions that can wipe out most of life, or simply from shortages of the energy on which life depends, which could be solved by finding and building reliable, permanent sources rather than expropriating other humans. No matter how wonderful and intelligent the robots we construct, until the war meme is eradicated they will just become fancier weapons.
  • Posted by $ blarman 9 years ago in reply to this comment.
    Well said. And the crux of the question you ask directly applies to khalling's query: if a metaphysical component does exist, then to create a truly sentient being we would have to understand and control sentience well enough to introduce it into a physical shell. If the metaphysical does not exist, we wouldn't be creating consciousness at all anyway - except by complete accident (see "Short Circuit").

    Can we create a computer capable of vast assembly of facts? Absolutely. Can we create a computer that can then act on those facts? Yes, but the real question, raised by Zenphamy and with which I agree, is the determination of its value set.
  • Posted by CircuitGuy 9 years ago
    " try to imagine an immortal, indestructible robot, an entity which moves and acts, but which cannot be affected by anything, which cannot be changed in any respect, which cannot be damaged, injured or destroyed. "
    If we can imagine this, we can imagine technology that would make human beings indestructible and give them everything they want. Then value wouldn't exist for humans. I wonder if this is why the expulsion from paradise was picked as the first story in the Christian Bible.
  • Posted by Zenphamy 9 years ago in reply to this comment.
    And I often think the 'knowing what' is becoming even more distant than what was assumed or defined 50 years ago, or even later. I did a little work on so-called 'expert' systems in the late '70s and early '80s, and the confusion I encountered was a little astounding, to me.
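
    For context, here is a minimal sketch of what those "expert" systems amounted to: hand-written if-then rules fired repeatedly against a set of known facts (forward chaining). The rules and facts below are invented examples, not taken from any real system.

```python
# Minimal forward-chaining rule engine in the style of 1970s/80s
# "expert systems". Rules and facts are invented examples.

rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    """Fire any rule whose conditions are all known facts,
    until no rule adds anything new."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}, rules))
# derives 'possible_flu', then 'refer_to_doctor'
```

    However elaborate the rule base grows, the engine only ever derives what its authors wrote in, which may be part of the confusion he describes: such systems performed without, in any sense, knowing anything.
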
  • Posted by ProfChuck 9 years ago
    Intelligence and the property of being self-aware both have survival benefits. If we are to accept the fundamental principles of evolution, no physical or, in this case, psychological property exists if it does not serve to facilitate the continued existence of the species, or at least do nothing to hinder that existence. The issue of self-awareness, if that's a word, is a challenging one. We understand what it means, and we each have personal experience that validates the concept, but is it possible to prove to someone else that you are self-aware? In the absence of a Vulcan-type "Mind Meld" it is not clear how one would go about providing such proof. In the case of machine intelligence, how would we prove that a machine was self-aware, considering that it would be programmed to argue that it was? I have worked in the field of AI off and on for nearly 50 years, where the principal goal was the design of an autonomous spacecraft. I have watched the Turing Test become more and more sophisticated, but after 50 years I am no closer to knowing what intelligence or being self-aware really is.
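
    To make the protocol concrete, here is a minimal sketch of the imitation game behind the Turing Test (the responders and the judge are trivial stand-ins, purely illustrative):

```python
# Minimal sketch of the imitation-game protocol behind the Turing Test.
# The responders and judge are trivial stand-ins, purely illustrative.
import random

def human(prompt):
    return "Hmm, let me think about that."

def machine(prompt):
    return "Hmm, let me think about that."  # indistinguishable by construction

def run_trial(prompts, judge):
    """Randomly label the responders A and B; return True if the
    judge correctly names the machine from the transcript alone."""
    a, b = random.sample([human, machine], 2)
    transcript = [(p, a(p), b(p)) for p in prompts]
    guess = judge(transcript)  # 'A' or 'B'
    return guess == ('A' if a is machine else 'B')

naive_judge = lambda transcript: random.choice(['A', 'B'])
trials = [run_trial(["What is courage?"], naive_judge) for _ in range(1000)]
print(sum(trials) / len(trials))  # ~0.5: judged on outward behavior only
```

    Note that nothing in the protocol touches what, if anything, is going on inside either respondent - which is exactly the gap ProfChuck describes.
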
  • Posted by $ WilliamShipley 9 years ago in reply to this comment.
    We're still around. Lately almost everything has been related to politics -- and everyone's down to their last nerve. Hasn't been as much fun.
  • Posted by Zenphamy 9 years ago in reply to this comment.
    We think we see possible signs of forms of consciousness in 'some' so-called lower animals, but that may very well be the result of our failure so far to accurately define what consciousness is. As to Borgs, the addition of mechanical parts to humans is happening and will continue to progress; I have no doubt of that at all. But remember that the problem with the Borgs was the hive mind--not necessarily the joining of mechanical and biological.
  • Posted by Zenphamy 9 years ago in reply to this comment.
    I see no need to address, as you put it, "a metaphysical component, e.g. a soul"... If such a thing (more a non-thing) existed or is, we could never replicate it, because of its supposed definition.

    The change that's happening in neurology that I mentioned above boils down to recognition from various specialties that the analogy of the brain to computers is an error, or at least very incomplete. As I also mentioned, I see no impossibility in our eventual ability to build a computing system with as much intelligence as the human brain/mind, or more. But that won't address the question arising from the topic of the post--assigning value and making judgements that fit an ethic that is also self-determined and self-directed--at least not one that we might recognize.

    I just happen to think that there exist truths and facts, maybe even functions within the neurological system and the mind that develops in it, that we simply don't understand yet, and I think part of that is the determination of a more exact definition and understanding of consciousness itself.
  • Posted by lrshultis 9 years ago
    Rand was dealing with human life and values. Values do not necessarily imply consciousness or even awareness. Plants and most non-human animals have values but have no self-awareness as to why they seek them. Even autonomous machines would have values which had to be gained. Just a power source would be vital and would need to be periodically supplied from the environment.
    For most of the time humans have been on Earth, there has been no conscious awareness of values. That is something that had to be learned through thinkers, in religion at first and later in philosophy through rational thought.
    It would be no more possible to know whether some seemingly conscious robot is self-aware than it is to know whether an animal is self-aware, humans excepted. There are no standards to judge the matter other than considering one's own awareness of self-awareness. Even if a machine could pass the Turing Test, there would be doubt in the matter.
  • Posted by DrZarkov99 9 years ago
    We are still discovering that many life forms lower than humans exhibit self-awareness, or what we call consciousness. When we don't really have a solid scientific basis for determining what consciousness actually is, and are fixated on an anthropomorphic definition, will we actually know when an AI entity achieves its own unique form of awareness?

    We don't clearly understand how our own consciousness functions, and our efforts to emulate human intelligence in machines do not map the way our brain works. Once machines are enabled to develop adaptive software and break the bonds of human restrictions, we may be surprised to find there's a new definition of consciousness alien to our own.
  • Posted by $ jdg 9 years ago
    I don't buy that last paragraph at all.

    Suppose we were to upload your mind, your consciousness, into that robot.[1] I'm sure that your outlook on life would change in some ways, because you'd be invincible, or at least indestructible. Nevertheless, I expect that you would choose some goals and pursue them. Even though your existence would no longer be in question, your enjoyment of it still would be, and I expect you would find at least some trades with others worthwhile. You would still be a person. (Those who doubt this, please read Minsky's Society of Mind.)

    For what it's worth, though I lean in the direction of transhumanism, I am not in any hurry to try to create artificial intelligences, because while I'm sure they will have goals, I'm not at all sure they will be willing to live-and-let-live with humans.
    --
    [1] The transhumanists actually hope to do this eventually, though they do not expect their 'bots to be indestructible. I have no opinion yet on whether it will be possible.
  • Posted by $ CBJ 9 years ago in reply to this comment.
    I don't see any artificial intelligence there, just lots of natural stupidity.
  • Posted by $ WilliamShipley 9 years ago in reply to this comment.
    Certainly there is a physical component to perception, where physical reality is converted into electrical and chemical signals for transmission to the brain. That physical component can already be constructed with even more capability than humans have: AI can see a broader spectrum and hear a wider range of sounds. Smell is still quite difficult, especially matching such excellent capabilities as the family dog's.

    The portion of perception that is based on processing the data is still beyond us, but as we understand how we work, we will be able to mimic it.

    Many years ago, I wrote a primitive chess playing program. It was amazing how fast it became necessary to play chess against it rather than try to predict the result of the algorithms I had written.
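
    For the curious, the scheme behind such programs is easy to state but quickly impossible to trace by hand. Here is a minimal negamax game-tree search in Python, the classic approach used by early chess programs; the toy game and helper functions below are illustrative stand-ins, not the original program:

```python
# Generic negamax game-tree search, the classic scheme behind early
# chess programs. The toy game below (one-pile Nim) is an illustrative
# stand-in; any two-player zero-sum game fits.

def negamax(state, depth, legal_moves, apply_move, evaluate):
    """Return (score, best_move) from the side-to-move's viewpoint."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_score, best_move = float("-inf"), None
    for move in moves:
        child = apply_move(state, move)
        # The opponent's best reply, negated: what is good for the
        # opponent is bad for us.
        score = -negamax(child, depth - 1, legal_moves, apply_move, evaluate)[0]
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

# Toy game: a pile of stones, take 1 or 2 per turn, taking the last wins.
def legal_moves(n):
    return [m for m in (1, 2) if m <= n]

def apply_move(n, m):
    return n - m

def evaluate(n):
    return -1 if n == 0 else 0  # no stones left: side to move has lost

print(negamax(4, 8, legal_moves, apply_move, evaluate))  # (1, 1): take one stone
```

    Even in this toy, the number of positions examined grows exponentially with depth; at chess branching factors the tree outgrows hand-tracing within two or three plies, which is presumably why playing against the program became easier than predicting it.
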
  • Posted by $ CBJ 9 years ago in reply to this comment.
    Who cares? We're toast in a billion years anyway. I plan to put my remaining 999,999,927 years to good use while I still can. :-)
  • Posted by $ WilliamShipley 9 years ago in reply to this comment.
    The three laws are the result of discussions between Asimov and John W. Campbell, his editor. Asimov credits Campbell with stating them; Campbell claimed they were inherent in Asimov's stories and that he had just pointed them out.

    While they are nice literary concepts and something to guide us, actually implementing them would be incredibly difficult with many opportunities for exceptions.
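
    As a toy illustration of that difficulty, here is a deliberately naive encoding of the laws as a flat chain of vetoes in Python (the predicate functions are hypothetical placeholders; deciding in general whether an action "harms a human" is itself the hard, unsolved problem):

```python
# A deliberately naive encoding of the Three Laws as a flat chain of
# vetoes, to show how quickly implementation gets difficult. The
# predicates are hypothetical placeholders.

def harms_human(action):
    # Placeholder. Note it cannot tell doing harm from *allowing*
    # harm through inaction, which the First Law also forbids.
    return action in {"push_human"}

def endangers_self(action):
    return action in {"enter_fire"}  # placeholder for Third Law risk

def permitted(action, orders):
    """Reject an action the moment any law objects, in order."""
    if harms_human(action):              # First Law
        return False
    if orders and action not in orders:  # Second Law: obey orders
        return False
    if endangers_self(action):           # Third Law: self-preservation
        return False
    return True

# Built-in bug: an explicit order to enter the fire should override
# self-preservation (the Second Law outranks the Third), but the flat
# veto chain refuses it anyway.
print(permitted("enter_fire", orders={"enter_fire"}))  # False - wrongly
```

    Every repair to that one bug opens others - priorities among humans, uncertain harm, conflicting orders - which is roughly the point: the laws are literary guidance, not an executable specification.
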
  • Posted by $ WilliamShipley 9 years ago in reply to this comment.
    The answer to the question as to whether a robot can have consciousness lies in another, long-debated question: are we entirely physical beings, or is there a metaphysical component, e.g. a soul, present?

    If you believe that we are entirely physical beings, then the nature of how we operate is a real physical phenomenon capable of being perceived and eventually replicated. If that is possible, however difficult the process, then eventually we will be able to create an intelligence that is indistinguishable from our own.

    To deal with Rand's example we have to go further and imagine that this creation is also indestructible and immortal, both of which are absolute concepts that in real terms may never be actualized.

    The only way that we would be unable to physically replicate the human mind is if it has a non-physical component not capable of being perceived by our senses or understood by rational analysis.
  • Posted by $ CBJ 9 years ago in reply to this comment.
    What's really scary is that this might be an improvement over our current situation.
