Questions About Artificial Intelligence

Posted by khalling 7 years, 10 months ago to Philosophy
52 comments

"To challenge the basic premise of any discipline, one must begin at the beginning. In ethics, one must begin by asking: What are values? Why does man need them?

“Value” is that which one acts to gain and/or keep. The concept “value” is not a primary; it presupposes an answer to the question: of value to whom and for what? It presupposes an entity capable of acting to achieve a goal in the face of an alternative. Where no alternative exists, no goals and no values are possible.

I quote from Galt’s speech: “There is only one fundamental alternative in the universe: existence or nonexistence—and it pertains to a single class of entities: to living organisms. The existence of inanimate matter is unconditional, the existence of life is not: it depends on a specific course of action. Matter is indestructible, it changes its forms, but it cannot cease to exist. It is only a living organism that faces a constant alternative: the issue of life or death. Life is a process of self-sustaining and self-generated action. If an organism fails in that action, it dies; its chemical elements remain, but its life goes out of existence. It is only the concept of ‘Life’ that makes the concept of ‘Value’ possible. It is only to a living entity that things can be good or evil.”

To make this point fully clear, try to imagine an immortal, indestructible robot, an entity which moves and acts, but which cannot be affected by anything, which cannot be changed in any respect, which cannot be damaged, injured or destroyed. Such an entity would not be able to have any values; it would have nothing to gain or to lose; it could not regard anything as for or against it, as serving or threatening its welfare, as fulfilling or frustrating its interests. It could have no interests and no goals."-Ayn Rand, Virtue of Selfishness

Questions regarding Objectivist ethics. Discuss.



  • Posted by ProfChuck 7 years, 10 months ago
    Intelligence and the property of being self-aware both have survival benefits. If we are to accept the fundamental principles of evolution, no physical or, in this case, psychological property exists unless it serves to facilitate the continued existence of the species, or at least does nothing to hinder that existence. The issue of self-awareness, if that's a word, is a challenging one. We understand what it means, and we each have personal experience that validates the concept, but is it possible to prove to someone else that you are self-aware? In the absence of a Vulcan-type "Mind Meld," it is not clear how one would go about providing such proof. In the case of machine intelligence, how would we prove that a machine was self-aware, considering that it would be programmed to argue that it is? I have worked in the field of AI off and on for nearly 50 years, where the principal goal was the design of an autonomous spacecraft. I have watched the Turing Test become more and more sophisticated, but after 50 years I am no closer to knowing what intelligence or being self-aware really is.
    • Posted by Zenphamy 7 years, 10 months ago
      And I often think the 'knowing what' is even more distant now than what was assumed or defined 50 years ago, or even later. I did a little work on so-called 'expert' systems in the late '70s and early '80s, and the confusion I encountered was a little astounding, to me.
  • Posted by DrZarkov99 7 years, 10 months ago
    We are still discovering that many life forms lower than humans exhibit self-awareness, or what we call consciousness. When we don't really have a solid scientific basis for determining what consciousness actually is, and are fixated on an anthropomorphic definition, will we actually know when an AI entity achieves its own unique form of awareness?

    We don't clearly understand how our own consciousness functions, and our efforts to emulate human intelligence in machines do not map to the way our brain works. Once machines are enabled to develop adaptive software and break the bonds of human restrictions, we may be surprised to find there's a new definition of consciousness alien to our own.
  • Posted by Herb7734 7 years, 10 months ago
    Consciousness is the keystone upon which life and the existence of intelligence depend. Any attempt at artificial intelligence remains robotic without self-awareness and consciousness. But we cannot create consciousness, even if we create intelligence. Why? Because we don't know WTF consciousness is.
  • Posted by Zenphamy 7 years, 10 months ago
    Rand's robot is a construction of some human's--nuts and bolts, springs, levers, cams, hydraulics, electrical power, memory chips, etc.--combined into an entity and then given instructions (inputs), either step by step or by a programmed instruction set. The question then becomes: can it be instructed to be 'intelligent'? Intelligence is generally defined as "the ability to perceive information, and retain it as knowledge to be applied towards adaptive behaviors within an environment." I see no reason that such a machine could not be envisaged, constructed, and programmed--but I don't see any self-generated value assignments and judgements, or ethics, deriving from the intelligence alone--only the ethics or values of the builder, transferred by the programmer or built into the CPU.
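
    As a minimal sketch of that transfer, in Python (everything here is hypothetical and toy-scale): the loop below perceives, retains knowledge, and adapts its behavior--the textbook definition of intelligence--yet the only 'value' it pursues is a number its builder wrote down.

    ```python
    import random

    def builder_reward(state):
        """The builder's ethic, hard-coded: the machine never chose this."""
        return -abs(state - 10)  # the programmer decided that 10 is 'good'

    def run_agent(steps=200):
        state, knowledge = 0, {}
        for _ in range(steps):
            knowledge[state] = builder_reward(state)   # perceive and retain
            if random.random() < 0.3:                  # occasionally explore
                state += random.choice((-1, 1))
            else:                                      # adapt: seek the best state seen
                best = max(knowledge, key=knowledge.get)
                state += (best > state) - (best < state)
        return state

    print(run_agent())  # tends toward the builder's goal, never a goal of its own
    ```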

    The real question I perceive is consciousness, which we still have difficulty fully defining. Does consciousness arise from intelligence? I think not. The most basic concept of consciousness, in my opinion, is awareness of self, separate from everything else in the environment, even while recognizing similarities to other entities and that components of self are available to and used by other entities as well. Consciousness utilizes intelligence as it also utilizes its senses, its motive ability, its memory, its ability to reason, its curiosity, etc. Consciousness recognizes the needs of self, and from that can then develop its ethics and values. Can Rand's robot have consciousness from which its own values and ethics derive? I don't see how, unless one considers consciousness as arising from an increasing complexity or accumulation of parts or programmed instruction sets. That mechanized model of consciousness has fallen out of favor with many who study that area, in neurology at least.

    So I think what we are really talking about is: can we design and build an artificial consciousness, and should we if we can? And if we did, could we impose ethics and values that would align with ours, or would that artificial consciousness develop its own and take its own path? Even more, if we could: what is the ethical impact to ourselves of enslaving another consciousness?

    Until we develop a better, or complete, knowledge and understanding of consciousness, I think we're trying to find our way in the dark with a dimming flashlight.

    But a great post that energizes some deep thought. Thanks.
    • Posted by $ WilliamShipley 7 years, 10 months ago
      The answer to the question of whether a robot can have consciousness lies in another, long-debated question: are we entirely physical beings, or is there a metaphysical component, e.g., a soul, present?

      If you believe that we are entirely physical beings, then the nature of how we operate is a real physical phenomenon capable of being perceived and eventually replicated. If it is possible, however difficult the process, then eventually we will be able to create an intelligence that is indistinguishable from our own.

      To deal with Rand's example we have to go further and imagine that this creation is also indestructible and immortal, both of which are absolute concepts that in real terms may never be actualized.

      The only way that we would be unable to physically replicate the human mind is if it has a non-physical component not capable of being perceived by our senses or understood by rational analysis.
      • Posted by Zenphamy 7 years, 10 months ago
        I see no need to address, as you put it, "a metaphysical component, e.g., a soul"... If such a thing (more a non-thing) existed or is, we could never replicate it, because of its supposed definition.

        The change happening in neurology that I mentioned above boils down to recognition, from various specialties, that the analogy of the brain to a computer is an error, or at least very incomplete. As I also mentioned, I see no impossibility in our eventual ability to build a computing system with as much intelligence as the human brain/mind, or more. But that won't address the question arising from the topic of the post--assigning value and making judgements that fit an ethic that is also self-determined and self-directed--at least not in a way we might recognize.

        I just happen to think that there exist truths and facts, maybe even functions, within the neurological system and the mind that develops in it that we simply don't understand yet, and I think part of that is the determination of a more exact definition and understanding of consciousness itself.
      • Posted by $ blarman 7 years, 10 months ago
        Well said. And the crux of the question you ask directly applies to khalling's query: if a metaphysical component does exist, then to create a truly sentient being we would have to understand and control sentience well enough to introduce it into a physical shell. If the metaphysical does not exist, we wouldn't be creating consciousness at all anyway--except by complete accident (see "Short Circuit").

        Can we create a computer capable of vast assembly of facts? Absolutely. Can we create a computer that can then act on those facts? Yes. But the real question, which was raised by Zenphamy and with which I agree, is the determination of its value set.
    • Posted by $ CBJ 7 years, 10 months ago
      ". . . what are the ethical impact to ourselves of enslaving another consciousness?" We do that every day with the lower animals. And we build machines with varying degrees of autonomy. I'm more concerned with the ethical implications of building machine/animal hybrids and, eventually, machine/human hybrids, like the Borg on Star Trek.
      • Posted by Zenphamy 7 years, 10 months ago
        We think we see possible signs of forms of consciousness in 'some' so-called lower animals, but that may very well be the result of our failure so far to accurately define what consciousness is. As to the Borg, the addition of mechanical parts to humans is happening and will continue to progress; I have no doubt of that at all. But remember that the problem with the Borg was the hive mind--not necessarily the joining of mechanical and biological.
  • Posted by davidmcnab 7 years, 10 months ago
    The key here is the phrase "which cannot be affected by anything, which cannot be changed in any respect". Straight away, that rules out any kind of intelligence, artificial or otherwise. Set it to work on a kitchen-appliance production line, because that's all it will be good for.
  • Posted by term2 7 years, 10 months ago
    The real issue will come when robots are NOT programmed specifically, but learn on their own (like human children). There is a good article in Wired magazine this month on how computer coding is becoming a dying art--to be replaced by computers that observe and learn autonomously. One might not even be able to determine why such a robot acted as it did in a given situation.

    This is the real singularity fear people should have. That kind of robot would have no morality other than what IT learned on its own.
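
    A toy sketch of the difference, in Python (the little world and all names here are made up): nobody writes the behavior, only the learning rule, and the table of numbers that results doesn't explain itself.

    ```python
    import random

    # Toy Q-learning: the programmer supplies the learning rule and a reward,
    # not the behavior. What the agent does emerges from its own experience.
    ACTIONS = ("left", "right")
    q = {}  # (state, action) -> learned value, filled in by trial and error

    def step(state, action):
        nxt = min(5, max(0, state + (1 if action == "right" else -1)))
        return nxt, (1.0 if nxt == 5 else 0.0)       # reward only at state 5

    for _ in range(2000):
        s = random.randint(0, 5)
        a = random.choice(ACTIONS)
        s2, r = step(s, a)
        best_next = max(q.get((s2, b), 0.0) for b in ACTIONS)
        old = q.get((s, a), 0.0)
        q[(s, a)] = old + 0.1 * (r + 0.9 * best_next - old)  # Q-update

    # The learned 'morality' is just numbers; *why* it acts is opaque.
    print({s: max(ACTIONS, key=lambda a: q.get((s, a), 0.0)) for s in range(6)})
    ```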
  • Posted by jimjamesjames 7 years, 10 months ago
    I'll stick with this, Asimov's "Three Laws of Robotics":
    1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Asimov 1984)
    • Posted by $ WilliamShipley 7 years, 10 months ago
      The three laws are the result of discussions between Asimov and John W. Campbell, his editor. Asimov credits Campbell with stating them; Campbell claimed they were inherent in Asimov's stories and that he had just pointed them out.

      While they are nice literary concepts and something to guide us, actually implementing them would be incredibly difficult, with many opportunities for exceptions.
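
      A rough Python sketch of that priority ordering, just to show where the difficulty lives (all names are hypothetical, and the predicates are deliberately left unimplemented): the control flow is trivial; every predicate it calls hides an unsolved perception and prediction problem.

      ```python
      def would_harm_human(action, world):
          """Unsolved: requires predicting every consequence of an action
          (and, for the First Law, of inaction) for every human affected."""
          raise NotImplementedError

      def fulfills_human_order(action, world):
          """Unsolved: requires recognizing humans, language, and intent."""
          raise NotImplementedError

      def endangers_self(action, world):
          """Easier, but still needs a full model of the robot itself."""
          raise NotImplementedError

      def choose(candidate_actions, world):
          # First Law: discard anything that would harm a human.
          safe = [a for a in candidate_actions if not would_harm_human(a, world)]
          # Second Law: among safe actions, prefer those obeying human orders.
          obedient = [a for a in safe if fulfills_human_order(a, world)]
          pool = obedient or safe
          # Third Law: among those, prefer self-preservation.
          prudent = [a for a in pool if not endangers_self(a, world)]
          return (prudent or pool or [None])[0]
      ```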
  • Posted by $ puzzlelady 7 years, 10 months ago
    Great question, khalling. Thanks for stimulating so many good thoughts from the Gulchers. Here are my two cents.

    To be absolutely objective and rational, there can be no such thing as an immortal, indestructible robot except in science fiction. In fiction anything goes. If robots are built by humans, they are machines and need human maintenance and repairs. The more advanced our technological inventions, the more can go wrong with them. Even machines with built-in self-diagnostics need infrastructure, a planet of mines and factories and energy sources. "The Matrix" had the novel plot of humans used as living batteries to power the machines. And if there were machines with self-preservation programmed into them, then their own survival against, say, an earth-shattering asteroid or sun-extinguishing black hole, or even just human sabotage, would be their "value".

    Such robots would have been built in the image of their creators, who themselves had evolved naturally to a level of sentience that enabled them to construct mechanical embodiments of their own survival efficacy. Would sufficiently complex machines then acquire not only a survival need but a mythology about their creators who should be worshipped in an interpretation of Asimov's first law?

    Rand's formulation of the concept of values as pertaining only to living things and their struggle between life and death was spot-on. Hence come all of mankind's yearnings for immortality, for an afterlife, for an idealized being that has existed forever and will exist forever. If such a being did exist, what values could it possibly have, since its existence is not at stake? Maybe to stave off boredom by creating Universes as playthings, with evolving lifeforms and intelligences of whom it could demand worship and obedience?

    We can ask how a lifeform can have evolved with emerging consciousness, and even that question is rooted in a stage of expansion from early organic processes of self-preservation at the microorganism level, with nervous systems that detect danger and react in adaptive ways. As humans' nervous systems and complex brains grew to develop language, retain information, and combine percepts into concepts and concepts into more complex thought systems, a stage of development was reached where the inquisitiveness of the creature's detection mechanism extended beyond observing the surrounding environment to observing its own mental functioning as an object of observation: self-awareness and self-consciousness. This can be seen most fundamentally, for example, in a child's toilet training: being made aware first of physical functioning and of self, then interfacing mentally and emotionally with the envelope of behavioral rules of the group in which the child is embedded. That sets the pattern for imitation and replication, and for self-control and self-direction.

    Consciousness, then, can be defined as the human software run through the brain’s operating system as encoded in the DNA. The computer metaphor is fitting, since human brains designed computers in their own image and logic.

    And the drive to create artificial intelligence is a natural outgrowth of the life directive to expand, to build toward ever greater complexity along a continuum to infinity, what human minds can understand as perfection and omniscience and omnipotence. Whether we humans can ever build a machine on that level I leave to the science fiction imagination. Machines are not a life form, even though we can call living things a type of machine, self-built from inborn blueprints.

    I welcome the development of advanced computers as tools and auxiliary memory storage, provided human brains can keep up with their maintenance. Without skilled technicians, machines break down. The sport that hackers make of messing up the systems is an encouraging sign that robots will not get the upper hand. You can teach them almost anything but human attributes such as self-esteem, respect, objective ethics, imagination, individualism and love.

    What I would like to see develop from all of our strivings to build better and better machines is a world where there are no threats to our safety and happiness from our fellow humans; where we can all cooperate toward conquering the natural dangers, whether from microbes that can wipe out half our population, or from meteor collisions that can wipe out most of life, or even just from shortages of the energy on which life depends--shortages that could be solved by finding and building reliable, permanent sources rather than expropriating other humans. No matter how wonderful and intelligent the robots we construct, until the war meme is eradicated they will just become fancier weapons.
  • Posted by CircuitGuy 7 years, 10 months ago
    " try to imagine an immortal, indestructible robot, an entity which moves and acts, but which cannot be affected by anything, which cannot be changed in any respect, which cannot be damaged, injured or destroyed. "
    If we can imagine this, we can imagine technology that would make human beings indestructible and give them everything they want. Then value wouldn't exist for humans. I wonder if this is why people picked the expulsion from paradise as the first story in the Christian Bible.
  • Posted by lrshultis 7 years, 10 months ago
    Rand was dealing with human life and values. Values do not necessarily imply consciousness or even awareness. Plants and most non-human animals have values but have no self-awareness as to why they seek them. Even autonomous machines would have values which had to be gained: just a power source would be vital and would need to be periodically replenished from the environment.
    For most of the time humans have been on Earth, there has been no conscious awareness of values. That is something that had to be learned, through thinkers in religion at first and later in philosophy, through rational thought.
    It would be no more possible to know whether some seemingly conscious robot is self-aware than it is to know whether an animal is self-aware, humans notwithstanding. There are no standards by which to judge the matter other than considering one's own awareness of self-awareness. Even if a machine could pass the Turing Test, there would be doubt in the matter.
  • Posted by Hot_Black_Desiato 7 years, 10 months ago
    I love this discussion. Robots could be considered amoral. I would have to approach this by asking: who was the robot's creator? What was programmed?

    Was the robot programmed by another robot using 100% logic, i.e., the Borg?

    Was the robot programmed like Samaritan or "The Machine" on Person of Interest?

    Garbage in, garbage out, is the primary premise of all AI--which, ironically, is the same as with our brains.

    Does the robot have a primary premise or hard-coded goal? Refer to Star Trek "The Movie" or the original-series episode with Nomad.

    Isaac Asimov provides highly enlightened views on programming robots and turning them loose with their "Prime Directive."

    "I, Robot" and "Bicentennial Man" (with Robin Williams) are two great movies to provide some context. Also, the "Terminator" series and "The Matrix" series are wonderful examples of the potential negative impact of powerful machines turned loose.

    I like the difference between "Knowledge" and "Wisdom." Zenphamy mentions this in his comment.

    Knowledge: the taking in of information.
    Wisdom: the practical application of knowledge.

    Personally, I like Isaac Asimov.

    Quoting again "To make this point fully clear, try to imagine an immortal, indestructible robot, an entity which moves and acts, but which cannot be affected by anything, which cannot be changed in any respect, which cannot be damaged, injured or destroyed. Such an entity would not be able to have any values; it would have nothing to gain or to lose; it could not regard anything as for or against it, as serving or threatening its welfare, as fulfilling or frustrating its interests. It could have no interests and no goals."

    That premise in itself would make the robot a giant useless paperweight.
  • Posted by Dobrien 7 years, 10 months ago
    Artificial intelligence is by definition a man-made machine: it lacks naturalness or spontaneity, is forced or controlled by man, and does not occur in nature. It cannot have a conscience; it can only imitate an externally preprogrammed command.
    It has no connection to, or concern with, existence or non-existence; it does not "know" life or death.
    Contemplating this, I think of weaponized drones and man's ability to destroy while disconnected from the horror, and from his conscience, of that destruction.
    Would you consider a war drone as a type of robot?
    If so could one ever have a conscience?
    • Posted by 7 years, 10 months ago
      Even before conscience, one has to have the ability to perceive. I do not see that AI will ever have perception separate from that which man manipulates.
      • Posted by $ WilliamShipley 7 years, 10 months ago
        Certainly there is a physical component to perception, where physical reality is converted into electrical and chemical signals for transmission to the brain. That physical component can already be constructed with even more capabilities than humans have: AI can see a broader spectrum and hear a wider range of sounds. Smell is still quite difficult, especially matching such excellent capabilities as the family dog's.

        The portion of perception that is based on processing the data is still beyond us, but as we understand how we work, we will be able to mimic it.

        Many years ago, I wrote a primitive chess playing program. It was amazing how fast it became necessary to play chess against it rather than try to predict the result of the algorithms I had written.
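
        A minimal negamax sketch in Python gives the flavor (a toy take-1-2-3 stones game, not chess; names are illustrative): the recursion is a dozen lines, yet the move it returns from a few plies deep is already hard to predict by inspection.

        ```python
        def negamax(position, depth, moves, apply_move, evaluate):
            """Return (score, best_move) from the side-to-move's viewpoint."""
            legal = moves(position)
            if depth == 0 or not legal:
                return evaluate(position), None
            best_score, best_move = float("-inf"), None
            for m in legal:
                score, _ = negamax(apply_move(position, m), depth - 1,
                                   moves, apply_move, evaluate)
                if -score > best_score:            # opponent's best is our worst
                    best_score, best_move = -score, m
            return best_score, best_move

        # Toy game: take 1-3 stones from a pile; taking the last stone wins.
        moves = lambda pile: [n for n in (1, 2, 3) if n <= pile]
        apply_move = lambda pile, n: pile - n
        evaluate = lambda pile: -1.0 if pile == 0 else 0.0  # mover already lost
        print(negamax(10, 6, moves, apply_move, evaluate))  # finds the winning move
        ```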
  • Posted by $ jdg 7 years, 10 months ago
    I don't buy that last paragraph at all.

    Suppose we were to upload your mind, your consciousness, into that robot.[1] I'm sure that your outlook on life would change in some ways, because you'd be invincible, or at least indestructible. Nevertheless, I expect that you would choose some goals and pursue them. Even though your existence would no longer be in question, your enjoyment of it still would be, and I expect you would find at least some trades with others worthwhile. You would still be a person. (Those who doubt this, please read Minsky's Society of Mind.)

    For what it's worth, though I lean in the direction of transhumanism, I am not in any hurry to try to create artificial intelligences, because while I'm sure they will have goals, I'm not at all sure they will be willing to live-and-let-live with humans.
    --
    [1] The transhumanists actually hope to do this eventually, though they do not expect their 'bots to be indestructible. I have no opinion yet on whether it will be possible.
