Questions About Artificial Intelligence
"To challenge the basic premise of any discipline, one must begin at the beginning. In ethics, one must begin by asking: What are values? Why does man need them?
“Value” is that which one acts to gain and/or keep. The concept “value” is not a primary; it presupposes an answer to the question: of value to whom and for what? It presupposes an entity capable of acting to achieve a goal in the face of an alternative. Where no alternative exists, no goals and no values are possible.
I quote from Galt’s speech: “There is only one fundamental alternative in the universe: existence or nonexistence—and it pertains to a single class of entities: to living organisms. The existence of inanimate matter is unconditional, the existence of life is not: it depends on a specific course of action. Matter is indestructible, it changes its forms, but it cannot cease to exist. It is only a living organism that faces a constant alternative: the issue of life or death. Life is a process of self-sustaining and self-generated action. If an organism fails in that action, it dies; its chemical elements remain, but its life goes out of existence. It is only the concept of ‘Life’ that makes the concept of ‘Value’ possible. It is only to a living entity that things can be good or evil.”
To make this point fully clear, try to imagine an immortal, indestructible robot, an entity which moves and acts, but which cannot be affected by anything, which cannot be changed in any respect, which cannot be damaged, injured or destroyed. Such an entity would not be able to have any values; it would have nothing to gain or to lose; it could not regard anything as for or against it, as serving or threatening its welfare, as fulfilling or frustrating its interests. It could have no interests and no goals." - Ayn Rand, The Virtue of Selfishness
Questions regarding Objectivist ethics. Discuss.
It has no connection to, or concern for, existence or non-existence; it does not "know" life or death.
I am contemplating this, and I think of weaponized drones and the ability of man to destroy while remaining disconnected from the horror, and disconnected from his conscience of that destruction.
Would you consider a war drone as a type of robot?
If so could one ever have a conscience?
The real question I perceive is consciousness, which we still have difficulty fully defining. Does consciousness arise from intelligence? I think not. The most basic concept of consciousness, in my opinion, is awareness of self, separate from everything else in the environment, even while recognizing similarities to other entities and that components of self are available to and used by other entities as well. Consciousness utilizes intelligence as it also utilizes its senses, its motive ability, its memory, its ability to reason, its curiosity, etc. Consciousness recognizes the needs of self, and from that can then develop its ethics and values.

Can Rand's robot have consciousness from which its own values and ethics derive? I don't see how, unless one considers consciousness as arising from an increasing complexity or accumulation of parts or programmed instruction sets. That mechanized model of consciousness has fallen out of favor with many who study that area, in neurology at least.
So I think what we are really talking about is: can we design and build an artificial consciousness, or perhaps, should we if we can? And if we did, could we impose ethics and values that would align with ours, or would that artificial consciousness develop its own and take its own path? Even more, if we could impose ours, what would be the ethical impact on ourselves of enslaving another consciousness?
Until we develop a better, more complete knowledge and understanding of consciousness, I think we're trying to find our way in the dark with a dimming flashlight.
But a great post that energizes some deep thought. Thanks.