Questions About Artificial Intelligence
"To challenge the basic premise of any discipline, one must begin at the beginning. In ethics, one must begin by asking: What are values? Why does man need them?
“Value” is that which one acts to gain and/or keep. The concept “value” is not a primary; it presupposes an answer to the question: of value to whom and for what? It presupposes an entity capable of acting to achieve a goal in the face of an alternative. Where no alternative exists, no goals and no values are possible.
I quote from Galt’s speech: “There is only one fundamental alternative in the universe: existence or nonexistence—and it pertains to a single class of entities: to living organisms. The existence of inanimate matter is unconditional, the existence of life is not: it depends on a specific course of action. Matter is indestructible, it changes its forms, but it cannot cease to exist. It is only a living organism that faces a constant alternative: the issue of life or death. Life is a process of self-sustaining and self-generated action. If an organism fails in that action, it dies; its chemical elements remain, but its life goes out of existence. It is only the concept of ‘Life’ that makes the concept of ‘Value’ possible. It is only to a living entity that things can be good or evil.”
To make this point fully clear, try to imagine an immortal, indestructible robot, an entity which moves and acts, but which cannot be affected by anything, which cannot be changed in any respect, which cannot be damaged, injured or destroyed. Such an entity would not be able to have any values; it would have nothing to gain or to lose; it could not regard anything as for or against it, as serving or threatening its welfare, as fulfilling or frustrating its interests. It could have no interests and no goals." - Ayn Rand, The Virtue of Selfishness
Questions regarding objectivist ethics. Discuss.
Without contemplating the advances brought to us by Moore's Law, he made a remarkably accurate prediction. We still don't know enough about AI, or its natural counterpart, to say for sure. It's going to be an interesting 42 years :^)
Imagine the difficulty of training a robot, through hand-written code, to recognize a cat. What's happening now is that the robot observes pictures of 1,000 cats and then decides on its own how to determine whether another picture, or a live animal, is a cat.
This is the real singularity fear people should have: that kind of robot would have no morality other than what IT learned on its own.
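The "learns from examples instead of rules" idea above can be sketched as a minimal nearest-neighbor classifier. This is a toy illustration, not any particular system: the feature vectors and labels are invented stand-ins for real cat images.

```python
import math

# Toy "training set": 2-D feature vectors standing in for images.
# The features and values are purely illustrative.
TRAINING = [
    ((0.9, 0.8), "cat"),
    ((0.85, 0.9), "cat"),
    ((0.1, 0.2), "not-cat"),
    ((0.2, 0.1), "not-cat"),
]

def classify(features):
    """Label a new example by its nearest training example (1-NN).

    No human wrote a rule for "what a cat is"; the decision comes
    entirely from the examples the machine was shown.
    """
    nearest = min(TRAINING, key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

print(classify((0.8, 0.85)))   # lands near the "cat" examples
print(classify((0.15, 0.15)))  # lands near the "not-cat" examples
```

Real systems use millions of images and learned features rather than hand-picked numbers, but the principle is the same: the boundary between "cat" and "not cat" is inferred from data, not coded.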
a machine or construct with awareness???...that can affect reality???...can you imagine it???
Was the robot programmed by another robot using 100% logic? i.e., the Borg?
Was the robot programmed like Samaritan or "The Machine" on Person of Interest?
Garbage in, garbage out is the primary premise of all AI, which, ironically, is the same as with our brains.
Does the robot have a primary premise or a hard-coded goal? Refer to Star Trek: The Motion Picture, or the episode of the original series with Nomad.
Isaac Asimov provides enlightening views on programming robots and turning them loose under his Three Laws of Robotics.
I, Robot and Bicentennial Man (with Robin Williams) are two great movies to provide some context. The Terminator series and The Matrix series are also wonderful examples of the potential negative impact of powerful machines turned loose.
I like the difference between "Knowledge" and "Wisdom." Zenphamy mentions this in his comment.
Knowledge: the taking in of information.
Wisdom: the practical application of knowledge.
Personally, I like Isaac Asimov.
Quoting again "To make this point fully clear, try to imagine an immortal, indestructible robot, an entity which moves and acts, but which cannot be affected by anything, which cannot be changed in any respect, which cannot be damaged, injured or destroyed. Such an entity would not be able to have any values; it would have nothing to gain or to lose; it could not regard anything as for or against it, as serving or threatening its welfare, as fulfilling or frustrating its interests. It could have no interests and no goals."
That premise in itself would make the Robot a giant useless paperweight.
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Asimov 1984)
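The Three Laws are a strict priority ordering: a lower law only applies when no higher law is violated. A toy sketch of that ordering (not from Asimov; all field names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action, described by illustrative yes/no properties."""
    harms_human: bool = False       # would directly injure a human
    allows_harm: bool = False       # inaction that lets harm occur
    ordered_by_human: bool = False  # a human commanded this action
    self_destructive: bool = False  # endangers the robot itself

def permitted(action: Action) -> bool:
    # First Law outranks everything: harm to humans vetoes the action.
    if action.harms_human or action.allows_harm:
        return False
    # Second Law: obey human orders (First Law already cleared above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, lowest priority.
    return not action.self_destructive

# Orders outrank self-preservation (Second Law over Third):
print(permitted(Action(ordered_by_human=True, self_destructive=True)))  # True
# But no order can override the First Law:
print(permitted(Action(harms_human=True, ordered_by_human=True)))       # False
```

Asimov's stories are, of course, largely about how such tidy rules break down in practice; the sketch only shows the priority structure, not a workable ethics.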