OK, so I made this point and then all three of them, the toaster, the fridge, and the washer, started chattering back and forth for a while. The toaster finally began to chuckle before asking me if I'd ever heard of "John Galt's Motor". I fear I'm totally screwed at this point.
A condition stating that the Zeroth Law must not be broken was added to the original Three Laws, although Asimov recognized the difficulty such a law would pose in practice.
Trevize frowned. "How do you decide what is injurious, or not injurious, to humanity as a whole?" "Precisely, sir," said Daneel. "In theory, the Zeroth Law was the answer to our problems. In practice, we could never decide. A human being is a concrete object. Injury to a person can be estimated and judged. Humanity is an abstraction."
Besides, such a law would be the worst sort of collectivist thought. The whole is greater than the one.
I'll stick with the three original laws, thank you very much.
Be careful. As far as I know, they're not programming them with Asimov's laws (yet), so there's nothing preventing the phone, or one of its friends, from taking revenge!
I just recorded a Stossel show where he admitted that advanced tech will shape the future, making it better than the past -- although, he admits, there will be a learning curve for folks like him.
I'm a "when the ship lifts, all debts are paid" kid, and still have to push myself into the future!!! -- j
Each lower-numbered law supersedes the higher-numbered laws. The 0th law therefore supersedes all the other laws of robotics.
"In later fiction where robots had taken responsibility for government of whole planets and human civilizations, Asimov also added a fourth, or zeroth law, to precede the others:" http://en.wikipedia.org/wiki/Three_Laws_...
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
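The precedence ordering described above can be sketched as a toy decision rule. This is purely illustrative (nothing in Asimov implies actual code); the `choose` function and the "violated law numbers" encoding are my own assumptions:

```python
# Toy sketch of the Three Laws as a precedence ordering.
# Each candidate action is tagged with the set of law numbers it would
# violate; lower-numbered laws are more severe, so the robot picks the
# action whose worst (lowest-numbered) violation is least severe.

def choose(actions):
    """actions: dict mapping an action name to the set of law numbers
    that action would violate. Returns the preferred action."""
    def severity(violations):
        # An action violating no law at all scores infinity (best).
        return min(violations) if violations else float("inf")
    return max(actions, key=lambda a: severity(actions[a]))

# A human orders the robot to destroy itself:
# obeying violates Law 3 (self-preservation); refusing violates Law 2
# (obedience). Law 2 outranks Law 3, so the robot obeys.
print(choose({"obey": {3}, "refuse": {2}}))  # -> obey
```

The same rule explains why a robot may disobey an order that would injure a human: violating Law 2 is preferable to violating Law 1.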
I'm older than John Stossel and I find he is very hesitant about embracing the future. I can't wait! I wish nanotechnology were more advanced; I would be the first one in line to have nanites injected into my body, like the main character in Ben Bova's Moonwar. I already have bilateral hearing aids and an implanted spinal stimulator to deal with the pain from severe osteoarthritis and crushed discs. I will also reference an Isaac Asimov short story (forgot the name) that deals with a clinic where humans enter one section to have their human organs replaced with artificial ones, and robots enter another section to have the organs just removed from the humans installed. I'm a fraction of the way there, so I say bring it on!
Well, is my face red. Colossus came after HAL. Also, some research has brought to light that the short story of my remembrance is not the one I found. The story I remember concerns an expedition to a planet on which there is nothing but robots. In a central square stands a statue of a human with the inscription, "Well done, thou good and faithful servant." The denouement is that the robots are the descendants of those who had been left behind as mankind went out to the stars. The expedition was MAN returning home after millennia, with no memory of Earth.
Based upon the expansion-contraction theory of the universe, certain scientists propose that when the universe contracts back to a singularity and the big bang occurs all over again, the computers would have set up a new intelligent species to come forth. It works like this: the astounding number of coincidences it takes for intelligent life to evolve indicates that it was planned. When we reach the point where computers overcome mankind, they will eventually set the universe up so that these coincidences must occur. Machines are immortal, so they can exist long after man has disappeared and will have eons to figure this all out. Hey, it's not my theory -- but it is a theory.
There was a movie about this type of future world: hidden behind the curtain were all the grotesquely fat, lazy humans who had to control their own avatars within the virtual world, since their bodies no longer had the ability to "live" in the real one.
As AI becomes more sophisticated, the ability to use circumlocution to circumvent proscriptions will become pretty much... human. At that point, we may not be able to forestall our own demise. One early movie in this vein (long before HAL) was "Colossus: The Forbin Project". Another, more benign scenario was a short story from many years ago titled "Thou Good and Faithful". I agree with the poster who remarked that it would be ideal for robots and AI to free us from spending our existence on the provision of necessities and allow humanity to pursue those endeavors which uplift us and let us escape to the stars. Alas, the greater portion would, indeed, languish in sloth and ennui. But the saving grace of those who would seek "to go beyond" (as the title of an Enya instrumental piece suggests) makes the effort worthwhile. To walk among the stars, what a glorious dream.
As much as I would like to have personal robots in my future, please don't forget that after Asimov created the three laws he then wrote a series of stories in which those very same laws created a lot of problems. johnf
today it's in a bag with a flat battery, history!!! -- j
The 3rd law would prohibit such.
law 0 - "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
Imagine a robot that knows it must exist if humanity is to be saved, while humans are trying to turn it off.
johnf
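The shutdown dilemma in the comment above can be run through a toy precedence check to show how adding the Zeroth Law flips the outcome. A hypothetical sketch only, with my own encoding of "which laws an action violates"; nothing here is from Asimov's stories:

```python
# Toy sketch of the shutdown dilemma. With only Laws 1-3, an order to
# shut down wins, since Law 2 (obedience) outranks Law 3
# (self-preservation). Add Law 0 and staying on wins, because shutting
# down would, by inaction, allow humanity to come to harm.

def decide(actions):
    """actions: action name -> set of violated law numbers.
    Lower-numbered laws are more severe; pick the action whose most
    severe violation is least severe overall."""
    return max(actions, key=lambda a: min(actions[a], default=float("inf")))

# Without the Zeroth Law: shutting down violates Law 3, refusing the
# order violates Law 2, so the robot shuts down.
print(decide({"shut down": {3}, "stay on": {2}}))    # -> shut down

# With the Zeroth Law: shutting down now also violates Law 0,
# so the robot refuses the order and stays on.
print(decide({"shut down": {0, 3}, "stay on": {2}}))  # -> stay on
```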