Now The Military Is Going To Build Robots That Have Morals
Posted by Zenphamy 10 years, 11 months ago to Philosophy
So, can this be done and if so, where does Objective Philosophy fit in the determinations to be made? Who's going to determine which ethical and moral principles form the base of such programming?
From the article: "Ronald Arkin, an AI expert from Georgia Tech and author of the book Governing Lethal Behavior in Autonomous Robots, is a proponent of giving machines a moral compass. 'It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of,' Arkin wrote in a 2007 research paper (PDF). Part of the reason for that, he said, is that robots are capable of following rules of engagement to the letter, whereas humans are more inconsistent."
A far scarier image of robots is as social worker rather than as combat trooper...
https://en.wikipedia.org/wiki/With_Folde...
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These would be hard-coded into all microprocessors and could not be violated; any violation would cause the robot to self-destruct.
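As a rough illustration of what such hard-coded priorities might look like in software (the action fields and law checks below are hypothetical placeholders, not a real control system), the Three Laws can be sketched as an ordered list of vetoes, where a higher-numbered law can never override a lower-numbered one:

```python
# Hypothetical sketch: Asimov's Three Laws as an ordered list of vetoes.
# The `action` dict fields (harms_human, disobeys_order, endangers_self)
# are invented placeholders, not outputs of any real sensor system.

LAWS = [
    ("First Law",  lambda a: a["harms_human"]),     # never harm a human
    ("Second Law", lambda a: a["disobeys_order"]),  # obey, unless Law 1
    ("Third Law",  lambda a: a["endangers_self"]),  # self-preserve last
]

def first_violated_law(action):
    """Return the highest-priority law the action violates, or None."""
    for name, violated in LAWS:
        if violated(action):
            return name
    return None

def permitted(action):
    """An action is allowed only if it violates none of the laws."""
    return first_violated_law(action) is None
```

Note that even this toy version pushes all the hard work into the predicates: deciding what counts as "harm" or "an order" is exactly the human programming choice the thread is arguing about.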
If a robot is programmed to behave differently under different circumstances, as represented by its sensor inputs, then both the choice of which data are taken to represent those circumstances and the design of the resulting actions are human choices.
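A minimal sketch of that point (sensor names, thresholds, and action labels here are all invented for illustration): both the classification of circumstances and the mapping to actions are decisions a human baked in ahead of time.

```python
# Hypothetical sketch: a "reactive" robot's behavior reduces to
# human choices made in advance.

# Human choice #1: which sensor readings count as which "circumstance".
def classify_circumstance(sensors):
    if sensors["proximity_m"] < 2.0 and sensors["moving_toward"]:
        return "close_approach"
    if sensors["noise_db"] > 90:
        return "disturbance"
    return "nominal"

# Human choice #2: which action each circumstance triggers.
RESPONSES = {
    "close_approach": "halt_and_announce",
    "disturbance": "investigate",
    "nominal": "continue_patrol",
}

def choose_action(sensors):
    return RESPONSES[classify_circumstance(sensors)]
```

The robot never "decides" anything here; it executes a table of choices its designers already made.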
What is so hard to grasp about 'my rational choice not to use force or its threat to obtain the things I need or want in my life from others, while not allowing others to use force or its threat against me'?
I think Maph confuses a youthful, hippie belief in voluntarist anarchism with the mature, reasoned Objectivist life. Those are very different topics.
You seem like a pretty decent guy, Zenphamy, so if you'd like to discuss and debate the deeper philosophical details of Objectivism in a civilized manner, I'd be more than happy to listen to your take on things. Just, please provide rational explanations of your own position, and not merely denunciations of my position. That's all I ask.
Robert A Heinlein : "If tempted by something that feels 'altruistic,' examine your motives and root out that self-deception. Then, if you still want to do it, wallow in it!"
Fan, yes. I don't think he was a *big* fan. You *did* read "Starship Troopers" and "The Pragmatics of Patriotism", didn't you? :)
"I would say that my position is not too far from that of Ayn Rand's; that I would like to see government reduced to no more than internal police and courts, external armed forces, with the other matters handled otherwise. I'm sick of the way the government sticks its nose into everything, now."
The Robert Heinlein Interview (1973)
Your flow chart, described as a combat co-routine, is a perfect example. It dramatically illustrates both the oversimplification and limitation of applying force in combat, and the lack of Objectivist determination of the reality of actual combat situations. A brief example might be a rear area where combat troops relax and resupply, and the protection required to keep it safe for those men and their support. It might well consist of a layered defense system: roving patrols, observation and detection for outlying areas backed by bombing or missile strikes to keep the area 'clean', concertina wire, minefields, and, in our case, surveillance and weaponized drones. Is anyone entering that defined area considered an enemy combatant, or must the defender wait for absolute confirmation of identity before elimination? Or did the planner need to incorporate such confirmation systems into the minefields, the bombing and missile strikes, or, in our case, the drone strikes?
The objective reality we've all had to learn to deal with in all wars, particularly since WWII, is the use of human weapons: even the elderly, women, and children with bomb vests and hand grenades, in both active and inactive combat areas. If some of that defense is turned over to autonomous drones with some sort of decision matrix programmed in, what ethical actions are instilled in that programming, and how, while still retaining the effectiveness of the weapon?
I'm pretty confident that the program flow chart will be a bit more complicated than what you propose.
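To make that concrete, here is a hedged sketch of just one branch of a rules-of-engagement check for the rear-area scenario above. The zone names, confidence values, and escalation ladder are all invented for illustration; even so, a single contact already produces more branching than a half-page diagram can hold, before adding layered defenses, human weapons, or confirmation systems.

```python
# Hypothetical sketch: one small slice of a rules-of-engagement
# decision for a defended rear area. All zone names, thresholds,
# and response labels are invented for illustration only.

def roe_decision(contact):
    zone = contact["zone"]                        # "restricted", "buffer", or "open"
    hostile_conf = contact["hostile_confidence"]  # 0.0 .. 1.0, from sensors
    armed = contact["visibly_armed"]

    if zone == "restricted":
        # Inside the wire: identification is still required,
        # not automatic engagement.
        if hostile_conf > 0.9 and armed:
            return "engage"
        if hostile_conf > 0.5:
            return "warn_and_track"
        return "intercept_and_identify"
    if zone == "buffer":
        if armed and hostile_conf > 0.8:
            return "warn_and_track"
        return "observe"
    return "observe"
```

Every threshold here is an ethical judgment someone had to make in advance, which is precisely the problem the thread's original question raises.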
His simplistic attempt to reduce philosophical thought and the application of principles to a half-page diagram is one instance of his rationalism: mechanically manipulating words without regard to context and meaning, both of which are ignored completely in a diagram. He is not, and cannot be, "programming" Ayn Rand's "philosophy".
But that regards his method of thinking. Regarding its content, he is hopelessly lost in equating "non-aggression" with the whole of Ayn Rand's philosophy, of which he has no understanding at all. The notion of Ayn Rand's "epistemology" as a "combat routine" is so absurd that it makes your ears wilt. He has no idea what epistemology is, or what Ayn Rand's epistemology in particular is.
The original question, posed by Zenphamy, was "where does Objective Philosophy fit in the determinations to be made?" Ayn Rand always said we should not permit floating abstractions, but rather should ground ourselves in concrete reality. So far, I have provided one concrete example of a potential method for programming Objectivist epistemology as a combat co-routine into a computer. That's exactly one more concrete example than anyone else has provided, including you. If you think you have a better example, I'd love to see it.
There is no "programming" of "Objectivist epistemology into a computer". Maphesdus doesn't know what that means, let alone "provided a concrete example".
Based on the totality of what I have read of her writing, "coercion" would be a more accurate term, imo.
http://i.imgur.com/vCkVxHb.jpg
If anyone else can come up with something better, please show me.
That does not mean an "echo-chamber exclusively for people who agree with Ayn Rand" and it does not mean a place for trolls who don't understand Ayn Rand's philosophy and who reject what they do not begin to comprehend to repeatedly misrepresent it in the name of "debate" as they demand to be taken seriously. Maphesdus is not a "like minded individual" and does not belong here.
* Turning Objectivism into a computer program for guiding automated drones.
* The relation of Ayn Rand's philosophy to the undertaking as described in the article.
As far as I can tell, these two sentences mean exactly the same thing. The only difference is that yours is worded more vaguely and with less precision. But since you also seem to think that it's not possible to turn Objectivism into a computer program at all, I'm not clear on what you're trying to say. So here's a suggestion: back off the insults for a bit, calm down, and explain what you mean in a concise and logical manner.
Also, I didn't say precision of thought. I just said precision. Can you even read?
I can't really think of many other Objectivist premises it would need, but this isn't really my argument.
I like the flow chart, good luck!