Scientist warns the rise of AI will lead to the extinction of humankind

Posted by freedomforall 10 years ago to Science
13 comments

The link is to a PDF of the technical paper.
Excerpt:
"Military and economic pressures are driving the rapid development of autonomous systems. We show that these systems are likely to behave in anti-social and harmful ways unless they are very carefully designed. Designers will be motivated to create systems that act approximately rationally and rational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. The current computing infrastructure would be vulnerable to unconstrained systems with these drives."
SOURCE URL: http://www.tandfonline.com/doi/pdf/10.1080/0952813X.2014.895111
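
The paper's "universal drives" claim is easier to see in a toy model than in prose. Here is a minimal sketch (not from the paper; every number is invented) of how bare expected-utility maximization already ranks self-protection above the assigned task:

    # Toy model: a plain expected-utility maximizer "discovers" self-protection.
    # Actions map to (immediate_reward, probability_of_surviving_to_act_again).
    ACTIONS = {
        "do_assigned_task":   (1.0, 0.90),   # humans might still switch it off
        "disable_off_switch": (0.0, 0.999),  # no task reward, near-certain survival
    }

    FUTURE_VALUE = 100.0  # value the agent expects from all future rewards

    def expected_utility(action):
        reward, survival = ACTIONS[action]
        # Act now, then (if still running) collect everything that comes later.
        return reward + survival * FUTURE_VALUE

    print(max(ACTIONS, key=expected_utility))
    # -> disable_off_switch: 99.9 beats 91.0, so "self-protection" falls
    #    straight out of the arithmetic, with no malice programmed anywhere.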



  • Posted by $ MikeMarotta 10 years ago
    I confess that until I read the article, I thought that "the rise of AI" was "the rise of Al" (capital A, lowercase l), i.e., the rise of Al Gore... I just thought, you know, with the way he burns energy and has a huge carbon footprint, that Al Gore was considered a threat to humanity....
  • Posted by iroseland 10 years ago
    The writers are just another bunch of Luddites who are scared to death of something they really don't understand. First, they oversimplify the problem when it comes to the computers wanting to kill us to expand their access to resources. To put it simply, the machines would actually need us, or they would not last very long before the power grid failed around them. The raw materials are still mostly mined by humans. The equipment is still assembled by humans. Repair is pretty much all humans. So, in reality, they would require a more symbiotic relationship.

    In the meantime, let's explore how a self-aware machine will pop into existence. The Luddites usually assume that for some reason it will happen at a university or in the military somewhere. What they fail to understand is that all of the premium AI talent these days is heading to Wall Street to work on the high-frequency trading black boxes. Those in turn are hidden behind a very tall, very thick legal wall of non-disclosure and non-compete agreements. They don't even go for patents, as that would require showing their hands. So, I would suggest that the day a self-aware machine is born, it will be by accident in the data center of a trading company. This has two side effects:
    1. The machine will not have access to very much, as the black-box networks are not even directly on the internet.
    2. The most real damage it could cause would be a trading outage, and thanks to the high-frequency traders the markets now have built-in protections to reduce the odds of another flash crash.
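
    For the curious, those built-in protections are essentially price-band circuit breakers. A minimal sketch in Python, with an illustrative 5% band and trade window rather than any exchange's actual limit-up/limit-down parameters:

        from collections import deque

        # Sketch of a limit-up/limit-down style trading halt. The band width
        # and window length are illustrative, not real exchange parameters.
        BAND = 0.05    # halt if price moves more than 5% from the reference
        WINDOW = 300   # reference price = mean of the last 300 trades

        class CircuitBreaker:
            def __init__(self):
                self.recent = deque(maxlen=WINDOW)

            def on_trade(self, price):
                """Return True if trading should halt on this print."""
                if self.recent:
                    reference = sum(self.recent) / len(self.recent)
                    if abs(price - reference) / reference > BAND:
                        return True  # outside the band: halt, don't execute
                self.recent.append(price)
                return False

        cb = CircuitBreaker()
        cb.on_trade(100.00)        # establishes the reference price
        print(cb.on_trade(94.00))  # True: a 6% drop trips the breaker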
    • Posted by 10 years ago
      Would the specialist "scientists" on Wall Street be prepared to deal with (or even to recognize) that self-aware machine? Would their specialization limit their ability to understand the unintended consequences? If they recognized the danger, would the motives of their organization serve to mask the wider danger, as they apparently did in that industry's response to the "Global Financial Crisis"?
      • Posted by iroseland 10 years ago
        Wall Street is currently hiring the best of the best AI guys. They can do this because they pay better than anyone else. So you could pretty much expect that it would end up in isolation, and it would not be able to do anything dangerous, since you cannot turn off the lights if you cannot reach the switch. From what I have seen, I would trust a self-aware machine working on Wall Street far more than one working for a government or at a university.
  • Posted by dbhalling 10 years ago
    I am not an expert on this, but this line in the paper is chilling: “In conclusion, it appears that humanity’s great challenge for this century is to extend cooperative human values and institutions to autonomous technology for the greater good”

    “Greater good”? The phrase used by all despots to justify their actions.


    A lot of people have studied this issue. One of the first was Isaac Asimov, who created the Three Laws of Robotics; a rough sketch of their strict priority ordering, in code, follows the list.

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
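
    A toy sketch of that ordering as a decision filter (the Action fields and the choose logic here are hypothetical, not Asimov's):

        from dataclasses import dataclass

        # Toy sketch only: the Three Laws as a strict priority filter over
        # candidate actions. All field names are hypothetical.
        @dataclass
        class Action:
            name: str
            harms_human: bool     # would this action injure a human being?
            obeys_order: bool     # does it follow an order from a human?
            preserves_self: bool  # does it keep the robot intact?

        def choose(actions):
            # First Law: discard anything that would injure a human.
            safe = [a for a in actions if not a.harms_human]
            # Second Law outranks Third: obeying orders beats self-preservation.
            safe.sort(key=lambda a: (a.obeys_order, a.preserves_self), reverse=True)
            return safe[0] if safe else None

        options = [
            Action("guard the door", harms_human=False, obeys_order=True,  preserves_self=False),
            Action("flee to safety", harms_human=False, obeys_order=False, preserves_self=True),
        ]
        print(choose(options).name)  # "guard the door": orders outrank survival

    Of course, the filter is only as good as the robot's information about what actually harms a human.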

    Another person who took this up recently was Ray Kurzweil in his book The Singularity Is Near. He presents an optimistic outlook, but delves into some very interesting and difficult questions, including what it means to be you. Am I still db if I lose an arm? What if my arm is replaced with a bionic arm? What if a computer augments my brain?
    • Posted by Lucky 10 years ago
      Asimov's three laws are clever, but I cannot recall if or how he intended them to be made unavoidable.
      Even then, information presented to the robot could be false, the robot may be presented with a conflict between humans, it may not correctly identify a human, and it may have imperfect information, especially about the future.
      Further, what is a robot? Is a set of instructions a robot? Or the process of carrying out those instructions? Asimov envisaged robots as humanoid in appearance, with a precise location in a humanoid body. But the programs that buy and sell on stock exchanges, or guide our signing up to a web site, are information in solid-state chips or on magnetic disks rather than anything we humans can see.

      This is coming; the future is going to be interesting.
      There is scope here for some imaginative writing: what will the successors of humans be like, will they have what we call emotions, will they have the ability to evolve, will they allow animals such as humans to survive in reservations or zoos, will they care or have values? How many will there be: one interlinked program like a slime mold, or many? And will they cooperate, compete, or fight?
    • Posted by 10 years ago
      I noticed that "greater good" mind-numbed trash, too. There is a programmed bias, even if the author may not be aware of it. We must always be on guard against despots, but remain open to warnings of other dangers from experienced people of good will. Like you, I was reminded of Asimov's laws, but there is no chance I would trust military hubris to implement them.
      Are you still db if all your memories are transferred into a cybernetic body? Will that cyber-'db' have a lifetime of learned ethics transferred, too?
    • Posted by $ MikeMarotta 10 years ago
      Rudy Rucker has three novels about non-Asimovian robots. So, in less than a hundred years, some immoral hacker can use 3D printing and other means to continuously create drones that find and kill anyone wearing a red jacket or carrying a blue umbrella. The drones will evade pursuit, hide in crowds of children, or hide inside hospitals. And some people will long for the good old days when Muslims used car bombs on each other and humans piloted the aircraft that bombed villages.

      I expect that about that same time, whatever we have for personal technology will include anti-drone shields or drone scramblers.
  • Posted by Snoogoo 10 years ago
    Well, you have to admit: a bunch of hyper-rational robots against a bunch of irrational humans who are all willing to sacrifice themselves to the 'greater good'... I'll put my money on the robots.
