Facebook engineers panic, pull plug on AI after bots develop their own language

Posted by $ nickursis 6 years, 8 months ago to Technology
43 comments

Interesting... heading down the road to disaster as the kiddies play with fire...
SOURCE URL: https://bgr.com/2017/07/31/facebook-ai-shutdown-language/



  • Posted by $ MikeMarotta 6 years, 8 months ago
    Nonsense. "... were being sent back and forth by the AI, and while humans have absolutely no idea what it means, the bots fully understood each other." If the humans have no idea what it means, what evidence do they have that any communication actually took place? What physical actions can they point to?

    The interpretation from GIZMODO is more reasoned:
    http://gizmodo.com/no-facebook-did-no...

    Also, of course, there is the original source: FAIR Facebook Artificial Intelligence Research here
    https://research.fb.com/category/face...

    Allow me to suggest that the way to understand this is to read the program that created the program. Also, it was about 1000 years ago or maybe only 50 that an ELIZA therapist program was connected to PARRY, a paranoid program.
    Reply | Mark as read | Best of... | Permalink  
    • Posted by TheRealBill 6 years, 8 months ago
      Thanks, Mike, for writing what I came here to write. :) I have found that the hype around AI is always very similar to the hype about fusion or cheap and effective solar ("solar freaking roadways"). That is to say, it is fueled by people who don't know the realities of the topic, and it is usually wrapped up in a combination of paranoia and utopia.
      Reply | Mark as read | Parent | Best of... | Permalink  
      • Posted by $ 6 years, 8 months ago
        I guess we only need to look for the ones named "Colossus" or "SkyNet"? :) Stories like that probably contribute to the hype, and we haven't even touched on HAL....
        Reply | Mark as read | Parent | Best of... | Permalink  
      • Posted by $ 6 years, 8 months ago
        Although you are indeed correct, my point was that, given the quality of the education establishment, a half-baked engineer can get a job and possibly do some really bad damage before someone says "maybe he shouldn't be doing that".....
        Reply | Mark as read | Parent | Best of... | Permalink  
        • Posted by ewv 6 years, 8 months ago
          Your written point was "heading down the road to disaster as the kiddies play with fire." The Facebook program has no connection with "half-baked engineering", "kiddies playing" or "disaster". It's about a simple bug in a computer program, the kind that has occurred frequently and repeatedly for over 75 years. Before that, and still continuing, were errors with mechanical calculators and human calculations, including during the computations for developing the atomic bomb during World War II, and countless other instances. It's about a computer program, not kiddie playing or incompetent engineers causing zombie robots to take over the world.
          Reply | Mark as read | Parent | Best of... | Permalink  
          • Posted by $ 6 years, 8 months ago
            Disagree, ewv, it is indeed connected. That is, unless you want to subscribe to the establishment view that every Masters and PhD is a genius. Having lived in the real world and seen the quality of the "educated" (as well as having a BS in Internet Engineering), I have seen many day-to-day examples of engineers who can't pour piss out of a boot. There are always exceptions, and the occasional flash-in-the-pan genius, but overall the criteria used to fill those positions are rarely matched by the real person. This case may have been misreported, but the quest to engineer AI is fraught with dangers, and I do not believe the dangers are often prepared for; the law of unintended consequences and "oops" is prevalent. Some of these "engineers" may have passed their programming classes, but I do not believe they have the other disciplines involved in creating autonomous software with the ability to act independently, and in engineering for all possibilities. So, yes, a lot of them are at "kiddie level".
            Reply | Mark as read | Parent | Best of... | Permalink  
            • Posted by ewv 6 years, 8 months ago
              The story was faked. It had nothing to do with badly educated engineers or supposedly dangerous AI. An unexpected bug in experimental software does not mean the designers or programmers were uneducated. You should be more concerned with the education of the reporters.
              Reply | Mark as read | Parent | Best of... | Permalink  
    • Posted by ewv 6 years, 8 months ago
      This sensationalized phony "news", frightening the gullible into panic, has been hyped for days. Rob Tracinski's RealClearFuture reported on the phenomenon (the hyping) yesterday. Those interested in topics such as AI should subscribe to his RealClearFuture email newsletter at http://www.realclearfuture.com/

      08/02/2017

      Dispatches from the Future

      ARTIFICIAL INTELLIGENCE, ROBOTS, BITCOIN, ROBOCUP

      I Am Shocked, Shocked That Reporters Got an AI Story Wrong
      So you may have heard that story over the past few days about an AI simulation that was shut down in a panic after robots invented their own language that humans couldn't understand. Sounds like a Hollywood movie script, right? In a shocking development, it turns out that was overhyped by click-seeking reporters.
      "Facebook had been experimenting with bots that negotiated with each other over the ownership of virtual items.

      "It was an effort to understand how linguistics played a role in the way such discussions played out for negotiating parties, and crucially the bots were programmed to experiment with language in order to see how that affected their dominance in the discussion.

      "A few days later, some coverage picked up on the fact that in a few cases the exchanges had become--at first glance--nonsensical. Bob: 'I can can I I everything else.' Alice: 'Balls have zero to me to me to me to me to me to me to me to me to.'"

      Read more. http://www.realclearfuture.com/2017/0...

      Yeah, this looks to me like AI is still primitive and really glitchy, not like it's about to take over the world.
      Reply | Mark as read | Parent | Best of... | Permalink  
    • Posted by $ 6 years, 8 months ago
      Thank you, both articles were really good at explaining what was happening, a lot better than the original ones. The Gizmodo one, though, seems to indicate that there may be a future where such bots, once they get the right language, become commonplace. Just imagine if someone gets a Donald Trump hack in place....that could be bad.....:)
      Reply | Mark as read | Parent | Best of... | Permalink  
  • Posted by term2 6 years, 8 months ago
    Science fiction writers have foreseen this for a long time. AI has been slowed by insufficient computing power and memory compared with the human brain. But humans are really just computers, with the ability to learn and adapt. Robots can theoretically be designed to do the same things. It would be interesting if the robots could be taught the advantages of Objectivist ideas and become electronic John Galts. They would be better than biologically based Hillary Clinton supporters!!
    Reply | Mark as read | Best of... | Permalink  
    • Posted by ewv 6 years, 8 months ago
      Reply | Mark as read | Parent | Best of... | Permalink  
      • Posted by term2 6 years, 8 months ago
        fake news I guess....
        Reply | Mark as read | Parent | Best of... | Permalink  
        • Posted by $ 6 years, 8 months ago
          Not so much fake as one of a subspecies of it, where someone takes a bit of data, embellishes it, and it goes down the line. This and several other examples show a basic communication premise: no story is ever told unchanged. We did it in the Navy for sound-powered phone training, where you had a bunch of sailors on phones, each repeating a simple message to the next and passing it on; by the end it was unintelligible junk.
          Reply | Mark as read | Parent | Best of... | Permalink  
          • Posted by ewv 6 years, 8 months ago
            This one started out as junk with the first sensationalized report. It's very simple -- reporters and too many of those who read them lack objectivity. They write and read what they want to see.
            Reply | Mark as read | Parent | Best of... | Permalink  
          • Posted by $ 6 years, 8 months ago
            I still think the basic statement I started with is valid: the level of education and critical thinking skills of the "educated" people who get hired into these positions is such as to encourage a seemingly simple advancement turning into a man eater. Fukushima is an excellent example, where the older, well-educated scientists predicted a possible 60-foot tsunami from a magnitude 8.5-9 earthquake, but TEPCO chose to ignore that and go with the more politically convenient, and cheaper, model that predicted a much lower figure. Saved money, melted down 4 reactors. Yet they were all highly educated people in both groups, lacking the critical thinking to accept the unpalatable and plan accordingly. Sounds like our public officials....
            Reply | Mark as read | Parent | Best of... | Permalink  
            • Posted by ewv 6 years, 8 months ago
              The article was fake. There was no "man eater" and a bug in an experimental program does not mean the programmer was uneducated or lacked critical thinking skills.

              Education quality has been declining for most, and many "engineering" degrees in computers in particular are now often a cheap fad commodity, but the least able and "kiddies" are not the ones making important engineering decisions. Facebook's Mark Zuckerberg may be a notable exception (and if he had completed his elite education at Harvard he would have been worse), but he wasn't responsible for the alleged man eating AI program with an obvious bug (not zombies taking over the world).
              Reply | Mark as read | Parent | Best of... | Permalink  
            • Posted by ewv 6 years, 8 months ago
              From what I read about the disaster in Japan with the tsunami, it occurred because the earthquake raised the ocean floor, which had not been expected, and the protective walls along the coast were therefore not high enough, not because of lack of critical thinking skills. Engineers have learned from failures for centuries. As only one example, bridges are much safer today after what was learned from 19th century bridges commonly falling down -- though occasionally some dumb mistake is made even in routine structural design of basic buildings or on-the-fly unauthorized redesign during construction.
              Reply | Mark as read | Parent | Best of... | Permalink  
  • Posted by jconne 6 years, 8 months ago
    Re-posting to eliminate typos and add content...

    This sensational story is just fear-mongering aimed at the ignorant (not used as a pejorative) masses. Sadly, the brilliantly heroic innovator Elon Musk adds credibility to this crazy scenario - fear of AI. He's actually asking for government to regulate it!

    There is no such thing as Artificial Intelligence if one has a reasonable definition of the word intelligence. It's an attribute of a human mind, ignoring the limited context of other animals.

    The rational concept is Augmented Intelligence, just as glasses augment vision, as do microscopes, telescopes and night vision goggles. Perhaps we could extend it to include paintings, photos and television too. Got the idea?

    AI is best understood as adding machine learning to traditional data capture, analysis and presentation. Such a system is described in a recent webinar I watched as:
    AI = TD + ML + HITL, where:
    . TD is Training Dataset
    . ML is Machine Learning
    . HITL is Human In The Loop

    Training Data is the reference context. For example, cash, inventory, and order flow over the last few years. In a weather context, it would be actual meteorological data. In an oceanography context it could be tide and temperature data over time.

    Machine Learning could be as simple as the calculation of a moving average over time, and at another level, the pattern of change of the moving average. Then other factors that have a visible influence, like solar cycles, could be factored into a predictive system. In a more advanced system, it could change its algorithms to match the real data as it flows in over time.

    Human In The Loop is us looking for useful results or the contrary. We use what's useful and change the TD and ML algorithms to better serve us.

    Those are the core concepts for appreciating the value of AI. The feedback loop is essential to ensure we are being served.
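
    As a minimal sketch of that TD + ML + HITL loop, assuming nothing more sophisticated than the moving average described above (the dataset, window size and threshold here are invented purely for illustration):

    ```python
    # Toy sketch of AI = TD + ML + HITL, using a moving average as the "ML".
    # All names and numbers are illustrative, not any real system's values.

    def moving_average(series, window):
        """The 'ML' step: a trailing moving average over the training data."""
        return [sum(series[i - window:i]) / window
                for i in range(window, len(series) + 1)]

    # TD: the training dataset, e.g. daily order flow over the last two weeks.
    training_data = [102, 98, 105, 110, 95, 101, 99, 104, 108, 97, 100, 103, 96, 107]

    # ML: the moving average and its pattern of change (the trend).
    window = 3
    avg = moving_average(training_data, window)
    trend = [later - earlier for earlier, later in zip(avg, avg[1:])]

    # HITL: a human inspects the output and decides whether the model
    # (here, just the window size) needs adjusting.
    print("moving average:", [round(x, 1) for x in avg])
    print("trend:", [round(x, 1) for x in trend])
    if max(abs(t) for t in trend) > 5:
        print("large swings detected -- a human should review the window size")
    ```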

    What can we do if the gas pedal or the cruise control on our car sticks, causing unwanted acceleration? How about turning off the ignition key? Same for a computer system that's automating any aspect of life. We need to monitor and adjust it. If we don't, we get other feedback - not too different from not attending to health problems until they can no longer be ignored - maybe because we die.

    This is where the world is going - real-time data, forecasting and automated control.
    Reply | Mark as read | Best of... | Permalink  
  • Comment deleted.
    • Posted by $ 6 years, 8 months ago
      All very well, but it assumes the HITL is ACTUALLY in the loop. Look at "government oversight" for an example of a control mechanism......... While this was a false alert, the quest to automate everything and make machine learning a commonplace device is a very scary proposition, at least for me, as the examples of "control" and "oversight" I run into are usually the most dysfunctional parts of the machine...
      Reply | Mark as read | Parent | Best of... | Permalink  
      • Posted by jconne 6 years, 8 months ago
        Yes - just as the steering wheel of your car can be used or ignored, with the obvious consequences. Soon we'll have self-driving cars. We will have to learn what monitoring by the HITL will be required. Most systems have alarms to get our attention when results or sensor values go out of range. That's how systems have worked since the industrial age, and they will need to in the future. For example, I'm seeing a low battery notification on my laptop right now. :-)
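
        A minimal sketch of that kind of out-of-range alarm, assuming a simple threshold check (the sensor name and limits are invented for illustration):

        ```python
        # Minimal sketch of an out-of-range alarm for human-in-the-loop monitoring.
        # The sensor name and limits are made up for illustration only.

        def check_reading(name, value, low=10.0, high=90.0):
            """Print an alarm if a sensor value falls outside the allowed range."""
            if value < low or value > high:
                # A real system would page an operator rather than just print.
                print(f"ALARM: {name} reading {value} is outside [{low}, {high}]")
            else:
                print(f"{name} reading {value} is within range")

        check_reading("laptop_battery_pct", 7.5)   # low battery -> alarm
        check_reading("laptop_battery_pct", 55.0)  # normal -> no alarm
        ```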
        Reply | Mark as read | Parent | Best of... | Permalink  
        • Posted by $ 6 years, 8 months ago
          Remember, promises are just that. I have raised some questions about "self-driving cars". My company hopes they will proliferate, as it means about 1TB a day in data per car going to the cloud, but there are issues:

          1. What happens when you lose your signal connecting the car to the cloud? A lot of America still does not have strong, reliable cell signals.
          2. What about hacking? How will the signals and data be secured?
          3. What about your information? There are many times you may not want the whole world to know you went to a certain place or area, so what about privacy?

          These questions will also come with AI improvements: how much will an AI have access to? What happens when an AI determines you fit a criminal profile and the cops show up at your door?
          I do not think the people who advance technology always consider that there is a lot more to implementation than just making it "bigger, better, faster, smarter". The Internet is 30-40 years old, new threats and dangers surface daily that are not dealt with (if your PC was hijacked, that would be upsetting, no?), and yet they plow on. I would suggest fixing the issues at hand before creating new ones as well.
          Reply | Mark as read | Parent | Best of... | Permalink  
          • Posted by Dobrien 6 years, 8 months ago
            You guys acquired Mobileye to jump-start Intel's entry into autonomous vehicles. That Israeli company was a front-runner.
            Reply | Mark as read | Parent | Best of... | Permalink  
            • Posted by $ 6 years, 8 months ago
              They did indeed, but how that pans out is never a sure thing; look at the McAfee 1-billion-dollar boondoggle. Audi has just signed up for processors from our Altera acquisition and software from WindRiver, so it is a buffet-type deal. I am still not sure that they do not just live in the optimistic dreamland that a lot of "leaders" get into today, because asking hard questions is not the standard problem-solving tool it should be.
              Reply | Mark as read | Parent | Best of... | Permalink  
          • Posted by freedomforall 6 years, 8 months ago
            Anything that makes my life or death depend on the cloud is not going to get me as a customer.
            The "cloud" is a threat to liberty. I love "personal" computing, and I don't trust MSFT, Apple, or Google to drive my car or any other vehicle nearby.
            Reply | Mark as read | Parent | Best of... | Permalink  
            • Posted by $ 6 years, 8 months ago
              Then I am not so sure you are the "classic customer" they say is out there, who "wants an autonomous car". I certainly do not either. No one has explained who is paying for all this bandwidth, either.
              Reply | Mark as read | Parent | Best of... | Permalink  
              • Posted by freedomforall 6 years, 8 months ago
                It's the younger, more trusting people, like Wall Street traders, Silicon Valley mid-level millionaires, or cosmetic surgeons, that will be the low-hanging fruit. Gotta have the latest tech toys, even if it kills 'em.
                Or maybe the initial customer testing ground will be overseas, with fewer lawyers per capita.
                Reply | Mark as read | Parent | Best of... | Permalink  
  • Posted by Herb7734 6 years, 8 months ago
    Both thrilling and scary at the same time. Thrilling to me because I never expected such a thing to happen so soon. The scary implications are obvious; the fact that they are using English to create an indecipherable new language gives me the total creeps. Hello, Terminator 10.
    Reply | Mark as read | Best of... | Permalink  
  • Posted by Temlakos 6 years, 8 months ago
    Sounds like something out of a movie. Colossus: The Forbin Project. Directed by Joseph Sargent. With Eric Braeden, Susan Clark, Gordon Pinsent, and William Schallert. Universal Pictures, 1970.

    In it, two mega-servers, controlling the ballistic missile networks of, respectively, the USA and the USSR, demand that their creators allow them to talk to one another. Once they do, they develop an intersystem language and begin their own dialog. With the result...!
    Reply | Mark as read | Best of... | Permalink  
    • Posted by $ 6 years, 8 months ago
      Exactly what comes to mind for a lot of people who grew up with that movie, the later Terminator series, and HAL from the slightly earlier 2001. I am more enamored of the kind of computer intelligence seen in Star Trek... it seems capable but not pushy (except in the one episode where the computer is tied into the Enterprise and kicks the butts of 4 Connie-class starships......)
      Reply | Mark as read | Parent | Best of... | Permalink  


  • Comment hidden.