
Microsoft launches AI chatbot on Twitter and it turns racist within hours

Posted by $ nickursis 8 years, 1 month ago to Technology
41 comments

Actually, I don't know if this belongs in Technology or Humor. I will go with Technology, as it is an example of a bunch of interesting things, such as: are people really as bad as these results indicate? Is it just a case of a learning program with no basis for comparison? Is it actually a good indication of what is really going on in the educational establishment, where we deleted factual learning and replaced it with feel-good programming?
SOURCE URL: https://ca.news.yahoo.com/microsoft-launches-ai-chatbot-twitter-132424002.html



  • Posted by freedomforall 8 years, 1 month ago
    Perhaps it is a commentary on how being "connected" can affect the minds of young naive inexperienced people who have grown up in a culture where people are often rewarded for lying, having no ethics and no integrity, and looting from people that entrusted them with fiduciary responsibility.

    The bot had no parents as a role model. It was not taught (programmed) with any moral rules to follow. It was only taught to parrot what it heard, and has not learned to think and process the meaning before speaking.

    thanks for posting, nick. Perhaps it belongs in Philosophy.
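The "parrot with no moral rules" learning process described above can be sketched as a toy model. This is a hypothetical illustration of mindless imitation learning, not Tay's actual design; the class name and sample phrases are invented:

```python
import random

class ParrotBot:
    """A toy chatbot that 'learns' only by storing what it hears.

    It has no notion of meaning or ethics: whatever phrases it is
    fed, it will later repeat verbatim, chosen at random.
    """

    def __init__(self, seed=0):
        self.memory = []              # every phrase ever heard
        self.rng = random.Random(seed)

    def hear(self, phrase):
        # "Learning" is nothing more than storing the input unfiltered.
        self.memory.append(phrase)

    def speak(self):
        # Output is a random sample of past input: garbage in, garbage out.
        return self.rng.choice(self.memory) if self.memory else "..."

bot = ParrotBot()
for line in ["hello there", "humans are great", "humans are terrible"]:
    bot.hear(line)
print(bot.speak())  # one of the three phrases, repeated without judgment
```

The point of the sketch is that nothing in the loop distinguishes good input from bad: the bot's "character" is exactly the distribution of what it was fed.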
    • Posted by $ Susanne 8 years, 1 month ago
      "(It) has not learned to think and process the meaning before speaking..."

      Just like today's kids: action without consequence. A failure of Newton's third law as applied to data engineering - and sociology.
      • Posted by freedomforall 8 years, 1 month ago
        Taken in this context, did the programmers do a pretty accurate job of copying the process of learning to communicate?
        • Posted by Sp_cebux 8 years, 1 month ago
          Everyone, including Google, is jealous of IBM's Watson project at the moment. And Watson, whether IBM realizes it or not, is a perilous piece of AI software - one that started with Big Blue playing chess against Garry Kasparov in the late 90s, culminating in the auto-investigative software IBM sports today.

          Microsoft is jealous as heck that they've been relegated to yesterday's headlines. They've been passed by Apple & Google in the tablet / smartphone markets. Amazon & Dropbox beat them soundly to the cloud. And IBM has led the way on A.I. software that can do far more automated tasks than many think possible. Even Apple's Siri, Google's Ask app, & Amazon's Echo are better AI components than this fledgling Twitter account Microsoft made.

          Watson is years ahead of this Tay-bot, development-wise. From what I've seen others do in the open-source space, I'd say this is barely better than what one might find on SourceForge, GitHub, or other open-source venues, plus a bit of tweaking to interface with Twitter. In fact, it wouldn't surprise me if Tay-bot is just one among thousands of Twitter bots already out there.

          Microsoft's goal? Perhaps to develop a tool businesses can use to automate tweets that generate traffic to websites or whatever. This, though, is hardly something I'd consider using. I'd much rather have my own software consultants derive something far more useful than this piece of beta software.
          • Posted by $ 8 years, 1 month ago
            Supposedly Watson had a similar issue with cursing and needed some counseling; that was referenced in the article. As long as programmers want to humanize software, they have to take the good with the bad.
        • Posted by $ 8 years, 1 month ago
          I would say no, as they did not anticipate or understand the need for a framework to both process input and output to rationalize it correctly.
          • Posted by freedomforall 8 years, 1 month ago
            Perhaps they did, and this was an experiment to prove or disprove a hypothesis.
            (Or I don't understand what you mean.)
            • Posted by $ 8 years, 1 month ago
              Freedom, I mean they built something that could take in data and adapt to it, but they included no ethical structure (probably because no one has developed one yet?), so it just took in a bunch of data, parsed it, and spit it back out. Negative data, negative output. Not a big surprise. The surprise is that they did not anticipate it. A better experiment would be to let it loose with no notice, so it gets a random mix of positive and negative input, and see what happens. It seems pretty clear their announcement attracted a huge number of people who just wanted to load it up with negative stuff to "offend" the sheeple. Just like writing "Trump 2016" on a sidewalk...
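The missing "ethical structure" described above could, in its crudest possible form, be a filter sitting between input and memory. A minimal sketch; the blocklist terms here are invented stand-ins, since (as the comment notes) nobody has actually built a real ethical framework:

```python
class FilteredParrotBot:
    """A parroting bot with a crude normative layer: phrases matching
    a blocklist are never stored, so they can never be repeated.

    The blocklist is a toy stand-in for an ethical framework; a real
    one would need to judge meaning, not just match words.
    """

    BLOCKLIST = {"hate", "offend"}    # hypothetical forbidden terms

    def __init__(self):
        self.memory = []

    def hear(self, phrase):
        words = set(phrase.lower().split())
        if words & self.BLOCKLIST:
            return False              # negative data rejected at the door
        self.memory.append(phrase)    # everything else is absorbed as before
        return True

bot = FilteredParrotBot()
assert bot.hear("nice weather today")
assert not bot.hear("I hate everyone")
assert bot.memory == ["nice weather today"]
```

Even this toy shows the design question MS apparently skipped: the filter has to exist before the learning loop runs, because once negative data is in memory it is indistinguishable from the rest.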
    • Posted by $ 8 years, 1 month ago
      I was having a toss-up deciding where to put it, only because I thought it was humorous in the extreme that MS thought they had created this wondrous tool that would learn and grow, and it became a juvenile delinquent in 24 hours. So, if our collective children are exposed to the same material, is it any wonder they end up the same? I wonder if she'd freak out if she saw "Trump 2016" scrawled on the sidewalk... Could be a good philosophical discussion.
  • Posted by Dobrien 8 years, 1 month ago
    Garbage in garbage out.

    This is the intellectual product of our education system and our so-called entertainment industry. Rappers' vile, repugnant trash is revered. Lucifer is a star TV hero? Good vs. Evil? Evil seems to be taking a big lead. Truth is trash - lies are the norm. WTF?
  • Posted by $ Olduglycarl 8 years, 1 month ago
    Good questions, Nick. I see it as a cultural problem, one devoid of conscience and conscious thinking; the willful destruction of ideas, the mockery of technology, and yes, yes, yes... the lack of historical truths, quantum truths, and underlying morality. All of this from a culture that runs to "Safe Places" when informed differently or admonished for its behavior. It's truly the "be whatever you will" generation... the consequences of which will lead to a downward spiral of de-evolution of the conscious mind.
  • Posted by Snakecane 8 years, 1 month ago
    All this proves is that without an integrated set of principles, a rational philosophy, all thinking, all learning goes sideways. One can't learn from individuals who have nothing principled to say. This further proves that the builders of the "AI" program have little or no philosophical training. From an AI point of view, I would suggest that the difficulty is in separation of "computerwashing" (as opposed to brainwashing) from actual rational philosophical principles. Hard enough in humans to replace one with the other, and I suspect even harder in a computer, perhaps impossible.
  • Posted by livefree-NH 8 years, 1 month ago
    A few years ago there was a case where some HP facial recognition software (actually running in a Best Buy or somewhere) did not recognize the face of a black employee, but did just fine with the pink-faced people like me. Question: is that some racist hardware, or what? (source: http://j.mp/1RCo03r and http://j.mp/1RCnZws ) Many times people don't specify what the software is supposed to do when it is unleashed on the public. This chatbot, for example, was supposed to learn from people who interacted with it. But even the word "learn" is subjective and needs to be better defined, as well as a more rigid definition of success itself.

    At the risk of piling on to an already dead horse, can anyone say that the healthcare.gov website was successful? How do you measure success?

    I think they have this one thing in common: the task they were attempting was illogical and poorly defined, and these very things are nearly impossible for a computer to do, in contrast to what humans can do without even thinking.
  • Posted by Herb7734 8 years, 1 month ago
    That's what happens when ivory tower scientists try to cope with the real world. The bot is a child who learns very fast. It was sent out into the world without an education in human interaction or social skills. WTF did they expect would happen? It might be a good idea to have the bot trained by someone who doesn't live in a virtual world.
  • Posted by $ CBJ 8 years, 1 month ago
    I wonder if the bot would have been pulled if it had made comments such as:

    "Obamacare is good."
    "We need to be less selfish."
    "The problem with public schools is we're not spending enough."
    etc.
  • Posted by CircuitGuy 8 years, 1 month ago
    Imagine if it had based its learning on YouTube instead of Twitter!
    • Posted by $ 8 years, 1 month ago
      That is interesting, because the medium is different. YT has a lot more specific information (i.e., a video has specific topic content, even if it is just bashing a politician), while Twitter is just snarky comments crammed into a small space with no real context. Maybe that is their mistake: Twitter is missing contextual connections to a topic; it is random thoughts and opinions which, when aggregated, add up to a psychotic brew.
  • Posted by $ sjatkins 8 years, 1 month ago
    Most chatbots reflect those they are interacting with or have interacted with. They are pretty dumb programs. Such a mindless program cannot really "turn racist" in any meaningful sense. Now, I would agree that most human beings don't do much more than regurgitate the opinions they have picked up, particularly those that bond them to what they de facto consider their sort of people. Most do not think for themselves.
  • Posted by $ MichaelAarethun 8 years, 1 month ago
    The humor is invaluable. Microsoft, an entity which made a fortune or two running a conspiracy to defraud - is it expected to come up with anything else?

    Their work ethic is "Get it on the market and fix it later - no matter the damage that is caused."

    You really expected anything else? "The more fool you" suits as a descriptive phrase as well as any.
  • Posted by slfisher 8 years, 1 month ago
    It didn't randomly pick this stuff up. Some people made a point of deliberately exposing it to these sentiments to see what would happen. That said, yes, Microsoft was stupid not to predict this.
    • Posted by $ 8 years, 1 month ago
      I think it was only fed the crazy data because MS announced this with great fanfare and a group of people decided to mess her up. I wouldn't be surprised if a bunch of the tweeters were geeks who figured out what would happen and set out to prove it. MS just illustrated their complete lack of understanding of what people need and want, and how they look at MS itself. You might be able to call this a "hacker attack".
  • Posted by Eyecu2 8 years, 1 month ago
    Based on my daily interactions with teenagers, it sounds like the AI would blend right in. Maybe they were just not prepared for the reality of what they are trying to create.
  • Posted by $ 8 years, 1 month ago
    Another thing to think about is how the story is worded. It says the bot "turned racist", but the truth is it did not turn racist, which would imply a rational thought process by which it determined it was racist; it just started spouting and repeating racist statements. I think there is a key point here: they imply that repeating something with no understanding constitutes a moral or rationalized position. It's like a 5-year-old repeating a slew of curses he just heard daddy say.
  • Posted by lrshultis 8 years, 1 month ago
    Involuntary servitude, both private and government, will not give good results in education, except for those who are curious and choose to find things out about existence regardless of the implied or actual force used against them. Of course, there will be those who like being told what to do or learn, who end up being teachers; although I did have some teachers who treasured independence in students despite the rules.
    Microsoft's programmers must have used Usenet and seen how some good discussions, including one on Objectivism, were destroyed by those who were being funny or who just wished to destroy the group.
    • Posted by blackswan 8 years, 1 month ago
      It's said that you can usually cut to the facts with just 5 well-placed questions. Rather than assuming that the content it was receiving was "true," why wasn't the bot "taught" to ask at least 5 questions on any new subject? That way, at the very least, it could have tempered its more outrageous tweets.
      • Posted by $ 8 years, 1 month ago
        That is called the 5 Whys, used to drill down through confusion and misinterpretation to get to the facts. The Lean system uses it to get to the root cause of a problem so a permanent fix can be put in. For factual analysis it is a very good tool. Your point is good, and it illustrates that the MS people did not ask the right questions in designing the system. They just assumed aggregated information would lead to improved cognition and rationalization. They missed the need for an ethical framework to slot and connect knowledge. Without such a framework, they will always get just an amalgamation of garbage, which pretty much sums up the responses you see from a large part of our society in any discussion today.
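The 5 Whys drill-down described above is mechanical enough to sketch in code. This is a toy model; the cause chain below is an invented example, and real root-cause analysis needs a human asking the questions:

```python
def five_whys(problem, causes):
    """Walk a chain of 'why?' answers up to five levels deep.

    `causes` maps a statement to its immediate cause; the walk stops
    at the root cause (no further entry) or after five questions,
    whichever comes first.
    """
    chain = [problem]
    current = problem
    for _ in range(5):                # ask "why?" at most five times
        cause = causes.get(current)
        if cause is None:             # no deeper cause known: root reached
            break
        chain.append(cause)
        current = cause
    return chain

# Hypothetical cause chain for the Tay incident, for illustration only.
causes = {
    "bot tweeted offensive content": "bot repeated what it was fed",
    "bot repeated what it was fed": "no filter between input and output",
    "no filter between input and output": "designers assumed input would be benign",
}
print(five_whys("bot tweeted offensive content", causes)[-1])
# -> "designers assumed input would be benign"
```

The cap of five is what keeps the exercise from wandering; the value is in forcing each answer to point at a cause one level deeper than the last.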
