
Microsoft launches AI chatbot on Twitter and it turns racist within hours

Posted by $ nickursis 8 years, 1 month ago to Technology
41 comments

Actually, I don't know if this belongs in technology or humor. I will go with technology, as it is an example of a bunch of interesting things: are people really as bad as these results indicate? Is it just a case of a learning program with no basis for comparison? Is it actually a good indication of what is really going on in the educational establishment, where factual learning has been deleted and replaced with feel-good programming?


All Comments

  • Posted by $ 8 years, 1 month ago in reply to this comment.
    Freedom, I mean they built something that could take in data and adapt to it, but they included no ethical structure (probably because no one has developed one yet?), so it just took in a bunch of data, parsed it, and spit it back out. Negative data, negative output (see the sketch after this comment). Not a big surprise. The surprise is that they did not anticipate it. A better experiment would be to let it loose with no notice, so it gets a random mix of positive and negative input, and see what happens. It seems pretty clear their announcement attracted a huge number of people who just wanted to load it up with negative stuff to "offend" the sheeple. Just like writing "Trump 2016" on a sidewalk...
    Reply | Permalink  
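    A minimal sketch of the dynamic described above, in Python (the class, method names, and sample data are invented for illustration; this is not Tay's actual code): a bot with no filter simply stores what it hears and plays it back, so skewed input yields skewed output.

        import random

        class NaiveChatBot:
            """Learns only by storing what it hears and echoing it back.
            There is no ethical filter: whatever goes in comes out."""

            def __init__(self):
                self.memory = []  # every phrase ever seen, unweighted and unvetted

            def hear(self, phrase):
                self.memory.append(phrase)  # ingest input verbatim

            def reply(self):
                if not self.memory:
                    return "..."  # nothing learned yet, nothing to say
                return random.choice(self.memory)  # regurgitate a stored phrase

        # A coordinated group flooding the bot skews its entire "worldview":
        bot = NaiveChatBot()
        for _ in range(99):
            bot.hear("offensive slogan")
        bot.hear("pleasant greeting")
        print(bot.reply())  # ~99% chance of the offensive slogan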
  • Posted by $ 8 years, 1 month ago in reply to this comment.
    Supposedly Watson had a similar issue with cursing during the Jeopardy! matches and needed some counseling. That was referenced in the article... As long as programmers want to humanize software, they have to take the good with the bad.
    Reply | Permalink  
  • Posted by $ sjatkins 8 years, 1 month ago
    Most chatbots reflect those they are interacting with or have interacted with. They are pretty dumb programs. Such a mindless program cannot really "turn racist" in any meaningful sense. Now, I would agree that most human beings don't do much more than regurgitate the opinions they have picked up, particularly those that bond them to what they de facto consider their sort of people. Most do not think for themselves.
    Reply | Permalink  
  • Posted by Sp_cebux 8 years, 1 month ago in reply to this comment.
    Everyone, including Google, is jealous of IBM's Watson project at the moment. And Watson, whether IBM realizes it or not, is a perilous piece of AI software, one whose lineage started with Deep Blue playing chess against Garry Kasparov in the late '90s and culminates in the auto-investigative software IBM sports today.

    Microsoft is jealous as heck that they've been relegated to yesterday's headlines. They've been passed by Apple & Google in the tablet and smartphone markets. Amazon & Dropbox beat them soundly to the cloud. And IBM has led the way on A.I. software that can do far more automated tasks than many think possible. Even Apple's Siri, Google's Ask-app, & Amazon's Echo are better AI components than this fledgling Twitter account Microsoft made.

    Watson is years ahead of this Tay-bot, development-wise. From what I've seen others do in the open-source space, I'd say this is barely better than what one might find on SourceForge, GitHub, or other open-source venues, plus a bit of tweaking to interface with Twitter. In fact, it wouldn't surprise me if Tay-bot is just one among thousands of Twitter bots already out there.

    Microsoft's goal? Perhaps to develop a tool businesses can use to automate tweets that generate traffic to websites or whatever. This, though, is hardly something I'd consider using. I'd much rather have my own software consultants derive something far more useful than this piece of beta software.
    Reply | Permalink  
  • Posted by freedomforall 8 years, 1 month ago in reply to this comment.
    Perhaps they did, and this was an experiment to prove or disprove a hypothesis.
    (Or I don't understand what you mean.)
    Reply | Permalink  
  • Posted by $ 8 years, 1 month ago
    Another thing to think about is how the story is worded. It says the bot "turned racist," but the truth is it did not turn racist, which would imply a rational thought process by which it determined it was racist; rather, it "starts spouting and repeating racist statements." I think there is a key point here: they imply that just repeating something with no understanding constitutes a moral or rationalized position. Like a five-year-old repeating a slew of curses they just heard daddy say.
    Reply | Permalink  
  • Posted by $ 8 years, 1 month ago in reply to this comment.
    I would say no, as they did not anticipate or understand the need for a framework to process both input and output and rationalize them correctly.
    Reply | Permalink  
  • Posted by $ 8 years, 1 month ago in reply to this comment.
    I am not sure Siri has much self-learning capacity; it seems limited, and it also gets one person's inputs. I don't think it gets multiple inputs all the time in an open manner like this.
    Reply | Permalink  
  • Posted by $ 8 years, 1 month ago in reply to this comment.
    No, it probably would be trumpeted as a great success and bought or seized by the government....
    Reply | Permalink  
  • Posted by $ 8 years, 1 month ago in reply to this comment.
    I think it was only fed the crazy data because MS announced this with great fanfare and a group of people decided to mess her up. I wouldn't be surprised if a bunch of the tweeters were geeks who figured out what would happen and set out to prove it. MS just illustrated their complete lack of understanding of what people need and want, and of how they look at MS itself. You might be able to call this a "hacker attack."
    Reply | Permalink  
  • Posted by $ 8 years, 1 month ago in reply to this comment.
    That is called the 5 Whys, used to drill down through the confusion and misinterpretation to get to the facts (a sketch follows this comment). The Lean System uses it to get to the root cause of a problem and put in a permanent fix. For factual analysis it is a very good tool. Your point is good, and it illustrates that the MS people did not ask the right questions in designing the system. They just assumed aggregated information would lead to improved cognition and rationalization. They missed the need for an ethical framework to slot and connect knowledge. Without such a framework, they will always get just an amalgamation of garbage, which pretty much sums up the responses you see from a large part of our society in any discussion today.
    Reply | Permalink  
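    A minimal sketch of a 5 Whys drill-down in Python, using an invented causal chain for the Tay case (the dictionary entries and function name are illustrative only, not the Lean System's formal procedure):

        # Each observed effect maps to its recorded cause (invented for illustration).
        cause_of = {
            "bot tweeted racist statements": "it repeated whatever input it received",
            "it repeated whatever input it received": "it had no framework to evaluate input",
            "it had no framework to evaluate input": "designers assumed more data means better cognition",
            "designers assumed more data means better cognition": "the right design questions were never asked",
        }

        def five_whys(problem):
            current = problem
            for i in range(1, 6):  # ask "why?" at most five times
                cause = cause_of.get(current)
                if cause is None:
                    break  # no deeper recorded cause: we have reached the root
                print(f"Why #{i}: why '{current}'? Because {cause}.")
                current = cause
            print(f"Root cause: {current}")

        five_whys("bot tweeted racist statements")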
  • Posted by Snakecane 8 years, 1 month ago
    All this proves is that without an integrated set of principles, a rational philosophy, all thinking and all learning go sideways. One can't learn from individuals who have nothing principled to say. This further proves that the builders of the "AI" program have little or no philosophical training. From an AI point of view, I would suggest that the difficulty is in separating "computerwashing" (as opposed to brainwashing) from actual rational philosophical principles. It is hard enough in humans to replace one with the other, and I suspect it is even harder in a computer, perhaps impossible.
    Reply | Permalink  
  • Posted by mccannon01 8 years, 1 month ago in reply to this comment.
    That would be the best choice, Freedom. I suspect, though, MSFT would more likely send her to a collectivist re-education camp where her first tweet would be "Obama, Mmm, Mmm, Mmm".
    Reply | Permalink  
  • Posted by $ MichaelAarethun 8 years, 1 month ago
    The humor is invaluable. Microsoft, an entity that has made a fortune or two running a conspiracy to defraud, is expected to come up with anything else?

    Their work ethic is "Get it on the market and fix it later, no matter the damage that is caused."

    You really expected anything else? "The more fool you" suits as a descriptive phrase as well as any.
    Reply | Permalink  
  • Posted by blackswan 8 years, 1 month ago in reply to this comment.
    It's said that you can usually cut to the facts with just five well-placed questions. Rather than the bot assuming that the content it was receiving was "true," why wasn't it "taught" to ask at least five questions on any new subject? That way, at the very least, it could have tempered its more outrageous tweets (see the sketch after this comment).
    Reply | Permalink  
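    A minimal sketch of that idea in Python, assuming a simple per-claim counter (the class name and five-question threshold are invented for illustration): the bot answers any new claim with a question until it has probed it five times, and only then engages with it.

        from collections import defaultdict

        QUESTIONS_REQUIRED = 5  # the "five well-placed questions" threshold

        class SkepticalBot:
            """Refuses to engage a new claim until it has probed it enough times."""

            def __init__(self):
                self.questions_asked = defaultdict(int)

            def hear_claim(self, claim):
                if self.questions_asked[claim] < QUESTIONS_REQUIRED:
                    self.questions_asked[claim] += 1
                    # Probe instead of parroting: the claim is not yet trusted.
                    return f"Question {self.questions_asked[claim]}: what is the evidence that '{claim}'?"
                return f"After {QUESTIONS_REQUIRED} questions, I will now engage with: {claim}"

        bot = SkepticalBot()
        for _ in range(6):
            print(bot.hear_claim("everything trending on Twitter is true"))
        # The first five responses are questions; only the sixth engages the claim.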
  • Posted by johnpe1 8 years, 1 month ago
    There obviously was an Oz behind the curtain here! -- j
    .
    Reply | Permalink  
  • Posted by lrshultis 8 years, 1 month ago
    Involuntary servitude, both private and governmental, will not give good results in education, except for those who are curious and choose to find things out about existence regardless of the implied or actual force used against them. Of course, there will be those who like being told what to do or learn and who end up being teachers, although I did have some teachers who treasured independence in students despite the rules.
    Microsoft's programmers must never have used Usenet or seen how some good discussions, including one on Objectivism, were destroyed by those who were being funny or who just wished to destroy the group.
    Reply | Permalink  
  • Posted by slfisher 8 years, 1 month ago
    It didn't randomly pick this stuff up. Some people made a point of deliberately exposing it to these sentiments to see what would happen. That said, yes, Microsoft was stupid not to predict this.
    Reply | Permalink  
  • Posted by $ Olduglycarl 8 years, 1 month ago
    Good questions, Nick. I see it as a cultural problem, one devoid of conscience and conscious thinking: the willful destruction of ideas, the mockery of technology, and yes, yes, yes, the lack of historical truths, quantum truths, and underlying morality. All of this from a culture that runs to "Safe Places" when informed differently or admonished for its behavior. It's truly the "be whatever you will" generation, the consequences of which will lead to a downward de-evolutionary spiral of the conscious mind.
    Reply | Permalink  
  • Posted by livefree-NH 8 years, 1 month ago
    A few years ago there was a case where some HP facial-recognition software (actually running in a Best Buy or somewhere) did not recognize the face of a black employee, but did just fine with pink-faced people like me. Question: is that some racist hardware, or what? (source: http://j.mp/1RCo03r and http://j.mp/1RCnZws ) Many times people don't specify what the software is supposed to do when it is unleashed on the public. This chatbot, for example, was supposed to learn from people who interacted with it. But even the word "learn" is subjective and needs to be better defined, and success itself needs a more rigid definition.

    At the risk of piling on to an already dead horse, can anyone say that the healthcare.gov website was successful? How do you measure success?

    I think they have this one thing in common: the task they were attempting was illogical and poorly defined, and those very things are nearly impossible for a computer to do, in contrast to what humans can do without even thinking. (A sketch of making "success" measurable follows this comment.)
    Reply | Permalink  
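    A minimal sketch in Python of making "success" measurable before release, using invented test data and an arbitrary tolerance (this is not HP's or Microsoft's actual test procedure): compute the detection rate per demographic group and flag any disparity.

        # Invented test results: (group, detected) pairs from a hypothetical trial.
        results = [
            ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
            ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
        ]

        def detection_rates(samples):
            totals, hits = {}, {}
            for group, detected in samples:
                totals[group] = totals.get(group, 0) + 1
                hits[group] = hits.get(group, 0) + int(detected)
            return {g: hits[g] / totals[g] for g in totals}

        rates = detection_rates(results)
        print(rates)  # {'group_a': 0.75, 'group_b': 0.25}
        if max(rates.values()) - min(rates.values()) > 0.10:  # tolerance is arbitrary
            print("FAIL: detection rate varies across groups beyond tolerance")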
  • Posted by Herb7734 8 years, 1 month ago
    That's what happens when ivory-tower scientists try to cope with the real world. The bot is a child who learns very fast. It was sent out into the world without an education in human interaction or social skills. WTF did they expect would happen? It might be a good idea to train the bot with someone who doesn't live in a virtual world.
    Reply | Permalink  
