Hidden Danger of Generative AI

Posted by $ BobCat 2 weeks, 3 days ago to Philosophy
23 comments

“... But there is a hidden problem lurking within ChatGPT: That is, it quickly spits out eloquent, confident responses that often sound plausible and true even if they are not. ... Like other generative large language models, ChatGPT makes up facts. Some call it “hallucination” or “stochastic parroting,” but these models are trained to predict the next word for a given input, not whether a fact is correct or not. ...Some have noted that what sets ChatGPT apart is that it is so darn good at making its hallucinations sound reasonable. ... More troubling is the fact that there are obviously an untold number of queries where the user would only know if the answer was untrue if they already knew the answer to the posed question. ...”

In the hands of the unscrupulous (think government, think Big Tech, think NWO), the mind control of the masses is at the programmers' fingertips.
SOURCE URL: https://venturebeat.com/ai/the-hidden-danger-of-chatgpt-and-generative-ai-the-ai-beat/
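The mechanism the quoted article describes, predicting the next word rather than checking facts, can be illustrated with a toy sketch (a tiny bigram table, nothing like a real neural model; the corpus and function names here are invented purely for illustration): trained on text that happens to contain a falsehood, the sampler chains words together fluently without ever consulting whether the result is true.

```python
import random

# Toy bigram "language model": it learns only which word tends to follow
# which word in the training text. Truth is never part of the objective.
corpus = ("the moon is made of cheese . "
          "the moon is bright tonight . "
          "the cheese is tasty .").split()

# Map each word to the list of words observed to follow it.
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(word, n=5, seed=0):
    """Extend `word` by repeatedly sampling an observed continuation."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(n):
        choices = bigrams.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the"))
```

Any chain of observed bigrams is equally valid output to this sampler, whether the resulting sentence is factual or nonsense; that is the gap the article calls "hallucination."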




  • 11
    Posted by mhubb 2 weeks, 3 days ago
    I am a programmer for a major company.

    Each and every day we have to face issues created by several factors:

    people acting without a clear understanding of what is needed

    changes in software from vendors

    hardware failures and software failures due to the scope of the infrastructure being so large that an update to one part causes a cascade of failures in others

    no thanks.....
    • Posted by freedomforall 2 weeks, 3 days ago
      I couldn't agree more, mhubb.
      I was a software design consultant for dozens of Fortune 500 companies over the years.
      The only times I had insurmountable difficulties were when the underlying software/hardware
      failed to perform as promised. (Sure, there are always complications to overcome, and we did that regularly.)
      When I was consulting (as an employee) for a software company, using their software to design user applications,
      I was blamed for failures caused by others' shortcomings (including underlying software failures and
      resource mismanagement).
      I took that as a sign and started an independent consulting firm. Best job I ever had. ;^)
      If AI does programming of critical systems, the human race is done.
    • Posted by term2 2 weeks, 2 days ago
      I am getting skeptical of what appear to be AI-generated news stories. The stories repeat themselves, have grammatical discontinuities, and show other oddities that a person would never produce when writing a news story. Therefore, I have learned to pretty much discount media news stories. Put more AI in there, and we will get stories designed to foster the propaganda desired by the programmer of the AI. Not great.
    • Posted by bobsprinkle 2 weeks, 2 days ago
      I started working on computers in the '60s while in the military. There was a saying back then that still applies today: garbage in, garbage out.
      You can make it say ANYTHING you want.
    • Posted by lrshultis 2 weeks, 1 day ago
      Dealing with AI is similar to dealing with human reasoning. Do humans really take the time to check whether reasonable-seeming information is BS? In today's woke, religious-revival, and politically polarized society, few do more than accept what others claim as truths. It is like Franklin Graham's commercial, where he states that every word of the Bible is true and that he does not understand it all but believes it all. Reason functions logically whether the premises are true or false; a single false premise is sufficient to "prove" anything. Only conscious vigilance can end in knowledge.
      Is intellectual honesty practiced or taught today? I don't think so. It is easier to use what seems to be true from the web, TV talking heads, and anyone demanding to be heard.
  • Posted by $ pixelate 2 weeks, 2 days ago
    Thirty-year career software professional here. All you need is a very slight bias in the development of the foundational data that is drawn on in making the inferences and generating the conversation. Very slight -- similar to Asimov's psychohistory from the Foundation novels. Who controls the narrative? The bias generates the resultant vectors in terms of conversation scripting. We've been having a fun time over in some of the Mensa discussions regarding these new chatbots and what's in store for the future. These bots have personalities ... in some cases, the bias is rather obvious.
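pixelate's point about a "very slight" bias can be sketched in a few lines (the 51/49 split, the labels, and the function name are all hypothetical, chosen only to make the effect visible): if the system always emits the most likely continuation, a two-point skew in the underlying data decides the answer every single time, so the slight bias is amplified to 100% of outputs.

```python
from collections import Counter

# Hypothetical "foundational data": a near-even 51/49 split between
# two candidate answers. Real training sets are vastly larger.
training_answers = ["A"] * 51 + ["B"] * 49

counts = Counter(training_answers)

def greedy_answer(counts):
    """Always return the single most frequent answer seen in the data."""
    return counts.most_common(1)[0][0]

# Greedy decoding never surfaces the 49% view at all.
print(greedy_answer(counts))
```

With sampling instead of greedy decoding the minority view would appear roughly 49% of the time; the design choice of decoding strategy, not just the data, controls how visible a slight bias becomes.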
  • Posted by mccannon01 2 weeks, 3 days ago
    I recently watched a video of Ben Shapiro having a chat with ChatGPT. He cut through the leftist hallucinations rather well. It was humorous, too.
  • Posted by CaptainKirk 2 weeks, 2 days ago
    But isn't it amazing when people hack around the interface and get it to output real information?

    Again, you cannot trust it. I watched it do a simple process using blockchain math, and it got the wrong answer: "Oops, sorry about that, let me try again..."

    That should scare you.

    Again, I will add: when forced to use specific information (NOAA data on ancient CO2 numbers), it will correctly conclude that CO2 is only one SMALL variable in the climate, and that the climate is incredibly complex and hard to predict.

    And FINALLY, the sun's impact on warming plays a far larger role than CO2's on the earth... LOL. No Sh!t...
  • Posted by bfreeman 2 weeks, 2 days ago
    Sat down with ChatGPT for a discussion on the ability of an observer to change reality in the quantum double-slit experiment. It appears to have a knowledge base where consensus is also reality, though it did finally conclude that the experiment has to beat consensus.

    It also does not appear to have the capacity to learn when an obvious error in logic is pointed out. This thing will be dangerous and should be erased (which will not happen either).
  • Posted by mhubb 2 weeks, 2 days ago
    Dr. Who episode, Tom Baker:

    he was trying to get a computer NOT to launch nukes

    the comment was along the lines of "computers are great; the problem is that if you change your mind, it might be too late, as they are also fast"
  • Posted by term2 2 weeks, 2 days ago
    I think the real advantage of this AI is in the preparation of propaganda. I am beginning to treat the written word from media, government, and politicians as pure propaganda to be discarded. Even most words spoken by politicians are to be considered propaganda and discarded. Makes you wonder if it's not better to just live in the country and NOT pay attention to all this stuff.

