Big Beautiful Bill? Congress Will Vote to Strip States of Their Power to Regulate Artificial Intelligence. D.C. NIFO

Posted by freedomforall 3 weeks, 3 days ago to Government
16 comments

Excerpt:
"Buried deep in Congress’s 1,116-page “One Big Beautiful Bill Act” is a provision so sweeping, so dystopian, and so underreported that it’s hard to believe it was passed out of a committee.

Section 43201 of the bill, blandly titled the “Artificial Intelligence and Information Technology Modernization Initiative,” doesn’t just fund the federal government’s full-scale AI expansion—it removes every state’s right to regulate artificial intelligence for the next decade.

Let that sink in: For the next ten years, no state in America—not even your state—will be allowed to create its own safeguards, protections, or liability standards for how AI is developed or deployed."
--------------------------------------
Does this not stink of the same Deep State sewage as giving the Pharma companies protection against 'vaccine' damages?

D.C. NIFO
SOURCE URL: https://jonfleetwood.substack.com/p/congress-will-vote-to-strip-states




All Comments

  • Posted by 3 weeks, 3 days ago
    BAN ALL MASSIVE FUNDING BILLS FOREVER!
    STOP CON-GRESS' ADDICTION TO FUNDING UNCONSTITUTIONAL ACTIONS.
    Con-gress has proven repeatedly they can't be trusted to do ANYTHING that honors the Bill of Rights.
    Force all funding to be done on a case-by-case basis, with no bill to exceed 20 pages at 12-point type, single spaced.
    Force every bill to sunset (ending all funding) in 2 years or less with no renewals.
    Kill the Deep State D.C. monster.
    D.C. NIFO
  • Posted by $ Thoritsu 3 weeks, 1 day ago
    I am ok with the AI state restriction. NONE of them have the wisdom to do anything positive. Very happy it remains unregulated.

    We sure as shit do NOT need another California CARB.
    • Posted by 3 weeks, 1 day ago
      Agree on the CA CARB.
      Other states' govs are utterly stupid to follow anything CA has done recently.
      But concentrating the (utterly corrupt) regulation in D.C. is even greater stupidity.
      I need a time machine to return to the 70's and live my life trying to prevent this idiocy.
      • Posted by $ Thoritsu 3 weeks, 1 day ago
        Well, the government is not going to write a law that says no government can regulate something for 10 years. This is about as good as it gets.

        Not to imply this whole thing is OK. I detest that they opened up state tax credits! This is tax relief for idiot states that tax themselves into oblivion, and a subsidy from low-tax states. This needs to DIE! (And I live in MA!)
  • Posted by $ Olduglycarl 3 weeks, 2 days ago
    Didn't know it was in this bill either, and YES, I am a fan of 1 action, 1 bill to vote on at a time.
    BUT, I wonder what safeguards (if any) have already been built into the AI agenda.

    Does this AI thing include autonomous autos/planes/trucks, teaching, or any kind of monitoring of anything?
    I'd have a problem with these things if it did . . .
    • Posted by $ gharkness 3 weeks, 1 day ago
      Safeguards in AI? Surely you jest! Honestly, I think it's too late. We're so screwed.

      https://techcrunch.com/2025/05/22/ant...
      • Posted by $ Olduglycarl 3 weeks, 1 day ago
        UNPLUG THE DAMN THING!
        • Posted by $ gharkness 3 weeks, 1 day ago
          One would think. But....not sure it's possible. Kind of like a metastasized cancer. Can't just cut it out of one spot.

          I read something even more alarming today, after I made my previous comment. It was from Jeff Childers' Coffee and Covid Substack: (Please forgive the long quote, but I couldn't find anything to cut out of it!)
          =========================

          And there is another, bigger, even weirder reason why it will be impossible to constrain AI.

          🔥 At bottom, artificial intelligence is serious weird science. Try to stick with me here; it’s important.

          At its core, in the deepest, central part of the software that runs AI, nobody understands how it works. They’re just guessing. AI is just one of those happy lab accidents, like rubber, post-it notes, velcro, penicillin, Silly Putty, and Pet Rocks.

          It happened like this: Not even ten years ago, software developers were trying to design a tool capable of predicting the next word in a sentence. Say you start with something like this: “the doctor was surprised to find an uncooked hot dog in my _.” Fingers shaking from too many jolt colas, the developers had the software comb through a library of pre-existing text and then randomly calculate what the next word should be, using complicated math and statistics.

          What happened next ripped everybody’s brain open faster than a male stripper’s velcro jumpsuit.

          In 2017, something —nobody’s sure what— shifted. According to the public-facing story, Google researchers tweaked the code, producing what they now call the “transformer architecture.” It was a minor, relatively simple, software change that let what they were now calling “language models” omnidirectionally track meaning across long passages of text.

          In fact, it was more that they removed something rather than adding anything. Rather than reading sentences like humans do, left to right, the change let the software read both ways, up and down, and everywhere all at once, reading in 3-D parallel, instead of sequentially. The results were immediate and very strange. The models got better —not linearly, but exponentially— and kept rocketing their capabilities as they were fed more and more data to work with.

          Put simply, when they stopped enforcing left-to-right reading, for some inexplicable reason the program stopped just predicting the next word. Oh, it predicted the next word, all right, and with literary panache. But then —shocking the researchers— it wrote the next sentence, the next paragraph, and finished the essay, asking a follow-up question and wanting to know if it could take a smoke break.

          In other words, the models didn’t just improve in a straight line as they grew. It was a tipping point. They suddenly picked up unexpected emergent capabilities— novel abilities no one had explicitly trained them to perform or even thought was possible.

          It’s kind of like they were trying to turn a cart into a multi-terrain vehicle by adding lots of wheels and discovering, somewhere around wheel number 500 billion, that they accidentally built a rocket ship that can break orbit. And nobody can quite explain the propulsion system.

          ================================

          If a third of this is true, it's too late.
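
          For what it's worth, here is a toy sketch (purely illustrative, with made-up numbers, not anyone's actual model code) of the two ideas in that quote: first the old sequential "predict the next word from the previous word" trick, then a single attention step in which every word scores every other word at once instead of marching left to right.

          import numpy as np
          from collections import Counter, defaultdict

          corpus = "the doctor was surprised to find an uncooked hot dog in my lunch".split()

          # Old way: strictly sequential. Look only at the previous word (a bigram count).
          bigrams = defaultdict(Counter)
          for prev, nxt in zip(corpus, corpus[1:]):
              bigrams[prev][nxt] += 1

          def predict_next(word):
              followers = bigrams.get(word)
              return followers.most_common(1)[0][0] if followers else None

          print(predict_next("hot"))  # -> "dog"

          # "Read everywhere all at once" way: one self-attention step over the whole sentence.
          rng = np.random.default_rng(0)
          dim = 8
          embed = {w: rng.normal(size=dim) for w in set(corpus)}   # toy word vectors
          X = np.stack([embed[w] for w in corpus])                 # (13 words, 8 dims)

          Wq, Wk, Wv = (rng.normal(size=(dim, dim)) for _ in range(3))
          Q, K, V = X @ Wq, X @ Wk, X @ Wv
          scores = Q @ K.T / np.sqrt(dim)                          # every word scored against every word
          weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax over each row
          context = weights @ V                                    # each position now "sees" the whole sentence

          print(context.shape)  # (13, 8): one context-aware vector per word, computed in one shot

          Nothing magical happens in this toy version, but the second block is the "everywhere all at once" idea the quote is describing, scaled down to a single sentence and random numbers.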
    • Posted by 3 weeks, 2 days ago
      After recent experiences with viruses, who in their rational mind would agree to putting the utterly corrupt fedgov in charge of AI? One well-placed bullet is all it takes to remove the only person who has shown genuine resistance to the Deep State.
  • Posted by 3 weeks, 2 days ago
    From the party that engineered the "civil" war and blamed it on the victims.
    Thanks, GOP (and Trump) for another back-breaking, unconstitutional waste of our scarce resources that fills the pockets of the banking cartel and Wall St.
    D.C. NIFO.

