One nation, one algorithm
Big and beautiful ... $3,000 robots on the loose ... And model doesn't mean what it used to.
Remember the “One Big Beautiful Bill” in Congress that would bar states from passing AI laws for 10 years?
It's been almost a month since the bill was introduced, and the jury is still out on the bill as a whole — the Senate has yet to take up the measure, and some House Republicans are already regretting their vote — and on the specific AI provision.
Supporters in the AI industry pitch it as a way to spare innovation a 50‑state compliance migraine.
Critics at the state government level call it a big-tech power grab.
But today, we want to ground the debate in some numbers by looking at what states have actually been doing to regulate AI — and what could be lost if the bill passes in its current form.
The Numbers
Our analysis of the legislative landscape shows that states have introduced close to 900 AI-related bills in 2025 alone, compared to 128 federal proposals — a ratio of more than 7-to-1. States have enacted at least 124 AI-related bills into law so far.
New York leads the legislative charge with 86 AI bills, followed closely by New Jersey with 74 and Texas with 66.
Arizona ranks much lower, with just eight AI-related bills introduced this year.
While not all of those bills made it into law (in fact, only a fraction have), they aren't just symbolic gestures — they represent serious attempts to grapple with AI's implications in everything from healthcare decisions to election integrity.
The most popular category of AI-related bills from state lawmakers is education. Lawmakers at the state level have introduced at least 84 education-AI bills this year, ranging from curriculum-building measures that add AI literacy to K-12 courses to bans on using ChatGPT to complete graded assignments.
California lawmakers introduced legislation that would require the state to develop model AI curricula for every public high school career-tech pathway.
New York lawmakers introduced a bill to weave “AI system literacy” into junior- and senior-high standards statewide.
Connecticut lawmakers are taking a different approach, filing legislation that would prohibit schools from letting AI tools provide direct instruction or grading without human oversight.
The next most popular categories are consumer protection (57) and healthcare (52).
On the consumer-protection front, states are forcing up-front disclosure whenever an AI system is interacting with consumers or setting prices.
A Maine bill would require a prominent notice any time AI is used in a consumer transaction.
And Arizona House Democratic leader Oscar De Los Santos floated a bill this year that would outlaw algorithmic rent-price-fixing by large landlords. Like many AI bills (and bills sponsored by Democrats in Arizona), it didn’t receive a hearing this year.
And when it comes to healthcare, many states have already barred insurers and hospitals from letting AI make final coverage or treatment calls. Arizona lawmakers got a bill along those lines signed into law this year.
They’re also looking into statewide registries of approved medical-AI tools and mandatory bias audits before deployment.
With a new policy area like AI, it can take several years to pass a bill, even one that's broadly popular — the bill-to-law process has endless hurdles, and lawmakers sometimes need a few sessions to clear them all.
It also takes time for one state's ideas to percolate into the broader conversation. But what one state tries today could become a model for others tomorrow as states share best practices and approaches.
On the other hand, a patchwork of competing and conflicting state laws would be a nightmare for AI companies to comply with. And states can’t offer the kind of big-picture framework that the federal government could when it comes to steering, regulating and implementing AI.
The laws that states could lose
So what has actually made it into law at the state level?
In triaging the massive disruption that AI represents, lawmakers are focusing heavily on child-safety regulations.
Arizona approved a bill this year to make it illegal to use AI to generate images or video that appear to depict child sexual abuse. It earned bipartisan support at the Capitol, and Gov. Katie Hobbs signed it into law.
And while the federal bill wouldn't nullify criminal laws about AI use — just regulatory ones — Republican Rep. Julie Willoughby, who sponsored Arizona's AI child porn law, is still concerned.
Willoughby and fellow Republican state Rep. Nick Kupper recently sent a letter to Arizona’s Democratic U.S. senators urging them to oppose that provision of the bill, saying “The sweeping federal moratorium on enforcing laws like these is an unjustified overreach and would unnecessarily delay important protections for our residents.”
“As state legislators, we recognize the importance of national standards to address harms and potential benefits presented by rapidly evolving technology, such as AI,” they wrote. “However, we are equally committed to protecting Arizona’s ability to address the unique needs and values of our constituents through timely and carefully crafted legislation.”
More to the point, Arizona's laws wouldn't even be necessary if Congress were actually regulating AI, they noted.
“Moreover, H.R. 1 does not appear to propose any regulatory AI scheme to replace any state laws that are jeopardized by the moratorium. This approach will inevitably lead to unintended and consequential societal harms,” they wrote.
Willoughby also earned bipartisan support for her bill to bar health insurers from using AI to make final decisions on medical claim denials or prior authorization requests.
The bill, she says, ensures a human, not an algorithm, is responsible for those decisions.
And it would probably be nullified if the “one big beautiful” bill makes it into law.
“These are practical laws designed to protect Arizonans,” Willoughby said in her letter to Arizona’s U.S. senators. “Washington shouldn’t be dictating whether we can enforce them.”
Meanwhile, in Congress…
At the federal level, most of the focus has been on helping to boost the AI industry by building research infrastructure, exemplified by initiatives like the National Artificial Intelligence Research Resource (NAIRR), which received $72.3 million in 2025 funding.
The CREATE AI Act of 2025 further illustrates this strategy, aiming to "establish the National Artificial Intelligence Research Resource" and ensure "United States leadership in artificial intelligence."
This research-first mentality shows Congress is more interested in fostering innovation than mitigating the risks and threats AI poses, which has been the focus at the state level.
Bills dealing with commerce and standards lead federal AI legislative activity, with 24 bills introduced — accounting for nearly one-fifth of all federal proposals. That’s followed by bills dealing with AI and national security (13 bills), education and workforce (12 bills), public health (10 bills), and intellectual property (9 bills).
The data underscores the complementary roles of state and federal governments: states address immediate, localized concerns, while federal efforts focus on long-term innovation and national security.
A balanced solution may involve a hybrid regulatory model, where federal standards provide a baseline for consistency, yet allow states to enact additional protections tailored to their unique needs.
For example, federal guidelines could set minimum safety and ethical standards for AI systems, while states could address specific concerns like consumer privacy or educational applications, as seen in their 2025 legislative focus.
This approach could reconcile the competing priorities of fostering technological advancement and safeguarding public welfare.
But that’s not the approach that the "One Big Beautiful Bill" takes.
The bill still faces intense scrutiny in the Senate, though very little of that scrutiny is actually trained on the AI provision.
But if the Senate approves the bill with that provision intact, a whole lot of laws that states have approved to protect their citizenry and rein in AI’s potential threats will be wiped off the books.
$3K humanoids hit the cart: Hugging Face and The Robot Studio introduced HopeJr, an open-source bipedal robot that walks, grasps, and ships later this year for under $3,000 — cheaper than most iPhones. Consumer-grade robotics just crossed a psychological price ceiling, opening the door to hobbyist-scale automation.
Please help us buy a robot.
A Yeti goes mega-viral: A brand-new TikTok handle called “Yeti-Boo” blew past 28.6 million views in its first week thanks to a fully AI-generated, ASMR-style snow beast that vlogs from the woods. Zero followers → cult character in seven days is a reminder that low-cost synthetic actors can now win the algorithm all by themselves.
VFX in a click: Luma Labs’ new “Modify Video” lets filmmakers keep the original performance but swap in entirely new worlds — lighting, textures, even characters — in post. Think “shoot once, restyle infinitely,” and you’ll know why traditional VFX shops are sweating.
Some AI humor for the nerds 🙂