Big, beautiful blockage
The tortoise chasing the hare … An intellectual divide widens … And Thomas Jefferson loved ChatGPT.
AI moves fast. Congress doesn’t.
But if Congress gets its way, it will be the only body that can regulate AI.
A proposal to block states from passing any AI-related laws for 10 years was tucked inside the “big, beautiful” budget bill that made it out of the GOP-led U.S. House earlier this month.
In effect, the same people who move at a glacial pace want to hold the reins of a technology that makes breakthroughs every day. And each one of those breakthroughs generates serious, and largely unknown, implications for society as a whole.
The U.S. Senate still has to vote on the bill, so there’s an opening for state officials to make their case.
Right now, two GOP legislators in Arizona and a whole lot of state attorneys general are pleading with senators to strip the provision blocking states from enacting AI regulations.
Meanwhile, Big Tech is probably drooling over the idea of an inept government body like Congress taking control, especially as the richest people in the world make huge investments in AI that rival the GDPs of some of the wealthiest countries.
Big, beautiful backlash
The bill's AI provision is broad and sweeping.
No state or local government may enforce any law “regulating artificial intelligence models, artificial intelligence systems, or automated decisions” for a decade.
So far, 40 attorneys general, including Arizona Attorney General Kris Mayes, have come together to urge Congress to remove the moratorium.
"Imposing a broad moratorium on all state action, while Congress fails to act in this area is irresponsible and deprives consumers of reasonable protections," they wrote in a joint letter to congressional leaders.
They also warned Congress that this moratorium would “wipe away any state-level frameworks already in place,” leaving Americans exposed to the risks of rapidly advancing AI.
On Tuesday, Arizona House Republican Whip Julie Willoughby and Rep. Nick Kupper reached across the aisle for some help. They sent a joint letter to Democratic U.S. Sens. Mark Kelly and Ruben Gallego, urging them to reject the moratorium.
“The sweeping federal moratorium on enforcing laws like these is an unjustified overreach and would unnecessarily delay important protections for our residents,” Willoughby and Kupper wrote.
A “blanket prohibition” on state policies could also “hinder innovation and accountability,” they wrote, by freezing regulatory progress and halting enforcement of “common-sense reforms” at the state level.
Willoughby and Kupper highlighted how the moratorium would directly block Arizona from enforcing recent, practical laws designed to protect residents:
HB2175 prohibits health insurers from using AI to make final decisions on medical claim denials or prior authorization requests, ensuring a human—not an algorithm—remains responsible for those outcomes.
HB2678 (which Willoughby sponsored) updates Arizona’s criminal code to include computer-generated images that appear to depict child sexual abuse, giving law enforcement stronger tools to combat digital exploitation.
Big, beautiful bonanza
We don’t know what Big Tech will do if they no longer have to worry about state officials regulating their business.
But they certainly are moving fast and spending big.
Nvidia CEO Jensen Huang recently declared, “AI is now infrastructure,” and announced that companies are building $5 trillion worth of “AI factories” worldwide—massive data centers that will power the next wave of AI.
To be clear, these aren’t just data centers; they’re the new industrial backbone, as essential as electricity or the internet.
Oracle is spending $40 billion on Nvidia chips for OpenAI’s new “Stargate” data center in Texas, part of a $500 billion joint venture to build AI megacenters across the U.S. The Texas site alone will eventually house eight buildings and 1.2 gigawatts of computing power, which is on par with the output of the largest power plants in the country (and almost enough to power the flux capacitor in “Back to the Future”).
OpenAI has secured $11.6 billion for its Texas campus, aiming to reduce dependence on Microsoft and build the next generation of AI models.
And similar projects are planned in 16 states — including Arizona, California, and New York.
Big, beautiful bet
Like the titans of Big Tech, the Trump administration also is moving fast on AI.
Internationally, the U.S. and the United Arab Emirates just finalized a $1.4 trillion AI and semiconductor deal, including plans for the world’s largest AI campus in Abu Dhabi and new U.S.-based data centers. Over $200 billion in related deals were signed during President Donald Trump’s Gulf tour.
At home, it looks like U.S. officials are betting that minimal regulation and massive investment will secure AI supremacy.
The moratorium is as much about global competition as it is about domestic policy. By centralizing power in Washington and sidelining states, Congress is signaling to the world, and especially to China, that America intends to lead by accelerating, not slowing, AI deployment.
But this comes with risks, as the attorneys general and state lawmakers pointed out.
States like Arizona, which have acted as “laboratories of democracy,” would be unable to respond to new AI threats or adapt to emerging technologies for a decade.
For now, Mayes is throwing the weight of the Arizona Attorney General’s Office behind the push for states to regain some regulatory power.
“From displacing workers to guzzling precious water supplies and enabling fraud and exploitation, there is urgent work to be done to protect Arizonans from the dangers of this powerful technology,” Mayes said in a news release. “We must act now to establish guardrails and prevent harm from AI. Congress must not tie the hands of state leaders working to protect their residents and ensure the ethical development of this rapidly evolving technology.”
Compute land-rush: Social Capital’s Chamath Palihapitiya told his X followers he’s buying 2,100 acres of desert outside Phoenix with land broker Anita Verma-Lallian to build a one-gigawatt AI data-center campus. He pegs the “compute city” at roughly $25 billion and boasts it’ll sit just down the road from Bill Gates’ proposed smart-city parcel. First up: wrangling water rights, power hookups, and enough permits to fill a server rack — an old-school reminder that the AI boom still has to clear real-world red tape.
Literacy is key: The University of Arizona’s representative on Gov. Katie Hobbs’ new AI steering committee, physics professor Elliot Cheu, says he wants to encourage AI literacy, particularly in the classroom, Arizona Public Media’s Hannah Cree reports.
“Understand what AI is doing, how it's using your data, what are the strengths of using an AI informed analysis, and what are the weaknesses,” he said.
East Coast, West Coast: An intellectual divide is growing about the future of AI, and it’s starting to break down roughly as a clash between cautious East Coast academics and brash Silicon Valley entrepreneurs, The New Yorker’s Joshua Rothman writes. But neither side has a firm grasp on what’s coming, or what it will be like for humans to work on “cognitive factory floors.”
“Meanwhile, there are barely articulated differences on political and human questions — about what people want, how technology evolves, how societies change, how minds work, what ‘thinking’ is, and so on — that help push people into one camp or the other,” Rothman writes.
Bad rep: Telling people you use AI makes them trust you less, according to a new study from the UA’s business school. Researchers ran experiments in fields including teaching, financial investment, and graphic design. The results were consistent across the 5,000 participants. And the loss of trust wasn’t limited to people who never use AI: even participants familiar with AI trusted someone less once they learned that person was using it.
Oooooh, that’s nice: In yet another example of AI popping up in unexpected places, a Scottsdale resort is partnering with a tech company to use AI-powered “lifestyle robotics” for massages, the Scottsdale Independent reports. The resort, W Scottsdale, says the massage robots from Aescape will provide a “fully customizable robotic massage experience.”
Oh, that’s not nice at all: A human resources software company is in hot water for using AI tools to allegedly discriminate against older job applicants, Scripps News Group reports. A job applicant sued Workday after the software rejected him for more than 100 jobs over seven years. He and four other applicants, who are all at least 40 years old, submitted hundreds of applications that were rejected, sometimes within minutes.
SnitchGPT: How okay will you be if your AI can call the cops on you?
BlackmailGPT: To be fair, it only happened in the stress-testing phase without guardrails. This won’t happen in production. Right …?
Uno-reverse: Maybe we should all just start threatening it. Yeah, that should be fine.
We don’t usually include a “What we’re laughing at” section, like we do in the Arizona Agenda and the Tucson Agenda, but we couldn’t resist this one.
Apparently, the Founding Fathers were also time travelers! Now it all makes sense.