Fair use up in the air
You can’t just eat everything … Local rules popping up … And pick your rabbit hole.
The all-you-can-eat buffet could be closing.
Artificial intelligence firms have a big appetite for data. They feed on books, music, print media and anything else they can use as training data.
But that doesn’t sit well with the companies and people who create that material, particularly when they watch AI firms rake in huge amounts of cash without sharing any of it.
AI firms claim the “fair use” doctrine allows them to use the material for free and without asking permission. It’s the same doctrine that blessed things like Google Books’ search snippets.
But that arrangement is up in the air right now. And the sudden firing of a key player by the Trump administration just made the future for AI even more uncertain.
(Un)fair use?
The fair use argument took a hit earlier this year (which we wrote about), when Thomson Reuters won a copyright infringement ruling against the AI startup Ross Intelligence.
That win suggested creators might finally get their payday, while AI firms would have to completely rethink how they do business.
But now the future of fair use is once again in limbo after officials unexpectedly fired Register of Copyrights Shira Perlmutter, along with the head of the Library of Congress, which oversees the U.S. Copyright Office.
The Trump administration didn’t really explain why it fired Perlmutter, so the tech world is abuzz with speculation.
But the timing was impossible to miss: Perlmutter had just released a 108-page study with some stark warnings about AI for Congress to heed.
Perlmutter’s report argued that “transformative” use (essentially, paraphrasing rather than regurgitating), which AI firms say allows them to use other people’s material, has limits. It also concluded that licensing deals are probably going to be needed.
All of which could have rough financial implications for AI firms, and the billionaires who own them.
Legal fig leaf
Since the first neural nets, engineers have treated the open web as a buffet, rather than a restaurant where you order a single meal and pay for it.
The legal fig leaf was fair-use case law. But the Thomson Reuters ruling sets a new — very different — precedent: If the AI’s outputs compete in the same market as the source, the use tilts toward infringement.
Creators cheered. AI labs shuddered.
Perlmutter’s report tries to salvage a middle path. It stresses three takeaways:
Fair use is not “infinite,” especially when outputs substitute for originals.
Paid licensing markets are “nascent but real” — see OpenAI’s recent deals with Axel Springer and The Atlantic.
Congress might need a blanket-license regime or a collective-rights scheme to keep litigation from strangling innovation.
Looking to legislators
The fallout from Perlmutter’s firing is still settling, but lawmakers have plenty of ideas of their own.
Congress was already poking around the edges of the issues she raised in her report.
The bipartisan NO FAKES Act (H.R. 2794) would let anyone, from Beyoncé to your local newscaster, collect damages when their face or voice is cloned without consent or when their likeness shows up in an unlicensed training set.
Meanwhile, states are carving their own path.
The National Conference of State Legislatures compiled more than 550 AI-related bills that are live in 45 states and Puerto Rico, many touching dataset transparency or copyright disclosures.
For example, California’s AB 412 would force developers to maintain a public portal listing every known copyrighted work in their training data.
That burden would be too onerous and expensive for startups, the Electronic Frontier Foundation says, and would only make it easier for Big Tech to control the AI market.
If federal lawmakers don’t come up with a national standard, the country could wind up with a patchwork of license regimes — precisely what the AI giants dread.
Lawsuits piling up
As if all this weren’t complicated enough, court dockets are filling up with AI-related copyright lawsuits.
The New York Times v. Microsoft & OpenAI survived a motion to dismiss in April and now moves into discovery, with the Times arguing that ChatGPT regurgitates its prose almost verbatim.
Authors, led by the Authors Guild, have a parallel class action suit covering novels and nonfiction works.
On the visual side, Getty Images v. Stability AI alleges that 12 million scraped photos were used to train the company’s image generator, Stable Diffusion, without a license.
If even one of these cases ends like the Thomson Reuters case, every frontier model that trained on unlicensed content could suddenly owe back-royalties — or face unplugging.
Billionaires weigh in — loudly
Twitter founder Jack Dorsey parachuted into this policy vacuum by tweeting four words: “delete all IP law.”
Elon Musk, who owns the rebranded Twitter, X, and its AI tools, replied, “I agree.”
TechCrunch, WaPo and a phalanx of intellectual property lawyers called the notion reckless, but it struck a chord with the open-source crowd.
Venture capitalist Marc Andreessen took the opposite tack, saying he would pay extra for a large language model that isn’t “polluted” by copyrighted text, hinting at a premium tier of “clean-room” models.
The split vision is stark: One camp wants zero friction; the other sees a market for pristine, fully licensed data.
We see a fascinating world unfolding in new ways every day. If you want to stay up to date on AI, then smash that button!
Keeping the centers out of the center: As the data center industry grows in Arizona, the Phoenix City Council is considering new zoning regulations to keep the big, energy-consuming centers away from areas where residents live, walk or get to work, Axios’ Jeremy Duda reports.
AI in the classroom: Tucson Unified School District officials are considering a policy for how AI should be used in classrooms, with “tread carefully” as the guiding principle, Tucson Sentinel columnist Blake Morlock writes. Over at the Balsz School District in Phoenix, officials are partnering with GenTech to teach students about AI and robotics, KTAR’s Shira Tanzer reports. The program, which chipmaking giant TSMC helped design, aims to prepare students for jobs in Arizona’s growing tech industry.
Not just here: Arizona courts are exploring the potential of AI, including by allowing avatars to present victim impact statements or act as court reporters. Court systems in other states also are experimenting, like a judge in Florida who used a virtual reality headset to get the perspective of a defendant who waved a gun at a wedding, the Associated Press reported.
I’m sorry, I’m afraid I can’t do that, Dave: AI companions are getting more popular, including with kids. They’re also getting a little creepy, KJZZ’s Mark Brodie found when he talked with Danny Weiss from Common Sense Media. The group found kids can easily get around age restrictions, and the advice they get from AI companions is hard to resist, even when it’s not great.
Hints of Skynet: A think tank in Washington, D.C., is toying with AI to see if it can help with diplomacy, NPR reported. The Pentagon funded the Futures Lab, where the Center for Strategic and International Studies is experimenting to see if ChatGPT could help stop a nuclear war. The United Kingdom and Iran are experimenting, too.
Dr. Robot: The Icahn School of Medicine at Mount Sinai in New York is now the first medical school in the U.S. to fully incorporate AI into its doctor training program, CBS News reported. Students are free to use ChatGPT to prep for surgeries, improve their bedside manner, and generally take advantage of AI however they like.
Robot rehearsal: Tesla’s Optimus just strutted through a 22-second demo video, but Elon Musk says the humanoid is “still very far from our final form,” hinting that the bot’s real tricks are still under wraps.
Building the mothership: Sam Altman shared photos of “Stargate 1,” the Oracle-backed data-center megaproject rising in Abilene, Texas. Altman says that when finished, it will be the largest AI-training facility on Earth: think less launchpad, more warp-drive for models.
Agents wanted: YC-backed Firecrawl is dangling $1 million to hire swarms of autonomous AI agents (and the humans who built them). The openings range from content creation to customer support—no pulse required, just good code.
Bug-bounty bot: An autonomous system dubbed XBOW has hacked its way to the top of HackerOne’s U.S. leaderboard, edging out flesh-and-blood researchers in reported vulnerabilities and reputation points.