The Battle for AI's Future: A Tale of Two Industries

🎭 The Story This Legal Study Tells

Imagine a high-stakes poker game where the house keeps changing the rules as the game unfolds. That's essentially what's happening with AI and copyright law right now—and this academic study reveals the hidden playbook.

📖 The Setup: A World Turned Upside Down

The Old World: Copyright law was straightforward. You create something → You own it → Others need permission to use it. Fair use was the exception—a short list of "yes, you can use this without asking" scenarios (education, news reporting, parody, etc.).

The AI Earthquake: Then generative AI arrived and broke everything. These systems need to "read" millions of copyrighted books, articles, and images to learn. But is that reading... copying? Stealing? Learning? Fair use?

Nobody knew. The law was written for humans, not machines.

🥊 The Fighters in This Story

In the Blue Corner: The AI Industry

  • Companies like Anthropic (Claude) and Meta (Llama)

  • Argument: "We're just learning from publicly available content, like humans do when they read"

  • Goal: Train AI models without paying for every single book, article, or image

  • Stakes: If they have to pay, AI development could grind to a halt

In the Red Corner: The Copyright Holders

  • Authors, publishers, artists, creators

  • Argument: "You're stealing our work to build billion-dollar businesses"

  • Goal: Get paid when AI companies use their content

  • Innovation: Creating a NEW market for "training licenses" (pay to train on our work)

The Referee: The Courts

  • Trying to figure out rules for a game that didn't exist when the rulebook was written

  • Under political pressure from both sides

  • Making up the rules as they go (literally, case by case)

🔬 The Study's Big Discovery: "Develop-Fair Use"

The researcher, Dr. Chanhou Lou, studied three major cases:

  • China: Ultraman case (images of the Japanese superhero character used to train AI)

  • USA: Bartz v. Anthropic (authors suing Claude's creator)

  • USA: Kadrey v. Meta (authors suing Meta's AI division)

His breakthrough insight: Fair use isn't a fixed list anymore. It's becoming a dynamic balancing act based on market competition.

Think of it like this (with a rough code analogy after the comparison):

  • Old Fair Use: "Here's a list of 12 things you can do without permission"

  • New "Develop-Fair Use": "Let's see... does this hurt the copyright holder's business? Does it create new value? What's the market impact? How does this affect competition?"
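
One way to feel the difference is a loose programming analogy. This is purely illustrative: the purpose list, factor names, and weights below are invented for the sketch, and no court scores cases numerically.

```python
# Old model: fair use as a closed list of permitted purposes.
PERMITTED_PURPOSES = {"education", "news reporting", "parody", "criticism"}

def old_fair_use(purpose: str) -> bool:
    """Membership test: is this purpose on the fixed list?"""
    return purpose in PERMITTED_PURPOSES

# New "develop-fair-use" model: an open-ended balancing of
# market-competition factors, each scored case by case (0.0 to 1.0).
def develop_fair_use(new_value: float, harm_to_holder: float,
                     market_substitution: float) -> bool:
    """Balancing test: does new value outweigh competitive harm?
    The weights are invented for illustration."""
    score = new_value - harm_to_holder - 2.0 * market_substitution
    return score > 0

print(old_fair_use("AI training"))      # False: not on the fixed list
print(develop_fair_use(0.8, 0.3, 0.1))  # True: 0.8 - 0.3 - 0.2 = 0.3 > 0
```

The point of the analogy: under the old model, the answer is knowable in advance; under the new one, it depends on values nobody can measure until the case is litigated.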

💡 The Four Contexts That Matter

The courts (both Chinese and American) are breaking AI use into FOUR distinct situations (summed up in a short sketch after this list):

1. Data Input 🔵 (Most likely FAIR USE)

When users upload copyrighted images to train a custom AI model

  • Example: Uploading Ultraman images to create a LoRA model

  • Court thinking: This is more like "reading to learn"

2. Data Training 🟢 (Probably FAIR USE)

When AI companies use copyrighted content to train base models

  • Example: Meta training Llama on millions of books

  • Court thinking: "Non-expressive use" - the AI isn't reproducing the content

3. Content Generation 🟡 (MAYBE FAIR USE)

When AI creates outputs in the style of copyrighted work

  • Example: Generating Ultraman-style images

  • Court thinking: Depends on whether output competes with original

4. Content Output 🔴 (Probably NOT FAIR USE)

When AI outputs directly substitute for the original work

  • Example: AI generating full Ultraman episodes that replace buying the real thing

  • Court thinking: Market substitution = infringement
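
For readers who think in code, the framework condenses into a simple lookup table. A minimal sketch only: the outlook labels paraphrase the four summaries above (this article's reading of the Ultraman, Bartz, and Kadrey decisions), not settled law, and certainly not legal advice.

```python
from enum import Enum

class Context(Enum):
    DATA_INPUT = "users upload copyrighted works to train a custom model"
    DATA_TRAINING = "companies train base models on copyrighted content"
    CONTENT_GENERATION = "AI generates outputs in the style of a work"
    CONTENT_OUTPUT = "AI outputs directly substitute for the original"

# Outlooks mirror the four contexts described above.
FAIR_USE_OUTLOOK = {
    Context.DATA_INPUT: "most likely fair use",
    Context.DATA_TRAINING: "probably fair use (non-expressive use)",
    Context.CONTENT_GENERATION: "maybe fair use (depends on competition)",
    Context.CONTENT_OUTPUT: "probably NOT fair use (market substitution)",
}

for context in Context:
    print(f"{context.name:>18}: {FAIR_USE_OUTLOOK[context]}")
```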

🎯 The "Antinomy" - The Core Tension

Here's where it gets brilliant. Dr. Lou identified what he calls the "antinomy of competition":

The Dynamic:

  1. AI companies invoke fair use to develop NEW markets (AI services, tools, apps)

  2. Copyright holders create NEW markets (training licenses) to oppose fair use

  3. Courts have to decide: Which market matters more?

The Example from the Bartz Settlement:

  • $1.5 billion settlement

  • $3,000 per copyrighted work used in training

  • This CREATES a price for a market that didn't exist before! (See the quick math below.)
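
A quick sanity check on those two figures; the work count below is implied by dividing one by the other, not a number taken from the settlement documents themselves.

```python
# Back-of-the-envelope math on the reported Bartz settlement figures.
settlement_total = 1_500_000_000  # USD, reported settlement amount
price_per_work = 3_000            # USD, reported payout per work

implied_works = settlement_total / price_per_work
print(f"Implied works covered: {implied_works:,.0f}")  # -> 500,000
```

Roughly half a million works now carry a posted price, which is precisely the evidence the next exchange turns on.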

Think about it:

  • Before AI: No such thing as a "training license" market

  • After this settlement: There's now a baseline price ($3K per work)

  • Copyright holders: "See? There IS a market for this!"

  • AI companies: "You're creating artificial scarcity!"

It's like the copyright holders are building a tollbooth on a road that used to be free, and arguing "See? This is a legitimate business we need to protect!"

🌍 The Political Angle

President Trump's Take (July 2025): "You can't be expected to have a successful AI program when every single article, book, or anything else that you've read or studied, you're supposed to pay for... China is not doing it."

Translation: AI development is now framed as a matter of national competitiveness. Broad fair use = stronger AI = economic advantage.

China's Policy (August 2025): Called for "copyright rules adapted to AI development" - signaling flexibility for AI industry.

The Implication: Fair use isn't just about legal theory anymore. It's about:

  • Economic policy

  • Industrial strategy

  • Global competition

  • Who wins the AI race

📊 What Makes This Study Unique

Why This Research Matters:

  1. First Comparative Analysis: Nobody had done a deep comparison of Chinese vs. US AI fair use cases before

  2. "Develop-Fair Use" Theory: The insight that fair use is DYNAMIC, not static, changes how we think about the whole problem

  3. Market Competition Focus: Instead of asking "Is this fair use?" courts are really asking "What's the competitive impact?"

  4. Four-Context Framework: Breaking AI use into 4 contexts gives courts a practical tool

  5. Predictive Power: Explains why Bartz settled once the input side was resolved, while output-side claims were preserved

🎓 The Key Insights for Your Business

What This Means in Plain English:

For AI Adoption (Your Clients):

1. The Input Side is Getting Clearer

  • Using copyrighted content to TRAIN AI = increasingly likely to be fair use

  • Courts in both US and China are leaning this way

  • If you're just training models, the risk is lower

2. The Output Side is Still Dangerous ⚠️

  • If your AI GENERATES content that substitutes for copyrighted work = risky

  • This is where lawsuits will focus next

  • Be careful about AI outputs that directly compete with original creators

3. Context is Everything 🎯

The same copyrighted work might be:

  • FAIR USE when used for training

  • INFRINGEMENT when output mimics the original

  • IT DEPENDS when the use falls somewhere in between

4. The Market is the Metric 💰

Courts are asking:

  • Does this create new value or just copy old value?

  • Does this compete with the original creator's market?

  • What's the economic impact on both sides?

For Your Thought Leadership:

This Study Validates Your Public Defender Analysis!

Remember your Public Defender research showed AI adoption depends on:

  • ✅ Confidentiality (data stays local)

  • ✅ Cost (affordable tools)

  • ✅ Quality (trustworthy outputs)

  • ✅ Task suitability (not everything should be automated)

This Legal Study Shows:

  • ✅ Courts care about market impact (like you showed with vendor dependency)

  • ✅ Dynamic balancing needed (like your AI suitability spectrum)

  • ✅ Context matters (like your five pillars of work)

  • ✅ Open-ended evaluation (like your responsible use framework)

The Connection: Both studies reveal that AI adoption isn't about rigid rules—it's about thoughtful, context-dependent decision-making that balances competing interests!

🔮 What Happens Next?

The "Wait and See" Approach:

Both the US and China are letting courts develop the law case by case rather than rushing to pass AI-specific legislation. Why?

  1. Technology is moving too fast - Law would be obsolete before it's passed

  2. Economic stakes are huge - Getting it wrong could kill the AI industry OR devastate creators

  3. Need real-world data - Can't predict all scenarios in advance

What to Watch:

Near Term (2025-2026):

  • More settlements establishing "training license" baseline prices

  • Courts distinguishing input-side (safer) from output-side (riskier) uses

  • Development of "four-context" analysis in case law

Medium Term (2027-2028):

  • Possible legislation codifying what courts have figured out

  • International standards emerging from US-China convergence

  • The training-license market maturing with established pricing

Long Term (2029+):

  • Fair use doctrine evolves to accommodate AI permanently

  • Balance between AI innovation and creator rights stabilizes

  • New business models emerge that work for both sides

🎬 The Bottom Line

The Story in One Sentence: Fair use for AI isn't a fixed rule—it's a dynamic wrestling match between an emerging AI industry trying to build new markets and a traditional copyright industry trying to protect (and create) their markets, with courts acting as referees who make up the rules as the game unfolds.

Why It Matters: Understanding this helps you:

  1. Assess risk more intelligently than "is AI legal or not?"

  2. Make better decisions about which AI uses are safe vs. risky

  3. Stay ahead of regulatory changes

  4. Position yourself as someone who understands the deeper game being played

The Parallel to Public Defenders: Just like public defenders need to understand which tasks are suitable for AI assistance (evidence review = yes, client relationships = no), businesses need to understand which AI uses are legally safe (training = probably yes, output substitution = probably no).

Both situations require nuanced, context-dependent thinking rather than simple "yes/no" answers.

Vanessa Sifuentez

Digital marketing consultant & AI strategist | Founder, The Right Influencer | Host, Mound Up Podcast | Empowering Denton County businesses & campaigns with AI-driven marketing strategies | Flower Mound, TX | Passion • Purpose • Profit

https://www.therightinfluencer.com