PHOENIX, Dec. 03, 2025 (GLOBE NEWSWIRE) -- The AI Rights Project today announced its official launch as an independent public-education and policy initiative dedicated to helping students, creators and citizens understand and assert their rights in the AI age. As artificial intelligence rapidly reshapes how people learn, work and create—often faster than laws or public understanding can adapt—the Project fills a critical gap by helping the public navigate how AI is transforming rights, expression and democracy.
Rooted in its Know Your AI Rights™ framework and eBook, The AI Rights Project provides trusted, nonpartisan guidance—free from Big AI influence—on how AI systems affect daily life, from access to healthcare to access to justice. Through actionable education and civic engagement, the Project equips communities with the clarity needed to thrive in an AI-driven world. Guided by its Six Pillars, the organization connects public education with research-based AI law and policy advocacy, including an education campaign built on The Bill of AI Rights and the How LLMs Work video series. Together, these resources help families, creators and civic institutions understand not only how AI works, but also what rights and responsibilities accompany it.
At a moment when realism, authorship and identity are all being reshaped by emerging technologies, The AI Rights Project is advancing model policy frameworks that protect the public’s ability to understand what is real, what is synthetic and what belongs to the people who created it.
Policy initiatives include:
1. The AI-Simulated Realism Disclosure & Transparency Act
As AI-generated images, audio and video become increasingly indistinguishable from reality, the public faces a growing challenge: knowing what is real and what is simulated. In 2024, Arizona enacted a deepfake law focused on election-related synthetic media and digital impersonation, creating civil remedies and disclosure requirements for certain AI-generated content involving candidates. While that statute was an important first step, it does not address the broader, year-round public need for clear disclosure standards across all forms of AI-simulated realism, nor the civic right to understand the authenticity of media encountered in daily life.
To close this gap, The AI Rights Project drafted the AI-Simulated Realism Disclosure & Transparency Act, an Arizona bill designed to establish rights-protective, constitutionally grounded disclosure rules that help people identify synthetic media without restricting speech. Built on First Amendment–compatible principles of disclosure over suppression, fraud prevention, democratic integrity and the public’s right to truthful information, the bill requires that AI-generated or AI-altered media presented as real include simple, consistent disclosures that empower voters, consumers, families and creators to distinguish genuine content from simulated realism.
The bill also introduces a narrowly tailored, public-interest enforcement mechanism that strengthens individual autonomy by enabling people to hold creators and distributors accountable when realism is simulated without disclosure. Though drafted for Arizona, the framework is designed to serve as a national model for federal legislation, offering policymakers across the country a transparent, speech-compatible approach to protecting civic trust in an era where reality itself is becoming negotiable.
At its core, the bill advances a simple civic principle: in a healthy democracy, people deserve to know what is real before making decisions that shape their lives and their future.
2. Public ≠ Permission: The AI Training & Copyright Fairness Act
As generative AI systems expand into every creative field, millions of publicly accessible works—art, literature, music, photography, design, journalism, code and more—have been ingested into training datasets without their creators’ knowledge, consent or meaningful recourse. This raises two foundational questions at the heart of modern authorship: Why should “public posting”—the very act by which artists share their work with the world—constitute permission to use that work to train AI? And why should machine learning be treated as equivalent to human learning when determining fair use under copyright law?
The AI Training & Copyright Fairness Act is the first policy model to challenge both premises—not by rewriting copyright law, but by returning it to first principles: copying requires consent, or a clear statutory exception. The model bill outlines transparent, rights-protective standards for how AI developers obtain, document and disclose the creative works used in training, ensuring that innovation does not come at the expense of human authorship, economic opportunity or creative integrity.
The model bill also closes a structural gap in the law. AI training—a process of automated copying and large-scale analysis—was never contemplated when Congress defined fair use for human creative activity. As a result, courts are increasingly being asked to stretch analog-era doctrines to technologies capable of replicating and recombining human expression at unprecedented scale. Clear statutory guidance is now essential to ensure that both creators and developers operate under balanced, transparent and rights-respecting rules.
Some argue that protecting creators’ rights will slow innovation, but the opposite is true: clear, balanced rules strengthen public trust, reduce legal uncertainty and create a sustainable foundation for innovation that benefits everyone. At its core, the model bill affirms that the future of AI creativity must be built with human creators, not built on them.
Together, these two initiatives anchor The AI Rights Project’s broader work at the intersection of transparency, public education and rights-based policy leadership. By translating complex technological issues into accessible civic principles, the Project helps ensure that people—not just corporations or governments—shape how AI affects daily life.
Additional work includes:
3. The AI Rights Advocacy Certification Program, which prepares students and community leaders to advocate effectively on AI-rights issues.
4. Participation as amicus in major court cases that will shape the legal, creative and civic landscape of artificial intelligence, including litigation involving:
- AI-simulated realism
- Unauthorized use of copyrighted works to train AI
- Bias, discrimination and equal-treatment risks in algorithmic decision-making, including cases involving education, employment, healthcare, housing and access to opportunity
- AI-driven impacts on democratic participation, including election misinformation, manipulated media and erosion of civic trust
5. The Let’s Keep It Real Artwork Competition, through which the Project will replace interim AI-generated visuals on its website with human-made artwork, reinforcing its commitment to authenticity, transparency and creativity.
“Rights are only as strong as our understanding of them,” said Jim W. Ko, Founder and Executive Director. “Every citizen deserves clear, trusted information about how AI affects their lives and freedoms.”
Get involved at airightsproject.org/join or airightsproject.org/sponsors.
Because the future of AI is not about technology—it’s about the kind of society we choose to build.
ABOUT THE AI RIGHTS PROJECT
The AI Rights Project is an independent, nonpartisan public education and policy initiative empowering students, creators and citizens to understand and assert their rights in the AI age. Learn more at airightsproject.org.
Media Contact:
Jim W. Ko, Founder & Executive Director
The AI Rights Project
press@airightsproject.org
A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/72e80739-c029-41c6-aa2a-fa7748b2f57d

