In yesterday’s newsletter (AI: Where Are We Headed?) I explored three different stories we tell ourselves about AI, and came to the conclusion that we are probably underestimating the medium-term impact of AI and related technologies. So, if you haven’t read that article, it’s probably a good idea to look at it before you read this one.
Today, though, I want to move beyond any potential doomsday scenario and introduce a framework I believe should be in place to help ensure that an AI future is human-centric.
I’ve been working on this framework for some time, and in developing it, I owe a huge debt of gratitude to the many people over the years who have shared their thoughts with me on AI - from those who joined me on AI-focused think tanks at Harvard, Duke and Arizona State University, to the folks in Microsoft Research who have generously shared their findings, and the many AI innovators we’ve had on the Humanity Working podcast. Anything you read here is really just a distillation of their contributions.
But first…
Do We Need Another Framework?
The major AI companies have done important work establishing principles for responsible development of AI. OpenAI has safety teams. Google has AI ethics boards. Microsoft publishes responsible AI standards. UNESCO has created comprehensive recommendations for how governments should regulate AI. All of these matter. But together they address only part of the equation—how AI gets built and how it should be governed. They don’t address how we as employers, educators, workers, students, parents and communities prepare ourselves and our institutions for an AI-transformed world.
Every person on the planet is being affected by AI, and all of us have some power, and some responsibility, in determining how it is implemented. So think of the safeguards below as guardrails to ensure that AI serves humanity instead of undermining it.
Here are the ten safeguards I believe we must actively pursue:
🌍 The Ten Human Safeguards for the AI Age
1. We Must Build Universal AI Literacy
What It Is: Collectively, we ensure that every person has a basic understanding of how artificial intelligence works and how it shapes their lives.
Why It Matters: When only a few people understand AI, power concentrates in their hands, putting everyone else at risk. Widespread AI literacy keeps us safer and enables informed participation, healthy skepticism, and collective oversight. This is the foundation of democratic engagement with technology.
What It Could Look Like:
Communities demand AI education in schools.
Workers organize to understand how AI affects their industries.
Parents learn about AI alongside their children.
Local libraries and community centers offer accessible AI literacy programs.
Professional organizations make AI fluency a core competency.
2. We Must Preserve Human Purpose and Contribution
What It Is: Human work, creativity, and contribution remain valued even as AI dominates.
Why It Matters: If both labor and creativity lose their economic and social value, people risk losing purpose and agency in their own societies.
What It Could Look Like:
Communities create new models for recognizing unpaid human contribution.
Creators demand fair compensation when their work is used to train AI systems.
Societies choose to support human artists, teachers, and caregivers.
Organizations explicitly value human judgment and creativity alongside efficiency.
3. We Must Educate for Human-AI Collaboration
What It Is: A fundamental shift in how we educate, moving from knowledge transmission to developing curiosity, reasoning, ethical judgment, and the ability to work effectively with AI systems.
Why It Matters: If schools focus on the things machines will soon do better than us, while neglecting uniquely human capabilities, we’re preparing the next generation to be obsolete. Education must evolve to keep humans essential.
What It Could Look Like:
Teachers, parents, and communities demand curricula focused on critical thinking over memorization.
Schools and universities reinvigorate humanities programs.
Schools use AI to personalize learning while emphasizing human collaboration.
Educators teach students when to trust AI and when to question it.
Society emphasizes creativity, moral reasoning, and human connection.
4. We Must Ensure AI Reflects Human Diversity
What It Is: AI systems are developed with input from many cultures, perspectives, and lived experiences—not just the narrow set of people who currently build them.
Why It Matters: When a homogeneous group builds the intelligence systems that shape global society, their biases and blind spots become embedded at scale. Diverse input is essential for fairness.
What It Could Look Like:
Communities organize to demand representation in AI development.
Workers from underrepresented groups enter and advance in AI careers.
Organizations reject AI systems that don’t reflect diverse input.
Datasets and impact assessments are developed through global collaboration.
Citizens demand transparency about who built the systems they use, and what datasets those systems were trained on.
5. We Must Demand Accountability from AI Power Brokers
What It Is: Citizens and communities require transparency and accountability from the organizations and governments that control powerful AI systems.
Why It Matters: Concentrated control over digital intelligence allows a few entities to shape economies, elections, and information flows without consent from the people. Accountability doesn’t happen automatically; it requires sustained public pressure.
What It Could Look Like:
Citizens support independent oversight bodies with real authority.
Communities organize to audit algorithms that affect their lives.
Workers demand to understand AI systems that manage them.
Voters make AI accountability an election issue.
Consumers choose companies with transparent AI practices.
6. We Must Protect Times and Spaces for Authentic Human Experience
What It Is: Conscious, collective decisions to preserve parts of life that remain free from data collection, prediction, and algorithmic mediation, where humans can think, connect, and exist without being optimized.
Why It Matters: Constant digital monitoring fundamentally changes how people think and interact, replacing genuine connection with performance for machines. Always-connected humans lose the ability to connect deeply with each other. Regular analog immersion is essential to keep us human.
What It Could Look Like:
Families establish device-free times and spaces.
Schools create tech-free zones for genuine play and conversation.
Communities designate public spaces without surveillance.
Workplaces protect thinking time from constant algorithmic interruption.
Social movements advocate for the right to be “offline” and untracked.
7. We Must Maintain Vigilance on AI Safety
What It Is: Active citizen engagement and oversight ensures that AI development prioritizes long-term safety and human wellbeing.
Why It Matters: Without sustained public attention and pressure, short-term commercial incentives will override safety concerns. Citizens can’t delegate AI safety entirely to companies or governments; we need informed, engaged oversight.
What It Could Look Like:
Scientists and engineers publicly raise safety concerns.
Citizens learn enough to evaluate safety claims critically.
Communities demand safety reviews before high-risk AI deployment.
Professional organizations establish safety standards for AI.
AI whistleblowers are protected.
8. We Must Ensure Humans Can Flourish Amidst Reduced Labor Demands
What It Is: Economic and social systems that provide both livelihood and purpose in a world where human labor is no longer economically necessary, put in place before mass unemployment becomes mass desperation.
Why It Matters: The coupling of economic output with paid labor is not a permanent feature of the system; it is simply what happens when humans are the most productive option. Once AI and robotics are cheaper and more effective, people won’t be economically needed to do work. Without new structures for distributing resources and creating meaning, we risk a society where most people survive on basic income, supplemented by gambling, speculation, and consumption. That would be a hollowing out of human agency.
What It Could Look Like:
Communities experiment with models beyond traditional employment.
Citizens demand systems that recognize unpaid care work, creative contribution, and community building as valuable.
Workers organize to ensure productivity gains from automation are broadly shared.
Societies create new forms of meaningful participation that aren’t tied to market wages.
Movements reject the idea that human worth equals economic productivity.
9. We Must Embed Ethics at Every Level
What It Is: Sustained pressure from workers, consumers, and citizens to ensure ethical considerations aren’t optional add-ons but fundamental requirements at every stage of AI development and deployment.
Why It Matters: When ethics are voluntary, they get sacrificed for speed and profit. Ethical AI requires people at every level—from developers to users—consistently demanding it and refusing to accept less.
What It Could Look Like:
Workers refuse to build systems they consider unethical.
Consumers choose companies with strong ethical practices.
Communities organize to reject harmful AI applications.
Professional organizations make AI ethics training mandatory.
Citizens make ethical AI a criterion for voting and purchasing decisions.
10. We Must Protect Shared Reality and Human Autonomy
What It Is: Collective action to protect society’s ability to agree on what’s real and each person’s right to form thoughts free from AI-driven manipulation.
Why It Matters: When AI can generate infinite content at zero cost, targeted at billions of tiny audiences, we lose the shared reality that makes community, collaboration, and democracy possible. We fragment into incompatible realities, each reinforcing itself while warring with all the others.
What It Could Look Like:
Communities demand transparency in AI-generated content.
Educators teach students to recognize tribal manipulation as part of AI literacy.
Citizens support infrastructure for verifying authenticity.
Workers in media and technology maintain professional standards.
Individuals consciously seek out sources that challenge their algorithmic feeds.
Social pressure builds against deepfakes and AI manipulation.
Social movements build bridges across algorithmically divided groups.
Legal protections are established for freedom of thought.
What’s Missing
As I’ve shared this framework with others, many have pointed out that the AI race has the potential to cause massive environmental damage. This is absolutely correct, so why doesn’t one of the safeguards address it?
The reason is simple. These safeguards are focused on direct threats to us as humans, rather than indirect threats (a race to AI causes environmental catastrophe, which in turn affects humans). This in no way diminishes the scale of the environmental challenge; it just places it out of scope for these specific safeguards.
Final Thoughts
Over the next few months I’ll be discussing these Ten Safeguards in more depth, both in this newsletter and on the Humanity Working podcast, so if you’re interested, look out for them.
And in the meantime, if this resonates with you, I’d urge you to share this newsletter. If we are going to make sure that AI works in our favor, we all have to do our bit.
Let’s get to work.