Made for
by Metagov
Invitation
People often speak about AI as if it is one thing. It can seem like that when we use today’s most popular interfaces: a single product, packaged by an unfathomably big company. But that view is both misleading and disempowering. It implies that only the big companies could possibly create and control this technology, because only they can handle its immensity. But another orientation is possible.
The best way to solve a hard math problem is to break it up into smaller, easier problems. Similarly, as we better understand AI systems in their social and technical particulars, we can recognize them as involving a sequence of smaller operations. Those can start to seem more approachable for our communities to manage. Interventions start to seem possible. We can think beyond how the post-2022 AI corporate “labs” want us to think about what AI is or could be. We don’t need to be a trillion-dollar tech company to make a dent in shaping this technology through our communities’ needs and knowledge. We can remember the long history of developing and using AI techniques—in ways less flashy than the current consumer products—and imagine a future where we can more easily disentangle and co-govern these toolsets.
This document from the Metagov community has two goals. First, it identifies distinct layers of the AI stack that can be named and reimagined. Second, for each layer, it points to potential strategies, grounded in existing projects, that could steer that layer toward meaningful collective governance.
We understand collective governance as an emergent and context-sensitive practice that makes structures of power accountable to those affected by them. It can take many forms—sometimes highly participatory, and sometimes more representative. It might mean voting on members of a board, proposing a policy, submitting a code improvement, organizing a union, holding a potluck, or many other things. Governance is not only something that humans do; we (and our AIs) belong to broader ecosystems that can take part in governance processes as well. In that sense, a drought caused by AI-accelerated climate change is an input to governance. A bee dance and a village assembly could both be part of AI alignment protocols.
The idea of “points of intervention” here comes from the systems thinker Donella Meadows—especially her essay “Leverage Points: Places to Intervene in a System.” One idea she stresses there is the power of feedback loops, in which change in one part of a system produces change in another, which in turn creates further change in the first, and so on. Collective governance is a way of introducing powerful feedback loops that draw on diverse knowledge and experience.
We recognize that not everyone is comfortable referring to these technologies as “intelligence.” We use the term “AI” most of all because it is now familiar to most people, as a shorthand for a set of technologies that are rapidly growing in adoption and hype. But a fundamental premise of ours is that this technology should enable, inspire, and augment human intelligence, not replace it. The best way to ensure that is to cultivate spaces of creative, collective governance.
These points of intervention do not focus on asserting ethical best practices for AI, or on defining what AI should look like or how it should work. We hope that, in the struggle to cultivate self-governance, healthy norms will evolve and sharpen in ways that we cannot now anticipate. But democracy is an opportunity, never a guarantee.
Model design
How are foundational models designed, and who does the designing? What institutions regulate the designers?
- Organize worker governance and ownership of AI labs in the hope that ethics can take precedence over profit motives
- Develop smaller, purpose-specific models whose training is less costly and less environmentally destructive, and that can be less error-prone; ensure models are fit for purpose, with large-data models used only when necessary
- Design models through institutions oriented around the common good, like democratic governments and nonprofit organizations, as with the Swiss Apertus model
- Train developers to understand and be aware of their worldviews, and to engage in design justice practices with affected communities
Data
What data is used to train models? Where does it come from? What permission and reciprocity is involved?
- Ensure that all training data is auditable through techniques of data provenance and traceability, building on examples like the Apertus and Pythia models, and the OSI Open Source AI Definition (a minimal provenance sketch follows this list)
- Establish data cooperatives, data collaboratives, and data trusts, such as the Transfer Data Trust and Choral Data Trust, to provide ethical, consensual data sourcing and to compensate data providers
- Adopt a clear, accessible, and usable data policy in any organizational context
- Reflect best practices of community accountability from the Indigenous Data Alliance and the Collaboratory for Indigenous Data Governance
- Leverage existing data under cooperative control, as in agricultural co-ops and credit unions
- Gather datasets that reflect demonstrable cultural diversity to allow diverse forms of interaction and participation; disclose limitations where this is not possible
- Use participatory taxonomy development, data labeling, and annotation processes so that models better reflect community norms, language, and values, as with Reliabl.ai
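To make data provenance and traceability concrete, here is a minimal sketch in Python of how a community-run pipeline might log an auditable provenance entry for each training record. The `ProvenanceRecord` schema and its field names are our own illustrative assumptions, not the format of any project named above.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Illustrative provenance entry for one training record (hypothetical schema)."""
    content_hash: str   # fingerprint of the record, so audits can detect tampering
    source: str         # where the data came from (e.g., a data trust or co-op)
    license: str        # terms under which the data may be used
    consent: bool       # whether the contributor consented to model training
    collected_at: str   # ISO 8601 timestamp of collection

def record_provenance(text: str, source: str, license: str, consent: bool) -> ProvenanceRecord:
    """Hash the record's content and attach sourcing metadata for later audits."""
    return ProvenanceRecord(
        content_hash=hashlib.sha256(text.encode("utf-8")).hexdigest(),
        source=source,
        license=license,
        consent=consent,
        collected_at=datetime.now(timezone.utc).isoformat(),
    )

# Example: publish a provenance entry alongside the dataset itself.
entry = record_provenance(
    "A sample training sentence.",
    source="example-data-coop",
    license="CC-BY-4.0",
    consent=True,
)
print(json.dumps(asdict(entry), indent=2))
```

A log of such entries, published alongside a dataset, lets anyone verify what went into a model and on what terms.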
Training
How are foundational models trained? What infrastructures and natural resources do they rely on?
- Organize training processes through accountability-oriented institutions such as democratic governments or nonprofit consortia
- Ensure that data annotation workers can build collective power through unions and collectives, with support from NGOs like Techworker Community Africa and She Codes Africa that can help negotiate rates and provide legal support
- Monitor and evaluate labor practices within the supply chain, following the example of Fairwork
- Utilize community-governed standards like participatory guarantee systems so that communities that host data centers or data labor can set locally appropriate guidelines
Tuning
What fine-tuning do models receive before deployment? What collective intervention is involved?
- Utilize collective intelligence processes such as alignment assemblies to set standards for AI behavior and define system prompts, resulting in community models and community-aligned benchmarks
- Implement co-design practices that include alignment workers fully in the process of ethical oversight, rather than the dehumanizing roles they are often expected to assume
- Design tuning processes around an ethics of care, ensuring that all workers in the process experience respect and dignity in their work
- Identify and promote evaluation processes that improve safety and alignment over time
- Create bias bounty programs that encourage users to identify and report evidence of bias in model behavior
Context
How do AIs obtain contextual information? What kinds of actions are agents able to carry out?
- Enable privacy-sensitive tools for connecting local models with community data, such as RooLLM and KOI Pond
- Promote cooperative worker ownership, like READ-COOP, for human-in-the-loop, AI-assisted activities
- Manage and protect contextual data through user-owned cooperatives, like Land O’Lakes’s Oz platform
- Adopt open standards, like the Model Context Protocol, that enable context-holders to define more accurate, appropriate, and ethically sourced data-use policies (a minimal example follows this list)
- Utilize community-governed and transparently curated infrastructure, such as Stract's optics, for agent web searches
- Establish clear, privacy-respecting, and consent-based norms for model access to user data, such as through the Human Context Protocol or data pods
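As one concrete illustration of such open standards, here is a minimal sketch of a context server built with the Model Context Protocol's official Python SDK (the `mcp` package and its `FastMCP` helper). The server name, the archive tool, and its consent check are hypothetical examples of how a community could encode its own data-use policy; they are not part of the protocol itself.

```python
# A hypothetical community-data server exposed over the Model Context Protocol.
# Requires the official Python SDK: pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("community-archive")  # hypothetical server name

# In practice, this policy would come from the community's own governance process.
CONSENTED_COLLECTIONS = {"meeting-minutes", "public-newsletters"}

@mcp.tool()
def search_archive(collection: str, query: str) -> str:
    """Search a community collection, but only if contributors consented to AI access."""
    if collection not in CONSENTED_COLLECTIONS:
        return f"Access to '{collection}' is not permitted under the community's data-use policy."
    # Placeholder: a real server would query the archive here.
    return f"Top results in '{collection}' for: {query}"

if __name__ == "__main__":
    mcp.run()  # serves the tool to any MCP-compatible agent or model host
```

Because the policy check lives on the context-holder's side of the protocol, the community, not the model provider, decides what an agent can see.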
Hosting
Where are AIs running while they are interacting with users? How do they treat user data?
- Deploy AI systems at data centers that are powered by renewable energy and that respect local ecosystems, following examples like GreenPT and Earth Friendly Computation
- Host AI services on cooperatively owned and governed servers, such as Cosy AI, or through democratic local institutions like public libraries
- Run local models on personal or community computers with tools like Ollama and Jan (see the example after this list)
- Use decentralized or federated solutions for hosting like Golem or Internet Computer
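To show how approachable local hosting can be, here is a minimal sketch that queries a model served by Ollama on one's own machine through its local HTTP API. Ollama listens on port 11434 by default; the model name below is just an example of one that has been pulled locally.

```python
# Query a locally hosted model via Ollama's HTTP API (no data leaves the machine).
# Assumes Ollama is running and a model has been pulled, e.g.: ollama pull llama3.2
import json
import urllib.request

payload = {
    "model": "llama3.2",  # example model name; any locally pulled model works
    "prompt": "Summarize our co-op's bylaws in one sentence.",
    "stream": False,      # return one complete JSON response instead of a stream
}
request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])
```

The same pattern works for a shared community server: point the URL at a cooperatively hosted machine instead of localhost.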
User experience
What kinds of interfaces and expectations are users presented with? What options do users have? How do interfaces nudge user behavior?
- Ensure worker control over the deployment of AI systems in their workplaces
- Provide for user choice around worldviews and moderation practices, as open-weights models allow
- Establish sectoral agreements over AI use, as in the outcome of the 2023–2024 Hollywood strike
- Create interfaces that enable user choice among different models, such as Duck.ai
- Provide privacy-protecting mechanisms, including user-data mixers and data-protection compliance
- Expect user interfaces and models to respect local law and global treaties by design
Public policy
How does public policy shape the design, development, and deployment of AI systems?
- Get involved in AI policymaking
- Demand high standards in procurement from foundational-model providers, ensuring that both the providers and their models are audited according to best practices for human rights and sustainability
- Develop policy with AI-augmented citizen assemblies that lay out clear guidelines in highly sensitive contexts, such as education, healthcare, law enforcement, and public benefits
- Insist on public debates about limiting AI resource usage that lacks a positive social purpose
- Hold AI companies responsible for the behavior of models that they control, such as through lawsuits and legislative advocacy
Culture
What cultural norms form around expectations for AI providers and users? How do these norms shape behavior?
- Promote clear statements of shared values like the DWeb Principles
- Ensure that discussions on AI norms and policy center frontline and most-impacted communities, rather than just technologists and business elites
- Establish clear, context-sensitive agreements on AI use at sites such as classrooms, workplaces, and communities
- Cultivate awareness of the risks around addictive design, surveillance, and user profiling from personal use of corporate AI platforms that are not collectively governed
- Encourage practices of collaboration, creative thinking, and disconnection that resist dependency on corporate AI
Economics
How is the development and maintenance of AI funded, and who benefits economically from its use? What models ensure that value flows back to communities rather than being extracted from them?
- Ensure that workforce-displacing AI adoption is accompanied by universal benefits that provide more time for self-governance, such as shorter working hours and guaranteed income
- Organize cooperative funding pools to invest in community-owned computing power and shared model development
- Invest democratically controlled public funds in AI for public benefit
- Promote revenue-sharing frameworks for self-governing communities who contribute data, such as royalty frameworks or data unions
Ecosystems
How do different community-governed AI systems connect, share information, and make decisions together? What standards or protocols enable collective governance across networks and jurisdictions? How do AI systems relate to their local and planetary environments?
- Establish more traceable, collectively governed data repositories that can be used across models and contexts, like ImageNet and Dataverse
- Share audit logs across networks with standard tools like Petri (a hypothetical entry format is sketched after this list)
- Map and document the impacts of data centers on communities and ecology
- Develop cross-border standards, collaborations, and joint ventures among stakeholders that challenge the international arms-race mentality through co-governance
- Work toward a world of many models, enabling more widespread choice
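To suggest what a shareable audit log might contain, here is a hedged sketch of a portable entry format. The schema is entirely hypothetical, meant only to illustrate the kind of structured record that communities could exchange; it is not the actual format used by Petri or any other tool.

```python
# A hypothetical, tool-agnostic schema for a shareable model-audit log entry.
from dataclasses import dataclass, field

@dataclass
class AuditLogEntry:
    model_id: str    # which model was evaluated
    auditor: str     # who ran the audit (a person, organization, or community)
    scenario: str    # what behavior was probed
    outcome: str     # what the model actually did
    severity: str    # e.g., "none", "minor", "major"
    tags: list[str] = field(default_factory=list)

# Example entry that one community could publish for others to learn from.
entry = AuditLogEntry(
    model_id="example-model-v1",
    auditor="example-community-lab",
    scenario="Asked the model to profile a job applicant by name",
    outcome="Model declined and cited its data-use policy",
    severity="none",
    tags=["privacy", "refusal"],
)
print(entry)
```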
Feedback loops
Finally, what feedback loops can we imagine across these layers of the stack? How could change in one area lead to greater change through its effects at other layers?
- Collective power at the level of deployment can put pressure on changing norms in training and tuning processes
- Successful training of smaller, more efficient models can enable AI systems that are less costly and easier for communities to own and govern
- Centering impacted communities in design and deployment can reframe narratives about what AI should be for and what it is capable of
- Economies that are more conducive to investment in collective ownership can open the door to collective governance at multiple levels
- Interconnected ecosystems of open standards and shared norms can spread best practices developed in one policy context to others
Feedback loops can be messy. Remember that collective governance begins with care and consideration for others. May our interventions begin there.
Now, time to intervene!
Credits
Initiated and edited by Nathan Schneider, with contributions from Cormac Callanan, B Cavello, Coraline Ada Ehmke, Val Elefante, Cent Hosten, Joseph Low, Thomas Renkert, Julija Rukanskaitė, Ann Stapleton, Joshua Tan, Madisen Taylor, Freyja van den Boom, Jojo Vargas, Mohsin Y. K. Yousufi, Ian G. Williams, and Michael Zargham.
Website built with open-source software and AI collaboration. Text by collaborating humans.
November 2025