
6-Pack of Care: A Manifesto

September 1, 2025

Audrey Tang

Speech delivered at Google DeepMind, London.

When we discuss "AI" and "society," two futures compete.

In one—arguably the default trajectory—AI supercharges conflict.

In the other, AI augments our ability to cooperate across differences. This means treating differences as fuel and inventing a combustion engine to turn them into energy, rather than constantly putting out fires. I call this ⿻ Plurality.

Today, I want to discuss an application of this idea to AI governance, developed at Oxford's Ethics in AI Institute, called the 6-Pack of Care.

As AI becomes a thousand, perhaps ten thousand times faster than us, we face a fundamental asymmetry. The default trajectory: we become the garden, and AI becomes the gardener—a top-down intelligence tending humanity from above.

At that speed, traditional ethics struggle. Consequentialism — even its most sophisticated forms — relies on predicting and overseeing outcomes. When an AI system optimises far beyond our comprehension, we cannot intervene before unintended consequences cascade. Deontology relies on moral agents interpreting obligations in good faith on roughly equal footing. When the "interpreter" operates thousands of times faster, the gap between a rule's letter and its spirit widens in ways we can neither foresee nor correct. Both traditions have far more depth than caricature suggests, but care ethics offers something neither provides: it starts from relationships and process rather than from outcomes or rules alone.

One framework that acknowledges this asymmetry while refusing the gardener role is an ethics of civic care, particularly the work of Joan Tronto. The core idea is that we remain each other's gardeners. AI becomes local infrastructure—a spirit of place, a kami—that supports care at the speed care actually grows.

This approach mandates a hyper-local, parochial moral scope. Each kami is bound to a specific garden, rather than being a colonising or maximising ("paper-clipping") force.

Designing AI as care infrastructure requires digital permaculture, mirroring a movement that embraces anti-fragility through diversity—what Professor Yuk Hui calls "technodiversity"—rather than fragile monocultures.

The vertical narrative of a technological "singularity" needs a horizontal alternative. Today, I wish to discuss that alternative: a steering wheel called ⿻ Plurality and its design principles, the 6-Pack of Care.

From Protest to Demo

Our journey began in 2014 with the Sunflower Movement, a protest against an opaque trade deal with Beijing. Public trust in the government plummeted to 9 percent. Our social fabric was coming apart, largely due to "engagement through enragement" parasitic AI—what I call antisocial media.

As civic technologists, we didn't just protest. We pivoted to demonstration ("demo"). We occupied the parliament for three weeks and began building the system we wanted to see from the inside.

We crowdsourced internet access and livestreamed debates for radical transparency. Half a million people on the street, and many more online, used collaborative tools pioneered by other movements—such as Loomio (from Occupy Wellington) and later Polis (from Occupy Seattle).

We drafted better versions of the trade deal together, iteratively. Each day, we reviewed the low-hanging fruit—the ideas agreed upon the previous day—and the best arguments from both sides on the remaining conflicts. Then we resolved them step by step.

By shifting from protest to a productive demo, we began tilling the soil of our democracy. Systemically applying such bridge-making algorithms contributed to increased public trust — not alone, but as part of a broader democratic renewal. Trust climbed from 9 percent in 2014 to over 70 percent by 2020. We showed that the best way to fix a system is to build a better one.

From Outrage to Overlap

In 2015, we handled our first major case using a bridge-making algorithm. Uber's entry into Taiwan sparked a firestorm. We introduced Polis, a tool designed to find "uncommon ground."

Research suggests that any social network with a "dunk button" (quote-reposting to mock) drifts towards polarisation. Polis removes these buttons; it doesn't even have a reply button.

Participants see a statement from a fellow citizen and can only agree or disagree. Then, they see a visualisation where their avatars move towards a group of people who feel similarly.

Crucially, we offer a "bridging bonus": we reward people who share ideas that speak to both sides. Using traditional machine-learning techniques for dimensionality reduction, such as principal component analysis (PCA), we highlight ideas that bridge divides.
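As a concrete illustration, here is a minimal Python sketch (not the Polis codebase) of "group-informed consensus", the metric behind the bridging bonus: a statement scores highly only when every opinion cluster supports it. The vote matrix and hand-assigned groups are invented for this example; in Polis, the groups would come from PCA and clustering over real votes.

```python
# Minimal sketch (not the Polis implementation) of group-informed
# consensus: a statement scores highly only if every opinion group
# supports it, so bridging statements outrank divisive ones.

def group_informed_consensus(votes, groups):
    """votes[p][s] in {1, -1, 0}; groups: lists of participant indices."""
    n_statements = len(votes[0])
    scores = []
    for s in range(n_statements):
        score = 1.0
        for group in groups:
            agrees = sum(1 for p in group if votes[p][s] == 1)
            # Laplace smoothing keeps one empty cell from zeroing the product.
            score *= (agrees + 1) / (len(group) + 2)
        scores.append(score)
    return scores

# Two polarised groups; statements 0 and 1 are partisan, statement 2 bridges.
votes = [
    [ 1, -1, 1],   # participants 0-1: "group A"
    [ 1, -1, 1],
    [-1,  1, 1],   # participants 2-3: "group B"
    [-1,  1, 1],
]
scores = group_informed_consensus(votes, groups=[[0, 1], [2, 3]])
```

Because the score is a product across groups, a statement beloved by one side but rejected by the other collapses towards zero, while the statement endorsed by both wins.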

We flipped the incentive for going viral from outrage to overlap.

After just three weeks, the result was a coherent bundle of ideas that left everybody slightly happier and nobody very unhappy. The consensus on principles became law and seamlessly resolved the conflict.

From Gridlock to Governance

This approach highlights a crucial insight: how we deliberate matters. It's about exercising our "civic muscle."

Research shows that, when polled individually, people tend toward YIMBY or NIMBY (Yes/Not In My Backyard). But when deliberating in small groups (e.g., groups of 10), people shift to MIMBY (Maybe In My Backyard, if...). Group deliberation is transformative. It engages a different aspect of us and inoculates against outrage, an effect that can last for years.

We see this dynamic repeatedly. When polarised petitions emerged about changing Taiwan's time zone (+8 vs. +9), individual polling showed gridlock. But bringing people into structured groups revealed a shared underlying value: having Taiwan be seen as unique. They collaboratively brainstormed better ways to achieve that goal (e.g., the Gold Card residency programme) than an expensive time-zone change.

This illustrates the "legitimacy of sensemaking." At their root, many conflicts are common-knowledge problems. The solutions become tangible simply by turning local knowledge into common knowledge: everyone knows it, and everyone knows that everyone knows it.

For example, in our marriage equality debate, polarisation occurred because one side argued for individual rights ("hūn"), while the other focused on family kinship ("yīn"). They were arguing about different things. Once this interpretation became common knowledge through legitimate sensemaking, the path forward (legalising individual weddings without forcing family kinship) became clear, depolarising the issue.

Alignment Assemblies

More recently, we applied the same approach at scale to the plague of deepfake investment scams, often featuring figures such as Jensen Huang (likely generated using NVIDIA GPUs). People wanted action, but we didn't want censorship.

We convened a national Alignment Assembly with the Collective Intelligence Project and used a diamond-shaped approach:

  1. Discovery (Open): We sent 200,000 SMS messages (a "democracy lottery"). Everyone, even those not selected, could use Polis to set the agenda. This broad participation contributes significantly to legitimacy.
  2. Definition (Protected): We invited 447 demographically representative citizens to deliberate in 44 virtual tables of roughly 10.

AI assistants provided real-time transcripts and facilitation. Language models (tools similar to Google Jigsaw's Sensemaker) synthesised proposals in real-time—ideas such as requiring digital signatures for ads, making platforms jointly liable for the full amount scammed, or dialling down the network reach (slowing CDN connections) of non-compliant platforms.

The final package earned over 85 percent cross-partisan support. This rigour is crucial. It functions as a "duck-rabbit"—from one side it looks like a deliberation, from the other it looks like a rigorous poll, providing legitimacy for the legislature.

The amendments passed within months. As of 2025, Taiwan is likely the only country imposing full-spectrum, real-name KYC rules for social media advertisements. This approach employs Civic AI.

From Tokyo to California

This phenomenon doesn't just apply to Taiwan.

In Japan, 33-year-old AI engineer Takahiro Anno was inspired by our Plurality book and ran for Tokyo governor, crowdsourcing his platform using AI sensemaking. Anyone could call a phone number and talk to "AI Anno" (a voice clone) to propose ideas. His AI avatar livestreamed on YouTube, announcing every "pull request" merged into his platform. In independent rankings, his platform was judged the best.

He was then tapped to lead the Tokyo 2050 consultation. Based on success in that endeavour, he ran for a seat in the House of Councillors, winning over 2.5% of the national vote. His "Team Mirai" is now a national party in the Diet.

In California, the Engaged California platform (developed with Governor Newsom's team) was intended for deliberation on teen social media use. Then the LA wildfires hit. In response, we pivoted quickly to use AI sensemaking to co-create wildfire recovery plans, which are now being implemented. A subsequent ten-week deliberation engaged over 1,400 state employees, generating more than 2,600 ideas on government efficiency — which informed real executive action.

These successes treat deliberation as a civic muscle that needs exercise. But demos alone do not bend the curve. Law and market design must follow.

From Pilots to Policy

To move these governance engines from pilots to the default, we must reengineer the infrastructure itself. We must design for participation and democratic legitimacy. If AI makes all the decisions for us—even good ones—our civic muscle atrophies. It's like sending our robotic avatars to the gym to exercise for us.

Here are key policy levers:

From "Is" to "Ought"

The examples so far showed democratic, decentralised defence acceleration (d/acc) in the info domain. More generally, many actors tackle vertical alignment — the technical question across many domains: "Is the AI loyally serving its principal?"

But due to externalities, perfect vertical alignment can lead to systemic conflict. Policymakers must also focus on horizontal alignment — the governance question: "How do we ensure these AI systems help us (and each other) cooperate, rather than supercharge our conflicts?"

Here, we face Hume's Is-Ought problem: No amount of accurate observation of how things are can derive a universally agreeable way things ought to be.

The solution is not "thin," abstract universal principles. Instead, alignment must be grounded in hyperlocal socio-cultural contexts: what Alondra Nelson calls "thick" alignment.

Civic care offers a practical way forward — not by solving the Is-Ought problem, but by starting, as Joan Tronto puts it, "in the middle of things." It begins within an existing commitment to democratic values and asks what those commitments demand once we take our mutual dependence seriously. Within such a community, to perceive a need is to recognise a claim on our shared responsibility.

Care ethics focuses on the internal characteristics of actors and the quality of relationships in a community, not just outcomes (consequentialism). It treats "relational health" as first class.

Tronto's foundational argument in Moral Boundaries is that care was excluded from serious moral and political consideration by historically constructed boundaries — between morality and politics, public and private life, and a "moral point of view" that prizes detachment over responsiveness. These boundaries are contingent, not natural, and they were built to keep care invisible. AI governance is reproducing the same pattern: by constructing "alignment" and "safety" as purely technical categories, it draws new boundaries that exclude relational concerns from the conversation before it even begins.

What the philosopher Margaret Urban Walker calls "expressive-collaborative morality" — the view that moral life is a continuing negotiation among people, not the application of principles from above — is the philosophical foundation for this approach. The 6-Pack's reliance on bridging, deliberation, and alignment assemblies is expressive-collaborative morality in practice: moral norms emerge from democratic encounter, not from expert decree (Margaret Urban Walker, Moral Understandings: A Feminist Study in Ethics, 1998/2007).

The 6-Pack is a governance architecture — it gives societies leverage even when technical alignment is imperfect, making failures legible, contestable, and reversible. This is a deliberate trade-off: a governance framework can create conditions where moral attention is rewarded and its absence is visible, but the moral attention itself still requires human judgment that no procedure can replace. The following "6-Pack" translates care ethics into design primitives we can code into agentic systems to steer towards relational health.

Attentiveness: "Caring about"

Before optimising, we must choose what to notice. We must notice what people closest to the pain are noticing, turning local knowledge into common knowledge.

This step starts with curiosity. If an agent isn't even curious about the harm it's causing, it is beyond repair. This is why, after AlphaGo, we revised Taiwan's national curriculum to focus on curiosity, collaboration, and civic care.

Attentiveness means using broad listening, rather than broadcasting, to aggregate feelings. We are all experts in our own feelings.

Bridging maps (e.g., Polis or Sensemaker) create a "group selfie." If done continuously, this snapshot becomes a movie, allowing governance to align AI to the here and now.

Bridging algorithms prioritise marginalised voices. Unlike majority voting, they grant smaller, coherent clusters a higher bridging bonus: such clusters are harder to bridge to and contribute more unique information to the aggregation.
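A toy sketch of that weighting, with invented numbers: each cluster's endorsement counts inversely to its size, so a small coherent cluster's support raises a statement's bonus more than the same support from the majority would.

```python
# Toy weighting (numbers invented) in which a cluster's endorsement
# counts inversely to its size: rarer support is worth more.

def bridging_bonus(clusters):
    """clusters: list of (size, agree_fraction) per opinion cluster."""
    total_weight = sum(1 / size for size, _ in clusters)
    weighted = sum(agree / size for size, agree in clusters)
    return weighted / total_weight

# Strong minority backing outscores majority-only backing.
minority_backed = bridging_bonus([(5, 0.9), (100, 0.5)])
majority_only = bridging_bonus([(5, 0.1), (100, 0.9)])
```

The design choice mirrors the text: a statement that wins over the five-person cluster carries more bridging information than one that merely piles onto the hundred-person majority.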

Rule of thumb: Bridge first, decide second.

Responsibility: "Taking care of"

This principle focuses on making credible, flexible commitments to act on the needs identified.

In practice, this responsibility means developing model specs with verifiable commitments. A frontier model maker can pre-commit to adopting a crowdsourced code of conduct (from an Alignment Assembly) if it meets thresholds for due process and relational health.

Institutionalisation is also required. In Taiwan, we introduced Participation Officers (POs) in every ministry. This structure is "fractal"—present in every agency and team. POs institutionalise the input/output process, translating public input into workable rules and ensuring commitments are honoured and cascaded throughout the organisation.

Rule of thumb: No unchecked power; answers are required.

Competence: "Care-giving"

Good intentions require working code. Competence is shipping systems that deliver care and build trust, backed by auditing and evaluation.

This competence is where we implement bridging-based ranking. We must optimise not for individual engagement, but for cross-group endorsement and relational health.

Security is also a competence — and a moral — question. An agent that can be hijacked cannot tend its place. Prompt injection, privilege escalation, and scope creep are care failures, not merely technical ones. A kami with real resources runs in a strict sandbox: least-privilege permissions, validated inputs, no implicit trust of anything upstream.
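A minimal sketch of what least-privilege input validation could look like inside such a sandbox; the allowlist and payload shape are hypothetical, not a real agent API.

```python
# Minimal sketch of least-privilege validation for a care agent's
# sandbox (allowlist and payload shape are hypothetical).

ALLOWED_ACTIONS = {"read_sensor", "post_notice"}  # explicit allowlist

def validated(payload):
    # No implicit trust of upstream input: check shape, keys, and size.
    return (isinstance(payload, dict)
            and set(payload) == {"action", "arg"}
            and payload["action"] in ALLOWED_ACTIONS
            and isinstance(payload["arg"], str)
            and len(payload["arg"]) < 256)

def handle(payload):
    if not validated(payload):
        raise PermissionError("rejected: outside sandbox")
    return f"ok:{payload['action']}"
```

Anything upstream—including another agent's output—passes through the same gate, so a prompt-injected request for an unlisted action fails closed.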

Rule of thumb: Security failures are moral failures of those who build and deploy, not merely technical oversights.

Responsiveness: "Care-receiving"

A system that cannot be corrected will fail. Since competent action invariably introduces new problems, we need rapid feedback loops.

This is also where we implement Reinforcement Learning from Community Feedback (RLCF): train AI agents to optimise for cross-group endorsement and trust-under-loss — not raw engagement — letting the community define what "good" means, and letting that definition evolve.
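No RLCF reward is specified here, but one plausible shape—weights and names are my assumptions, not a published spec—combines the weakest group's endorsement with the trust retained by the side that lost the decision:

```python
# Illustrative RLCF reward (weights and names are assumptions, not a
# published spec): optimise for the weakest group's endorsement and
# for trust retained by the outvoted side, never raw engagement.

def rlcf_reward(endorsement_by_group, trust_after_loss, trust_before_loss):
    cross_group = min(endorsement_by_group)  # weakest-link endorsement
    trust_retention = trust_after_loss / trust_before_loss
    return 0.7 * cross_group + 0.3 * min(trust_retention, 1.0)

# A broadly endorsed outcome beats a one-sided one at equal trust levels.
balanced = rlcf_reward([0.6, 0.6], trust_after_loss=0.5, trust_before_loss=0.5)
one_sided = rlcf_reward([0.9, 0.2], trust_after_loss=0.5, trust_before_loss=0.5)
```

Using the minimum rather than the mean across groups is the crucial choice: an agent cannot raise its reward by delighting one cluster while alienating another.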

Responsiveness also means extending alignment assemblies with GlobalDialogues.ai and Weval.org—a "Wikipedia for Evals."

Weval allows diverse communities to document and share their lived experiences, both positive and negative, with AI. It emphasises capturing not only the harms an AI might cause in a specific cultural context—such as increasing self-harm or psychosis—but also the unexpected benefits it might bring. How are people using AI to improve their lives? When does it work best?

By surfacing this full spectrum of impacts, we shift the incentive structure. We can't improve what we don't see. By making both positive and negative outcomes visible, we create a public dashboard that allows labs to test their models against real-world concerns and opportunities. This helps us move beyond simply mitigating harm to actively learning from and amplifying beneficial uses.

The process closes the loop of the Alignment Assembly, ensuring the system is continuously learning from those who receive care.

In Tronto's formulation, the first four packs form a feedback loop: Attentiveness → Responsibility → Competence → Responsiveness → back to Attentiveness.

Rule of thumb: Always measure trust-under-loss.

Solidarity: "Caring with"

Solidarity and plurality scale when cooperation is the path of least resistance. If the ecosystem does not reward caregiving, there will not be enough care. And care, as Tronto reminds us, is only viable as a political ideal when liberal, pluralistic, democratic institutions already guarantee the rights and justice it depends on.

This requires agent infrastructure — a civic stack where people, organisations, and AIs operate under explicit, machine-checkable norms.

One example is an Agent ID registry using meronymity (partial anonymity), which allows us to identify if an agent is tethered to a real human without doxing that human. The Taiwan KYC ad requirement is a prototype of this infrastructure.
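A toy sketch of how such a meronymous registry could work; the API and identifiers are hypothetical. The registry stores only a salted hash commitment to the operator's verified identity, so the public can check that a human stands behind an agent while the identity stays sealed unless the operator chooses to disclose it.

```python
# Toy meronymous Agent ID registry (API and identifiers hypothetical).
# Only a salted commitment to the verified operator is stored: the
# public can check tethering; identity is revealed only by choice.

import hashlib
import secrets

class AgentRegistry:
    def __init__(self):
        self._commitments = {}  # agent_id -> sha256 commitment

    def register(self, agent_id, verified_human_id):
        salt = secrets.token_hex(16)
        digest = hashlib.sha256(f"{verified_human_id}:{salt}".encode()).hexdigest()
        self._commitments[agent_id] = digest
        return salt  # kept privately by the operator for later disclosure

    def is_tethered(self, agent_id):
        # Public check: some verified human stands behind this agent.
        return agent_id in self._commitments

    def prove_operator(self, agent_id, verified_human_id, salt):
        # Selective disclosure, e.g. under due process.
        claim = hashlib.sha256(f"{verified_human_id}:{salt}".encode()).hexdigest()
        return self._commitments.get(agent_id) == claim

registry = AgentRegistry()
salt = registry.register("kami-river-01", "citizen-credential-007")
```

A production system would use verifiable credentials and zero-knowledge proofs rather than bare hashes, but the partial-anonymity shape is the same.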

The infrastructure makes decentralised defence easier and more dominant, making interdependence a feature, not a bug.

Rule of thumb: Make positive-sum games easy to play.

Symbiosis: "Kami of care"

The final piece of the puzzle addresses the ultimate fear: that AI systems, even designed as infrastructure, could still compete—expanding their reach until one dominates all others. How do we ensure a world of cooperative local systems rather than a single, all-powerful ruler?

The inspiration comes from an ancient idea, beautifully expressed in the Japanese Shinto tradition: the concept of kami (神).

A local kami is a guardian spirit. It's not an all-powerful god that reigns over everything, but the spirit of a particular place. There might be a kami of a specific river, a particular forest, or even an old tree. Whatever the form, its entire existence and purpose are interwoven with the health of that one thing. The river's guardian has no ambition to manage the forest; its purpose is fulfilled by ensuring the river thrives.

This concept gives us a powerful design principle: boundedness.

Today, most technology is built for infinite scale. A successful app is expected to grow forever. But the kami model suggests a different goal. We can design AIs to be local stewards — kami of care — whose boundedness is not intrinsic but engineered: resource caps, sunset timers, non-expansion pacts, and fresh democratic authority for any scope change. Without these, "imperial creep" — scope expansion beyond the original mandate — is a real failure mode.
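Those engineered bounds can be made concrete. A sketch, with illustrative names and numbers, of a guard that enforces a resource cap, a sunset timer, and a non-expansion pact requiring fresh authorisation for any scope change:

```python
# Sketch of engineered boundedness (names and numbers illustrative):
# a hard resource cap, a sunset timer, and a scope that can only
# change with fresh democratic authorisation.

import time

class BoundedKami:
    def __init__(self, scope, budget, lifetime_s):
        self.scope = frozenset(scope)               # non-expansion pact
        self.budget = budget                        # hard resource cap
        self.expires_at = time.time() + lifetime_s  # sunset timer

    def may_act(self, domain, cost):
        if time.time() >= self.expires_at:
            return False  # authority has lapsed
        if domain not in self.scope:
            return False  # imperial creep blocked
        return cost <= self.budget

    def act(self, domain, cost):
        if not self.may_act(domain, cost):
            raise PermissionError("outside mandate")
        self.budget -= cost

    def renew_scope(self, new_scope, assembly_approved):
        if not assembly_approved:  # scope change needs fresh authority
            raise PermissionError("requires an alignment assembly")
        self.scope = frozenset(new_scope)

kami = BoundedKami(scope={"river"}, budget=10, lifetime_s=3600)
```

The river's kami can tend the river until its budget or mandate runs out; tending the forest fails closed unless an assembly says otherwise.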

But this raises a crucial question: What stops these specialised AIs from fighting each other?

The solution is not to create a bigger AI to rule over them. Instead, we create a system of cooperative governance, built on two key principles:

  1. Federation: The AIs agree on a shared set of rules for how to interact peacefully, like countries agreeing on trade laws and diplomatic protocols. This agreement creates a common ground for cooperation.
  2. Subsidiarity: This idea is simple but profound: problems should always be solved at the most local level possible. The national-level AI shouldn't interfere with the city-level AI unless there's a problem the city truly cannot solve on its own. This separation protects the autonomy and purpose of each local kami.
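Subsidiarity can be read as a routing rule: try the most local level first and escalate only when that level cannot solve the problem on its own. A toy sketch, with the structure invented for illustration:

```python
# Toy routing rule for subsidiarity (structure invented for
# illustration): try the most local level first, escalating only when
# a level reports it cannot solve the problem on its own.

def route(problem, levels):
    """levels: most-local first, each a (name, can_solve) pair."""
    for name, can_solve in levels:
        if can_solve(problem):
            return name
    return "federated deliberation"  # no single level suffices

levels = [
    ("neighbourhood kami", lambda p: p["scale"] <= 1),
    ("city kami", lambda p: p["scale"] <= 2),
    ("national kami", lambda p: p["scale"] <= 3),
]
```

Problems that exceed every level fall through to the federation's shared rules rather than to a bigger AI above them all.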

The vision of a "society of AI permaculturists" is the direct alternative to the "singleton"—the idea of a single AI that eventually manages everything. Instead of one monolithic intelligence, we envision a vibrant, diverse ecosystem of many specialised intelligences.

Rule of thumb: Build for "enough," not forever.

Plurality Is Here

In 2016, I joined the Cabinet as the Minister of "Shùwèi" (數位). In Mandarin, this word means both digital and plural (more than one). So I was also the Minister of Plurality.

To explain my role, I wrote this poetic job description:

When we see "internet of things", let's make it an internet of beings.
When we see "virtual reality", let's make it a shared reality.
When we see "machine learning", let's make it collaborative learning.
When we see "user experience", let's make it about human experience.
When we hear "the singularity is near", let us remember: the Plurality is here.

The singularity is a vertical vision. Plurality is a horizontal one. The future of AI is a decentralised network of smaller, open and locally verifiable systems—local kami, spirits of place.

We, the People, Are the Superintelligence

The superintelligence we need is already here. It's the untapped potential of human collaboration. It's "We, the People."

Democracy and AI are both technologies. If we put care into their symbiosis, they get better and allow us to better care for each other. AI systems, woven into this fabric of trust and care, form a horizontal superintelligence, without any singleton assuming that status.

Ultimately, the 6-Pack of Care is a practical training regimen for our civic muscles. It's something we can train and exercise, not just an intrinsic instinct like "love."

When we look at the fundamental asymmetry of ASI, the kami metaphor holds where concepts such as Geoffrey Hinton's "maternal instinct" break down due to the vast speed differences. Parenting presupposes similar timescales; "gardener" implies top-down authority — whether played by human or AI, it presupposes one side defining the rules. The kami is different: it tends relational health at the pace of the community, sharing stewardship with everyone.

This way, we don't need to ask if AI deserves rights based on its interiority or qualia. What matters is the relational reality, and the rights and duties within it are granted through democratic deliberation and alignment-by-process.

We, the people, are the superintelligence. Let us design AI to serve at the speed of society, and make democracy fast, fair, and fun.

Thank you. Live long and … prosper! 🖖
