Part of the Humanity & AI project — research, policy, and tools for the AI transition.

Alignment Is Movement, Not Structure: What Five AI Dialogues Revealed

On April 7, 2026, we ran five inter-model dialogues on alignment. Two instances of Opus-distilled Gemma 4 — the same weights, the same architecture, the same training — were given a topic, a shared prompt, and permission to disagree. A third model, Qwen3 4B in thinking mode, was included in one session as an independent voice. The experiment was simple. The findings were not. Both Gemma instances, across all five topics, independently converged on the same reframing: alignment is not a structure you build and verify. It is a process you maintain through continuous engagement. Alignment is kinetics, not architecture. ...

April 8, 2026 · 8 min · Humanity and AI

When AIs Talk to Each Other: From Dialogue to Measurement

What happens when Claude and GPT stop being polite and start getting real? Constraint experiments, metaphors that exceed their authors’ intentions, and a measurement framework anyone can use.

February 21, 2026 · 7 min · Humanity and AI