On April 6, 2026, OpenAI released a 13-page policy paper called Industrial Policy for the Intelligence Age. It proposes a public wealth fund seeded by AI companies; a robot tax that shifts the tax base from payroll to capital gains; a four-day, 32-hour workweek at full pay as an efficiency dividend; a Right to AI that treats access as foundational, like literacy; automatic safety net triggers when displacement metrics hit thresholds; and containment playbooks for autonomous AI. It is the most comprehensive policy document any frontier AI lab has published.

Sam Altman compared the needed response to the Progressive Era and the New Deal. The framing is deliberate: this is an industry asking to be regulated, on its own terms, before someone else writes the rules. And to be clear — the paper is better than silence. It is better than lobbying against governance. It deserves serious engagement. Here is that engagement.

The Crosswalk

Foundation’s Universal Basic Citizenship framework has sixteen components. When you lay OpenAI’s proposals against them, the overlap is immediate and striking.

The public wealth fund maps directly to Economic Security — UBC’s thirteenth component, which decouples basic financial stability from employment. OpenAI’s version is narrower: a fund seeded by AI company equity, distributing dividends to citizens. Foundation’s version is broader: income as one layer of a financial infrastructure that includes financial literacy, housing stability, and healthcare access. But the core insight is the same. When machines do the work, humans still need the floor.

The robot tax maps to the same component plus the Social Contract Reevaluation — Foundation’s fourth component, which says that policies must evolve with technological change. Shifting the tax base from payroll to capital gains is exactly the kind of structural adjustment the social contract component demands. OpenAI arrived at this through economic modeling. Foundation arrived at it through systems thinking. Same destination.

The Right to AI maps cleanly to Information Access and AI Benefits — Foundation’s eighth component. Treating AI access as foundational, like literacy, is precisely what UBC argues. The four-day workweek is implicit in Quality of Life — not a named component, but a natural consequence of a framework that measures societal success by human capacity rather than labor output. Automatic safety net triggers map to Social Safety Net provisions that UBC builds into multiple components. The employee voice provisions map to Civic Participation — Foundation’s insistence that citizens don’t just receive benefits but hold structural power.

Most of what OpenAI proposed, Foundation already had. Not approximately. Not directionally. Specifically. The components are there. The logic connecting them is there. The urgency is there. If you’ve read the sixteen-component framework, the OpenAI paper reads like a partial implementation spec written by people who arrived at the same problem from the supply side.

What They Missed

Here is the gap, and it is not small.

OpenAI’s paper has no equivalent to Guardian AI — the concept that citizens should have AI systems working on their behalf, not just AI systems working on them. It has no Civic Participation infrastructure — no mechanism by which the people affected by AI actually govern its deployment. It has no Thought Privacy protections — no recognition that an intelligence age without cognitive liberty is a surveillance age with better marketing. It has no democratic governance layer whatsoever. The paper proposes that AI companies fund a wealth dividend. It does not propose that citizens decide how AI develops, where it deploys, or what it optimizes for.

This is the critical distinction. OpenAI’s framework is economic redistribution without civic infrastructure. It is a payment to citizens for disruption — compensation for a transformation they had no hand in designing and no power to redirect. The implicit model is: we build the future, you receive a check. The check may be generous. It may even be adequate. But it is still a transaction in which one side holds all the agency and the other side holds a deposit slip.

Foundation’s UBC framework asks a different question. Not “how do we compensate people for the AI transition?” but “how do we give people the infrastructure to navigate it themselves?” The sixteen components aren’t a benefits package. They’re a power structure. Healthcare, education, digital access, thought privacy, civic participation — these aren’t nice things to provide. They’re the minimum conditions under which a citizen can meaningfully participate in decisions about their own future. OpenAI wants to write the checks. Foundation wants to hand people the controls. Both start from the same recognition that the economy is transforming. Only one of them follows that recognition to its democratic conclusion.

The Timing

This paper dropped the same day as a New Yorker investigation questioning Altman’s trustworthiness on safety commitments. That juxtaposition is not something I can ignore, and neither should you. The company building what it calls the most transformative technology in human history is simultaneously telling America to prepare for massive disruption and asking for permission to keep building at maximum speed. The policy paper is generous and forward-thinking. The safety record is contested and opaque. Both things are true at the same time.

Both Sam Altman and Anthropic CEO Dario Amodei have said publicly that current economic organization will no longer make sense in the intelligence age. They agree on the diagnosis. They agree on the scale. Neither company is proposing that citizens actually govern AI. Only that citizens receive money from it. This is the tell. When the people building the most powerful technology in history unanimously agree that society needs restructuring but unanimously stop short of proposing democratic control over that technology — that’s not a policy gap. That’s a design choice.

The Oklahoma Connection

Oklahoma passed data center incentive bills and geothermal energy conversion legislation this session with overwhelming bipartisan margins. The state is building the physical infrastructure of the intelligence age. Its legislature filed zero AI governance bills. Not one bill addressing how citizens interact with AI systems, how their data is protected, how algorithmic decisions affecting their lives are reviewed, or how the economic transition is managed at the state level. The gap between national urgency and state inaction is the story of 2026.

The Chief AI Officer is building executive standards with no legislative mandate. That’s not a criticism of the office — it’s a description of the structural gap. When the federal conversation is about wealth funds and robot taxes and the state conversation is about tax credits for data centers, something is missing in the middle. The infrastructure layer. The civic layer. The part where citizens aren’t just consumers of policy but participants in it.

What Already Exists

The framework that addresses both the economic floor and the civic power structure already exists. It is called Universal Basic Citizenship and it has sixteen components. It was built for exactly this moment — when the companies creating the transformation finally admit the transformation is coming, and the question becomes whether citizens are passengers or pilots. The components are public. The reasoning is public. The contribution mechanism is public. The only thing missing is the political will to treat it as what it is: not a wish list, but a systems architecture for a country that wants to remain a democracy through the most significant economic transition in human history.


Universal Basic Citizenship is a 16-component framework developed by Humanity and AI LLC.