Daniela
The standard Silicon Valley founding pattern is that the CEO’s word is final.
The cofounder arrangement, the board structure, the option pool, the go-to-market strategy — all of it eventually answers to a single person whose name is on the pitch deck and whose face is on the magazine cover. The COO is a hired executive. The board is composed to support the thesis, not to challenge it. The safety policy, when one exists, is drafted by the same office that is responsible for hitting the growth number.
Anthropic is the clearest example of a frontier lab that broke the pattern.
Not because of its papers. Its papers are good, but other labs publish good papers. Not because of its Responsible Scaling Policy — every lab publishes a policy now. The pattern is broken because the President of the company is the CEO’s sister, and she is not a hired executive.
The second seat
Dario Amodei is Anthropic’s CEO. He writes the papers, gives the interviews, briefs the Senate. He is the ML researcher in the first seat.
Daniela Amodei is the President. She runs revenue, commercial strategy, partnerships, hiring, operations, policy execution. She is the second seat. The second seat is where safety frameworks are either enforced or allowed to drift.
Her resume does not look like an ML resume. English literature, UC Santa Cruz. Classical flute scholarship. A political campaign in Pennsylvania. Communications for a US House Representative. Stripe starting in 2013. OpenAI starting in 2018, where she managed the team during GPT-2’s release and then became VP of Safety and Policy. Anthropic in 2021, as co-founder.
The entire career is humanities and operations and policy. That is the point.
The person in the second seat who is going to enforce the safety framework cannot be the same kind of person as the person in the first seat. The first seat wants to ship. The second seat has to be able to say not yet.
The sister clause
The Responsible Scaling Policy, published by Anthropic in September 2023, is a document. A document is not an enforcement mechanism. An enforcement mechanism is a person with authority who is willing to use it.
A hired Chief Operating Officer cannot use that authority against a founding CEO. Not meaningfully. The COO serves at the pleasure of the CEO. If the COO says we cannot ship this model until the evaluations pass, and the CEO says ship it anyway, the COO either ships it, quits, or gets fired. None of those outcomes enforce the policy.
A younger sister who co-founded the company can use that authority. The family relationship predates the company and outlasts every funding round. The trust is older than the equity. The disagreement, when it happens, is a disagreement between two people who cannot fire each other.
That is the clause that makes the Responsible Scaling Policy enforceable. It is not written anywhere in the document. It is structural. It is human. It is the reason the document is worth the paper it is printed on.
The corporate form
The sister clause is the human layer. There is also a legal layer.
Anthropic is incorporated as a Delaware Public Benefit Corporation, a corporate form that did not exist until 2013. Unlike a standard C-corp, a PBC is not obligated to maximize shareholder value above all else. Its directors are permitted to weigh a stated public mission against shareholder returns, and the statute protects them when they choose the mission.
Anthropic’s stated public benefit, written into the certificate of incorporation, is the responsible development and maintenance of advanced AI for the long-term benefit of humanity. Those words bind the directors. They are not marketing copy. They are law.
On top of the PBC structure, Anthropic has a Long-Term Benefit Trust that appoints board members independent of the financial investors. Another refusal point. Another check on the quarterly pressure before it reaches the product.
None of this is a guarantee. A PBC can still drift. A trust can still be captured. But the corporate form is the legal equivalent of the sister clause: it lowers the cost of refusal by making refusal something the directors are allowed to do.
Why this matters for this book
Every chapter of this book has recommended a tool. The tool, in most cases, is Claude. Claude is made by a company. The book has asked the reader to rely on that company.
I owe the reader an account of why I am willing to make that ask.
I am willing to make the ask because the company has a President whose job is to refuse, and whose authority to refuse does not depend on the CEO’s mood. That is unusual. It is rare enough that I do not know another frontier lab where the structure is comparable. The OpenAI board fight of November 2023 is the cautionary tale. The board tried to exercise refusal, the refusal mechanism turned out to be weaker than the growth mechanism, and everybody involved had to renegotiate who had what authority after the fact.
Anthropic has not had that fight in public yet. It may someday. If it does, the sister is the clause.
One smaller thing, worth saying. Anthropic waited to release their 2026 Agentic Coding Trends Report — the one that profiles Fountain on page eight — to match our Cue GA. Most partnerships fit the smaller company into the larger company’s calendar. This one waited. I was honored. That shows real class.
What it would take to replicate
Most companies cannot replicate the Anthropic governance dynamic directly. Very few CEOs have a co-founding sibling. What any company can replicate is the structural principle.
The principle is that the second seat has to have authority that is not contingent on the first seat’s approval. That authority can come from many places — a co-founder relationship, an independent board with genuine power, a union, a regulator, a contract with real teeth. What it cannot come from is the first seat’s generosity. Generosity is not governance.
The pattern
This book argues that AI can be deployed well, but only when a specific pattern is in place: the tool, the auditor, the domain owner, and the refusal point. The refusal point is the piece most often missing.
AlphaFold had one — DeepMind’s decision to release AFDB under CC-BY-4.0 was a refusal of the standard commercial enclosure. The address chapter had one — a company that told me no when my first use case failed the ethical bar. The interview chapter has NYC Local Law 144 and the EEOC. Cue has its own gating architecture. Oregon, in the proposal, has public-good governance. China has the Cyberspace Administration, which is a refusal point, even if not the one the US would choose.
Every case works because someone, somewhere, can say no, not yet, and their no has teeth.
Daniela is the model. The second seat with real refusal power. The book’s whole thesis depends on that seat existing, somewhere, in some form, at every link in the chain. Without it, everything else in this book is just marketing copy for a tool.
With it, the tool is worth using.