Secrets in Global Governance: A Book Review
How disclosure dilemmas undermine cooperation—and what this means for governing AI
There’s a conventional wisdom in international relations that feels almost self-evident: transparency is good for cooperation. If states can see what others are doing, detect violations, and share information openly, then compliance with international rules should improve. This logic has driven decades of institutional reform, from freedom of information policies to open data initiatives across global governance institutions.
Allison Carnegie and Austin Carson’s Secrets in Global Governance challenges this orthodoxy in a way that forces us to rethink some fundamental assumptions about how international cooperation actually works. The book’s central claim is both counterintuitive and, once you understand it, obvious: secrecy, when properly institutionalized within international organizations, can actually enhance rather than undermine cooperation.
The Disclosure Dilemma
The puzzle Carnegie and Carson identify is one that practitioners have grappled with for years but that has remained undertheorized in academic work. States frequently possess evidence of international rule violations but hesitate to share it. The reason isn’t lack of political will or indifference to violations. It’s that disclosure itself carries prohibitive costs.
Intelligence agencies risk exposing their sources and methods. A state might hold satellite imagery proving another nation’s nuclear weapons program, but sharing it publicly could reveal surveillance capabilities worth billions and compromise future collection. Companies and governments holding commercially sensitive information face similar dilemmas. A firm may possess evidence of trade rule violations, but disclosing it could expose proprietary business strategies and damage its competitive position. Such costs are often described as adaptation costs: once a source or method is revealed, adversaries and competitors can adapt, eroding its future value.
When these adaptation costs are prohibitive, the result is a cooperation trap. The very information needed to enforce international agreements remains locked away, not because states lack evidence, but because the act of sharing it carries unacceptable risks.
The Institutional Solution
Carnegie and Carson’s proposed solution centers on international organizations equipped with robust confidentiality systems. These IOs function as trusted intermediaries that can receive, analyze, and act on sensitive information while preventing its wide release.
The mechanism works through several pathways. IOs can aggregate information from multiple sources, obscuring any single state’s contribution. They can translate raw intelligence into sanitized reports that preserve analytical value while removing identifying details. They can facilitate enforcement actions without requiring public attribution.
This represents a fundamental reconceptualization of how international organizations contribute to cooperation. Rather than simply increasing transparency to all audiences, effective IOs selectively manage information flows. They create protected channels for sensitive disclosures, establish credible commitments to confidentiality, and build relationships with the intelligence agencies and private actors who control critical information.
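To make this selective management concrete, here is a deliberately simplified Python sketch of the aggregation-and-sanitization logic. It is my own illustration of the idea, not anything drawn from the book, and every name in it is invented.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    """One state's confidential contribution to an IO investigation (hypothetical)."""
    source_state: str        # withheld from any published output
    collection_method: str   # e.g. "satellite imagery"; also withheld
    finding: str             # the substantive claim about a possible violation

def sanitized_report(submissions: list[Submission]) -> dict:
    """Aggregate confidential submissions into a report that keeps the
    analytical content but strips identities and collection methods,
    so no single contributor can be inferred from the output."""
    return {
        "findings": sorted({s.finding for s in submissions}),
        "corroborating_submissions": len(submissions),  # a count, not a list of who
    }

submissions = [
    Submission("State A", "satellite imagery", "undeclared enrichment at Site X"),
    Submission("State B", "signals intelligence", "undeclared enrichment at Site X"),
    Submission("State C", "human source", "procurement of centrifuge components"),
]

print(sanitized_report(submissions))
# {'findings': ['procurement of centrifuge components', 'undeclared enrichment at Site X'],
#  'corroborating_submissions': 3}
```

The point of the toy is the asymmetry: the organization’s internal record is richer than anything it publishes, and that gap is precisely what makes contributors willing to hand over evidence in the first place.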
Evidence Across Domains
What makes the book particularly compelling is its empirical breadth. Carnegie and Carson test their theory across four substantively different domains: nuclear nonproliferation, international trade disputes, war crimes prosecutions, and foreign direct investment arbitration.
In the nuclear domain, they show how the International Atomic Energy Agency’s ability to receive and analyze intelligence about undeclared facilities depends critically on confidentiality protections. States share satellite imagery and signals intelligence with the IAEA precisely because the agency has developed credible systems to prevent wider disclosure. The World Trade Organization has evolved similar mechanisms for handling commercially sensitive information in subsidy disputes. International criminal tribunals use classified evidence procedures to prosecute war crimes without compromising intelligence sources. The International Centre for Settlement of Investment Disputes relies on confidential arbitration to resolve disputes involving proprietary business information.
The methodological approach combines statistical analysis, expert interviews, and detailed case studies. This triangulation addresses both the scope conditions under which confidentiality systems operate and the causal mechanisms through which they facilitate cooperation.
Power, Accountability, and Trade-offs
To their credit, Carnegie and Carson don’t shy away from the normative complications their argument introduces. Confidentiality systems may enable cooperation, but they also concentrate informational power and potentially shield actors from accountability.
The nuclear cases reveal this clearly. The IAEA’s confidentiality system facilitates information sharing from the United States, United Kingdom, France, and Israel while doing little to protect states under investigation. The system essentially institutionalizes intelligence dominance, making power projection more efficient and legitimate. When the U.S. shares intelligence about Iranian nuclear activities with the IAEA, this advances nonproliferation goals defined primarily by existing nuclear powers.
There’s also the question of whether confidentiality systems actually work as advertised. The analysis assumes that IO confidentiality mechanisms are essentially secure, but the real-world track record is mixed. The WTO dispute settlement process regularly experiences strategic leaks despite elaborate confidentiality procedures. The IAEA has faced multiple security breaches. War crimes tribunals have struggled with witness protection failures.
Implications for AI Governance
The disclosure dilemma framework has direct relevance for emerging challenges in AI governance, where many of the same tensions around sensitive information are already apparent.
Consider the problem of safety incident reporting. AI developers may discover dangerous capabilities or near-miss incidents in their systems, information that would be valuable for the broader safety community. But public disclosure carries risks. Detailed technical information about model vulnerabilities could enable malicious actors. Information about internal safety practices could expose competitive advantages. Even aggregate incident data might reveal strategic priorities or technical approaches that companies consider proprietary.
The result is predictable underreporting. Just as states withhold intelligence about nuclear programs, AI labs stay silent about safety incidents unless disclosure is essentially costless. The information that would be most valuable for improving AI safety, the detailed technical specifics of failure modes and near-misses, is precisely the information least likely to be shared.
Current proposals for AI governance often assume that transparency will solve coordination problems. Mandatory model registries, public safety evaluations, and open-source development are all premised on the idea that more information sharing improves outcomes. Carnegie and Carson’s work suggests this may be backwards. Without confidentiality systems, transparency requirements might simply ensure that the most safety-critical information never enters the regulatory ecosystem at all.
What would an AI Safety Agency with proper confidentiality systems look like? It would need several features. First, credible technical capacity to analyze sensitive disclosures without requiring full public release: an agency staffed with ML researchers who can verify safety claims, assess reported vulnerabilities, and analyze incident data while protecting sources. Second, legal frameworks that protect disclosures from discovery in litigation; if companies fear that reporting a safety incident creates liability exposure, they won’t report. Third, international coordination mechanisms that allow information sharing across jurisdictions while maintaining confidentiality; a U.S.-based safety incident might have implications for AI systems deployed globally, but unilateral disclosure could compromise competitive positions.
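To illustrate what the first feature might look like in practice, here is a hypothetical sketch of tiered disclosure for an incident report. The field names, audiences, and severity scale are all invented for illustration; nothing here reflects an existing agency or reporting standard.

```python
from dataclasses import dataclass, asdict
from enum import Enum

class Audience(Enum):
    AGENCY = "agency"                          # vetted technical staff: full detail
    PARTNER_REGULATOR = "partner_regulator"    # other jurisdictions: sanitized detail
    PUBLIC = "public"                          # aggregate statistics only

@dataclass
class IncidentReport:
    lab: str
    model_id: str
    failure_mode: str        # e.g. "evaluation-gaming behavior in deployment"
    technical_details: str   # exploit specifics, internal eval results
    severity: int            # 1 (minor) .. 5 (catastrophic); invented scale

def view_for(report: IncidentReport, audience: Audience) -> dict:
    """Return a tiered view of the same incident: the wider the audience,
    the less attributable and less technically specific the release."""
    if audience is Audience.AGENCY:
        return asdict(report)  # full record, held under confidentiality rules
    if audience is Audience.PARTNER_REGULATOR:
        return {"failure_mode": report.failure_mode, "severity": report.severity}
    return {"severity_bucket": "high" if report.severity >= 4 else "low"}
```

The design choice the sketch tries to capture is the same one the nuclear cases highlight: a lab’s willingness to populate the most sensitive fields at all depends on its confidence that only the innermost tier will ever see them.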
The nuclear domain offers instructive parallels. The IAEA built credibility gradually, starting with simpler verification tasks and demonstrating information security before states entrusted it with more sensitive intelligence. An AI Safety Agency might follow similar developmental paths, beginning with less sensitive aggregated data before building toward the kind of detailed technical disclosure that would enable robust safety governance.
But there are also important differences. AI development is far more distributed than nuclear programs, involving not just state actors but private companies, academic researchers, and open-source communities. The disclosure dilemmas multiply accordingly. Commercial sensitivity matters more than in traditional security domains. The pace of capability development is faster, giving less time to build the trust relationships that effective confidentiality systems require.
Perhaps most challenging is the dual-use problem. Unlike nuclear technology, where the distinction between civilian and military applications is relatively clear, AI capabilities resist clean categorization. The same model architecture that enables beneficial applications might pose catastrophic risks. This makes it harder to define what information should be protected and what should be shared.
Rethinking Transparency
Secrets in Global Governance succeeds in fundamentally reframing how we think about information and cooperation. The traditional model, which positions transparency as an unqualified good, has dominated scholarship for decades, yet Carnegie and Carson demonstrate that this framework fails to account for a critical class of problems in which disclosure itself creates obstacles to cooperation.
The question isn’t whether transparency is universally beneficial. It’s when confidentiality systems can resolve specific cooperation problems that transparency alone cannot address. This matters immensely for practitioners designing new institutions. The devil is in the details. Effective confidentiality systems require careful institutional design, trusted relationships between IOs and states, and credible commitments to prevent information leakage.
For AI governance specifically, the timing is opportune. We’re still in the early stages of building international institutions for AI safety. The architectures we choose now will shape what’s possible later. If we build transparency-maximizing institutions without attending to disclosure dilemmas, we may create systems that look good on paper but fail to elicit the information actually needed for effective governance.
The alternative is harder but more promising. Building confidentiality systems requires patience, technical sophistication, and political will. It means accepting that some aspects of AI governance will operate behind closed doors. It means trusting institutions with sensitive information and living with the accountability trade-offs that this entails.
These are uncomfortable choices, but ignoring disclosure dilemmas doesn’t make them go away. It just ensures that the information most critical for managing catastrophic AI risks stays locked in corporate vaults and national security bureaucracies, unavailable precisely when we need it most.
“The challenge of modernity is to live without illusions and without becoming disillusioned.” - Antonio Gramsci
Carnegie, Allison, and Austin Carson. 2020. Secrets in Global Governance: Disclosure Dilemmas and the Challenge of International Cooperation. Cambridge University Press.