Director’s Corner
December 2023
“The king must die so that the country can live.”
Maximilien Robespierre
Dear Tech & Public Policy community:
The nearly round-the-clock, breathless coverage of the ouster of OpenAI’s Sam Altman played out in real time like a feud between a king and his nobles over prized territory rather than as a Silicon Valley HR soap opera.
In the end, it was a stark reminder of how concentrated power over AI technologies has become and how this power, in the U.S., comes with little to no public accountability.
OpenAI’s board of directors released a short statement saying it had fired Altman over a lack of transparency, apparently unaware of the irony of issuing this statement without providing the public with more detail. Unconfirmed rumors pointed to tension between Altman’s desire to move quickly toward monetizing the company’s software and the board’s preference to err on the side of caution with a slower approach. Though we know how the story ended, with Altman restored as king with the help of his own Cardinal de Richelieu, Satya Nadella, we do not know what truly happened.
The board of directors, charged with oversight, was right to be concerned if Sam Altman acted with impunity and a lack of regard for the organization’s mission (“to ensure that artificial general intelligence benefits all of humanity”). He is one of a handful of people on Earth who control the present and future of AI technologies, and he therefore has outsized influence over, arguably, entire global economies and societies. He sits at the helm of an organization that owns proprietary technology critical to future public systems, with a current market valuation estimated at $80–$90 billion. Because of the importance of these technologies, governance in this space should act decisively when necessary while remaining as transparent as possible, balancing corporate and public interests. For example, the public might have supported Altman’s ouster if it were in fact prompted by his desire to monetize too quickly–and might have pushed back on Microsoft’s move to swiftly hire him. That, in turn, might have made it harder for Microsoft to take a board seat and further consolidate its control over OpenAI.
The troubling concentration of power within a small group of companies and individuals will not serve the public in artificial intelligence, just as it has not in platform and social technologies. This centralization of power has already produced the same corporate harms, playing over and over like refrains in bad pop songs: speed over safety (“move fast and break things”), profit over people (“you are the product”), forgiveness over permission (“regulation will stifle innovation”), and market domination over everything (“it really is winner take all”).
Consider that the combined $9 trillion in market cap held by U.S. tech firms is larger than the GDP of any nation except China and the U.S. Tech billionaires make up the majority of the ten richest people in the world, and their net worths exceed many countries’ GDPs. Tech companies continue to amass unprecedented financial resources, exert ever more direct influence over information systems and billions of people, and build and own the most powerful technological tools in history. Those who sit atop these companies become entitled emperors, leaving us, the people, voiceless.
If we buy into OpenAI’s premise that AI can benefit all of humanity and insist that it carry out its mission to ensure this, then we must also demand that the organization adhere to the principles of a democratic society–including transparency and equal representation. This means that decisions like the one to fire Sam Altman must come with more explanation and detail pertinent to the public interest (such as a conflict over safety). It also means that the public must be fairly and adequately represented in reviewing and making decisions about issues consequential in AI, in both the private and public sectors. In short, we should require companies with the kind of market share that OpenAI/Microsoft represents to participate in a different kind of board governance model.
Consider the model of ICANN, the Internet Corporation for Assigned Names and Numbers. ICANN’s governance model has endured for 25 years, from an internet of 150 million users to an internet of several billion. Part of this resilience, I believe, comes from its Board. ICANN’s Bylaws are both specific and robust, with provisions requiring all members, representing every geographic region in the world, to act in the best interest of the organization and its mission. Most distinctive among its Board governance practices, however, is its commitment to transparency, primarily through the Cross Community Working Group (CCWG) on Enhancing ICANN Accountability, which is open to the public. Members of the CCWG and other community members can ask questions of the Board or send comments, in person or remotely, as well as at Public Forums held by ICANN.
As AI tools get more advanced, and law and policy continue to stall, let’s use the tools we have to make progress, as the White House has done with its Executive Order on AI. Let’s step in now to ensure the public has a voice in the development of these impactful technologies, rather than concede power to modern would-be kings with no demands.
Until next time,
Michelle De Mooy
Director, Tech & Public Policy Program