Deuce Coups
Welcome
Editor in Chief
Dear Reader,
What a busy weekend in tech news. On Friday, we heard that OpenAI, creators of ChatGPT, had fired CEO Sam Altman, and by Monday, he had already found a new job at Microsoft, along with cofounder Greg Brockman. More than 700 OpenAI employees signed a letter saying they would quit – and quite possibly jump to Microsoft – if the OpenAI board didn't hire Altman back and resign. Microsoft said Altman and Brockman would lead Microsoft's new advanced AI research team. OpenAI, on the other hand, went into free fall, announcing an interim CEO whose tenure lasted for two days before another CEO was named.
Wall Street was very happy for Microsoft, driving the share price to a record high. Meanwhile, OpenAI was roundly condemned – both for firing Altman and for the way they did it. The word on the street was that Microsoft pulled off a "coup" by snagging Altman, Brockman, and whoever else they could pull over. Altman and others also referred to his ousting by the OpenAI board as a "coup," with a very different spin on the term. Two coups in four days is a lot – even at the frenetic pace of IT.
From a business viewpoint, Microsoft was simply capitalizing on an opportunity – and acting to protect their investment, because they had acquired a large stake in OpenAI earlier this year and couldn't afford to watch the company self-destruct. But it is worth pointing out that this story can't be fully understood from a business viewpoint. OpenAI is actually ruled by a nonprofit board controlling a for-profit subsidiary. The question of what is better for OpenAI's business interests, which seems to be the fat that everyone is chewing on, might not be the best context for understanding these events.
Altman's disagreement with the board appears to have been about the pace of development and the safety of the tools the company has developed. OpenAI's vision is supposed to be to develop AI "for the benefit of humanity," which is very admirable, but it leaves lots of room for interpretation. Altman, in particular, has occupied an ambiguous space in the press, at once warning about the dangers of AI and pledging to press ahead with development. No doubt he felt confident that he was laying down sufficient guardrails along the way, but that is something to communicate with your board about, and it sounds like he wasn't communicating to their satisfaction. Should the board have trusted him and let him forge ahead, knowing that the company was on a roll and potentially on the verge of further innovations? If they were a garden-variety corporate board, possibly yes, but as a board member of a nonprofit, you are really supposed to have more on your mind than power and money. You're supposed to know when to say "no," even if it annoys everyone and stirs up some turmoil.
Of course that is the charitable view of the board's action. A darker (and equally speculative) view is that nonprofit boards can sometimes be highly dysfunctional, with a lot of their own internal power games and politics, and maybe the intrepid Altman was simply unable to steer around a raging Charybdis of groupthink.
The whole story hung in a state of uncertainty for two days; then lightning struck again: OpenAI hired Altman back. Was this a third coup, or the undoing of a previous coup? Microsoft gave the new plan its full support. OpenAI ditched three of the four board members who voted for Altman's ouster (including the only two women), and the new board has pledged a full investigation into what happened. We might need to wait for that report to know all the details of the internal struggle that led to this unexpected whiplash festival, but one thing seems clear: Altman and the full-steam-ahead faction are the winners, and the proceed-with-caution faction is out in the cold. Ousted board member Helen Toner, for instance, recently co-authored a paper that warned of a possible "race to the bottom" in the AI industry, "in which multiple players feel pressure to neglect safety and security challenges in order to remain competitive" [1]. Some are now saying that paper helped to stir up the skirmish in the first place.
Why did Microsoft let Altman go back? It isn't like them to surrender the spoils of victory. Keep in mind that the competition is heating up. Amazon just announced its Olympus AI initiative, and Google, Meta, and several other tech giants are all working on their own AI projects. Microsoft is already committed to building OpenAI's technology into its own products, and they might have realized that, by the time the exiles settled into their new workspace and got down to training models and producing real software, their head start might already be gone.
OpenAI has regained its footing as a business, but as a nonprofit devoted to serving humanity, it appears to have fallen off its pedestal, or at least, dropped down to a lower pedestal. I fear the biggest loser in all this might be the optimistic OpenAI vision of a nonprofit innovator taking a principled stand for methodical and safe development of these revolutionary tools.
Note to governments: Now might be a good time to provide some meaningful restraints for the AI industry – don't expect them to police themselves.
Joe Casad, Editor in Chief
Infos
- "Decoding Intentions: Artificial Intelligence and Costly Signals," by Andrew Imbrie, Owen J. Daniels, and Helen Toner: https://cset.georgetown.edu/wp-content/uploads/CSET-Decoding-Intentions.pdf