Understanding Sam Altman's firing at OpenAI

NOTE: Things are changing fast — these are my thoughts as of writing at 6pm EST on Sunday, November 19, 2023

The news of Sam Altman’s firing has gone off like a bomb in the tech world. Boards don’t just fire popular CEOs of rocketship growth companies. This was an unwanted but, to them, necessary move, likely because the people who made the decision perceived the situation as an existential threat.

Much of the speculation around the firing has Machiavellian tones: power, money, control, scandal. These didn’t feel right to me, because all of those could have been handled without this scorched-earth approach.

I wanted to present a different, simpler framing based on human nature and the personalities of the people involved, mainly the co-founders Sam Altman and Ilya Sutskever.

You can get far in understanding a situation by understanding the personalities of the people involved. People are infinitely complex, of course, so we can only speculate about what’s really in their minds, but we all fall into broad personality types that tend toward certain behaviours in certain situations. To use a simple example, you’re more likely to find extroverts in big crowds and introverts in small ones.

As always, I try to do these analyses in good faith, assuming everyone is smart and doing their best. Let's get into it.

What we know

We start with the board’s official announcement, which says Sam Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

From the OpenAI board’s official statement

We also know the firing decision was made without the participation of Sam or Greg Brockman (another co-founder). Brockman was removed from the board but not fired, and would later quit in protest.

From Greg Brockman’s tweet to his team the night Sam Altman was fired

It also appears the split was over a misalignment between two competing forces in the company: profit versus non-profit.


We know the board’s stated duty was safety rather than profit, and its members were chosen accordingly. The independent board members do not hold any shares in the company, and several have strong opinions on AI safety.

The four remaining board members:

  • Tasha McCauley — On the advisory board of the Center for the Governance of AI. With her husband Joseph Gordon-Levitt, she signed the Asilomar AI Principles, a set of 23 AI governance principles published in 2017.

  • Helen Toner — Also on the advisory board of the Center for the Governance of AI with McCauley, and director of strategy at Georgetown’s Center for Security and Emerging Technology.

  • Adam D’Angelo — CEO of the Q&A site Quora. Has said of OpenAI: “There’s no outcome where this organization is one of the big five technology companies.”

  • Ilya Sutskever — The only remaining OpenAI co-founder on the board and a leading contributor to the field: he co-authored a key paper on neural networks with legendary AI academic Geoffrey Hinton in 2012 and helped lead the AlphaGo project, a milestone in modern AI history.

It appears Ilya Sutskever led the action to remove Sam Altman; presumably, he was the one who persuaded the board it was the right thing to do.


Sometime after the firing, The Information reported on an internal message Sutskever sent to employees.

As of November 18, 2023 (the day after the original firing), there are rumours the board is considering bringing Sam Altman back.

This is all we know about the situation as of 6pm EST on November 19, 2023. For everything else, we turn to what we can understand about the people involved.


A quick summary

Before I get into the details below, I think it’ll be helpful to tell you my view of the whole situation, so I’ll briefly state what I think happened and why, and then explain it in detail in the rest of the post.

Here’s an overview of my conclusion: The firing of Sam Altman was the result of a collision between two powerful forces: ambition and caution. Ambition from the side of Altman, and caution from the side of Sutskever. I don’t believe this was about money or power; I don’t think there was a personal scandal or any single transgression. I think this had been building for a very long time, possibly since the founding of OpenAI many years ago. Last Friday was just the moment it all blew up.

Sam Altman is a very ambitious person. Ilya Sutskever appears to be a much more serious one. To someone like Ilya, ambition can appear cavalier and even irresponsible, especially considering the high stakes of AI’s impact on the world. The board (appointed to ensure responsibility) would likely have felt similarly to Ilya and also been concerned by Sam’s ambition.


It doesn’t seem shocking that a popular CEO of a $90B company, flying high in the market, wouldn’t heed the requests of what’s effectively an oversight board. Sam may even have gone behind their backs and done things without communication or permission, not maliciously, just moving fast. All the same, this would have further alarmed them and increased the feeling that they had no control over him. This explains the phrase in the official board statement: “not consistently candid in his communications.”

As more incidents piled up, and with responsibilities to shareholders and the world (many believe AI is an existential threat), the board and Ilya might have felt this was a runaway train and that they had to do something. Eventually, a breaking point was reached. Perhaps it was the recent Dev Day, when Sam went over their heads to release a feature they wanted launched more cautiously. Whenever it was, this alliance of board members decided last Friday was the right time.

It was a decision they felt forced to make. No doubt they knew it was going to be a bombshell and create chaos, but to them the situation was existential, and the move therefore necessary.


An unstoppable force

Sam Altman's past has been well documented. He ran the biggest incubator in Silicon Valley, Y Combinator (YC), for many years and generated billions of dollars. He's a big deal.

A key to his success was his extreme ambition. OpenAI was just one project among dozens of other equally ambitious projects at YC, including ones in nuclear fusion, quantum computing, curing cancer, synthetic biology, and life extension. Paul Graham, a co-founder of YC (who handed it off to Sam to lead in 2014), said “I think his goal is to make the whole future,” and venture capitalist Marc Andreessen said “Under Sam, the level of YC’s ambition has gone up 10x.” When Silicon Valley venture capitalists are impressed by your ambition, that says a lot.

An immovable object

What Sam had in ambition, Ilya possessed in caution. Ilya came from an academic background and is much more reserved, preferring to stay behind the scenes. Regardless of any achievements at OpenAI, his legacy in the space was guaranteed early. He studied under Geoffrey Hinton, widely recognized as the Godfather of AI, and co-wrote with him the paper that became the foundation of the current AI breakthrough. Hinton is also worried about AI and hasn’t been shy about discussing the risks. He quit Google in May 2023 so he could talk publicly, saying AI could “wipe out humanity” and that a part of him now “regrets his life’s work.” It’s not out of the question that Ilya absorbed some of the same concerns, considering how closely they worked together.

Sam

Note: In this section I speculate about Sam’s psychology. Like everyone else, I only have access to what’s publicly available, and what I’ve been able to find (which is not exhaustive).


Sam Altman is a very popular figure in the tech world. People describe him as charming, inspiring, and a genius, all of which appear to be true. But he also shows some traits pointing to grandiose narcissism and Asperger’s.

Note: I’m aware of the negative connotations of certain psychological characteristics and of labeling someone with them. I’m also aware that a few examples do not make a diagnosis, but the nature of speculation is forming a theory without firm evidence.

A 2016 New Yorker profile of Sam Altman, written during his early years running YC, recounts a blogger asking him, “How has having Asperger’s helped and hurt you?” Telling the story, Altman said:

“I was, like, ‘Fuck you, I don’t have Asperger’s!’ But then I thought, I can see why he thinks I do. I sit in weird ways, I have narrow interests in technology, I have no patience for things I’m not interested in: parties, most people. When someone examines a photo and says, ‘Oh, he’s feeling this and this and this,’ all these subtle emotions, I look on with alien intrigue.”

We can observe a social awkwardness in Sam’s public appearances, and his self-admitted obsessive, narrow interests are characteristic of Asperger’s. Difficulty reading non-verbal signals, like the kind found in facial expressions in a photograph, is also common.

Those with Asperger’s have a harder time understanding people’s emotions and the degree to which others feel them. They might, for example, underestimate how strongly someone might react to something they did. It wouldn’t be surprising if Sam Altman couldn’t see how concerned the board was by some of his actions.

More concerning would be his traits pointing to grandiose narcissism: an inflated sense of self-importance, attention-seeking, a lack of empathy, a tendency to manipulate, and sensitivity to criticism. The anecdotes below touch on each of these.

Grandiosity comes through in grand ambitions. He planned to build YC into a “trillion dollar enterprise” of hyper-entrepreneurs who would fix the world, has claimed “science is broken,” and is now trying to build artificial general intelligence (AGI).

In that 2016 profile, Sam said about himself:

"The missing circuit in my brain, the circuit that would make me care what people think about me, is a real gift. Most people want to be accepted, so they won’t take risks that could make them look crazy."

Risky behaviour and aloofness can be part of the attention-seeking side of narcissism. In many situations these are attractive to people and do get attention, but aloof risk-takers don’t inspire confidence when the stakes are high (like when building a potentially world-changing technology). People want a steady, predictable person behind the wheel.

Manipulation is another negative quality of narcissists, and one we observe in Sam Altman. There are stories about his cruel treatment of his siblings. They are subtle and plausibly deniable, so I won’t go into detail, but they were enough to prompt his mother to say, about him and his brothers living together, “It’s tricky with the power dynamic, and I want it to end before it explodes.”

Glimpses of a sensitivity to criticism are visible too. From fellow YC partners about Altman: “He was a formidable operator: quick to smile, but also quick to anger. If you cross him, he’ll joke about slipping ice-nine into your food.” (Ice-nine is a lethal substance from Kurt Vonnegut’s Cat’s Cradle.) For all the confidence of a narcissist, their personalities are delicate and easily threatened, often prompting chilling responses.

Ilya

Ilya Sutskever is as brilliant as Sam Altman, and his achievements were one of the reasons Altman asked him to be a co-founder. There is less written about Ilya, probably because what he does is harder to understand and he’s more understated than Altman. What is clear, and relevant to this situation, is that he’s a serious person who has talked a lot about the risks of AI.

That said, he’s no “AI Doomer.” Like Altman, he has grand visions for the possibilities.

From Ilya: “You can do so many amazing things with AGI, incredible things: automate health care, make it a thousand times cheaper and a thousand times better, cure so many diseases, actually solve global warming.”

He even imagines one day “many people will choose to become part AI” (alluding to a brain-computer interface) and that he would consider it for himself.

He is very sober about the risks, though. In a video released just four days before Sam Altman’s optimistic Dev Day keynote in November 2023, he talked about how close we may be to AGI and the importance of what we do in this moment. “Scientists have been accused of playing God for a while,” he said. “If you have an arms race dynamic between multiple teams trying to build AGI first, they will have less time to make sure that the AGI that they will build will care deeply for humans.”

As a researcher used to letting the science do the talking, he might have found it difficult, or taken offence, when his concerns weren’t heard. Most engineers and scientists I know aren’t good at persuading and debating, particularly with strong personalities. When rational argument doesn’t persuade, they’re out of options. It’s easy to see how Sutskever’s frustration with Altman could have built.

If Ilya did lead the effort to have Sam fired, it’s not a stretch to say he didn’t do it tactfully, and I’d even say he overreacted. This reaction (again, if it was indeed led by him) suggests a lack of appreciation for the complexity of the world (the number of companies and countries participating in AI) and perhaps a sense of grandeur of his own. Is the fate of humanity really being decided in your office, with your work? Maybe, but it’s a stretch to believe it with enough certainty to set off a bomb in your own company. More likely, this event is the culmination of a slow build-up of scary narratives, a sense of grandeur masquerading as responsibility, and frustration with not being heard by Sam.

Going rogue

On November 6 at their developer conference, OpenAI announced GPTs: custom versions of ChatGPT that anybody can train with their own data and make publicly available. This was the first time anyone could create their own ChatGPT chatbot, no programming knowledge required.

Many people speculated this was the last straw that led to Sam Altman's firing. You could certainly imagine how a board — earnest in their duty to ensure responsible and safe AI development — might consider this an unnecessarily risky move. Why no beta or even phased rollout?

You could also imagine Sam Altman considering this bold and exciting.

If this was indeed against the board’s wishes, it wouldn’t be the first time Sam Altman had gone rogue. In 2015, two YC partners concerned about how fast he was moving told him to “slow down, chill out.” Altman replied, “Yes, you’re right,” but then went off and did something they didn’t know about anyway. In that case, it was a new branch of YC called YC Research, dedicated to moonshot ideas.


It's hard to know how much of a pattern rogue behaviour is for Altman, but it’s easy to see how a mixture of ambition, grandiosity, and lack of empathy would lead to it.

An untenable situation

What existed at OpenAI before this explosion was an untenable situation, a collection of personalities in an unstable equilibrium: the ambitious optimists charging ahead, deaf to or dismissive of the concerns of the cautious realists, who felt unheard and out of control.

Had the stakes been lower, as in a regular company whose work doesn’t put human civilization in the balance, this would have ended quietly with the realists quitting and moving on. But this was one of those strange places where a few people could reasonably believe their actions would affect billions of lives. It’s hard to walk away from that.

What’s next

There are two possibilities emerging at the time of writing this:

  1. The firing holds and Sam Altman goes off to found another company, or

  2. Sam Altman returns, as has been rumoured.

Both are bad for the company in different ways.

If the firing holds, Sam will likely start another company and many loyal, smart people will quit, leaving a temporary talent vacuum that will take time to fill.

If the board brings him back (a very strange decision if it happens), Sam will return only on the condition of a complete restructuring, because no leader can lead with mutineers amongst them. Ilya Sutskever and the current board will be gone, along with a different group of loyal, smart people, also leaving a talent vacuum.

Either way, there is a 6-12 month period of chaos and distraction coming for OpenAI. It may not be the same company on the other side.

Closing thought

We know from history that certain people come along and have outsized effects on the trajectory of the world. The paradox is that these world-changers are rarely balanced individuals. If they were, the simplicity of an ordinary life would be enough. Sam Altman appears to be one of these paradoxical figures.

Relatedly, psychology suggests that extreme ambition often springs from a profound sense of inadequacy: the height of ambition mirrors the depth of insecurity.

Problems arise when someone doesn’t recognize this and brings their insecurities unchecked into the world. These people, often unconsciously, prioritize their needs over others’. In most cases, the impact is limited to a small group of family and friends, but in extraordinary cases, as with Sam Altman, the repercussions extend to companies, nations, and even entire societies.

“How can anyone see straight when he does not see himself and the darkness he unconsciously carries with him into all his dealings?” — Carl Jung

Sam Altman’s ambition unsettles me. It feels wild and unbridled, and I worry about him leading a company building such consequential technology. He shows a willingness to take risks and ignore advice, and his success and popularity only bolster his sense of greatness.

My real wish is a fantasy, though. I wish we didn’t live in a world where a single technology could change everything. Unfortunately we do, and it makes for fascinating times, and anxious ones.

