On OpenAI's Failed Board Coup of Sam Altman & the Danger of Leaving AI's Fate in the Hands of a Few

by Linh Dao Smooke, November 22nd, 2023

Too Long; Didn't Read

Sam's return, I think, is both a bad-news and good-news situation...



500+ HackerNoon humans voted on our poll yesterday asking what would happen in the unpredictable roller-coaster ride that can only be described as an epic-fail attempted coup by the (now defunct) OpenAI Board, overwhelmingly choosing the option that Sam Altman would eventually return to OpenAI as CEO.


Link to poll here: https://hackernoon.com/polls/what-will-happen-to-openai-and-sam-altman-by-thanksgiving


And HackerNoon readers are right! The final negotiation, ‘bring Sam back or we’ll quit’ from OpenAI employees and ‘bring Sam back or we’ll pull out’ from OpenAI investors, led largely by Microsoft CEO Satya Nadella and the heavyweights of Silicon Valley (I’ll discuss them later in this article), finally came to its “happy ending.” The coup failed. Sam & Greg would now be welcomed, in their triumphant return, by 95% of OpenAI (738 out of 770 employees signed the letter, per Bloomberg), a brand-new Board, and more fame & power than ever!


The smoking gun still hasn’t come out (and it’s unclear whether it ever will), but I’m semi-confident that more research papers, lawsuits, and even policy changes will emerge in the coming months or years to shed light on what happened. And this is not the end of the turmoil at OpenAI just yet.

But first, a quick recap

To be caught up 100% on this drama, you should go ahead and read my story on Friday, which announced and speculated on the details of Sam Altman’s firing, and the one on Monday, which laid out a chronological timeline of what transpired over the weekend. When we left off, Sam and Greg were still on their way out the door to join Microsoft, to become just another pair of corporate normies (lol). A new interim CEO had been announced (replacing the already-interim CEO Mira Murati, who turned out to be a Sam loyalist): Emmett Shear, best known for being the ex-CEO of Twitch. Yup, Twitch. The game streaming platform. Nothing AI-related.


Satya Nadella, Microsoft CEO, while very positive and upbeat in all his public appearances and announcements, indicated strongly that changes would absolutely need to be made at the corporate governance level, and that there should be no more surprises for Microsoft, a 49% stakeholder and OpenAI’s most important institutional partner. I read that as: the Board has to be fired, and there’s no way Microsoft will let itself end up in this situation again with the new Board.


And then, around midnight Tuesday, Nov 21, into the early morning of Wednesday, Nov 22, OpenAI announced you-know-what, the news that Sam & Co. and most Silicon Valley optimists had been waiting for:


SAM 👏 IS 👏 BACK 👏




That same announcement mentioned that the new Board of OpenAI will consist of Bret Taylor (Chair), Adam D’Angelo, and Larry Summers. Notably, it does not mention Sam Altman, Ilya Sutskever, Tasha McCauley, or Helen Toner.

The Nonprofit/For-Profit Misalignment: The Bane of OpenAI’s Existence


I think the most plausible theory as to why the coup was allowed to happen is the following: the (now defunct) Board of OpenAI, a nonprofit that controls the for-profit LLC (go figure), is supposed to have no incentive whatsoever to maximize profit, even though profit is the whole point of the OpenAI that makes ChatGPT, that raised $13 billion from Microsoft, and that has a fiduciary duty to make money for its investors. All voting members of the Board, including Altman himself, hold no shares of OpenAI. Their mission, as stated in the company bylaws, is to “ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.” In other words, the nonprofit Board, which oversees the for-profit LLC, has no interest whatsoever in making money back for the investors, in Microsoft’s computing power (which is absolutely required to do the kind of research needed to roll out AGI), or in the fact that OpenAI, since the release of ChatGPT, has become the defining zeitgeist tech company in decades. All it was interested in, and legally required to oversee, was the safe rollout of Artificial General Intelligence, or AGI. It was, in theory, perfectly happy not to make any more money, to shrink, and to return to being a small elite research group.


Thanks to Nicolas Boucher for making the chart!


This theory, I thought, was the most plausible because (as mentioned in my last story) Ilya Sutskever, OpenAI cofounder and Chief Scientist, was part of the original Board faction that ousted Sam in the first place. (He later regretted his decision and made a U-turn that, it’s safe to say, surprised absolutely everybody.) He is a brilliant mind and is widely believed to be the brains of the operation. In fact, he was courted to OpenAI from Google by Elon Musk himself, which, according to Musk, caused an apparent fracture in Elon Musk and Google CEO Larry Page’s longtime friendship. The fracture at the time (back in 2015/2016) was over AI safety. Ilya cared about AI safety, and so did Elon, while Larry didn’t. So, Ilya became team OpenAI.


Another reason why I thought this was a fundamental disagreement/misalignment around AI safety is that Helen Toner, one of the three Board members who voted to oust Sam, recently co-wrote a research paper that (as reported by the New York Times) spoke favorably of OpenAI’s direct competitor Anthropic and was critical of OpenAI’s approach to rolling out AGI. Anthropic, mind you, was created precisely because of this AI safety tension. Its founders are Dario Amodei, OpenAI’s former vice president of research, and his sister, Daniela Amodei, who was OpenAI’s vice president of safety and policy. Several other OpenAI research alumni were also on Anthropic’s founding team. Even weirder, Anthropic CEO Dario was reportedly approached by the old Board about a potential merger with OpenAI (lol, the very company Anthropic’s founders departed from in the first place) and refused.


And lastly, just yesterday, mere hours before the news of Sam and Greg’s triumphant return broke, Elon Musk, himself an original cofounder of and investor in OpenAI, shared on his own platform X a letter from “concerned” former employees of OpenAI about, among many things, a disturbing pattern of behavior from Sam Altman and co. of not listening to the safety researchers. See the full letter here. In other words, most of the people who have been in major disagreement with Sam over AI safety have already left. What’s left of OpenAI, for the most part, is Sam’s loyalists.


Meet OpenAI’s New Board of Directors

In the negotiation to have Sam Altman return as CEO, the (old) Board agreed to these three individuals as the initial members of the new Board. So, let’s meet them.


Bret Taylor

Bret Taylor is one of the most well-connected and respected technologists in Silicon Valley. He is best known as the former co-CEO of Salesforce, the leading company in cloud-based software services. But even more notably, Taylor played a pivotal role as Chair of (the former) Twitter’s Board of Directors, where he was instrumental in (forcing through) the acquisition of Twitter by Elon Musk after the billionaire tried to flip-flop out of the 44 billion dollar deal (see here, here, and here).


Larry Summers

Lawrence "Larry" Summers is an American economist, former Vice President of Development Economics and Chief Economist of the World Bank, and former U.S. Secretary of the Treasury. He is best known for serving as the 27th President of Harvard University. During his tenure at Harvard, Summers sparked controversy with his comments about women in science, suggesting that intrinsic differences in aptitude between men and women could be a reason for the underrepresentation of women in science and engineering fields. These remarks drew widespread criticism and played a significant role in his eventual resignation from the Harvard presidency. Go figure.


Adam D’Angelo

Adam D'Angelo is the CEO of Quora, a popular question-and-answer website. Prior to Quora, he was the Chief Technology Officer at Facebook, and might also count as one of its many cofounders besides Mark Zuckerberg. Most notably, he’s the only member of the old OpenAI Board who will still serve on this new one. Yup, the Board that ousted Sam in the first place.


Hmmm: a superb business dealmaker (the only adult in the room who could make Elon adhere to his legally binding contract to buy Twitter), an influential albeit controversial economist (Larry’s sexist comments about women’s intellect would not have gone over well with the ousted board members, both of whom are women!), and a mysterious CEO who rarely gives interviews (to this day, nobody knows why Adam (allegedly) helped oust Sam)… those are the people we currently have controlling the fate of arguably the most influential technology of our time.


Concluding thoughts (hopefully for the last time on this saga, for now)



AI might be the most important technology of this decade and the decades to come. But the story that unfolded this past weekend just shows us how incredibly flawed the people controlling it are. Savvy tech + flawed humans = pretty disastrous combo, IMHO.


No matter how flawed, irrational, and seemingly out-of-their-minds insane the now-defunct OpenAI board seems in its decision to fire Sam, there’s still a big, giant, humongous elephant in the room: whatever event triggered that decision in the first place.


It was reported that Sam has fundamentally agreed to a thorough investigation upon returning. Until then, I will remain skeptical, at best, of this new Board’s ability to uphold OpenAI’s lofty mission to save humanity from the super-smart robots (paraphrased) while simultaneously making lots of money. Seems a paradox, just like capitalism, effective altruism, and human beings, for that matter.