How OpenAI Showed Silicon Valley’s Ugly Side

The fallout of Sam Altman’s ouster from OpenAI invited speculation and commentary from across the tech community. Key to our fascination was the simple truth that nobody really knew exactly what went down.

Image: Upwork / Rolf van Root

Flashback: What happened at OpenAI in November

If you are one of the fortunate few who happened to be away hiking in Patagonia that weekend, here’s the quick summary:

Ilya Sutskever, one of the big brains behind OpenAI, sided with fellow board member Helen Toner on her concerns about Altman’s leadership, which were serious enough to convince the whole board - minus Greg Brockman - to push him out.

In the mind-bending series of events that followed, Brockman stepped down in solidarity with Altman, and through coordinated heart-posts on X (and an open letter) pretty much the entire team signalled that they would do the same. Altman and Brockman briefly floated the idea of a new venture until Satya Nadella, CEO of Microsoft, swept in and offered to scoop them all up.

Finally, the smoke cleared and it seemed like Altman had secured a win. The problematic board was cleared out (even Ilya, who by now had expressed regret for his initial position, gave up his seat) and replaced with a group of ‘more serious people’.

This is where the story ended. The engineers went back to work, and the EAs and the e/accs went back to bickering about exactly how to address the impending supremacy of AI.

If you come at the king…

Like the final season of Game of Thrones, the end of this saga was underwhelming, confusing and full of unanswered questions.

Wasn’t the corporate structure of OpenAI designed to prevent scenarios exactly like this? In years past, Altman had used the board’s authority to reassure sceptics about his position in the organisation. “The board can fire me, and I think that’s important,” he told an interviewer at a Bloomberg conference.

What caused Ilya’s change of heart? What assurances was he given about the future of the company and the strategic direction of OpenAI’s research?

Why did Ilya and Helen want to remove Altman in the first place? A couple of very generic statements were made to rule out the obvious explanations, but nothing that really pointed to the cause.

What of Altman’s co-founder and donor, Elon Musk, who has been a vocal critic of OpenAI’s recent direction? Why was he unusually reluctant to weigh in on the debate?

Unfortunately, we don’t know the answers to any of these questions, which is in itself a huge red flag.

OpenAI, as Musk states, was envisioned as a non-profit foundation aimed at responsible, open-source AI development. Musk wanted to combat the potential risks of AI, the havoc that rogue or misused AI technology could wreak on the world, by building out in the open: by ensuring, at the very least, that one major AI model had humanity’s interests at heart.

Image: Unsplash

What we saw unfold over that weekend should trouble anyone who believes in that mission. Not only are Musk’s concerns about OpenAI’s closed, for-profit pivot valid, but it also turns out that - when necessary - Altman can overpower the board with his leverage over employees and shareholders. He can do so from the shadows, with no explanation, no investigation, no transparency.

Hooked on acceleration

Follow the money, and you’ll find the incentives which explain the actions.

An observer might ask why OpenAI, as a non-profit foundation, ever took money from venture capitalists. It’s explained in part by the capped-profit entity owned by the foundation, which allows OpenAI to operate as a business… but even capped returns are inconsistent with the outsized, fund-returning investments that VCs look to make. Clearly, early backers like Andreessen Horowitz and Khosla Ventures expected OpenAI to become a monster long before it was clear to the public.

Similarly, you might ask why OpenAI issues such huge grants ($500k) of ‘profit participation units’ (a form of equity compensation) to employees. If this was to be a non-profit, largely R&D-driven endeavour, why make ‘profit’ the primary metric of success and reward for your workforce?

Unfortunately for Toner, who tried to execute her duty as a board member upon seeing OpenAI go wildly off mission, these new profit- and growth-driven incentives had spread their roots throughout the company. Any threat to Altman, made in the name of ‘more responsible development’, threatened that growth and thus the financial future of every employee and shareholder.

In general, these incentives are a net positive. Were this any other company, they would contribute to a fast-moving, motivated organisation under Altman’s respected leadership. What is inexplicably weird is that this was allowed to develop under the guise of a non-profit, funded by donors.

The future of AI governance

We can hope that a stronger board of more engaged figures will help keep OpenAI on the right track, and deliver an outcome which satisfies both the cautious and the keen. A board with some real heavyweights from the world of tech, with the clout to guide Altman. A board like the OpenAI board of 2019, but with firmer commitment. Even then, it would operate under a ‘Sword of Damocles’: Altman’s demonstrated willingness to walk away with the team, and Nadella’s willingness to provide them a home.

The ultimate solution for a company with the funding, mission and responsibility of OpenAI might be more drastic; there’s a very good argument that OpenAI should be a public company.

Over the years, Musk has expressed regret for taking Tesla public, and flirted with the idea of reversing the move. Mostly that has to do with short sellers, a phenomenon unique to public companies. The other obligations, like publishing quarterly statements and shareholder meeting reports, have helped build trust in the Tesla brand through added transparency. Those additional duties certainly don’t appear to have slowed Tesla down.

In fact, the argument that OpenAI should go public to ensure transparency was raised years ago. At the time, the counter-argument was that public companies have a duty to ‘maximise shareholder value’ by generating profit - which should not be a priority for OpenAI. Today, no such conflict exists anyway: OpenAI is already focused on maximising revenue. Nor is there any real legal precedent that would bind OpenAI to maximising profit, should it decide to change strategy.

No matter how arcane the corporate structure, ambitious private companies with private investors are fundamentally biased towards profit and rapid growth. Public companies generally face less extreme growth expectations, and are held to enforced standards of transparency and accountability. That seems like a far more robust and comprehensive solution to OpenAI’s dilemma, and a possible future for all organisations like it.
