Elon Musk, Sam Altman and the world’s billionaires are terrified of the Google AI genius behind a 25-year-old computer game, because they think he might actually end up controlling god

I’ve been keeping an eye on the various documents released as part of the ongoing Musk vs Altman legal fight, which centres on OpenAI’s conversion to a for-profit structure and Elon Musk’s belief that he was deceived by Sam Altman. The documents, which consist mainly of emails and text exchanges between the various figures involved, turn out to have several themes running through them, and an absolutely huge one is the co-founder and CEO of Google DeepMind, Demis Hassabis.

Hassabis is widely regarded as the outstanding talent in the AI field, a fellow of the Royal Society, and in 2024 shared the Nobel Prize in Chemistry with John M. Jumper for their work on AI protein structure prediction. He also began his career in videogames at Bullfrog, before working as lead AI programmer on Lionhead’s Black & White, and founding his own developer, Elixir Studios (which made Republic: The Revolution and Evil Genius).

But we’re talking about Hassabis today because Elon Musk and his various AI buddies have a real obsession with the dude, which at times seems to veer into the unethical. The real question is whether it’s Hassabis’ trustworthiness that bothers them, or simply that they don’t want Google to create, and thus theoretically control, artificial general intelligence.

Back in February 2016, Musk emailed Sam Altman and Greg Brockman about hiring at OpenAI. “We need to do what it takes to get the top talent,” writes Musk. “Either we get the best people in the world or we will get whipped by Deepmind. Whatever it takes to bring on ace talent is fine by me.”

Musk goes on to outline what he believes to be the core issue: “Deepmind is causing me extreme mental stress. If they win, it will be really bad news with their one mind to rule the world philosophy. They are obviously making major progress and well they should, given the talent level over there.”

Musk and others put that “one mind to rule the world philosophy” squarely on Hassabis’ shoulders, and a few years later comes an even more eye-popping exchange.

“I think there are a lot of no-brainers to explore, and I put some in that doc, but the thing that keeps calling out to me is there is a very low probability of a good future if someone doesn’t slow Demis down,” writes Shivon Zilis to Musk [PDF] on 16 February 2018.

“Slowing him down is the only non-negotiable net good action I can see. You don’t realize how much you have an ability to influence him directly or otherwise slow him down. I think you know I’m not a malicious person but in this case it feels fundamentally irresponsible to not find a way to slow or alter his path.”

I mean, that’s like a mob boss talking, capisce? Musk responds: “Best to talk by phone about this later tonight. I doubt I could do so in a meaningful way.”

“OK, yes that would be good,” says Zilis. “And, ultimately up to you of course, but I really think you can so would like to at least make the case. In any case, I will sleep better at night for having tried!

“Will leave you be on the Demis stuff. I’m sure it’s hard to think about and you have so much on your shoulders all the time that I always feel terrible pushing you. I just needed to say it once since it’s been plaguing me.”

One intriguing message from Zilis to Musk [PDF] has, unfortunately, been redacted, though it again shows how Hassabis is always uppermost in their thoughts: “Will leave you be on the Demis stuff. I’m sure it’s hard to think about [redacted]. I just needed to say it once since it’s been plaguing me. [redacted].”

Later in 2018, Musk is back on Hassabis, and launches a broadside at Altman and Brockman about OpenAI’s relative position.

“My probability assessment of OpenAI being relevant to DeepMind/Google without a dramatic change in execution and resources is 0%,” writes Musk in late December. “Not 1%. I wish it were otherwise.

“Unfortunately, humanity’s future is in the hands of Demis.”

Musk then links to this NYT article about the success of DeepMind’s AlphaZero chess software, before continuing:

“And they are doing a lot more than this.

“OpenAI reminds me of Bezos and Blue Origin. They are hopelessly behind SpaceX and getting worse, but the ego of Bezos has him insanely thinking that they are not!

“I really hope I am wrong.”

(Image credit: Dan Kitwood via Getty Images)

A few days later Musk is back on his Hassabis hobbyhorse.

“OpenAI is not a serious counterweight to DeepMind/Google and will only get further behind. It is surprising that this isn’t obvious to you,” writes Musk. “In general, always overestimate competitors. You are doing the opposite.”

But Hassabis also weighed heavily on the minds of others. Satya Nadella gave pre-trial testimony on September 24, 2025 [PDF], and one of the lawyer’s questions is “Google was—I believe you said by far the dominant player in machine learning around 2015?”

“I would say so, yeah,” replies Nadella, before the lawyer mentions Google’s acquisition of DeepMind and asks whether Microsoft believed the company was “making a lot of progress in that field of machine learning?”

“I now can’t recall specifically what state DeepMind’s breakthroughs were, but yeah DeepMind was well known, even in that time frame, and Google had DeepMind, had Google Brain. They had many, many different efforts they were publishing.”

Nadella’s testimony can be condensed into his acknowledgement that Google led the AI race from the 2010s onwards, and that Microsoft was tracking DeepMind’s progress. Nadella says “I probably knew [Hassabis] a little” from around 2015 “just after I became CEO.”

The lawyer drills down and asks why Nadella and Microsoft were tracking DeepMind specifically.

“Just because of the breakthroughs that this particular regime of AI around deep neural networks were showing real promise of making breakthroughs in fields like language translation that had not been seen before. And so that’s why we were waiting to see how we could also participate and make sure that we have those breakthroughs.”

Amusingly enough, the lawyer moves on to OpenAI and Dota 2. One of OpenAI’s first goals was to create a bot that could beat humans at Dota 2, a task in which it eventually succeeded. As PCG reported last week, that was because Elon Musk had personally called Microsoft CEO Satya Nadella to secure a massive discount on access to Azure, the company’s cloud computing platform.

(Image credit: Bloomberg / Contributor – Getty Images)

Here, the lawyer begins by asking what Nadella understands about the game.

“I’m not a gamer,” says Nadella. “And I think it’s a Steam game, if I’m not mistaken.” He then goes on to give a brief overview of why he thought the game angle with AI was interesting.

“That’s why gaming environments being closed worlds were a great sort of, you know, environment to do reinforcement learning,” says Nadella. “Right? The objective function is clear. The reward function is clear.

“I forget now when and what time frame some of the breakthroughs on AlphaGo and so on happened but, you know, Demis was a game developer. There’s a long history of AI developers who came out of using games in environments, building AI bots in games, so it’s sort of a given.”

Elon Musk also gave pre-trial testimony a few days later, on September 26, 2025 [PDF], in which he reveals that poaching people from DeepMind caused a falling-out with Google co-founder Larry Page.

“I talked to dozens [of] people over the years,” says Musk. “But I think the most crucial recruit was Ilya Sutskever. In fact, the recruitment of Ilya was what actually caused Larry Page to stop being friends with me.

“So Ilya went back and forth multiple times saying he would join OpenAI or stay at Google, and ultimately agreed to join; and Larry Page and Sergey [Brin] and Demis Hassabis did everything they could to keep Ilya. When Ilya finally decided to join OpenAI, that’s what ended the friendship with Larry Page. He didn’t talk to me after that.”

The lawyer asks if Musk is still getting the silent treatment a decade later, and it’s a simple “yes” before adding “they were very upset about [Sutskever].”

Now for the comic relief of Sam Altman’s delusions of grandeur. This document [PDF] was filed on 6 January 2026, but unfortunately is not otherwise dated. Based on what he’s saying, though, I would date it in the region of 2016-2018.

“Progress fundamentally has to be made by non-profit, interesting direction you could go,” writes Altman. “Everything I perceive with OpenAI, race dynamics vs Demis + brain + whatever, gotta get there first.”

“Brain” is presumably a reference to Google Brain, Google’s in-house AI lab (which Nadella mentions above), though of course what’s amusing here is that once again Hassabis and the “race dynamics” of competing with him are living rent-free in Altman’s head. Then the ego really takes over.

(Image credit: Bloomberg via Getty Images)

“Another angle: [DeepMind will] never do that much that’s interesting,” writes Altman. “It is better for us to become increasingly kings of this industry. The choice defines us.

“You say we should have been more ok with giving it to Elon. While it’s true, it’s also the case he’s now given it to us. The grand upside is I want it. Need to stop letting distractors get to me/us. Being the Kings of AI is not so bad.”

The Kings of AI! Good band name. Also: Jesus wept. Do we really want this dweeb potentially making big decisions about humanity’s future?

In early 2019 Altman is still trying to tempt Musk into phone calls by promising “some mild Demis updates to share” [PDF], while in 2023 Mira Murati is emailing Nadella [PDF] saying “it is very important that we don’t lose researchers to Demis or Elon.”

Let’s end on one of the least-crazy things anyone says about Hassabis, and it’s from Ilya Sutskever [PDF], who actually worked with the guy.

“The goal of OpenAI is to make the future good and to avoid an AGI dictatorship,” Sutskever writes to Musk in September 2017. “You are concerned that Demis could create an AGI dictatorship. So do we.

“So it is a bad idea to create a structure [for OpenAI] where you could become a dictator if you chose to, especially given that we can create some other structure that avoids this possibility.”

It’s a great point: but answer came there none.
