Imagine walking into a bio-ethics conference and attempting to give an hour-long presentation about the best ways to build fences to contain a cloned Tyrannosaurus. Your fellow scientists would immediately interrupt you, demanding to know why, exactly, you’re so convinced that we’ll soon be able to bring dinosaurs back to life. And if you didn’t have a realistic and specific answer—something that went beyond wild extrapolations and a general vibe that genetics research is moving fast—they’d laugh you out of the room…

But in certain AI Safety circles (especially those emanating from Northern California), such conversations are now commonplace. The inevitability of superintelligence is simply taken as an article of faith.

Here’s how I think this happened…

In the early 2000s, a collection of overlapping subcultures emerged from tech circles, all loosely dedicated to applying hyper-rational thinking to improve oneself or the world.

One branch of these movements focused on existential risks to intelligent life on Earth. Using a concept from probability theory called expected value, they argued that it can be worth spending significant resources now to mitigate an exceedingly rare future event, if the consequences of such an event would be sufficiently catastrophic. This might sound familiar; it’s the logic that Elon Musk, who identifies with these communities, uses to justify his push to make humanity a multi-planetary species.
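To see the arithmetic at work, here’s a minimal sketch; every number in it is an assumption I’ve invented for illustration, not a figure from any actual risk estimate. The point is just that even a one-in-a-million-per-year event can justify a nine-figure annual mitigation budget, provided the stakes are valued highly enough.

```python
# Expected-value argument for existential-risk spending: a minimal sketch.
# Every number below is an invented assumption for illustration,
# not a figure from the rationalist literature.

p_catastrophe = 1e-6    # assumed annual probability of the catastrophic event
loss_if_occurs = 1e15   # assumed cost of the catastrophe, in dollars
risk_reduction = 0.5    # assumed fraction of the risk the program eliminates
program_cost = 1e8      # assumed annual cost of the mitigation program

# Expected annual loss averted = probability x loss x fraction of risk removed.
expected_loss_averted = p_catastrophe * loss_if_occurs * risk_reduction

# Under these assumptions the program "pays": $100M/year of spending averts
# an expected $500M/year in losses, despite the event being vanishingly rare.
print(f"Expected loss averted: ${expected_loss_averted:,.0f}/year")
print(f"Program cost:          ${program_cost:,.0f}/year")
print(f"Net expected benefit:  ${expected_loss_averted - program_cost:,.0f}/year")
```

The arithmetic itself is trivial; everything rides on the assumed probability, and it’s exactly that input, as we’ll see, that quietly shifted.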

[…] The key point about all of this philosophizing is that, until recently, it was based on a hypothetical: What would happen if a rogue AI existed?

Then ChatGPT was released, triggering a general vibe of rapid advancement and diminishing technological barriers. As best I can tell, for many in these rationalist communities, this event caused a subtle, but massively consequential, shift in their thinking: they went from asking, “What will happen if we get superintelligence?” to asking, “What will happen when we get superintelligence?”

These rationalists had been thinking about, writing about, and obsessing over the consequences of rogue AI for so long that when a moment came in which suddenly anything seemed possible, they couldn’t help but latch onto a fervent belief that their warnings had been validated, a shift that made them, in their own minds, quite literally the potential saviors of humanity.

[…] For the rest of us, however, the lesson here is clear. Don’t mistake conviction for correctness. AI is not magic; it’s a technology like any other. There are things it can do and things it can’t, and people with engineering experience can study the latest developments and make reasonable predictions, backed by genuine evidence, about what we can expect in the near future.

[…] I’ll start worrying about Tyrannosaurus paddocks once you convince me we’re actually close to cloning dinosaurs. In the meantime, we have real problems to tackle.

— Cal Newport, Why Are We Talking About Superintelligence?, 2025