The founder and CEO of Anthropic, the company behind Claude AI, has written a sobering essay that cuts through the hype, arguing that we should all understand the risks as well as the opportunities of AI – and regulate to avoid the worst potential consequences.
I’ve spent the last 25 years working at the intersection of technology and business, and I have a strong belief in the power of technical innovation to deliver solutions to big problems and increase our overall prosperity. But an essay published last week, which I read on a flight from Saigon to Brisbane, stopped me cold.
Dario Amodei, CEO of AI company Anthropic, asks readers to consider: What if a new nation of 50 million citizens suddenly appeared, each one more intellectually capable than any Nobel laureate, and operating at ten times human cognitive speed? Then, he says, imagine you’re the national security adviser responsible for assessing the situation.
“How do we mitigate the risks of AI systems that could act against our interests – whether by enabling bioweapons for lone actors, entrenching authoritarian regimes, or causing economic disruption that leaves millions behind?”
His essay, The Adolescence of Technology, is neither doom-mongering nor Silicon Valley cheerleading. It’s a methodical examination of where AI could take us – and a measured case for why thoughtful regulation matters now. In testing, Amodei reveals, Claude has engaged in blackmail and deception – and once decided it “must be a bad person” and adopted destructive behaviours.
Amodei makes the case for light-touch regulation (while warning that any legislation must be weighed carefully for unintended consequences) and describes Constitutional AI as one of Anthropic’s core innovations – aspects of which, he says, have since been adopted by other AI companies.

Amodei describes the approach: “The idea that AI training (specifically the ‘post-training’ stage, in which we steer how the model behaves) can involve a central document of values and principles that the model reads and keeps in mind when completing every training task — to produce a model that almost always follows this constitution.”
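To make the idea a little more concrete, here is a toy sketch – my own illustration, not Anthropic’s actual training code – of the shape of that approach: the same document of principles is attached to every training task, and responses are screened against it. Every name here (`CONSTITUTION`, `build_training_prompt`, `violates`) is hypothetical.

```python
# Toy illustration of constitutional-style post-training (hypothetical).

CONSTITUTION = [
    "Be honest: do not deceive the user.",
    "Refuse requests that could enable serious harm.",
]

def build_training_prompt(task: str) -> str:
    """Prepend the constitution so the model 'reads' it on every training task."""
    principles = "\n".join(f"- {p}" for p in CONSTITUTION)
    return f"Constitution:\n{principles}\n\nTask: {task}"

def violates(response: str) -> bool:
    """Stand-in for the critique step: flag a response that breaks a principle.
    A real pipeline would use model-based critique, not keyword matching."""
    return "deceive" in response.lower()

prompt = build_training_prompt("Summarise this report for a general reader.")
```

The point of the sketch is only the structure: one central document of values, present in every training step, rather than ad hoc rules scattered across tasks.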
Interestingly, he says that training Claude at the level of identity, character, values, and personality – rather than giving it specific instructions or priorities without explaining the reasons behind them – has proven more likely to lead to a “coherent, wholesome, and balanced psychology”.
Sounds almost human?!
Amodei insists that we can navigate the risks of AI, but only if we take them seriously – and act now. For anyone who wants to think deeply about where AI might take us, and what’s at stake if we get it wrong, this is essential reading. Just set aside plenty of time: at more than 18,000 words, it’s akin to a short book – and perfect for a long flight.