Artificial Intelligence – humanity's last great adventure? We consider the implications of introducing a second, vastly superior intelligence to the world

Inside your cranium is the thing that does the reading. This thing, the human brain, has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that we owe our dominant position on the planet. Other animals have stronger muscles and sharper claws, but we have cleverer brains.

Our modest advantage in general intelligence has led us to develop language, technology and complex social organisation. The advantage has compounded over time, as each generation has built on the achievements of its predecessors. If some day we build machine brains that surpass human brains in general intelligence, then this new superintelligence could become very powerful. And just as the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would come to depend on the actions of the machine superintelligence.

We do have one advantage: we get to build the stuff. In principle, we could build a kind of superintelligence that would protect human values. We would certainly have strong reason to do so. In practice, the control problem – the problem of how to control what the superintelligence would do – looks quite difficult. It also looks like we will get only one chance. Once an unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Then our fate would be sealed.

This is quite possibly the most important and most daunting challenge humanity has ever faced, and – whether we succeed or fail – it is probably the last challenge we will ever face.

Nick Bostrom is a Swedish philosopher at the University of Oxford known for his work on existential risk.

Words: Nick Bostrom
Photography: Vincent Fournier