A proposed new center at Cambridge University will examine technologies, from biotechnology to artificial intelligence, that could threaten the future of our species.
“At some point, this century or next, we may well be facing one of the major shifts in human history – perhaps even cosmic history – when intelligence escapes the constraints of biology,” says Huw Price, the Bertrand Russell Professor of Philosophy and one of the Centre for the Study of Existential Risk (CSER)’s three founders.
“Nature didn’t anticipate us, and we in our turn shouldn’t take [artificial general intelligence] AGI for granted. We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous.”
While there’s little doubt that advances in engineering – from longer life to global networks – have brought great benefits to humanity, Price and his colleagues question whether the acceleration of human technologies will increase our chances of long-term survival – or do the opposite.
The critical point, they say, will come if computers match the human capacity to develop their own technologies and start adapting the world to suit themselves.
“Think how it might be to compete for resources with the dominant species,” says Price. “Take gorillas, for example – the reason they are going extinct is not that humans are actively hostile towards them, but that we control their environments in ways that suit us and are detrimental to their survival.”
The center says it has signed up luminaries in science, policy, law, risk and computing, from across the university and beyond, to serve as advisors.
“The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history,” says Price.
“We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones.”