When I began working on my PhD in cognitive neuroscience at UCL in 2005, seeking inspiration for artificial intelligence from the inner workings of the brain, AI was almost an embarrassing term in academic circles.
If you talked about bringing intelligence to machines, many professors would instantly stop taking you seriously as a scientist.
Fast forward to today, and there is widespread recognition that AI will play a critical role in the next chapter of our society and economy.
By accelerating science to operate at digital speed, transforming industries, and enabling new forms of creativity and artistic expression, AI is the defining technology of our time.
Yet, there are also growing concerns about potential misuses of AI, such as disinformation, cyber attacks and the impact of automation on jobs.
When I joined Prime Minister Rishi Sunak to open London Tech Week, we agreed that 2023 could be a defining moment for the UK and the world when it comes to the emergence of AI.
If we manage this moment well – which I believe we can – this could mark the beginning of a new era of growth, innovation and scientific discovery for the UK. But how can we seize this opportunity as a society?
First, we need to recognise the massive potential of AI to improve people’s lives.
At Google DeepMind we created a system called AlphaFold which was used to predict the structure of more than 200 million proteins – essentially all the proteins known to science – and which promises to help transform our understanding of diseases and the process of drug discovery.
In the next few years, we will see many other applications of AI that will be equally astonishing and useful. As a country we must have the boldness and imagination to harness this technology.
This does not mean ignoring the challenges. We must have a balanced perspective on how AI could impact our society.
Building this technology safely and responsibly is a major focus for Google DeepMind, which is why last month I joined other AI leaders in signing an open letter from the Center for AI Safety, calling for AI’s most severe risks to be treated as a global priority.
We should apply a safety-first approach to any powerful new technology, but that doesn’t mean ignoring the near-term potential harms of AI in order to focus on the long term.
We must prepare for both, recognising that if we only respond when more powerful systems exist, it will be too late. This is what responsible development of AI looks like.
I’ve been grateful that the Prime Minister, Sir Keir Starmer and many other leaders from across the political spectrum have shared this approach in their engagement with us and other tech leaders over recent months.
The UK government’s AI white paper demonstrates a commitment to supporting bold yet responsible development of AI, with regulators empowered to confidently move forward and respond as risks emerge.
At the same time, just as the challenges of AI cannot be managed by any one company, we must recognise they cannot be managed by any one country or institution. We must explore multiple solutions to drive global cooperation.
It is critical that democratic states work together to ensure this technology is deployed safely and responsibly.
I was heartened by what I heard from President Biden and Vice President Harris when I visited the White House with other tech leaders last month, as well as last week’s announcement from the UK government that the first global summit on AI safety will take place here later this year.
For many years I have strongly believed that new forms of governance are needed to provide oversight and transparency of this technology, drawing on government, industry, researchers and civil society.
These efforts now take on new urgency, and over the coming months I hope the US, UK, EU, and others will advance this vision together.
The AI age promises immense opportunities for the UK.
Our tech and university ecosystem has unique strengths, which is why my co-founders and I chose to build DeepMind in the UK in 2010 and were able to grow it into one of the world’s leading AI labs.
If we act quickly and responsibly, promoting a sober, balanced perspective of AI’s challenges and opportunities, I believe AI can make life better for billions of people around the world.
That is the optimistic vision of our future we must focus on achieving.
Demis Hassabis is the chief executive of Google DeepMind