We're living through a period of unprecedented change: the world is transforming before our eyes, driven by political, economic, and technological forces. This moment demands a new way of thinking, a willingness to collaborate, and a commitment to using our collective ingenuity to build a better future.
The AI revolution is a powerful example of this opportunity, but it requires a united effort to ensure its benefits are shared widely and responsibly.
In this Think with Google article, we'll unpack key insights from our leadership event, Google Zeitgeist, and share the perspectives of visionaries shaping the world on how AI will impact business and society.
Put AI to work to boost productivity and creativity
AI is not new. At Google, we've been pioneering its use across products for years to improve language, image, and video understanding, and that work has led to the breakthroughs we see around us today. AI is a broad technology, far more than the early chatbots that have recently caught the imagination. Harnessing its potential can bring huge benefits to humanity, and it will take all of us working together.
We're making extraordinary scientific breakthroughs that will improve billions of lives — in disease research, materials science, health screening, energy and sustainability, and more — as well as harnessing the same technology in the tools you use every day to boost productivity and creativity.
In Europe alone, generative AI could increase the size of the economy by €1.2 trillion, creating new jobs and opening up new business opportunities. For most roles, AI will help people get tasks done better and faster, freeing them up to do more of what only people can do.
People need to be ready for sustainable jobs and future opportunities. That's why we work closely with governments, unions, and business organisations to bring digital skills training to everyone, and why we launched a €25 million fund to help with the transition.
AI also has important implications for global security and stability. In this election year, a responsible approach to misinformation is critical for safeguarding the democratic process. That's why, to help people identify AI-generated content, we've introduced new tools and policies, including SynthID, which embeds a digital watermark into images and audio created by Google AI tools.
There's a growing understanding of the positive potential of AI. To ensure these technologies are safe and inclusive, governments, tech companies, and communities must work together on the rules, skills, and tools that challenge misuse and bring out the best in humanity.
Building trust is critical to the future of AI
The ability of large language models (LLMs) to extract and summarise vast quantities of information from a wide range of sources will have a profound impact on media consumption, fundamentally changing the way people learn about the world.
This shift in behaviour allows for laser-focused prompts and is incredibly efficient, enabling people to learn much faster. But there is a risk in not knowing where the source information came from, or whether it can be trusted.
This is why I believe quality news brands will always survive: it is the role of journalists to provide accountability and trust by investigating and verifying complex information from multiple sources.
Take, for example, the UK parliamentary expenses scandal, which uncovered around 40 MPs fraudulently overclaiming costs funded by taxpayers. Whilst AI could indeed have made the job of querying the thousands of documents and summarising the findings much easier, it took human wisdom to judge what was newsworthy and what was not.
I see AI evolving in the same way as most internet technologies: in an open and competitive marketplace that empowers startups and hobbyists. That doesn't mean you won't have leaders, but there won't be a situation where only four big players can do any of this and the rest of us might as well pack up and go home. For example, a small company with access to open-source generative AI can be super powerful.
And what does the future of Wikipedia look like? I think LLMs will be used to enhance the community, such as by scanning entries at scale for biased statements, or by improving the interface so users can ask specific questions and be guided to the right answer. But ultimately, the core tenets will remain the same: sharing ideas for discussion and debate to get to the truth. After all, that's what we humans do best.
AlphaFold has unlocked the building blocks of life
I've spent the last 50 years looking at proteins. Proteins are the building blocks of life. When I started, we knew fewer than 20 protein structures; now there are over 200,000. And with the AlphaFold Protein Structure Database, co-developed by Google DeepMind and EMBL's European Bioinformatics Institute, the possibilities are limitless, as we can predict structures for more or less any protein. This will have a transformational impact on the future of medicine.
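For readers who want to explore that resource directly, here is a minimal sketch of how one might retrieve a precomputed prediction from the AlphaFold Protein Structure Database's public REST interface. The endpoint path and JSON field names used below are assumptions and may differ from the live API, so treat this as illustrative rather than an official client.

```python
# Minimal sketch: look up a precomputed AlphaFold structure prediction by
# UniProt accession. The endpoint path and JSON field names ("entryId",
# "pdbUrl") are assumptions about the public AlphaFold DB REST API and may
# differ; consult https://alphafold.ebi.ac.uk for current documentation.
import json
import urllib.request

ACCESSION = "P69905"  # example: human haemoglobin subunit alpha

url = f"https://alphafold.ebi.ac.uk/api/prediction/{ACCESSION}"
with urllib.request.urlopen(url) as response:
    entries = json.load(response)  # expected: a list of prediction records

for entry in entries:
    # Print the entry identifier and the assumed link to the predicted coordinates.
    print(entry.get("entryId"), entry.get("pdbUrl"))
```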
Life has emerged through evolution, and that really favours a learning system, because everything is a consequence of something that happened before. AI can learn those patterns, and combined with an open data ecosystem, that's very powerful in aiding our understanding of what is going on at the molecular level in our bodies and the world around us.
My recent work on enzymes, the biocatalysts of life, and on the “disease” of ageing has seen huge progress in both areas: from designing drugs that fight infectious and antibiotic-resistant diseases to new treatments for dementia and cancer that allow for healthy ageing.
The ability to design new proteins is also incredibly exciting for biodiversity, as it could solve many of our current environmental problems. For example, enzymes that eat plastics or break down radioactive waste could reduce harmful pollution.
In other areas, we're seeing scientific breakthroughs in imaging that enable us to see life in action at the cellular level, and in medicine, where new computational tools can help doctors diagnose more accurately and, when combined with robotics, perform complex surgery.
AI can do the jazzy science, but its value in the nuts-and-bolts work that improves medical practice, like preparing drug submissions for regulators or supporting routine research, should not be underestimated. It reminds me of the gold rush, where the people who got rich weren't the miners but the people who provided the buckets and spades.
Blinded by data? AI can help us make sense of it all
At the Poverty Action Lab at MIT, we test ideas rigorously to see if they solve the problems facing the poorest in society, from healthcare and education inequalities to access to credit and social mobility.
These large-scale, randomised controlled trials involve millions of people and are designed with the goal of eliminating global poverty. After all, that is the most important economic problem society will ever encounter.
Whether AI can help alleviate poverty is uncertain and could go in many directions. One thing that is clear is that inequality is not just a by-product of technological change; it is also the result of policy decisions. We need scientific evidence to assess those policies.
We know that many low- to middle-skilled jobs, for instance in the tech sector in India, will be affected. What we don't know is where new jobs will be created, how many there will be, or what the demand will be for what those jobs produce. The upside of AI is that productivity growth will be higher. In my own research, I am already seeing how AI can run through millions of simulations quickly and summarise complex findings.
In healthcare, it could be transformative for physicians, for example by running diagnostics, and could save many lives. For governments, it could help target social programmes to reach the people most in need.
But many applications of AI could also have unintended consequences. We need more rigorous evaluation to help us maximise the benefits of AI, while protecting against its downsides. I like the idea of AI keeping us honest by detecting misinformation (though I worry about its capacity to generate misinformation too).
As humans, we're often overwhelmed by the volume of data that exists in the world. Perhaps, in the future, AI can help us make sense of it all.