The combination of AI with science, or AI with culture, will deliver profound benefits when rooted within this larger economy of information abundance.
Tim O’Reilly
Founder and CEO of O'Reilly Media
This is alongside many of the other ways in which AI will help us deal with the world’s largest problems, some of which I explored in my 2017 book WTF?: What’s the Future and Why It’s Up to Us. By automating intellectual work, AI can help us manage the demographic inversion, in which many countries will have more elderly people than young people. It will give us more leisure time to spend on friendships and creative pursuits, as well as on caring for each other. Lifestyles that are currently available only to the wealthiest could be available to many more people.
However, while there is enormous potential for enhancing human creativity and productivity, we need to treat this AI moment as an occasion for deep self-reflection about who we want to be and how we want to act. When an AI shows us bias, we should trace it back to the source. For example, racial bias in sentencing algorithms originates in the biased decisions of human judges. It takes humans to elicit that bad behavior; a model doesn’t just do it on its own. We need to fix us, not just the mirror.
In response, I’ve been spreading the idea that we don’t yet understand this technology well enough to regulate it effectively. The very first set of regulations should be ones designed to increase our understanding. That means formalizing the ways companies currently govern their AI. However, my approach is slightly different from that of most policymakers and activists. These companies all say they want their AI to be fair, unbiased, and helpful to humanity. But what do they actually measure? We don’t know the details of their attempts at “regulation.” There have been many horror stories in the press about AI, and safeguards against misuse are being built reactively. Instead, I think that a repeatable metrics framework, akin to financial reporting but focused on the “operating metrics” that companies use to evaluate and manage the systems they create, would be an ideal place to start.
Those standards can then evolve as we learn more. We can also then see what’s not being measured – after all, these are often large centralized systems, and they could just as easily report the number of people using AI to do bad things as any other usage statistic. We want a framework that encourages a lot of experimentation, but we also don’t want companies to go rogue.
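As a rough illustration of what such a disclosure might contain, here is a minimal sketch in Python. The field names, categories, and numbers are hypothetical assumptions for illustration only, not an existing reporting standard or anything O’Reilly has specified.

```python
# A hypothetical sketch of a standardized "operating metrics" disclosure
# for an AI system. All fields and values below are illustrative assumptions.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class OperatingMetricsReport:
    system_name: str
    reporting_period: str                    # e.g. "2023-Q3"
    monthly_active_users: int
    flagged_misuse_incidents: int            # uses the company itself classified as abuse
    safety_interventions: int                # e.g. refusals or content filters triggered
    bias_audit_results: dict = field(default_factory=dict)  # metric name -> measured value

    def to_json(self) -> str:
        """Serialize the report so it can be published and compared across companies."""
        return json.dumps(asdict(self), indent=2)


# Example: one quarterly disclosure with made-up numbers.
report = OperatingMetricsReport(
    system_name="ExampleAssistant",
    reporting_period="2023-Q3",
    monthly_active_users=1_200_000,
    flagged_misuse_incidents=845,
    safety_interventions=31_000,
    bias_audit_results={"sentencing_recommendation_disparity": 0.04},
)
print(report.to_json())
```

The point of a schema like this is comparability: once every company reports against the same fields, regulators and the public can see trends over time and notice which quantities are conspicuously absent from the reports.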
I’m optimistic about the very same things that I’m afraid of. In the Whole Earth Catalog, Stewart Brand wrote, “We are as gods and might as well get good at it.” And then there’s Stan Lee’s line from Spider-Man: “With great power comes great responsibility.” They’re both the same sentiment. We are unleashing amazing capabilities that can be used for good: tackling climate change, geopolitical conflict, and economic inequality. But they can also be used for evil, and we must take that seriously. If our old patterns are built into the systems of the future, they will entrench the idea that only a select few people are meant to become insanely powerful and insanely wealthy, thereby reinforcing one of the worst biases of all.
Explore more
Tim O’Reilly is the founder and CEO of O’Reilly Media, an American learning company that publishes books and provides an online technology learning platform that is used by thousands of companies and millions of users worldwide. He is also a Visiting Professor of Practice with the Institute for Innovation and Public Purpose at University College London. Tim attended a Bellagio convening in 2019 titled “Designing a Future for AI in Society.”
For more of Tim’s insights on AI, he authored “We Have Already Let The Genie Out of The Bottle” for The Rockefeller Foundation in 2020. To find out more about Tim’s work, read his bio on O’Reilly Media, or follow him on Twitter.