There was long an assumption in the U.S. that technologies would contribute to the advancement of democracy – but they didn't.
Marietje Schaake, International Policy Director of Stanford University’s Cyber Policy Center and International Policy Fellow at Stanford’s Institute for Human-Centered Artificial Intelligence
The Russian interference in the 2016 election began to change that thinking, but the watershed was January 6th, 2021, with the storming of the U.S. Capitol. In the E.U., we were already working on laws to offer countervailing powers to the outsized power of technology companies, but because these are U.S.-based companies, legislation – or the absence thereof – in America has strong ripple effects around the world.
With AI specifically, this new focus on its potential harms is also shaped by the belated recognition of the harms of social media. That’s frontloaded a lot of concerns, such as disinformation, discrimination, and automation. However, the single greatest challenge in terms of mitigating those harms is the concentration of corporate power. That single issue, with companies holding vast data sets and computing power, is the cause of so many second-tier problems. It prevents academics, journalists, and civil society leaders from independently investigating the workings of AI systems.
When I talk to people still in government about this, my general principle is that access to information about AI systems is crucial, whether it’s access for academic researchers or for regulators. I regularly speak with engineers within these companies, and a number of them have told me that even they struggle to understand how these products work. The unpredictability of AI is unprecedented, and we must be able to probe these systems to develop a better understanding that will allow us to hold their producers accountable.
From a regulatory perspective, AI applications are difficult to grab hold of. The challenges are so individualized, so fluid, and so proprietary. What kind of policies do we need around AI? Are our existing policies well enforced? I also think that the recent calls from figures like Sam Altman for E.U. regulation of the AI industry are calculated, and not necessarily genuine. As soon as there’s a tangible regulatory proposal on the table, he’ll announce that OpenAI is leaving Europe because of over-regulation. We need to force both ourselves, and these CEOs in particular, to be specific, because right now “regulation” essentially means nothing. Regulation is a process that can lead to an endless number of different destinations, and the tendency to speak about it as something singular shows how the discussion is not as sophisticated as it should be. The lack of a well-informed, common understanding of what AI is and how it works prevents debate about the guardrails we want it to have. It doesn’t help the advancement of public policy, and therefore it doesn’t help democracy. Meanwhile, these companies are racing ahead into the new realities created by their products and services.
It’s become increasingly clear that the whole field of technology policy is Western-centric. People look a lot to Brussels and Washington, and maybe to Beijing or Delhi, but vast numbers of communities around the world are excluded from the discussion. In response, I approached Francis Fukuyama so we could co-edit a volume of papers focused on digital technologies in emerging countries, and at a recent convening at the Bellagio Center we brought together experts from around the world – including from the Global South – to discuss building a more substantial and permanent hub at Stanford for research, education, and analysis, with a global perspective on technology policy that we feel is missing.
We have the outlines of the program, but we’re only set in our intentions, not our conclusions. Our hope is to fund and build a great program at Stanford, and also facilitate new connections between people from the Global South and Silicon Valley, with new partnerships that support local communities without fostering a brain drain. It’s important to do this in a constructive way that doesn’t replicate the wrongs of our colonial past. The harms of these new technologies are very specific, and there are millions of people who feel that the big technology companies are making crucial decisions about their lives, their educations, their democracies, and their freedoms from a place that’s completely out of reach.
People seem either terrified or excited about AI – but while I’m not in the “existential risk” school, I see a lot of reason for concern. What’s crucial is the principle that AI must be governed, and I want that governance to be democratic. I don’t want to live in a society where corporations make decisions about our lives and societies instead. These are big questions, but meanwhile a lot of the attention goes to how cool ChatGPT is. Excitement can cloud people’s analysis, and unfortunately I think democracy is much more fragile than when I was first elected in 2009. Unaccountable technologies and their corporate governors form a challenge to democracy – and we should not take it lightly.
Marietje Schaake is International Policy Director at Stanford University’s Cyber Policy Center, and an International Policy Fellow at Stanford’s Institute for Human-Centered Artificial Intelligence. From 2009 to 2019, she was a Member of the European Parliament for the Dutch political party Democrats 66. She attended Bellagio in 2019 for a convening titled “Designing a Future for AI in Society,” and she organized another convening in 2023 titled “Emerging Technologies in Global Contexts: Identifying Education and Policy Needs”.
For more of her insights into AI governance, see “AI’s Invisible Hand: Why Democratic Institutions Need More Access to Information for Accountability,” a report she authored for The Rockefeller Foundation in 2020. More information about her work is available on her faculty bio, or you can follow her on Twitter.
August 2023
Welcome to a special edition of the Bellagio Bulletin, where you’ll have a chance to hear from leading voices within the alumni network on one of the greatest global challenges of our time – the ethical application and governance of artificial intelligence. We hope you’ll find their points of view as illuminating as we have […]