No AI represents some kind of “typical” humanity; its responses are always relative to the data we’ve given it.
Mary L. Gray, Senior Principal Researcher at Microsoft Research; Faculty in the Luddy School of Informatics, Computing, and Engineering at Indiana University; Faculty Associate at Harvard University’s Berkman Klein Center for Internet and Society
We should always have transparency about what kind of data was used to train an AI model – what was left out, how that decision was made, and who made the decision to exclude certain human signals from their “foundational” model.
Our second priority should be to ask whether foundational consent was sought. If a group of people consent to contribute data to train an AI model, did they actually understand what they were agreeing to? Governance requires that people be made aware that they have the right to choose whether or not they want to be part of training a large-scale AI model. The future of AI will not be productive if its development isn’t rooted in mutual agreement and societal harmony.
Our third priority should be to get away from this sense that once a model has been released, that’s it. Currently, models are being created by institutions and private companies without guardrails or ongoing monitoring. There should be a regulatory commitment and responsibility, on the part of anybody profiting from a generative AI, to be transparent about its risks and benefits from development to deployment, including its integration into other systems. What did the creators think would happen before developing it, and what type of impact does it have after it’s released? Are the creators keeping track of their responsibilities?
At the moment, outside of privacy and security, we are missing meaningful regulation for the tech industry. History offers important lessons about what happens when powerful industries break public trust. For example, in the field of biomedicine, we expect clinical trials to be conducted with respect, transparency, justice, and beneficence, with clear statements of risks and benefits for participants. Those guardrails didn’t appear out of the blue – public health and related industries, which depend on public participation to innovate, learned some very hard lessons in the 1960s and 1970s, when they were regulated after trampling over public expectations.
We also know from history – from the telephone, the railroads, and other utilities that the public came to depend on – that these infrastructures are not just a “nice to have.” They’re essential, and so we’ve found ways to regulate them. In the case of AI, which will likely be neither “just” a product nor “just” a service – especially given that it relies on people interacting online to build its datasets – the private sector should be held accountable for a set of public obligations that companies take on when they have the power to shape how society operates at scale. I can imagine AIs being labeled with a “nutrition label” of sorts; in fact, there’s a great organization, the Data Nutrition Project, that does just that, offering details about where a model’s information comes from, where it will be going, and who is to blame for inappropriate outputs.
We have a fundamental human right to claw back our autonomy, and we have the right to be respected for who we are, who we engage with, and what those connections mean to us before they are extracted and dumped into a model that may erase our value as individuals. And we must assert these rights, because those three things – who we are, who we engage with, and what our connections mean – are also the ingredients of AI.
These rights are not just rights of consumption; they are rights of our essential humanity. At stake are our citizenship, our humanity, and the very global flow of connections we built over the last 40 years through the large-scale diffusion of computing. We can either walk away from the internet and every other infrastructure that has been digitized in that time, or we can realize that we’re in too deep not to reorient AI towards what we truly want and need from technology.
Mary L. Gray is Senior Principal Researcher at Microsoft Research; Faculty in the Luddy School of Informatics, Computing, and Engineering at Indiana University; and Faculty Associate at Harvard University’s Berkman Klein Center for Internet and Society. Her work focuses on how people’s everyday use of new technologies can transform labor, identity, and human rights. In 2020, she was named a MacArthur Fellow for her contributions to anthropology and the study of technology and digital economies. In September 2022, she was a resident at Bellagio, where she worked on her upcoming book Banality of Scale.
For more information on Mary’s work, you can visit her website or follow her on Twitter.