If we want AI to benefit most people on the planet, we have to take on the difficult task of fully recognizing the inherent value of all people and work towards the resolution of, or make peace with, our differences. Only after taking on that grand challenge can AI technologies even attempt to work for the entire human family.
Stephanie Dinkins, Transdisciplinary Artist; Professor of Art at Stony Brook University
As an artist who’s playing with some of these generative systems, it’s interesting to run into situations beyond the bounds of the “content policy” put in place by widely available generative systems. Various companies seem to be trying to curtail some of the biases people have been advocating against, like the way AI systems represent Blackness, and trying to insert equity. It’s a clumsy process. It feels like they’re just taking a giant axe to the problem. They decide, “Okay, this term or idea is not allowed, that’ll make the equity people happy.” The problem is that the fix is so broad it feels like it’s actually truncating history, especially within systems that regenerate and re-inscribe inequity themselves.
If I were a maker of one of these generative systems, I’d be very specific in the ways I would try to make those systems more equitable. For instance, if we said, “Every use of the n-word is not allowed,” how would we historically refer to what has been done and said? With policy, the tendency is to make small, piecemeal rules that end up not holding in truly significant ways. Instead, it would be useful to ask people, “What does AI need from you?” That way, we can factor in not only what that technology is, but how it’s impacting society globally – because our international boundaries are still there, but these systems are bigger than that. AI is global.
One thing that concerns me is how distracting and distracted this subject can get. We’re pointed towards all these ways that technology feels pressing, even threatening – it’s taking jobs, for example. By hyper-focusing on that, we get pulled out of the base foundational ideas that we urgently need to focus on. How do we keep our eye on the prize of what’s really going on at the fundamental level beneath these technologies? How do we not get distracted?
Perhaps that’s where the public sector comes in. I would urge the public sector not to close its eyes, and to try to understand this technology at least a little bit. There are enough systems around that you can play with. But I think that we need to think about what change means to us. I’ve come to the conclusion that continuous learning is no joke, and that we’re going to have to be fluid in order to find ways to not only be survivors within this system, but thrivers.
That doesn’t mean not fighting the threatening thing that is coming directly at you, but rather finding ways to cooperate and adapt so that, even when obsolescence is a risk, you still have a place in this world. We can see obsolescence coming at us in many different ways. How do we craft our responses in ways that position us to benefit, instead of being sidelined? And how do we not get stuck in the loop of fighting AI technologies until we relent, and admit that we’ll have to figure out another way?
I’m always about that – about not getting so far behind that you can’t catch up again, about taking advantage of opportunities when and where they are found, and about bending tech to serve the global majority well – even when it seems untamable.
Explore more
Stephanie Dinkins is a transdisciplinary artist based in Brooklyn, New York, and a professor of art at Stony Brook University. Her work often focuses on technologies such as AI as they intersect with race, gender, and our future histories. She is particularly focused on working with communities of color to co-create more inclusive, fair, and ethical AI ecosystems. In May 2023, she was the inaugural winner of the LG Guggenheim Award, which recognizes artists working at the intersection of art and technology. She was a resident at The Bellagio Center in 2022 with a project titled “Binary Calculations are Inadequate to Assess Us: Data Commons.”
More information about her work is available on her website, and you can follow her on Twitter.