Holding to Account: Safiya Umoja Noble and Meredith Whittaker on Duties of Care and Resistance to Big Tech

The tech industry is a monopolizing force, and one of the many things it monopolizes is the means for producing knowledge about it. In the platform era, the machinery of the internet is locked behind closed doors, creating problems for researchers. Companies like Facebook aren’t keen to share data or the other computational resources needed to develop a complete picture of how large algorithmic systems work (and whom they work for). And these companies’ endless amounts of money give them plenty of other ways to derail critical research, in particular by exercising influence over academia and the other places where knowledge about the tech industry is made, as well as by co-opting or silencing individual researchers.

Given these obstacles, how can researchers both inside and outside of tech companies do the difficult work of research, critique, and resistance?

To discuss this and related questions, issue editor J. Khadijah Abdurahman talked with two leading critical scholars of technology. Dr. Safiya Umoja Noble is a recipient of the MacArthur Fellowship and an Associate Professor of Gender Studies and African American Studies at UCLA, where she serves as cofounder and director of the UCLA Center for Critical Internet Inquiry. She is also the author of Algorithms of Oppression: How Search Engines Reinforce Racism, the seminal book debunking the idea that technologies are neutral artifacts of progress. Meredith Whittaker is the Faculty Director and cofounder of the AI Now Institute, and a Minderoo Research Professor at NYU. She resigned from Google after organizing with her coworkers against the company’s efforts to build military AI and its failure to address rampant discrimination and sexual abuse. Abdurahman spoke with Noble and Whittaker about how to do critical tech research, and how to insist on transformative justice practices as we try to dismantle technologies of oppression.

Beacons was conceived in the wake of Dr. Timnit Gebru’s high-profile firing from Google. Similarly, the impetus for this interview was the systematic firing of Black women from academia and industry—women who were essentially fulfilling their duties and showing up as their full selves. On one hand, there’s a question of what is to be done about institutional and corporate power—but the bright lines dictating who the villains are in a David and Goliath story let us off the hook in terms of internal cultures of accountability. Are there different ways to relate to one another and be accountable as we respond to institutional repression? How is each of you thinking about these questions as we approach the one-year anniversary of Dr. Gebru’s firing?

Safiya Umoja Noble (SN): The question of how we hold ourselves accountable is really important. I know that there is always a series of conversations happening among people who work in the field of AI and ethics, which is a big tent of people holding competing and often diametrically opposed ideas. You have, for example, companies like Google that consider themselves leaders in ethical AI; and then you have the women, the people of color, the LGBTQ scholars, the activists, and the journalists who for two decades have been trying to make legible their concerns about the immorality and the politics of various types of technologies.

That legibility has also led to an intense capture by people who are not interested in the radical reimagining of or resistance to these technologies, or in the way that these technologies are reshaping society, consolidating power in the hands of a few, and making the world more socially, politically, and economically unequal. So it’s interesting to have this conversation nearly on the anniversary of Dr. Gebru’s firing from Google. Watching her ascent to that position of leadership, and then her firing when she named the racist and environmentally consequential technologies that Google is developing, is symbolic of the nefarious intent, or the willful ignoring, of the core issues at stake. It’s symbolic because there are in fact thousands of people who have been organizing for a long time around these issues, trying to ensure that conversations about AI ethics retain their political importance and are not just completely defanged and depoliticized. I don’t know, Meredith. What do you think?

Meredith Whittaker (MW): There are so many ways I can approach this. It’s very personal because Timnit is someone—along with Meg Mitchell and others—who stood up for me when Google pushed me out. So I’m also seeing an attack on support structures within these organizations and an attack on the people with the courage to call out bad behavior. I think Timnit’s firing was an inflection point that you captured in “On the Moral Collapse of AI Ethics,” Khadijah. When you published that piece in the wake of her firing, it laid bare the stakes and failures we’re confronting.

Before Timnit’s firing, there’d been enough people who were willing to—mainly in the name of civility politics—give Google the benefit of the doubt. Then the company fired Timnit, someone who had been outspoken about the racism inside of Google and who had been doing research that was exposing fundamental problems with Google’s business practices. Timnit called out racism and her work showed that the large language models at the core of Google’s current product and profit strategy are biased, environmentally harmful, and an overall problem. It was when criticism of Google’s racist culture and criticism of its harmful business practices converged that Google retaliated, in what I saw as an almost reflexive reaction. It was clear that leadership just “had enough,” and went on to make an astonishing string of unforced errors. The corporate immune system kicked into gear: “Okay, fuck it, we’re going to make a really bad PR move, which we calculate we’re powerful enough to withstand. What we can’t withstand any longer is the tension of ‘supporting’ AI ethics on one side, and selling biased, extractive, unverified AI on the other; of doing diversity and inclusion PR on one side, and discriminating against Black women on the other.” 

Google’s firing of Timnit reverberated through the AI ethics space for many reasons. One, I think, is because the space is so co-opted and so unwilling to look at who pays us, at who our community is—insofar as we are a community. And suddenly, unexpectedly, the field was faced with big existential questions, which in many cases challenged people’s comfortable status quo: can we work with Google and other tech corporations? What are the limits corporate funders actually put on our research, on the research we review as part of our conference program committee roles, on the research of the corporate-employed collaborators we co-author with? Many in the space were largely avoiding these questions which, we should note, are standard in many other fields.

This moment also illuminated some of the ways that the field is configured. People use the word community—the AI ethics community, or whatever—but a lot of times “community” is just a bunch of people a funder paid to fly somewhere, or a group of folks whose employers are willing to fund them to go to the same conference. The people in these “communities” may have vastly divergent politics and motivations, and these communities are certainly predominantly white, and often very exclusionary. I may be sitting next to someone who’s funded by the Koch Foundation, who believes Facebook is a net good. But we’re both narrated under this umbrella of “community,” and we’re all usually nice and civil to each other because clarifying political commitments in these circumstances has material stakes, and could get you kicked out of the “community.” To put it bluntly, as the leader of an organization, I have to go back home and I have to make sure my people have jobs and health insurance, that I have health insurance, that my family is supported. Of course, there’ll be certain things I do feel I can agitate or call out. But I also have a duty of care that I may be jeopardizing if I go too far, and I feel this tension constantly. 

Amy Westervelt has a really good podcast called Drilled, and the first season traces the history of Exxon. At one point the company was genuinely trying to support and understand climate science and climate change, trying to get ahead of it and figure out how its business model could adapt, potentially by provisioning other sources of non-fossil energy. This approach got traction until the company refused to adapt its business model to the science and doubled down instead, which initiated a slow process of Exxon pushing climate scientists out of the company and then turning to climate denial; for example, funding anti-climate heterodox “scientists” and related misinformation campaigns.

We’re in a similar phase in the AI ethics space. Initially, big companies accommodated the ethical implications of AI research, but when it challenged their culture and their business model, they started pushing us out, denigrating us and our research. This is happening right now. But as researchers weathering this transition, we haven’t had that real talk about things like, “Whose money do we take?” Or: “Why is so much so-called tech criticism funded by the companies whose tech is purportedly being criticized?” We’re in the process of a rude awakening. It’s like a limb coming back to life after being asleep—it’s painful for a lot of people. 

Open Letters to the Apocalypse

Part of what I was thinking through when I wrote “On the Moral Collapse of AI Ethics” was—and I’m saying this as someone who has written my own open letter—what to do with the asymmetry between writing an open letter and the stakes of techno-capitalism. The open letter I spend a lot of time thinking about is the Franck Report of June 1945, which was signed by several prominent nuclear physicists who had worked on the Manhattan Project. They delivered it to the White House, saying, “Maybe we should just let the Japanese come over and see our capability, and not drop the atomic bomb?” Then, as we all know, in August 1945, you have first Hiroshima and then Nagasaki.

So we’re facing these tremendous stakes and also have to contend with the civility politics you both alluded to. How do we negotiate the obvious villains of centralized corporate computing capital, while also negotiating among ourselves? Who is the resistance? How do we identify what that means? What does the coalition look like, and how do we start thinking about some of those bright lines?

SN: Well, one of the challenges is that not everybody who is working on these issues relates to themselves as community organizers, or relates to each other inside a politics of accountability, shared responsibility, protection, and support, or has committed to a process of hashing out hard conversations, strategies, and ideas about how to move forward. You see this most profoundly in the fairness, accountability, and transparency efforts and movement under ACM—the Association for Computing Machinery, an international educational and scientific computing society—which has really, from its inception, marginalized the more radical political critiques of these systems and has instead sought to perfect technology and to champion techno-solutionism. Though they might more recently be grafting on a Black feminist quote to open a paper, they’re still seeking to address the fairness questions around tech in terms of better algorithms or better AI.

That’s really different from what others of us are doing, those of us who think some of these technologies should not exist—to your point, Khadijah, that we shouldn’t have the Manhattan Projects of AI today. And then we look up and see that the whole world has been reorganized through these ubiquitous technology deployments, in every single industry and every sector, deployments that are, in essence, snake oil or that have profound civil, human, and sovereign rights implications. That’s actually a completely different project to be working on in the world.

Part of the challenge here is that researchers have been socialized in academia to be apolitical, or to think of themselves as scientists and not as people whose values are imbued in the work they’re doing. That is part of the problem we’re trying to contend with around the making of these technologies, which are likewise presented as neutral, as just tools. This is part of the reason why we need feminists, and why we need people who are committed and connected to social movements around the world, to contextualize our work and to make sense of what it’s working in service of. That’s really important.

MW: AI is an umbrella marketing term. It’s not a term of art that describes a specific technique. Companies apply the name AI to data-centric approaches generally, and you never quite know what you’re buying if you’re licensing an “AI” system. 

The AI boom of the last decade was not the result of a major scientific innovation in algorithmic techniques. It was a recognition that with massive amounts of data and computing power, you can make old techniques do things they couldn’t do before. The ascent of AI was predicated on concentrated tech company power and resources which had, as their driving force, the surveillance business model. 

One thing we rarely discuss is how AI research and development’s dependence on corporate resources worked—and continues to work—to shape and in some cases co-opt knowledge production. In other words, to “do AI” as defined in the current “bigger is better” paradigm, you increasingly need resources that are controlled by this handful of companies. You need access to really expensive cloud compute, and you need access to data that is hard and sometimes impossible to get. You can’t just go to the data market and buy it—you often need to get access from the data’s creators or collectors, who are often the tech companies. It’s fair to say that academic computer science disciplines underwent a kind of soft capture: as a condition of doing “cutting edge” AI research over the last decade, they became increasingly dependent on corporate resources and corporate largesse.

This dynamic led to practices like dual affiliation, where professors work at a tech company but have a professorial title and produce research under their university affiliation. It’s led to tech companies moving whole corporate labs into the middle of universities—like Amazon’s machine vision lab at Caltech. We have a structural imbrication between a massive, consolidated industry and knowledge production about what that industry does. And this compromised entanglement has bled into the fairness and ethics space, in many cases without anyone commenting on it. There are many forces working against our recognition of how captured the technical disciplines are at this time, and how easy it is for them to extend this capture into fairness, ethics, and other disciplinary pursuits focused on the consequences and politics of tech. 

To pick one example, Amazon is underwriting half of the National Science Foundation’s Fairness in Artificial Intelligence grants. And while a few people called this out, the fields concerned went on to apply for this funding, and uncritically applauded colleagues who received it. Whole labs are reliant on Amazon, Google, Facebook, Microsoft funding, and if you raise questions about it you’re endangering your ability to support your postdocs, your ability to obtain future funding, your standing with your dean. Or, you’re endangering your colleagues in these same ways. Dissecting the particularities of what it means to be able to do research on AI and related technologies, and how dependent this work often is on corporate resources, is a project that I think can help develop a clearer political-economic read of tech and the tech industry overall, and reveal the capital interests that are propelling research and knowledge production into tech and its implications. 

SN: This is a critical area especially during the time of Covid-19, when we saw how fragile so many of our public institutions are. We really feel that at a place like UCLA, where teaching assistants aren’t paid adequately, it’s extremely expensive to get an undergraduate degree, and the pressures to deliver public education are intense. Many, many systems are broken, and it is very painful to work under those kinds of broken systems. 

Meredith, I recognize this tech sector political economy you’re describing. These companies are capturing not only scholars but policymakers who, in essence, use public money to subsidize the entire industry, both through the research efforts at the National Science Foundation and also by making it impossible for democratic public institutions to flourish, because the companies don’t pay their fair share. They offshore their profits, and they don’t reinvest them back into the communities where they do business in extremely exploitative ways. They just expect the public to underwrite it through tax refunds. How in the world can companies like Apple get tax refunds except through pure corruption? As we struggle in our communities and within our institutions, we have to identify why those conditions are present. We have to recognize who has monopolized all of the resources, and we have to examine the narrative about what’s happening with those resources.

I want to ask you about social media and “cancel culture.” In July 2020, Harper’s published “A Letter on Justice and Open Debate,” signed by a number of prominent people, including Noam Chomsky and J.K. Rowling. The letter criticized “an intolerant climate” on the Left, and in particular, “an intolerance of opposing views, a vogue for public shaming and ostracism, and the tendency to dissolve complex policy issues in a blinding moral certainty.” The following year, in August 2021, The Atlantic published a piece by Anne Applebaum called “The New Puritans,” which used Nathaniel Hawthorne’s The Scarlet Letter to criticize social media “mob justice.” The irony of invoking the white woman’s public humiliation for being pregnant out of wedlock is that the book was published more than a decade before the Civil War. Black and Indigenous peoples’ ongoing bondage and claims to liberation are as unnamed in the book as they are in today’s epistles of moral panic.

But how do we negotiate this issue? Is calling people out on Twitter our only mode of addressing power dynamics in the AI ethics space? How can we put forward a vision that is constructive and not just reactive, even though our operational capacity is so low, even though we’re all exhausted, grieving, and torn in so many different directions? What is our vision for transformative justice in the context of knowledge production?

MW: Look, I have a lot to say here. First, I think there’s a visibility bias: people see when calls for accountability and redress spill into the public. They rarely see the agonizing work of trying to hold harmful people and institutions accountable behind the scenes. Work that’s too often not only unrewarded, but actively punished. Like many people, I have engaged in a number of accountability processes that didn’t end with Twitter callouts and are not visible to the public. In my experience, Twitter is always a last resort. There are failures upon failures upon failures within these institutions and with the way power moves within them, all of which happen before someone is going to take to Twitter or call on social media as a witness. Timnit taking to Twitter didn’t save her job. 

Buried in the moral panic around “cancel culture” is a burning question about how you hold power to account when you’re in an institution that will punish you for doing so. What do you do when your wellbeing and duties of care dictate that you confront and curtail harmful behavior, but you know that any such attempt risks your livelihood and institutional and professional standing? Institutions protect power. Universities don’t want to touch a star professor who’s bringing in press and grants; tech companies have every incentive to coddle the person architecting the algorithm that is going to make them a shit ton of money. These corporations and corporate universities are structured to protect flows of capital and, by extension, to protect the people who enable them. There are infrastructures in place—including HR and most Title IX offices—to make sure that those who enable the interests of capital are elevated and to make sure that it’s as painful as possible for the people who might report anything. 

This is the backdrop against which we’re trying to figure out how we, as people within these environments, protect ourselves and each other. In my view, the answer to this question doesn’t start with building a better HR, or hiring a diversity consultant. It’s rooted in solidarity, mutual care, and in a willingness to understand ourselves as committed to our own and others’ wellbeing over our commitments to institutional standing or professional identity. 

That’s also a question of how we can be accountable ourselves. Especially as people who have institutional power, and who may experience favorable treatment from the same people who harm those with less power. In other words, the more power we have the less we can rely on our experience of people and institutions as an accurate barometer, because there’s every incentive to act the sycophant. This means we need to actually listen to, elevate, believe, and act on the accounts of those with less power, especially Black people and historically marginalized people for whom institutional abuse is compounded. And we need to be willing to put their safety and wellbeing above our institutional and professional standing. This is very hard, but in my view it’s the floor. If you can’t do it, then you shouldn’t be in a position of leadership. 

SN: I relate so much to all the things you’re saying. I think we’re in a long struggle around creating systems of mutual care, aid, and support, and that is very difficult. Most of the environments we’re trying to build those systems within, like academia, are hostile to getting work done and getting people supported properly. Having said that, we have to keep building these networked communities. We have to be agile, and we have to think about how we’re going to create more space for others to do their work.

In my own experience and my own career, I have felt at many times completely unsupported. I have felt like if I could just expand the circle at some point in my career so that more people could be supported, that would be something. The question is, to what degree can we institutionalize that so that all of the possibilities don’t hinge on one person in one space or place, but that we remake entire systems? We want those systems to last and not rely on any one particular person. That’s difficult work and we need to be sharing ideas about how to do that. 

But the problems that we’re working on are very big problems in the world and in our communities. I think about abolitionist traditions: you know, how did a handful of people change the world? Millions of Americans got up every day and made pancakes and went to work while people were being human-trafficked right in front of them and enslaved, beaten, lynched, and harmed on the regular. How did abolitionists, in the face of those conditions, abolish the transatlantic slave trade or change the laws around the enslavement of African peoples? How did others resist the expansion into First Nations and Indigenous peoples’ lands? There weren’t millions of people working on these issues. It was, relative to the population, a very small number of people who worked on those things. 

I guess I feel heartened by the fact that if enough people can be coordinated, a lot of change can happen. That is why we have to study history and study social movements to figure out how they did it and how to make it last. Especially for those of us right now living through the rollback of the Civil Rights Movement, we know that those changes can also be precarious. We have to figure out how to make them last. 

On Some Global Tech Resistance

Academia is coming for our lives: so much of our time is swallowed up by institutional administrative overhead, and at the same time we’re facing major stakes that are global. It’s so difficult to even stay on top of our own “domain expertise,” so how do we facilitate transnational solidarity? How do we think about this work as global? What points of connection do you identify, intellectually and politically, in your own work? What kind of infrastructure represents the next steps that could be taken to bolster this kind of transnational research?

SN: We have to keep our diasporic commitments intact while we’re doing our work. And of course, we sit here in the heart of the American technology empire. From this location, we have a responsibility to press these companies, and governments, to ease exploitation around the world. Many of us understand these questions because we come from internal colonies of the United States, which is one of the ways that sociologists have talked about Black people’s experience in the Americas. Our work is connected materially to other people’s lives around the world.

We have to be in community. We have to be in conversation. And we also have to recognize which piece of the puzzle is ours to work on. While it is true that, yes, we’re just individual people, together we’re a lot of people, and we can shift the zeitgeist and make the immorality of what the tech sector is doing—through all its supply chains around the world—more legible. It’s our responsibility to do that as best we can.

MW: Yeah, I agree. I’m a white lady raised in LA. I had to educate myself on so much that I didn’t understand, and that process is humbling and ongoing. 

My voice doesn’t need to be the center of every conversation. But, okay, if I have a little power and a little standing, maybe I can move capital, maybe I can ask people what they need and see what I can do to get it to them, to support and nurture their expertise and organizing and approaches, which may be completely unfamiliar to me, and may not need any advice or insight from me. I’m thinking of the ACM Conference on Fairness, Accountability, and Transparency (FAccT). Briefly, it’s a computer-science-focused conference exploring fairness in algorithms. Over the years, we have seen increasing calls to examine algorithmic and other technologies in the context of racial capitalism and structural inequality, accompanied by warnings about the insufficiency of narrow FAccT-style technical approaches to the problems of algorithms and tech. So, what was the response? From many people, it wasn’t a re-evaluation of the field, but instead a move to absorb. Like, “Oh, well, how about we bolt an Audre Lorde quote onto this computational social science paper.” This response continues to place computer science at the center, with racial justice as seasoning. Even though there are, of course, Black feminist conferences that could use some funding, and that have been deep in these topics for decades before FAccT. So my question is, why is the instinct always to absorb into the core instead of diffusing resources to those already doing the work?

I mean, I fuck with that. We need allyship in the form of funneling actual, material support out of these Western institutions.

Originally appeared in Logic Magazine.
