Gender, Power and AI: Wrestling for the soul of the network, again
Stanford’s Clayman Institute ran a virtual panel this morning called “Gender, Power, and Artificial Intelligence,” with Safiya Noble (UCLA), Catherine D’Ignazio (MIT), Angèle Christin (Stanford), and moderator Genevieve Smith, a Clayman Institute Postdoctoral Fellow. The panel applied principles from feminist tech studies to the current moment: how gender norms get encoded in data and reproduced by AI systems, and whether the technology has real capacity for equitable design and implementation at scale.
Noble’s argument throughout was that the governance conversation has gotten too high-level and universalizing while the actual outputs of these systems have profound day-to-day consequences for specific people. She cited AI’s role in the recent gerrymandering of Louisiana and Indiana as an example, and called for tripling down on long-term social science research into AI’s impacts. She also pointed out that philanthropy is retreating from feminist academic and organizational work, because that work originates from the same dynamics that critique philanthropy itself, precisely at the point when this research is sorely needed. A lot of money is moving in AI, and very little of it is funding the people best positioned to study how it affects everyone downstream.
D’Ignazio was asked directly whether feminist generative AI at scale is possible. Her answer was no, with caveats, given who owns the technology today and the current emphasis on profit motive. She suggested it is more important to consider how to organize around our relationship to technology, and how we might approach questions of profit and ownership, policy and decision-making, and data and tech governance.
She provided an example of a reasonable use case by walking us through a project from her Data + Feminism Lab. The example is documented at length in her recent book “Counting Feminicide: Data Feminism in Action,” where her team partnered with activists who scour news reports to document the gender-related killing of women and girls, including cisgender and transgender women. The lab built a very lightweight AI-based approach that streamlines the scanning and identification of news stories as possible cases to include in their project, setting up the activists to supercharge their work (note: very similar to how the NYT uses AI to analyze data for reporting). In this example, the AI’s job is task-scoped, democratically co-determined with the people who use it, and small. Smith picked this up: baked into the current LLM moment is the idea that AI must scale to be marketable, and the alternative is purpose-built models right-sized to a body of work.
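To make “task-scoped and small” concrete, here is a minimal, hypothetical sketch of what a news-triage step like this could look like. The keyword list, weights, and threshold are all invented for illustration, and this is not the lab’s actual method; the point is only the shape of the tool: it scores and flags candidate stories, and humans make every final determination.

```python
# Hypothetical sketch of a small, task-scoped triage step for flagging
# news stories as *possible* cases for human review. The keywords,
# weights, and threshold below are illustrative inventions, not the
# Data + Feminism Lab's actual pipeline.

def score_story(text: str, keywords: dict[str, float]) -> float:
    """Sum the weights of keywords that appear in the story text."""
    lowered = text.lower()
    return sum(weight for kw, weight in keywords.items() if kw in lowered)

def triage(stories: list[str], keywords: dict[str, float],
           threshold: float) -> list[str]:
    """Return stories that clear the threshold; activists review each one."""
    return [s for s in stories if score_story(s, keywords) >= threshold]

# Illustrative keyword weights (invented for this sketch).
KEYWORDS = {"femicide": 2.0, "killed": 1.0, "woman": 0.5, "girl": 0.5}

stories = [
    "Local woman killed in domestic violence incident",
    "City council approves new park budget",
]
flagged = triage(stories, KEYWORDS, threshold=1.5)
```

The design choice worth noticing is the one the panel emphasized: the model’s scope is narrow (surface candidates, nothing more), its criteria are legible to the people who use it, and it amplifies rather than replaces the activists’ judgment.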
Christin spoke at length about embodiment as a primary focus of feminist theory: AI perpetuates the “disembodied” illusion of technology, and that dynamic shows up in everything from marketing to UX to user comprehension. This spoke to my thoughts on how the single-interface design of LLM chat reproduces Haraway’s “god trick,” knowledge that presents as universal while concealing the specific, situated position it comes from.
The parallel I kept returning to, listening to this, is one I think about often with my own cohort of early bloggers, women who grew up alongside the rise of the internet — and then the rise of ad tech. The internet of the late 1990s and early 2000s was being shaped by several camps: writers, students, information architects, and user-centric researchers who saw it as an information access network and a space of possibility; entrepreneurs and opportunists who saw it as a channel for marketing, monetization, and extraction; and a smaller boycott camp that wanted to refuse the whole personal computing and digital revolution altogether.
It was generally considered weird to be a girl on a computer or a woman on the internet — so weird that many of our peers didn’t recognize us at all — and we were there anyway, making stuff, witnessing, learning, advocating, producing, influencing. So when I watch some of my old peers, many of whom are professional writers and academics today, treat LLMs as a question of refusal rather than a condition to engage with critically, I worry we are abdicating a responsibility at precisely the moment when our technical and rhetorical expertise applies. Their refusal has good logic: user-centric researchers and communities engaged extensively with the early internet, and the extractive camp still won, so why expect a different outcome here?
But Noble’s work on algorithmic bias attributes that failure not to engagement but to the institutional and financial disadvantages user-centric approaches operated under relative to gargantuan commercial interests. David and Goliath. That gap does not close through abstention. Understanding the trade-offs around tech, and producing knowledge and analysis that does not depend on investors and marketers to frame the platform and the questions, requires presence. Refusal cedes so much ground.
Overall, the recommendations from the panel were practical. Noble called for people with capital (and the political will to spend it) to consider how to put money toward socially responsible research and development. D’Ignazio called for alternative funding infrastructure outside of venture capital logic, and pointed to European digital sovereignty models as worth considering here. She also gestured at the popular AI Skeptics reading group as one current example of mad-and-commiserating-as-organizing, creating safe psychological space for people to talk about AI and its tradeoffs. Christin’s recommendation was community organizing, on the grounds that LLMs are unpopular with a lot of people who feel there is no space to say so, and that finding those spaces is itself worthwhile because it builds shared language and awareness of others’ knowledge and experiences.
Personally, I found it refreshing to hear reflections on the work (and the feelings) of being inside institutions that are being reshaped by AI, and of being responsible for some of how that reshaping gets communicated and absorbed. I’m thinking about the incredible value of interdisciplinary governance, how the commitment to governance is a specific position, and all the margins to consider.
Further reading:
Catherine D’Ignazio and Lauren Klein, Data Feminism. The foundational text on applying intersectional feminist thinking to data science practice.
Catherine D’Ignazio, Counting Feminicide: Data Feminism in Action. Extended case study of the grassroots data activism project D’Ignazio described on the panel.
D’Ignazio et al., “Feminicide and Counterdata Production.” Research paper on the counterdata methodology behind the femicide tracking project.
D’Ignazio et al., “Data Feminism for AI.” Conference paper extending the data feminism framework to questions specific to AI systems.
Safiya Noble, Algorithms of Oppression. Noble’s study of how commercial search engines reinforce racism and sexism through their ranking systems.
Donna Haraway, “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective” (1988). The original essay where Haraway introduces the god trick and the case for situated, embodied knowledge against the view from nowhere.