To give women academics and others their well-deserved—and overdue—time in the spotlight, TechCrunch is publishing a series of interviews focusing on notable women who have contributed to the AI revolution. We’re publishing these pieces throughout the year as the AI boom continues, highlighting essential work that often goes unrecognized. Read more profiles here.
Sarah Myers West is the managing director of AI Now, an American think tank that studies the social implications of artificial intelligence and conducts policy research addressing the concentration of power in the tech industry. She previously served as a senior adviser on artificial intelligence at the US Federal Trade Commission and is a visiting scholar at Northeastern University, as well as a researcher at Cornell’s Citizens and Technology Lab.
Briefly, how did you get started with AI? What drew you to the space?
I have spent the last 15 years investigating the role of technology companies as powerful political actors that have emerged at the forefront of international governance. Early in my career, I had a front-row seat observing how American tech companies showed up around the world, in Southeast Asia, China, the Middle East, and elsewhere, in ways that changed the political landscape. I wrote a book delving into how industry lobbying and regulation shaped the beginnings of the surveillance business model for the internet, despite the existence of technologies that in theory offered alternatives but in practice failed to materialize.
At many points in my career, I’ve asked myself, “Why are we locked into this very dystopian vision of the future?” The answer has little to do with the technology itself and a lot to do with public policy and commercialization.
This has pretty much been my work ever since, both in my research career and now in my policy work as co-director of AI Now. If artificial intelligence is part of the infrastructure of our daily lives, we need to critically examine the institutions that produce it and make sure that, as a society, there is sufficient friction, whether through regulation or organizing, to ensure that at the end of the day it is the needs of the public that are served, not those of tech companies.
What work in AI are you most proud of?
I’m really proud of the work we did while at the FTC, which is the US government agency that, among other things, is at the forefront of AI regulatory enforcement. I loved rolling up my sleeves and working on cases. I was able to draw on my training as a researcher to engage in investigative work, since the toolkit is essentially the same. It’s been gratifying to use these tools to hold power directly accountable and to see this work have a direct impact on the public, whether by addressing how AI is used to devalue workers and raise prices or by fighting the anti-competitive behavior of big tech companies.
We’ve been able to bring on board a fantastic group of technologists working under the White House Office of Science and Technology Policy, and it’s been exciting to see that the groundwork we’ve laid there is directly relevant to the emergence of generative artificial intelligence and the importance of cloud infrastructure.
What are some of the most pressing issues facing artificial intelligence as it evolves?
First and foremost, AI technologies are widely used in very sensitive settings, such as hospitals, schools, and borders, yet they remain insufficiently tested and validated. This is error-prone technology, and we know from independent research that these errors are not evenly distributed; they disproportionately harm communities that have long borne the brunt of discrimination. We should set the bar much, much higher. But I am concerned about how powerful institutions are using AI, whether or not it works, to justify their actions, from the use of weapons against civilians in Gaza to the disenfranchisement of workers. This is not a problem of technology but of discourse: how we orient our culture around technology and the idea that if AI is involved, certain choices or behaviors become more “objective” or somehow get a pass.
What’s the best way to build responsible AI?
We should always start with the question: Why build AI at all? What requires the use of artificial intelligence, and is AI technology fit for that purpose? Sometimes the answer is to build better, and in that case developers should ensure compliance with the law, document and validate their systems rigorously, and make everything open and transparent so that independent researchers can do the same. But other times the answer is not to build at all: we don’t need more “responsibly built” weapons or surveillance technology. The end use matters to this question, and that’s where we should start.