To give women academics and others well-deserved—and overdue—time in the spotlight, TechCrunch is launching a series of interviews focusing on notable women who have contributed to the AI revolution. We’ll be publishing several pieces throughout the year as the AI boom continues, highlighting essential work that often goes unrecognized. Read more profiles here.
Miranda Bogen is the founding director of the Center for Democracy and Technology’s AI Governance Lab, where she works to help create solutions that can effectively regulate and govern artificial intelligence systems. She helped guide responsible AI strategies at Meta and previously worked as a senior policy analyst at Upturn, an organization that seeks to use technology to advance equity and justice.
Briefly, how did you get started with AI? What drew you to the space?
I was drawn to work in machine learning and artificial intelligence by seeing how these technologies collided with fundamental conversations about society — values, rights, and which communities are being left behind. My early work exploring the intersection of AI and civil rights reinforced for me that AI systems are much more than technical objects. They are systems that shape and are shaped by their interaction with people, bureaucracies and policies. I’ve always been adept at translating between technical and non-technical contexts and was excited by the opportunity to help cut through the appearance of technical complexity to help communities with different kinds of expertise shape how AI is built from the ground up.
What work are you most proud of (in AI)?
When I first started working in this space, many people had yet to be convinced that AI systems could discriminate against marginalized populations, let alone that anything should be done about those harms. While there is still a very wide gap between the status quo and a future where prejudice and other harms are systematically addressed, I am pleased that the research my colleagues and I conducted on discrimination in personalized online advertising, along with my industry work on algorithmic fairness, helped lead to major changes to Meta’s ad delivery system and to progress toward reducing inequalities in access to meaningful economic opportunities.
How do you address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
I was lucky to work with great colleagues and teams who were generous with both opportunities and honest support, and we tried to bring that energy to any room we found ourselves in. In my most recent career transition, I was pleased that almost all of my choices involved working in teams or organizations led by exceptional women, and I hope that the field will continue to lift up the voices of those not traditionally centered in technology-oriented conversations.
What advice would you give to women looking to enter the AI field?
I give the same advice to anyone who asks: find supportive managers, mentors and teams who energize and inspire you, who value your opinion and perspective, and who put themselves on the line to defend you and your work.
What are some of the most pressing issues facing artificial intelligence as it evolves?
The impacts and harms that AI systems already have on humans are well known at this point, and one of the most pressing challenges is to move beyond describing problems to developing robust approaches that systematically address these harms — and to creating incentives for their adoption. We launched the AI Governance Lab at CDT to drive progress in both directions.
What are some issues AI users should be aware of?
For the most part, AI systems still lack seat belts, airbags and traffic signals, so proceed with caution before using them for consequential tasks.
What’s the best way to build responsible AI?
The best way to build responsible AI is with humility. Think about how success is defined for the AI system you’re working on, who that definition serves, and what context might be missing. Think about who the system might fail for and what will happen if it does. And build systems not only with the people who will use them but with the communities who will be subject to them.
How can investors best push for responsible AI?
Investors need to make room for technology makers to move more deliberately before rushing half-baked technologies to market. The intense competitive pressure to launch the newest, biggest, and shiniest new AI models leads to underinvestment in responsible practices. While unfettered innovation sings an enticing siren song, it’s a mirage that will leave everyone worse off.
AI is not magic; it’s just a mirror held up to society. If we want it to reflect something different, we have work to do.