To give women academics and others well-deserved—and overdue—time in the spotlight, TechCrunch is launching a series of interviews focusing on notable women who have contributed to the AI revolution. We’ll be publishing several pieces throughout the year as the AI boom continues, highlighting essential work that often goes unrecognized. Read more profiles here.
Irene Solaiman began her career in AI as a researcher and public policy officer at OpenAI, where she led a new approach to the release of GPT-2, a predecessor to ChatGPT. After serving as AI policy manager at Zillow for nearly a year, she joined Hugging Face as head of global policy. Her responsibilities there range from building and leading the company's AI policy globally to conducting socio-technical research.
Solaiman also advises the Institute of Electrical and Electronics Engineers (IEEE), the professional association for electronics engineering, on AI issues and is a recognized AI expert at the intergovernmental Organisation for Economic Co-operation and Development (OECD).
Irene Solaiman, Head of Global Policy at Hugging Face
Briefly, how did you get started with AI? What drew you to the space?
A thoroughly non-linear career path is common in AI. My interest began the way many teenagers with awkward social skills find their passions: through science fiction. I first studied human rights policy and then took computer science courses, as I saw AI as a means of working on human rights and building a better future. Being able to do technical research and lead policy in a field with so many unanswered questions and untrodden paths keeps my job exciting.
What work are you most proud of (in AI)?
I’m most proud of when my expertise resonates with people across the AI field, especially my writing on release considerations in the complex landscape of AI system releases and openness. Seeing my paper on an AI release gradient framework prompt discussions among scientists, and seeing it used in government reports, is affirming and a good sign that I’m working in the right direction! Personally, some of the work that motivates me most is on cultural value alignment, which is dedicated to ensuring that systems work best for the cultures in which they’re deployed. With my incredible co-author and now dear friend Christy Dennison, working on a Process for Adapting Language Models to Society was a wholehearted project (and many hours of debugging) that has shaped safety and alignment work today.
How do you address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
I have found, and continue to find, my people: from working with incredible company leaders who care deeply about the same issues I prioritize to great research co-authors with whom I can start every work session with a mini therapy session. Affinity groups are extremely useful for building community and sharing advice. Intersectionality is important to emphasize here; my communities of Muslim and BIPOC researchers are continually inspiring.
What advice would you give to women looking to enter the AI field?
Find a support network whose success is also your success. In youth terms, I believe this is being a “girl’s girl.” The same women and allies I entered this field with are my favorite coffee dates and panicked late-night calls before a deadline. One of the best pieces of career advice I’ve read was from Arvind Narayanan on the platform formerly known as Twitter, stating the “Liam Neeson principle” of not being the smartest of them all, but having a particular set of skills.
What are some of the most pressing issues facing artificial intelligence as it evolves?
The most pressing issues themselves evolve, so the meta answer is: international coordination for safer systems for all peoples. People who use and are affected by systems, even within the same country, have different preferences and ideas about what is safest for them. And the issues that arise will depend not only on how AI evolves, but on the environment in which systems are deployed; safety priorities and definitions of capability differ regionally, such as the higher threat of cyberattacks on critical infrastructure in more digitized economies.
What are some issues AI users should be aware of?
Technical solutions rarely, if ever, address risks and harms holistically. While there are steps users can take to increase their AI literacy, it’s important to invest in a variety of safeguards against risks as they evolve. For example, I’m excited about more research into watermarking as a technical tool, and we also need concerted guidance from policymakers on the distribution of generated content, especially on social media platforms.
What’s the best way to build responsible AI?
With the people affected, and by constantly re-evaluating our methods for assessing and implementing safety techniques. Both beneficial applications and potential harms are constantly evolving and require iterative feedback. The means by which we improve AI safety should be examined collectively as a field. The most popular evaluations for models in 2024 are much more robust than those I was running in 2019. Today, I’m much more optimistic about technical evaluations than about red-teaming. I find human evaluations extremely useful, but as more evidence emerges about the mental burden and disparate costs of human feedback, I’m increasingly optimistic about standardizing evaluations.
How can investors best push for responsible AI?
They already are! I’m pleased to see many investors and venture capital firms actively engaging in safety and policy conversations, including through open letters and congressional testimony. I’m eager to hear more investor expertise on what’s motivating startups, especially as we see more AI use in sectors outside the core tech industries.