To give women academics and others well-deserved—and overdue—time in the spotlight, TechCrunch is launching a series of interviews focusing on notable women who have contributed to the AI revolution. We’ll be publishing several pieces throughout the year as the AI boom continues, highlighting essential work that often goes unrecognized. Read more profiles here.
Kristine Gloria leads the Emerging and Intelligent Technologies Initiative at the Aspen Institute, the Washington, DC-based think tank focused on values-based leadership and policy expertise. Gloria holds a PhD in cognitive science and an MA in media studies, and her previous work includes research at MIT’s Internet Policy Research Initiative, the San Francisco-based Startup Policy Lab, and the Center for Society, Technology, and Policy at UC Berkeley.
Q&A
Briefly, how did you get started with AI? What drew you to the space?
To be honest, I certainly didn’t start my career aiming to be in AI. First, I was really interested in understanding the intersection of technology and public policy. At the time, I was working on my master’s degree in media studies, exploring ideas around remix culture and intellectual property. I was living and working in DC as an Archer Fellow for the New America Foundation. One day, I distinctly remember sitting in a room full of public policymakers and politicians who were using terms in ways that didn’t quite match their actual technical definitions. Shortly after that meeting, I realized that to move the needle in public policy, I needed the credentials. I went back to school, earning my PhD in cognitive science with a focus on semantic technologies and online consumer privacy. I was very fortunate to find a mentor, an advisor, and a lab that fostered an interdisciplinary understanding of how technology is designed and built. So I honed my technical skills while developing a more critical view of the many ways technology intersects with our lives. In my role as director of artificial intelligence at the Aspen Institute, I then had the privilege of thinking, engaging, and collaborating with some of the leading thinkers in artificial intelligence. And I’ve always found myself drawn to those who take the time to deeply question whether and how AI will affect our daily lives.
Over the years, I’ve led various AI initiatives, and one of the most important is just now launching. As a founding team member and director of strategic partnerships and innovation at the new nonprofit Young Futures, I’m excited to bring this way of thinking to our mission of making the digital world an easier place to grow up. In particular, as generative AI becomes table stakes and as new technologies come online, it is both urgent and critical that we help tweens, teens, and their support units navigate this vast digital wilderness together.
What work are you most proud of (in AI)?
I am very proud of two initiatives. The first is my work highlighting the tensions, pitfalls, and impacts of AI on marginalized communities. Published in 2021, “Power and Progress in Algorithmic Bias” articulates months of stakeholder engagement and research around this issue. In the report, we ask one of my all-time favorite questions: “How can we (data and algorithm operators) reframe our own models to predict a different future, one that focuses on the needs of the most vulnerable?” Safiya Noble is the original author of that question, and it is a constant consideration throughout my work. The second initiative came recently from my time as head of data at Blue Fever, a company with a mission to improve the well-being of young people in a judgment-free and inclusive online space. Specifically, I led the design and development of Blue, the first AI emotional support companion. I learned a lot in the process. Most importantly, I gained a profound new appreciation for the impact a virtual companion can have on someone who is struggling or who may not have support systems in place. Blue was designed and built to bring its “big brother energy” to help guide users to reflect on their mental and emotional needs.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
Unfortunately, the challenges are real and still very relevant. I have experienced my fair share of disbelief in my skills and experience from all types of colleagues in the field. But for every one of those negative challenges, I can point to an example of a colleague being my fiercest cheerleader. It’s a tough environment, and I hold on to these examples to help me navigate it. I also think that so much has changed in this space, even in the last five years. The skill sets and professional experiences that qualify as part of “AI” are no longer strictly computer science-focused.
What advice would you give to women looking to enter the AI field?
Enter and follow your curiosity. This space is in constant motion, and the most interesting (and likely most productive) pursuit is to stay critically optimistic about the field itself.
What are some of the most pressing issues facing artificial intelligence as it evolves?
Honestly, I think some of the most pressing issues facing AI are the same ones we haven’t gotten right since the web was first introduced: issues of agency, autonomy, privacy, justice, equality, and so on. These are core to how we position ourselves among machines. Yes, AI can make things much more complicated, but so can sociopolitical shifts.
What are some issues AI users should be aware of?
AI users should be aware of how these systems complicate or enhance their own agency and autonomy. Additionally, as debate swirls around how technology, and artificial intelligence in particular, can affect our well-being, it’s important to remember that there are tried-and-true tools for managing the more negative outcomes.
What’s the best way to build responsible AI?
A responsible AI build is about more than just the code. A truly responsible build takes into account the design, governance, policies, and business model. All of these drive one another, and we will continue to fall short if we try to address only one part of the build.
How can investors better push for responsible AI?
One specific task, which I admire Mozilla Ventures for requiring in its diligence, is an AI model card. Developed by Timnit Gebru and others, the practice of creating model cards enables groups, including funders, to assess the risks and safety issues of the AI models used in a system. Also related to the above, investors should evaluate a system holistically in terms of its capacity and ability to be built responsibly. For example, if you have trust and safety features in place or a model card published, but your revenue model exploits vulnerable population data, then there’s a misalignment with your intention as an investor. I do think you can build responsibly and still be profitable. Lastly, I would love to see more co-funding opportunities among investors. In the field of well-being and mental health, the solutions will be varied and vast, because no person is the same and no single solution will work for everyone. Collective action among investors interested in solving the problem would be a welcome addition.
