To give women academics and others their well-deserved—and overdue—time in the spotlight, TechCrunch is publishing a series of interviews focusing on notable women who have contributed to the AI revolution. We publish these pieces throughout the year as the AI boom continues, highlighting essential work that often goes unrecognized. Read more profiles here.
Chinasa T. Okolo is a fellow at the Brookings Institution in the Governance Studies program's Center for Technology Innovation. Prior to this, she served on the ethics and social impact committee that helped develop Nigeria's National Artificial Intelligence Strategy and has served as an AI policy and ethics advisor for various organizations, including the African Union Development Agency and the Quebec Artificial Intelligence Institute. She recently received her PhD in computer science from Cornell University, where she researched how artificial intelligence affects the Global South.
Briefly, how did you get started with AI? What drew you to the space?
I originally transitioned into AI because I saw how computational techniques could advance biomedical research and democratize access to healthcare for marginalized communities. During my final year of undergrad [at Pomona College], I began research with a professor of human-computer interaction, who exposed me to the challenges of bias within artificial intelligence. During my PhD, I became interested in understanding how these issues would affect people in the Global South, who represent the majority of the world's population and are often excluded from and underrepresented in the development of artificial intelligence.
What work are you most proud of (in AI)?
I am incredibly proud of my work with the African Union (AU) to develop the AU-AI Continental Strategy for Africa, which aims to help AU member states prepare for the responsible adoption, development and governance of AI. The strategy took 1.5 years to draft and was released at the end of February 2024. It is now in an open feedback period with the aim of being formally adopted by the AU member states in early 2025.
As a first-generation Nigerian-American who grew up in Kansas City, MO, and didn't leave the United States until studying abroad during undergrad, I always aimed to focus my career on Africa. Taking on such an important role so early in my career makes me excited to pursue similar opportunities to help shape inclusive global AI governance.
How do you address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
Finding community with those who share my values has been essential to navigating the male-dominated tech and AI industries.
I have been fortunate to see many advances in responsible AI and prominent research exposing the harms of AI led by Black women scholars such as Timnit Gebru, Safiya Noble, Abeba Birhane, Ruha Benjamin, Joy Buolamwini, and Deb Raji, many of whom I have managed to connect with over the past few years.
Seeing their leadership motivated me to continue my work in this area and showed me the value of going “against the grain” to make a meaningful impact.
What advice would you give to women looking to enter the AI field?
Don’t be intimidated by your lack of technical background. The field of artificial intelligence is multi-dimensional and requires expertise from various fields. My research has been heavily influenced by sociologists, anthropologists, cognitive scientists, philosophers, and others within the humanities and social sciences.
What are some of the most pressing issues facing artificial intelligence as it evolves?
One of the most important issues will be improving the equitable representation of non-Western cultures in prominent language and multimodal models. The vast majority of AI models are trained on English-language data representing predominantly Western contexts, which leaves out valuable perspectives from the majority of the world.
In addition, the race to build larger models will lead to greater depletion of natural resources and greater impacts on climate change, which already disproportionately affect countries in the Global South.
What are some issues AI users should be aware of?
A significant number of AI tools and systems deployed to the public overstate their capabilities and simply don't work. Many tasks that people aim to use AI for could likely be solved with simpler algorithms or basic automation.
Additionally, generative AI has the potential to exacerbate the harms observed with earlier AI tools. For years, we have seen how these tools exhibit bias and lead to harmful decision-making against vulnerable communities, which will likely increase as generative AI grows in scale and reach.
However, equipping people with the knowledge to understand the limitations of AI can help improve the responsible adoption and use of these tools. Improving AI and data literacy among the general public will become fundamental as AI tools are rapidly integrated into society.
What’s the best way to build responsible AI?
The best way to build AI responsibly is to be critical of the intended and unintended use cases of these tools. People building AI systems have a responsibility to oppose the use of AI for harmful scenarios in warfare and policing, and should seek outside guidance on whether the AI is suitable for other use cases they may target. Since AI is often an amplifier of existing social inequalities, it is also imperative that developers and researchers be careful in how they create and curate datasets used to train AI models.
How can investors best push for responsible AI?
Many argue that growing VC interest in "cashing in" on the current AI wave has accelerated the rise of "AI snake oil," a term coined by Arvind Narayanan and Sayash Kapoor. I agree with this sentiment and believe that investors must take the lead, along with academics, civil society actors, and industry members, in supporting the responsible development of AI. As an angel investor myself, I've seen a lot of dubious AI tools on the market. Investors should also invest in AI expertise to vet companies and request external audits of the tools presented in pitch decks.
Anything else you want to add?
This ongoing "AI summer" has led to a proliferation of "AI experts" who often dismiss important discussions about the current risks and harms of artificial intelligence and present misleading information about the capabilities of AI-enabled tools. I encourage those interested in learning about AI to be critical of these voices and to seek out reliable sources to learn from.