To give women academics and others well-deserved—and overdue—time in the spotlight, TechCrunch is launching a series of interviews focusing on notable women who have contributed to the AI revolution. We’ll be publishing several pieces throughout the year as the AI boom continues, highlighting essential work that often goes unrecognized. Read more profiles here.
Mutale Nkonde is the founding CEO of the nonprofit AI for the People (AFP), which seeks to increase the number of Black voices in tech. Before that, she helped introduce the Algorithmic Accountability Act, the DEEPFAKES Accountability Act and the No Biometric Barriers to Housing Act into the U.S. House of Representatives. She is currently a visiting policy fellow at the Oxford Internet Institute.
Briefly, how did you get started with AI? What drew you to the space?
I became curious about how social media worked after a friend of mine posted that Google Photos had labeled two Black people as gorillas in 2015. I was involved in a lot of “Blacks in tech” circles, and we were outraged, but I didn’t begin to understand that this was due to algorithmic bias until the publication of “Weapons of Math Destruction” in 2016. That inspired me to start applying for grants to study the issue further, which culminated in my co-authoring a report called “Advancing Racial Literacy in Tech,” published in 2019. The report was noticed by people at the MacArthur Foundation and started the current leg of my career.
I was drawn to questions about racism and technology because they seemed poorly understood and full of contradictions. I like to do things other people don’t, so learning more and spreading that information within Silicon Valley seemed like a lot of fun. Since “Advancing Racial Literacy in Tech,” I’ve started a nonprofit called AI for the People, which focuses on advocating for policies and practices to reduce the expression of algorithmic bias.
What work are you most proud of (in AI)?
I’m really proud of being a leading advocate for the Algorithmic Accountability Act, which was first introduced into the House of Representatives in 2019. It established AI for the People as a key thought leader on how to develop protocols that guide the design, development and governance of AI systems in compliance with local nondiscrimination laws. That led to us being included in the Schumer AI Insight Forums, serving on an advisory group for various federal agencies and some exciting upcoming work on the Hill.
How do you address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
I have actually had more trouble with academic gatekeepers. Most of the men I work with at tech companies have been tasked with developing systems for use with Black and other nonwhite populations, so they have been very easy to work with, mainly because I act as an external expert who can either validate or challenge existing practices.
What advice would you give to women looking to enter the AI field?
Find a niche and then become one of the best people in the world at it. Two things helped me build credibility. The first was that I was advocating for policies to reduce algorithmic bias while people in academia were just beginning to discuss the issue. That gave me a first-mover advantage in the “solutions space” and gave AI for the People a presence on the Hill five years before the executive order. The second thing I would say is: look at your shortcomings and address them. AI for the People is four years old, and I have been acquiring the academic credentials I need to make sure I am not pushed out of thought leadership spaces. I look forward to graduating with a master’s degree from Columbia in May and hope to continue doing research in this area.
What are some of the most pressing issues facing artificial intelligence as it evolves?
I think a lot about strategies that can be pursued to involve more Black people and people of color in building, testing and commenting on foundational models. Technologies are only as good as their training data, so how do we create inclusive datasets at a time when DEI is under attack, Black venture funds are being sued for targeting Black and female founders, and Black academics are being publicly attacked? Who will do this work in the industry?
What are some issues AI users should be aware of?
I think we should treat the development of AI as a geopolitical issue and consider how the United States could become a leader in truly scalable AI by creating products with high rates of efficacy for people in every demographic group. China is the only other major producer of AI, but its products are built within a largely homogeneous population, even though it has a large footprint in Africa. The American tech sector can dominate that market if aggressive investments are made in developing anti-bias technologies.
What’s the best way to build responsible AI?
There needs to be a multidimensional approach, but one thing to consider would be pursuing research questions that center people living on the margins. The easiest way to do this is to take note of cultural trends and then consider how they affect technological development. For example, how do we design scalable biometric technologies for a society in which more people identify as trans or nonbinary?
How can investors best push for responsible AI?
Investors should look at demographic trends and then ask themselves: will these companies be able to sell to a population that is increasingly Black and brown because of falling birth rates in European populations around the world? That should prompt them to ask questions about algorithmic bias during the due diligence process, as this will increasingly become an issue for consumers.
There is so much work to be done to reskill our workforce for a time when AI systems are doing low-cost, labor-saving tasks. How can we make sure that people living on the margins of our society are included in these programs? What can they tell us about how AI systems do and don’t work for them, and how can we use those insights to make sure AI really is for the people?