To give women academics and others well-deserved—and overdue—time in the spotlight, TechCrunch is launching a series of interviews focusing on notable women who have contributed to the AI revolution. We’ll be publishing several pieces throughout the year as the AI boom continues, highlighting essential work that often goes unrecognized. Read more profiles here.
Amba Kak is the executive director of the AI Now Institute, where she helps create policy recommendations to address concerns about artificial intelligence. She was also a senior AI advisor at the Federal Trade Commission and previously worked as a global policy advisor at Mozilla and a legal advisor to India’s telecom regulator.
Briefly, how did you get started with AI? What drew you to the space?
It’s not a simple question, because “AI” is a term that’s been in vogue to describe practices and systems that have been evolving for a long time. I’ve worked in technology policy for over a decade, across many parts of the world, and watched the conversation shift from being all about “big data” to being all about “AI.” But the core issues we were concerned with, how data-driven technologies and economies affect society, remain the same.
I was drawn to these questions early on in law school in India, where, amid a sea of decades, sometimes centuries, of precedent, I found myself motivated to work in an area where the “pre-policy” questions, the normative questions about what kind of world we want and what role technology should play in it, remain open and contested. Globally, at the time, the big debate was whether the internet could even be regulated at the national level (the answer now seems obviously yes!), and in India there were heated debates about whether a biometric database of the entire population would create a dangerous vector of social control. In the face of narratives of inevitability around AI and technology, I think regulation and advocacy can be powerful tools for shaping the trajectories of technology to serve the public interest rather than corporate interests or just the interests of those who already hold power in society. Of course, over the years I’ve also learned that regulation is often co-opted by those very interests and can work to maintain the status quo rather than challenge it. So that’s the job!
What work are you most proud of (in AI)?
Our 2023 AI Landscape report was released in April, amid a ChatGPT-fueled AI crescendo. It was part diagnosis of what should keep us up at night about the AI economy, part action-oriented manifesto aimed at the broader civil society community. It met the moment, one in which both the diagnosis and a sense of what to do about it were sorely lacking, and in their place were narratives about the omniscience and inevitability of artificial intelligence. We pointed out that the AI boom has further entrenched the concentration of power in a very narrow segment of the tech industry, and I think we successfully cut through the hype to refocus attention on the societal and economic implications of AI, without assuming any of it was inevitable.
Later in the year, we were able to bring this argument to a room full of government leaders and top AI executives at the UK AI Safety Summit, where I was one of only three civil society voices representing the public interest. It was a lesson in the power of a compelling counter-narrative that refocuses attention when it’s easy to get caught up in the curated and often self-serving narratives coming from the tech industry.
I’m also really proud of a lot of the work I did during my time at the Federal Trade Commission as a senior advisor on AI, working on emerging technology issues and some of the agency’s key enforcement actions in this area. It was an incredible group to be a part of, and I learned the crucial lesson that even one person in the right room at the right time can really make a difference in influencing policymaking.
How do you address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
The tech industry, and AI in particular, remains overwhelmingly white, male, and geographically concentrated in very wealthy urban bubbles. But I like to move away from rehashing the “white dude AI” problem, not only because it’s now well-known, but also because that framing can create the illusion of quick fixes or diversity theater that alone won’t address the structural inequalities and power imbalances embedded in how the tech industry currently operates. Nor does it address the deep-rooted “solutionism” that accounts for many harmful or exploitative uses of technology.
The real issue we have to contend with is the emergence of a small group of companies, and within them a handful of individuals, who have amassed unprecedented access to capital, networks, and power, reaping the rewards of the surveillance business model that fueled the last decade of the internet. And this concentration of power is set to get much, much worse with AI. These individuals act with impunity, even as the platforms and infrastructure they control have enormous social and economic impacts.
How do we navigate it? By exposing the power dynamics the tech industry tries so hard to hide. We talk about the incentives, infrastructure, labor markets, and environment that fuel these waves of technology and shape the direction they take. That’s what we’ve been doing at AI Now for nearly a decade, and when we do it well, we make it hard for policymakers and the public to look away, creating counter-narratives and alternative imaginings about the appropriate role of technology in society.
What advice would you give to women looking to enter the AI field?
For women, and for others with minority identities or perspectives, seeking to critique the AI industry from the outside, the best advice I can give is to stand your ground. This is a field that will systematically try to discredit criticism, especially when it comes from people without traditional STEM backgrounds, and that’s easy to do because AI is such an opaque industry that it can make you feel like you’re always pushing back from the outside. Even when you’ve been in the field for decades, as I have, powerful voices in the industry will try to undermine you and your valid criticism simply because you’re challenging the status quo.
You and I have as much of a say in the future of AI as Sam Altman does, because these technologies will affect us all, and will potentially affect people with minority identities in disproportionately harmful ways. Right now we’re in a fight over who gets to claim expertise and authority on technology questions in society, so we really need to claim that space and hold our ground.