To give women academics and others well-deserved—and overdue—time in the spotlight, TechCrunch is launching a series of interviews focusing on notable women who have contributed to the AI revolution. We’ll be publishing several pieces throughout the year as the AI boom continues, highlighting essential work that often goes unrecognized. Read more profiles here.
Brandie Nonnecke is the founding director of the CITRIS Policy Lab, based at UC Berkeley, which supports interdisciplinary research to address questions about the role of regulation in fostering innovation. Nonnecke also co-directs the Berkeley Center for Law and Technology, where she leads projects on artificial intelligence, platforms and society, and the UC Berkeley AI Policy Hub, an initiative to train researchers to develop effective AI governance and policy frameworks.
In her spare time, Nonnecke hosts a video and podcast series, TecHype, which analyzes emerging technology policies, regulations and laws, provides insight into the benefits and risks, and identifies strategies for harnessing technology for good.
Q&A
Briefly, how did you get started with AI? What drew you to the space?
I have been working on responsible AI governance for almost a decade. My background in technology and public policy, and their intersection with societal impact, drew me to the field. AI is already pervasive and has a profound effect on our lives, for better and for worse. It is important to me to help society harness this technology for good rather than be left on the sidelines by it.
What work are you most proud of (in AI)?
I’m really proud of two things we’ve accomplished. First, the University of California was the first university to establish responsible AI principles and a governance structure to better ensure the responsible procurement and use of AI. We take seriously our commitment to serve the public in a responsible manner. I was honored to co-chair the UC Presidential Task Force on Artificial Intelligence and the subsequent standing AI Council. In these roles, I was able to gain first-hand experience thinking about how best to implement our responsible AI principles in order to protect faculty, staff, students and the broader communities we serve. Second, I believe it is critical that the public understand emerging technologies and their real benefits and risks. We launched TecHype, a video and podcast series that demystifies emerging technologies and provides guidance for effective technical and policy interventions.
How do you address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
Be curious, persistent and undeterred by impostor syndrome. I have found it important to seek out mentors who support diversity and inclusion and offer the same support to others entering the field. Building inclusive communities in technology has been a powerful way to share experiences, advice and encouragement.
What advice would you give to women looking to enter the AI field?
For women entering the AI field, my advice is threefold: Seek knowledge relentlessly, as AI is a rapidly evolving field. Embrace networking, as connections will open doors to opportunities and provide invaluable support. And advocate for yourself and others, as your voice is essential to shaping an inclusive, just future for AI. Remember, your unique perspectives and experiences enrich the field and drive innovation.
What are some of the most pressing issues facing artificial intelligence as it evolves?
I think one of the most pressing issues facing AI as it evolves is not getting caught up in the latest hype cycles. We are seeing this now with generative AI. To be sure, generative AI represents a major advance and will have a huge impact, both good and bad. But today, other forms of machine learning are already quietly being used to make decisions that directly affect an individual’s ability to exercise their rights. Rather than focusing on the latest marvels of machine learning, it is more important to focus on how and where machine learning is applied, regardless of its technological prowess.
What are some issues AI users should be aware of?
AI users should be aware of issues related to data privacy and security, the potential for bias in AI decision-making, and the importance of transparency in how AI systems operate and make decisions. Understanding these issues can empower users to demand more responsible and fair AI systems.
What’s the best way to build responsible AI?
Building AI responsibly means incorporating ethical considerations at every stage of development and deployment. This includes diverse stakeholder engagement, transparent methodologies, bias mitigation strategies and ongoing impact assessments. Prioritizing the public good and ensuring AI technologies are developed with human rights, justice and inclusion at their core is fundamental.
How can investors best push for responsible AI?
This is such an important question! For far too long, the role of investors went largely undiscussed, yet I can’t stress enough how impactful investors are. I think the trope that “regulation stifles innovation” is overused and often untrue. Instead, I strongly believe that smaller companies can enjoy a late-mover advantage, learning from the larger AI companies that have developed responsible AI practices and from the guidance coming out of academia, civil society and government. Investors have the power to shape the direction of the industry by making responsible AI practices a critical factor in their investment decisions. This includes supporting initiatives focused on addressing societal challenges through AI, promoting diversity and inclusion in the AI workforce, and supporting strong governance and technical strategies that help ensure AI technologies benefit society as a whole.