To give women academics and others their well-deserved—and overdue—time in the spotlight, TechCrunch is publishing a series of interviews focusing on notable women who have contributed to the AI revolution. We’re publishing these pieces throughout the year as the AI boom continues, highlighting essential work that often goes unrecognized. Read more profiles here.
Miriam Vogel is the CEO of EqualAI, a nonprofit organization created to reduce unconscious bias in AI and promote responsible AI governance. She also serves as chair of the newly formed National Artificial Intelligence Advisory Committee, mandated by Congress to advise President Joe Biden and the White House on AI policy, and teaches technology law and policy at Georgetown University Law Center.
Vogel previously served as associate deputy attorney general at the Department of Justice, advising the attorney general and deputy attorney general on a wide range of legal, policy and operational matters. A board member at the Responsible AI Institute and a senior advisor to the Center for Democracy and Technology, Vogel has advised White House leadership on initiatives ranging from women’s, economic, regulatory and food safety policy to criminal justice issues.
Briefly, how did you get started with AI? What drew you to the space?
I began my career working in government, initially as a Senate intern the summer before 11th grade. I got the policy bug and spent the next several summers working on the Hill and then the White House. My focus at that point was on civil rights, which is not the conventional path to AI, but looking back, it makes perfect sense.
After law school, my career evolved from entertainment attorney specializing in intellectual property to civil rights and social impact work in the executive branch. I had the privilege of leading the Equal Pay Task Force while serving in the White House, and, while serving as associate deputy attorney general under former Deputy Attorney General Sally Yates, I led the creation and development of implicit bias training for federal law enforcement.
I was tapped to lead EqualAI based on my experience as a technology attorney and my background in policy addressing bias and systemic harm. I was drawn to this organization because I realized that AI represented the next frontier of civil rights. Without vigilance, decades of progress could be undone in lines of code.
I’ve always been excited about the possibilities that innovation creates, and I still believe that AI can present amazing new opportunities to empower more people — but only if we’re careful at this critical juncture to ensure that more people are able to participate substantially in its creation and development.
How do you address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
I fundamentally believe that we all have a role to play in ensuring that our AI is as effective, efficient and beneficial as possible. That means doing more to support women’s voices in its development (women who, by the way, represent more than 85% of the US consumer market, so ensuring their interests and safety are included is a smart business move), as well as the voices of other underrepresented populations of various ages, regions, ethnicities and nationalities who are not yet sufficiently engaged.
As we work towards gender equality, we need to ensure that more voices and perspectives are considered in order to develop AI that works for all consumers — not just AI that works for developers.
What advice would you give to women looking to enter the AI field?
First, it’s never too late to start. Never. I encourage all grandparents to try using OpenAI’s ChatGPT, Microsoft’s Copilot or Google’s Gemini. We will all need to become AI-savvy in order to thrive in what is set to become an AI-driven economy. And that’s exciting! We all have a role to play. Whether you’re starting a career in AI or using AI to support your work, women should be testing AI tools, seeing what those tools can and can’t do, finding out whether they work for them and, in general, becoming AI-savvy.
Second, responsible AI development requires more than just ethical computer scientists. Many people think that the field of artificial intelligence requires a computer science or other STEM degree when, in fact, AI needs perspectives and expertise from women and men of all backgrounds. Jump in! Your voice and perspective are needed. Your commitment is crucial.
What are some of the most pressing issues facing artificial intelligence as it evolves?
First, we need greater AI literacy. We’re “purely AI-positive” at EqualAI, meaning we believe AI will provide unprecedented opportunities for our economy and improve our daily lives — but only if those opportunities are equally available and beneficial to a larger cross-section of our population. We need our current workforce, the next generation, our grandparents — all of us — to have the knowledge and skills to take advantage of artificial intelligence.
Second, we need to develop standardized measures and metrics for evaluating AI systems. Standardized assessments will be critical to building trust in AI systems, allowing consumers, regulators and downstream users to understand the limits of the AI systems they are dealing with and to determine whether those systems are worthy of our trust. Understanding who a system is built to serve and its intended use cases will help us answer the key question: Who could it fail for?
What are some issues AI users should be aware of?
Artificial intelligence is just that: artificial. It is made by humans to “mimic” human knowledge and empower humans in their pursuits. We must maintain an appropriate level of skepticism and insist on due diligence when using this technology to ensure that we place our trust in systems that deserve it. AI can augment — but not replace — humanity.
We must remain clear that AI consists of two main components: algorithms (created by humans) and data (reflecting human conversations and interactions). As a result, AI reflects and adapts to our human flaws. Biases can be built into the entire AI lifecycle, whether through the algorithms that humans write or through the data, which is a snapshot of human lives. However, every human touch point is an opportunity to identify and mitigate potential harm.
Because a person can only imagine as broadly as their own experience allows, and AI programs are limited by the constructs under which they are built, the more people with varied perspectives and experiences on a team, the more likely they are to catch the biases and other safety concerns embedded in their AI.
What’s the best way to build responsible AI?
Building AI that deserves our trust is the responsibility of all of us. We cannot expect someone else to do it for us. We need to start by asking three basic questions: (1) Who is this AI system being built for, (2) what are the intended use cases, and (3) who could it fail for? Even with these questions in mind, there will inevitably be pitfalls. To mitigate these risks, designers, developers and programmers must follow best practices.
At EqualAI, we promote good “AI hygiene,” which involves planning your framework and ensuring accountability, standardized testing, documentation and routine auditing. We also recently published a guide to designing and operationalizing a responsible AI governance framework, which lays out the values, principles and framework for implementing AI responsibly in an organization. The document serves as a resource for organizations of any size, sector or maturity that are adopting, developing, using and implementing AI systems with an internal and public commitment to do so responsibly.
How can investors best push for responsible AI?
Investors have a big role to play in ensuring our AI is safe, effective and responsible. Investors can ensure that companies seeking funding are aware of potential harms and liabilities in their AI systems and are considering how to mitigate them. Even asking, “How have you established AI governance practices?” is an essential first step toward better outcomes.
This effort is not only in the public interest. It is also in the best interest of investors, who will want to ensure that the companies they invest in and are associated with aren’t tied to bad headlines or burdened with litigation. Trust is one of the few non-negotiables for a company’s success, and a commitment to responsible AI governance is the best way to build and maintain public trust. Strong and reliable AI makes good business sense.