To give women academics and others well-deserved—and overdue—time in the spotlight, TechCrunch is launching a series of interviews focusing on notable women who have contributed to the AI revolution. We’ll be publishing several pieces throughout the year as the AI boom continues, highlighting essential work that often goes unrecognized. Read more profiles here.
Heidy Khlaaf is director of engineering at the cybersecurity firm Trail of Bits. She specializes in evaluating software and artificial intelligence applications within “safety-critical” systems, such as nuclear power plants and autonomous vehicles.
Khlaaf received her Ph.D. from University College London and her BA in computer science and philosophy from Florida State University. She has led safety and security audits, provided assurance case consultations and reviews, and contributed to the creation of standards and guidelines for safety- and security-related applications and their development.
Q&A
Briefly, how did you get started with AI? What drew you to the space?
I was drawn to robotics at a very young age and started programming at 15, as I was fascinated by the prospect of using robotics and artificial intelligence (as they are inextricably linked) to automate workloads where they are most needed. Beyond manufacturing, I saw robotics being used to help the elderly and to automate dangerous manual labor in our society. I did, however, receive my Ph.D. in a different subfield of computer science, because I believe that having a strong theoretical foundation in computer science allows you to make educated and scientific decisions about where AI may or may not be appropriate, and where pitfalls may lie.
What work are you most proud of (in AI)?
Using my strong expertise and background in security engineering and safety-critical systems to provide context and critique, where needed, on the emerging field of AI “security.” Although the field of AI security has attempted to adapt and report on established security and safety techniques, various terminologies have been misinterpreted in their use and meaning. This lack of consistent or purposeful definitions compromises the integrity of the security techniques currently used by the AI community. I am especially proud of “Toward Comprehensive Risk Assessments and Assurance of AI-Based Systems” and “A Hazard Analysis Framework for Code Synthesis Large Language Models,” where I deconstruct false narratives about AI security and assessments and provide concrete steps toward bridging the security gap within AI.
How do you address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
Recognizing how little the status quo has changed is not something we often discuss, but I think it’s really important for me and other technical women to understand our place in the industry and to have a realistic view of the changes that are needed. Retention rates and the ratio of women in leadership positions have remained largely the same since I entered the field, and that was over a decade ago. And as TechCrunch has aptly pointed out, despite the enormous discoveries and contributions of women in AI, we remain on the sidelines of conversations that we ourselves have defined. Recognizing this lack of progress helped me realize that building a strong personal community is far more valuable as a source of support than relying on DEI initiatives, which have unfortunately not moved the needle, given that bias and skepticism toward technical women are still quite widespread.
What advice would you give to women looking to enter the AI field?
Not to appeal to authority, and to find a line of work you truly believe in, even if it goes against popular narratives. Given the power that AI labs wield politically and financially right now, there’s an instinct to assume that AI “thought leaders” are stating facts, when it’s often the case that many AI claims are marketing speak that overstates AI’s capabilities to benefit a bottom line. However, I see significant reluctance, especially among younger women in the field, to express skepticism about claims made by their male peers that cannot be substantiated. Imposter syndrome has a strong influence on women in technology and leads many to question their own scientific integrity. But it is more important than ever to challenge claims that exaggerate the capabilities of artificial intelligence, especially those that are not falsifiable by the scientific method.
What are some of the most pressing issues facing artificial intelligence as it evolves?
No matter what developments we see in artificial intelligence, it will never be the sole solution, technologically or socially, to our problems. Currently, there is a trend to shoehorn AI into every possible system, regardless of its effectiveness (or lack thereof) in many areas. AI should augment human capabilities rather than replace them, and we are witnessing a complete disregard for the pitfalls and failure modes of AI that lead to real, tangible harm. Just recently, the AI system ShotSpotter led to a police officer firing at a child.
What are some issues AI users should be aware of?
How unreliable artificial intelligence is. AI algorithms are notoriously flawed, with high error rates observed across applications that require precision, accuracy and safety criticality. The way AI systems are trained embeds human bias and discrimination in their outputs, which become “de facto” and automated. And that’s because the nature of AI systems is to provide outputs based on statistical and probabilistic inferences and correlations from historical data, rather than on any type of reasoning, factual evidence or “causation.”
What’s the best way to build responsible AI?
To ensure that AI is developed in a way that protects people’s rights and safety by constructing verifiable claims and holding AI developers accountable to them. These claims should also be scoped to a regulatory, safety, ethical or technical application, and must be falsifiable; otherwise, there is a significant lack of scientific integrity with which to properly evaluate these systems. Independent regulators should also evaluate AI systems against these claims, as is currently required for many products and systems in other industries, for example, those evaluated by the FDA. Artificial intelligence systems should not be exempt from standard auditing procedures established to ensure the protection of the public and consumers.
How can investors best push for responsible AI?
Investors should partner with and fund organizations that seek to establish and advance auditing practices for artificial intelligence. Most funding is currently invested in the AI labs themselves, on the belief that their security teams are sufficient to advance AI evaluations. However, independent auditors and regulators are key to public trust. Independence allows the public to trust the accuracy and integrity of assessments and of regulatory outcomes.