To give women academics and others their well-deserved—and overdue—time in the spotlight, TechCrunch is publishing a series of interviews focusing on notable women who have contributed to the AI revolution. We’re publishing these pieces throughout the year as the AI boom continues, highlighting essential work that often goes unrecognized. Read more profiles here.
Catherine Breslin is the founder and director of Kingfisher Labs, where she helps companies develop AI strategies. She has spent more than two decades as an artificial intelligence scientist and has worked for the University of Cambridge, Toshiba Research, and Amazon Alexa. She was previously an advisor at VC fund Deeptech Labs and was the Solutions Architect Director at Cobalt Speech & Language.
She attended the University of Oxford for an undergraduate degree before earning her MA and PhD at the University of Cambridge.
Briefly, how did you get started with AI? What drew you to the space?
I always liked maths and physics at school and chose to study engineering at university. That’s where I first learned about AI, although it wasn’t called AI at the time. I was intrigued by the idea of using computers to process speech and language that we humans find easy. From there I ended up studying for a PhD in voice technology and working as a researcher. We’re at a point in time where there have been huge strides forward in artificial intelligence recently, and I feel like there’s a huge opportunity to build technology that improves people’s lives.
What work in AI are you most proud of?
In 2020, in the early days of the pandemic, I founded my own consulting firm with a mission to bring real-world expertise and leadership to organizations. I am proud of the work I have done with my clients on different and interesting projects and also that I have been able to do this in a really flexible way around my family.
How do you address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
It’s hard to measure exactly, but about 20% of the AI field is female. My understanding is also that the percentage goes down as you get older. For me, one of the best ways to navigate this is to build a support network. Of course, support can come from people of any gender. Sometimes, though, it’s reassuring to talk to women who are going through similar situations or who have seen the same problems, and it’s great not to feel alone.
The other thing for me is to think carefully about where to spend my energy. I believe we will only see lasting change when more women take senior and leadership positions, and that won’t happen if women spend all their energy on fixing the system instead of advancing their careers. There is a realistic balance between pushing for change and focusing on my day job.
What advice would you give to women looking to enter the AI field?
AI is a vast and exciting field with a lot happening. There is also a huge amount of noise, with what can seem like a constant stream of papers, products, and models being released. It’s impossible to keep up with everything. Furthermore, not every paper or research result will be significant in the long run, no matter how impressive the press release. My advice is to find an area you’re really interested in making progress in, learn as much as you can about it, and tackle the problems you’re motivated to solve. This will give you the solid foundation you need.
What are some of the most pressing issues facing artificial intelligence as it evolves?
Progress over the past 15 years has been rapid, and we’ve seen AI come out of the lab and into products without really stepping back to properly assess the situation and predict the consequences. An example that comes to mind is how much of our voice and language technology performs better in English than other languages. This does not mean that researchers have ignored other languages. Considerable effort has gone into non-English language technology. However, the unintended consequence of better English language technology means that we build and deploy technology that does not serve everyone equally.
What are some issues AI users should be aware of?
I think people need to know that AI is not a silver bullet that will solve every problem in the coming years. It may be quick to create an impressive demo, but it takes a lot of dedicated effort to build an AI system that works consistently well. We should not lose sight of the fact that AI is designed and built by humans, for humans.
What’s the best way to build responsible AI?
Building AI responsibly means including diverse viewpoints from the start, including your customers and anyone affected by your product. Thoroughly testing your systems is important to make sure you know how well they perform in various scenarios. Testing gets a reputation as boring work compared to the excitement of dreaming up new algorithms. However, it is important to know if your product actually works. Then there’s the need to be honest with yourself and your customers about the capabilities and limitations of what you’re building, so your system isn’t abused.
How can investors best push for responsible AI?
Startups are creating many new AI applications, and investors have a responsibility to be careful about what they choose to fund. I’d love to see more investors express their vision for the future we’re building and how responsible AI fits in.