To give women academics and others well-deserved—and overdue—time in the spotlight, TechCrunch is launching a series of interviews focusing on notable women who have contributed to the AI revolution. We’ll be publishing several pieces throughout the year as the AI boom continues, highlighting essential work that often goes unrecognized. Read more profiles here.
Francine Bennett is a founding board member of the Ada Lovelace Institute and currently serves as the organization’s interim director. Before that, she worked in biotech, using AI to find medical treatments for rare diseases. She also co-founded a data science consultancy and is a founding trustee of DataKind UK, which helps UK charities with data science support.
Briefly, how did you get started with AI? What drew you to the space?
I started out in pure math and wasn’t that interested in anything applied – I enjoyed working with computers, but I thought all applied math was just calculation and not very intellectually interesting. I came to AI and machine learning later, when it started to become obvious to me and to everyone else that data was becoming far more abundant in many contexts, which opened up exciting possibilities for solving all kinds of problems in new ways using AI and machine learning – and those problems were much more interesting than I had realized.
What work are you most proud of (in AI)?
I’m most proud of the work that isn’t the most technically complex but that unlocks some real improvement for people – for example, using ML to try to find previously unnoticed patterns in patient safety incident reports at a hospital, to help its medical professionals improve future patient outcomes. And I’m proud of representing the importance of putting people and society, not technology, at the center of events like this year’s UK AI Safety Summit. I think it’s only possible to do that with authority because I’ve had the experience of both working with the technology and being excited by it, and of understanding how it really affects people’s lives in practice.
How do you address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
Mainly by choosing to work in places and with people who care about the person and their skills rather than their gender, and by seeking to use whatever influence I have to make that the norm. Also, by working in diverse teams whenever I can – being part of a balanced team rather than an exceptional ‘minority’ creates a really different atmosphere and makes it much more possible for everyone to reach their potential. More generally, because AI is so multifaceted and likely to affect so many walks of life, especially those in marginalized communities, it’s clear that people from all walks of life need to be involved in building and shaping it if it’s going to work well.
What advice would you give to women looking to enter the AI field?
Enjoy it! This is such an interesting, intellectually challenging and endlessly changing field – you’ll always find something useful and expansive to do, and there are plenty of important applications that nobody has even thought of yet. Also, don’t stress too much about needing to know every technical thing (literally nobody knows every technical thing) – just start with something that interests you and work from there.
What are some of the most pressing issues facing artificial intelligence as it evolves?
Right now, I think there isn’t a shared vision of what we want AI to do for us and what it can and can’t do for us as a society. There’s a lot of technical progress happening at the moment, which is likely to have very significant environmental, economic and social impacts, and a lot of enthusiasm for rolling out these new technologies without a well-founded understanding of the potential risks or unintended consequences. Most of the people building the technology and talking about the risks and consequences come from a pretty narrow demographic. We have a window of opportunity now to decide what we want to see from AI and to work to make that happen. We can look at how we’ve handled the evolution of other types of technology, or what we wish we’d done better – what are our equivalents for AI products of crash-testing new cars; holding a restaurant liable for accidentally giving you food poisoning; consulting affected people during planning permission; or appealing an AI decision as you might appeal against a human bureaucracy?
What are some issues AI users should be aware of?
I would like people who use AI technologies to be confident about what the tools are and what they can do, and to talk about what they want from AI. It’s easy to see AI as something unknown and uncontrollable, but really it’s just a set of tools – and I want people to feel empowered to take responsibility for what they do with those tools. But it shouldn’t be solely the responsibility of the people using the technology – government and industry should create the conditions that let people who use AI feel confident in doing so.
What’s the best way to build responsible AI?
We ask this question a lot at the Ada Lovelace Institute, which aims to make data and AI work for people and society. It’s hard, and there are hundreds of angles you could take, but there are two really big ones from my perspective.
The first is to be willing sometimes not to build, or to stop. All the time, we see AI systems with a lot of momentum, where the builders try to bolt on “guardrails” afterward to mitigate problems and harms, but where stopping is never treated as a real possibility.
The second is to really engage and try to understand how all kinds of people will experience what you’re building. If you can really get into their experiences, then you have a much better chance of the positive kind of responsible AI – creating something that actually solves a problem for people, based on a shared vision of what would be good – as well as avoiding the negatives – not accidentally making someone’s life worse because their day-to-day existence is simply very different from yours.
For example, the Ada Lovelace Institute worked with the NHS to develop an algorithmic impact assessment that developers must complete as a condition of accessing healthcare data. It requires developers to assess the potential social impacts of their AI system before implementation, and to bring to bear the lived experiences of the people and communities who could be affected.
How can investors best push for responsible AI?
By asking questions about their investments and their possible futures – for this AI system, what does it look like for it to work brilliantly and responsibly? Where could things go wrong? What are the possible negative effects on people and society? How would we know if we needed to stop building or to change things significantly, and what would we do then? There’s no one-size-fits-all recipe, but simply by asking these questions and signaling that being responsible matters, investors can change where their companies direct their attention and effort.
