To give women academics and others their well-deserved—and overdue—time in the spotlight, TechCrunch is publishing a series of interviews focusing on notable women who have contributed to the AI revolution. We publish these pieces throughout the year as the AI boom continues, highlighting essential work that often goes unrecognized. Read more profiles here.
In focus today: Anna Korhonen is a professor of natural language processing (NLP) at the University of Cambridge. She is also a senior research fellow at Churchill College, a fellow at the Association for Computational Linguistics, and a fellow at the European Laboratory for Learning and Intelligent Systems.
Korhonen previously served as a fellow at the Alan Turing Institute and holds a PhD in computer science and an MA in computer science and linguistics. Her research focuses on NLP and on developing, adapting and applying computational techniques to meet the needs of artificial intelligence. She has a particular interest in responsible and ‘human-centred’ NLP which, in her own words, ‘is rooted in understanding human cognitive, social and creative intelligence’.
Q&A
Briefly, how did you get started with AI? What drew you to the space?
I have always been fascinated by the beauty and complexity of human intelligence, particularly in relation to human language. However, my interest in STEM subjects and practical applications led me to study engineering and computer science. I chose to specialize in artificial intelligence because it is a field that allows me to combine all these interests.
What work in AI are you most proud of?
While the science of building intelligent machines is fascinating and one can easily get lost in the world of language modeling, the ultimate reason we build AI is its practicality. I am very proud of the work where my fundamental research in natural language processing has led to the development of tools that can support social and global good. For example, tools that can help us better understand how diseases such as cancer or dementia develop and can be treated, or applications that can support education.
Much of my current research is driven by the mission to develop AI that can change human lives for the better. Artificial intelligence has enormous positive potential for social and global good. A big part of my job as an educator is to encourage the next generation of AI scientists and leaders to focus on harnessing this potential.
How do you address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
I’m lucky to work in an area of artificial intelligence where we have a sizable number of women and established support networks. I have found these extremely helpful in navigating career and personal challenges.
For me, the biggest problem is how the male-dominated industry sets the agenda for AI. The current arms race to develop ever larger AI models at any cost is a prime example. This has a huge impact on the priorities of both academia and industry, and it carries broad socio-economic and environmental implications. Do we really need ever-larger models, and what are their global costs and benefits? I believe we would have been asking these questions much earlier in the game if we had a better gender balance in the field.
What advice would you give to women looking to enter the AI field?
AI desperately needs more women at all levels, but especially at the leadership level. The current leadership culture is not necessarily attractive to women, but active participation can change that culture — and ultimately the AI culture. Women are not always good at supporting each other. I would really like to see a change of attitude on this: We need to actively network and help each other if we want to achieve a better gender balance in this area.
What are some of the most pressing issues facing artificial intelligence as it evolves?
Artificial intelligence has developed incredibly fast: It has evolved from an academic field to a global phenomenon in less than a decade. During this time, most of the effort has been towards scaling through big data and computing. Little effort has been devoted to thinking about how this technology should be developed so that it can better serve humanity. People have good reason to be concerned about the safety and reliability of artificial intelligence and its impact on jobs, democracy, the environment and other areas. We urgently need to put human needs and safety at the center of AI development.
What are some issues AI users should be aware of?
Current artificial intelligence, even when it seems very fluent, ultimately lacks the world knowledge of humans and the ability to understand the complex social contexts and norms in which we operate. Even the best of today’s technology makes mistakes, and our ability to prevent or predict these mistakes is limited. AI can be a very useful tool for many tasks, but I wouldn’t trust it to educate my children or make important decisions for me. We humans must remain responsible.
What’s the best way to build responsible AI?
AI developers tend to treat ethics as an afterthought, once the technology has already been built. The best time to think about it is before any development begins. Questions like “Do I have a diverse enough team to develop a fair system?”, “Is my data truly free to use and representative of all user populations?” and “Are my techniques robust enough?” really need to be asked from the start.
While we can address some of this problem through education, we can only enforce it through regulation. The recent development of national and global AI regulation is important, and it must continue in order to guarantee that future technologies will be safer and more reliable.
How can investors best push for responsible AI?
AI regulations are emerging and companies will eventually have to comply. We can think of responsible AI as sustainable AI that is actually worth investing in.