To give women academics and others their well-deserved—and overdue—time in the spotlight, TechCrunch is publishing a series of interviews focusing on notable women who have contributed to the AI revolution. We’re publishing these pieces throughout the year as the AI boom continues, highlighting essential work that often goes unrecognized. Read more profiles here.
In focus today: Rachel Coldicutt, founder of Careful Industries, which researches the social impact technology has on society. Clients include Salesforce and the Royal Academy of Engineering. Before Careful Industries, Coldicutt was CEO of the think tank Doteveryone, which also conducted research into how technology affects society.
Before Doteveryone, she spent decades working in digital strategy for companies like the BBC and the Royal Opera House. She studied at the University of Cambridge and was awarded an OBE (Order of the British Empire) for her work in digital technology.
Briefly, how did you get started with AI? What drew you to the space?
I started working in technology in the mid-90s. My first proper tech job was on Microsoft Encarta in 1997, and before that, I helped build content databases for reference books and dictionaries. Over the past three decades, I’ve worked with all kinds of new and emerging technologies, so it’s hard to pinpoint the exact moment I “got into AI,” because I’ve been using automated processes and data to drive decisions, create experiences, and produce artworks since the 2000s. Instead, I think the question is probably, “When did AI become the set of technologies everyone wanted to talk about?” and I think the answer is probably around 2014, when DeepMind was acquired by Google — that was the moment, in the UK, when AI overtook everything else, even though a lot of the underlying technologies we now call ‘AI’ were things that were already in fairly common use.
I started working in technology almost by accident in the 1990s, and what has kept me in the field through many changes is the fact that it’s full of fascinating contradictions: I love how stimulating it can be to learn new skills and make things, I’m fascinated by what we can discover from structured data, and I could happily spend the rest of my life observing and understanding how people make and shape the technologies we use.
What work in AI are you most proud of?
Much of my work in AI has been in policy making and social impact assessments, working with government agencies, charities and all kinds of businesses to help them use AI and related technology in purposeful and credible ways.
Back in the 2010s, I ran Doteveryone – a responsible tech think tank – which helped change the framework for how UK policymakers think about emerging technology. Our work made it clear that AI is not a consequence-free set of technologies but something with pervasive real-world implications for people and societies. In particular, I’m really proud of the free Consequence Scanning tool we developed, which is now used by teams and businesses around the world, helping them anticipate the social, environmental, and political impacts of the choices they make when they ship new products and features.
More recently, the 2023 AI and Society Forum was another proud moment. Ahead of the UK government’s AI Safety Summit, my team at Careful Trouble quickly convened and curated a gathering of 150 people from across civil society to collectively make the case that it’s possible to make AI work for 8 billion people, not just 8 billionaires.
How do you address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
As a relative old-timer in the tech world, I feel like some of the gains we made in gender representation in tech have been lost over the last five years. Research from the Turing Institute shows that less than 1% of AI investment has gone to startups led by women, while women still make up only a quarter of the overall tech workforce. When I go to AI conferences and events, the gender mix – particularly in terms of who gets a platform to share their work – reminds me of the early 2000s, which I find really sad and shocking.
I’m able to navigate the sexist attitudes of the tech industry because I have the enormous privilege of being able to found and run my own organization: I spent much of my early career experiencing sexism and sexual harassment on a daily basis – dealing with that gets in the way of good work and is an unnecessary cost of entry for many women. Instead, I’ve prioritized building a feminist business where, collectively, we fight for equality in everything we do, and I hope we can show that other ways are possible.
What advice would you give to women looking to enter the AI field?
Don’t feel like you have to work in a ‘women’s issue’ field, don’t be put off by the hype, and seek out peers and friendships with other people so you have an active support network. What has kept me going over the years is my network of friends, former colleagues, and allies — we offer each other mutual support, an endless supply of conversation, and sometimes a shoulder to cry on. Without that, it can feel very lonely; you’ll so often be the only woman in the room that it’s vital to have somewhere safe to turn to decompress.
The moment you get the chance, hire well. Don’t reproduce the structures you’ve seen or reinforce the expectations and standards of an elitist, sexist industry. Challenge the status quo every time you hire, and support your new hires. That way, you can start building a new normal, wherever you are.
And seek out the work of some of the great women pursuing great AI research and practice: Start by reading the work of pioneers like Abeba Birhane, Timnit Gebru, and Joy Buolamwini, all of whom have produced seminal research that has shaped our understanding of how AI changes and interacts with society.
What are some of the most pressing issues facing artificial intelligence as it evolves?
AI is an amplifier. We may feel that some of its uses are inevitable, but as societies, we need to be able to make clear choices about what is worth amplifying. Right now, the main thing increased use of AI is doing is increasing the power and bank balances of a relatively small number of male CEOs, and it seems unlikely that [it] is shaping a world in which many people want to live. I’d love to see more people, particularly in industry and policymaking, engage with the questions of what more democratic and accountable AI looks like and whether it’s even possible.
The climate impacts of AI — its use of water, energy, and critical minerals — and the health and social justice implications for people and communities affected by the exploitation of natural resources need to be at the top of the list for responsible development. The fact that LLMs, in particular, are so energy intensive speaks to the fact that the current model isn’t fit for purpose. In 2024, we need innovation that protects and restores the natural world, and extractive models and ways of working need to be retired.
We must also be realistic about the surveillance implications of a more datafied society and the fact that — in an increasingly volatile world — any general-purpose technologies will likely be used for unimaginable horrors in wartime. Everyone who works in AI needs to be realistic about the long, historical association of tech R&D with military development. We need to support and demand innovation that is initiated by and shaped with communities, so that we get outcomes that strengthen society rather than lead to increased destruction.
What are some issues AI users should be aware of?
Beyond the environmental and financial costs embedded in many current AI business and technology models, it’s really important to think about the day-to-day implications of increased AI use and what that means for everyday human interactions.
While some of the headline-grabbing topics have been around more existential risks, it’s worth keeping an eye on how the technologies you use help and hinder you on a daily basis: Which automations can you turn off and work around, which ones deliver real benefit, and where can you vote with your feet as a consumer to make the case that you really want to keep talking to a real person, not a bot? We don’t need to settle for poor-quality automation, and we should unite to demand better outcomes!
What’s the best way to build AI responsibly?
Responsible AI starts with good strategic choices — instead of just throwing an algorithm at a problem and hoping for the best, it’s possible to be intentional about what to automate and how. I’ve been talking about the idea of “Just Enough Internet” for a few years now, and it feels like a really useful idea to guide how we think about building any new technology. Instead of constantly pushing the boundaries, can we build AI in a way that maximizes benefits for people and the planet and minimizes harm?
At Careful Trouble, we’ve developed a robust process for this, which we use with boards and senior teams. It starts with mapping how AI can and can’t support your vision and values; understanding which problems are too complex and variable to improve with automation, and where it will create benefit; and finally, developing a proactive risk-management framework. Responsible development is not a one-off application of a set of principles but an ongoing process of monitoring and mitigation. Continuous deployment and social adaptation mean that quality assurance can’t be something that ends once a product is shipped: As AI developers, we need to build the capacity for iterative, social sensemaking and treat responsible development and deployment as a living process.
How can investors best push for responsible AI?
By making more patient investments, supporting more diverse founders and teams, and not chasing exponential returns.