TechTost

Women in AI: Sandra Wachter, professor of data ethics at Oxford

By techtost.com | 9 March 2024 | 8 min read

To give women academics and others well-deserved—and overdue—time in the spotlight, TechCrunch is launching a series of interviews focusing on notable women who have contributed to the AI revolution. We’ll be publishing several pieces throughout the year as the AI boom continues, highlighting essential work that often goes unrecognized. Read more profiles here.

Sandra Wachter is Professor and Senior Researcher in Data Ethics, Artificial Intelligence, Robotics, Algorithms and Regulation at the Oxford Internet Institute. She is also a former fellow of the Alan Turing Institute, the UK’s national institute for data science and artificial intelligence.

While at the Turing Institute, Wachter assessed the ethical and legal aspects of data science, highlighting instances where opaque algorithms have become racist and sexist. She also looked at ways to regulate artificial intelligence to counter misinformation and promote fairness.

Q&A

Briefly, how did you get started with AI? What drew you to the space?

I can’t remember a time in my life when I didn’t believe that innovation and technology have incredible potential to make people’s lives better. However, I also know that technology can have devastating effects on people’s lives. And so I have always been driven – mostly by my strong sense of justice – to find a way to guarantee that perfect middle ground: enabling innovation while protecting human rights.

I have always felt that law has a very important role to play. Law can be the middle ground that protects people but allows innovation. Law as a discipline came very naturally to me. I like challenges, I like to understand how a system works, see how I can game it, find loopholes and then close them.

AI is an incredibly transformative force. It is applied in finance, employment, criminal justice, immigration, health and the arts. This can be good or bad, and whether it is good or bad is a matter of design and policy. I was naturally drawn to the field because I felt that law can make a real contribution to ensuring that innovation benefits as many people as possible.

What work are you most proud of (in AI)?

I think the project I’m most proud of right now is a project co-authored by Brent Mittelstadt (philosopher), Chris Russell (computer scientist), and myself as a lawyer.

Our latest work on bias and fairness, “The unfairness of fair machine learning”, revealed the harmful impact of enforcing many “group fairness” measures in practice. In particular, fairness is achieved by “leveling down”, or making everyone worse off, rather than by helping disadvantaged groups. This approach is highly problematic under EU and UK non-discrimination law, as well as ethically troubling. In a piece in Wired, we discussed how harmful leveling down can be in practice: in health care, for example, enforcing group fairness could mean missing more cancer cases than strictly necessary, while also making a system less accurate overall.
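
To make the leveling-down effect concrete, here is a toy simulation (a hypothetical Python sketch, not code from the paper or the Wired piece): equal true-positive rates are achieved by discarding detections in the better-served group, so the group metric is satisfied, yet fewer cases are caught overall and the worse-served group gains nothing.

```python
import numpy as np

# Hypothetical illustration of "leveling down"; numbers and setup are invented.
rng = np.random.default_rng(0)

def simulate_detections(n_positives, tpr):
    """Return a boolean array: which of n_positives true cases are detected."""
    return rng.random(n_positives) < tpr

detected_a = simulate_detections(1000, 0.90)  # well-served group
detected_b = simulate_detections(1000, 0.70)  # under-served group

print(f"Before: TPR A={detected_a.mean():.2f}, TPR B={detected_b.mean():.2f}, "
      f"cases caught={int(detected_a.sum() + detected_b.sum())}")

# Enforce equal true-positive rates by randomly discarding detections in
# group A until it matches group B, instead of improving the model for B.
drop_prob = 1 - detected_b.mean() / detected_a.mean()
leveled_a = detected_a & (rng.random(detected_a.size) >= drop_prob)

print(f"After:  TPR A={leveled_a.mean():.2f}, TPR B={detected_b.mean():.2f}, "
      f"cases caught={int(leveled_a.sum() + detected_b.sum())}")
# Equal TPR is now (approximately) satisfied, but fewer true cases are caught
# overall and the under-served group is no better off than before.
```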

For us, this was frightening, and it is something that people in technology, policy and really everyone should know. Indeed, we have worked with UK and EU regulators and shared our alarming findings with them. I sincerely hope this will give policymakers the leverage they need to put new policies in place that prevent AI from causing such serious harm.

How do you address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

The interesting thing is that I never saw technology as something that “belonged” to men. It wasn’t until I started school that society told me technology had no place for people like me. I still remember that when I was 10 years old, the curriculum dictated that girls should knit and sew while the boys built birdhouses. I wanted to build a birdhouse too and asked to be transferred to the boys’ class, but my teachers told me “girls don’t do that”. I even went to the school principal to try to overturn the decision but unfortunately failed at the time.

It’s very hard to fight against a stereotype that says you shouldn’t be part of this community. I wish I could say that things like this don’t happen anymore, but that’s unfortunately not true.

However, I was incredibly fortunate to work with allies like Brent Mittelstadt and Chris Russell. I was privileged to have incredible mentors, including my Ph.D. supervisor, and I have a growing network of like-minded people of all genders who are doing their best to steer the path forward and improve the situation for everyone interested in technology.

What advice would you give to women looking to enter the AI field?

Above all, try to find like-minded people and allies. Finding your people and supporting each other is vital. My best work has always come from conversations with open-minded people from other backgrounds and industries about the common problems we face. Accepted wisdom alone cannot solve novel problems, so women and other groups that have historically faced barriers to entering AI and other tech fields hold the tools to truly innovate and offer something new.

What are some of the most pressing issues facing artificial intelligence as it evolves?

I believe there is a wide range of issues that need serious legal and political consideration. To name a few, AI is plagued by biased data that leads to biased and unfair results. AI is inherently opaque and hard to understand, yet it is tasked with deciding who gets a loan, who gets the job, who should go to jail, and who is allowed to go to university.

Generative AI has related problems, but it also contributes to misinformation, is riddled with hallucinations, violates data protection and intellectual property rights, puts people’s jobs at risk, and contributes more to climate change than the airline industry.

We have no time to waste; we should have addressed these issues yesterday.

What are some issues AI users should be aware of?

I think there is a tendency to believe a certain narrative along the lines of “AI is here and here to stay, get on board or be left behind”. I think it’s important to think about who is pushing this narrative and who is profiting from it. It is important to remember where the real power lies. The power does not belong to those who innovate, but to those who buy and implement AI.

So consumers and businesses should ask themselves, “Is this technology really helping me, and how?” Electric toothbrushes now have “AI” built into them. Who is this for? Who needs this? What is being improved here?

In other words, ask yourself what is broken and what needs fixing, and whether AI can actually fix it.

This type of thinking will shift market power, and innovation will hopefully head in a direction that focuses on utility for a community rather than just profit.

What’s the best way to build responsible AI?

Having laws in place that require responsible AI. Here again, a very unhelpful and untrue narrative tends to prevail: that regulation stifles innovation. It does not. Regulation stifles harmful innovation. Good laws promote and nurture ethical innovation; this is why we have safe cars, planes, trains and bridges. Society loses nothing if regulation prevents the creation of artificial intelligence that violates human rights.

Traffic laws and safety regulations for cars were also once said to “stifle innovation” and “limit autonomy”. These laws prevent people from driving without a license, keep cars without seat belts and airbags off the market, and penalize people who do not obey the speed limit. Imagine what the safety record of the auto industry would look like if we didn’t have laws to regulate vehicles and drivers. Artificial intelligence is currently at a similar inflection point, and heavy industry lobbying and political pressure mean it’s still unclear which way it will go.

How can investors best push for responsible AI?

I wrote a paper a few years ago entitled “How fair AI can make us richer.” I strongly believe that artificial intelligence that respects human rights and is unbiased, explainable and sustainable is not only the legally, ethically and morally right thing to do, but it can also be profitable.

I really hope that investors will understand that if they push for responsible research and innovation, they will also get better products. Bad data, bad algorithms and bad design choices lead to worse products. Even if I can’t convince you that you should do the moral thing because it’s the right thing, I hope you can see that the moral thing is also more profitable. Ethics should be seen as an investment and not as an obstacle to be overcome.
