A feature Google unveiled at its I/O confab yesterday, using generative AI technology to scan voice calls in real time for conversation patterns associated with financial scams, has sent a collective shudder through privacy and security experts, who warn the feature represents the thin end of the wedge. They caution that once client-side scanning is baked into mobile infrastructure, it could usher in an era of centralized censorship.
Google’s demo of its call fraud detection feature, which the tech giant said will be built into a future version of Android OS — estimated to run on around three-quarters of the world’s smartphones — is powered by Gemini Nano, the smallest of its current generation of AI models, designed to run entirely on-device.
It’s essentially client-side scanning: a nascent technology that has generated huge controversy in recent years over attempts to detect child sexual abuse material (CSAM), or even grooming activity, on messaging platforms.
Apple abandoned a plan to deploy client-side scanning for CSAM in 2021 after a huge privacy backlash. But policymakers have continued to pressure the tech industry to find ways to detect illegal activity taking place on its platforms. Any industry move to build out on-device scanning infrastructure could therefore pave the way for all kinds of content scanning by default — whether government-led or related to a particular commercial agenda.
Reacting to Google’s call-scanning demo in a post on X, Meredith Whittaker, president of the U.S.-based encrypted messaging app Signal, warned: “This is incredibly dangerous. It lays the path for centralized, device-level client-side scanning.
“From detecting ‘scams’ it’s a short step to ‘detecting patterns commonly associated w[ith] seeking reproductive care’ or ‘commonly associated w[ith] providing LGBTQ resources’ or ‘commonly associated with tech worker whistleblowing.’”
Cryptography expert Matthew Green, a professor at Johns Hopkins, also took to X to sound the alarm. “In the future, AI models will run inference on your texts and voice calls to detect and report illegal behavior,” he warned. “To get your data to pass through service providers, you’ll need to attach a zero-knowledge proof that scanning has taken place. This will exclude open clients.”
Green suggested this dystopian future of censorship by default is only a few years away from being technically possible. “We’re a little way off from this tech being quite effective, but only a few years. A decade at most,” he wrote.
European privacy and security experts were also quick to object.
Reacting to the Google demo on X, Lukasz Olejnik, a Poland-based independent privacy and security researcher and consultant, welcomed the company’s anti-fraud feature but warned the same infrastructure could be repurposed for social surveillance. “[I]t also means that technical capabilities have already been developed, or are being developed, to monitor calls and the creation or drafting of texts or documents, for example to look for illegal, harmful, hateful or otherwise unwanted or unfair content — relative to someone’s standards,” he wrote.
“Going further, such a model could, for example, display a warning. Or block the ability to continue,” Olejnik continued emphatically. “Or report it somewhere. Technological modification of social behavior, or something similar. This is a significant threat to privacy, but also to a number of basic values and freedoms. The potential is already there.”
Expanding on his concerns, Olejnik told TechCrunch: “I haven’t seen the technical details, but Google assures that the detection will be done on-device. This is great for user privacy. However, much more than privacy is at stake. This highlights how AI/LLMs embedded in software and operating systems may be turned to detecting or controlling various forms of human activity.
“So far it’s thankfully for the better. But what lies ahead if the technical ability exists and is incorporated? Such strong features signal potential future risks associated with the ability to use AI to control the behavior of societies at scale or selectively. These are perhaps the most dangerous information technology capabilities ever developed. And we’re getting close to that point. How do we govern this? Are we going too far?”
Michael Veale, an associate professor in technology law at UCL, also raised the chilling specter of function creep flowing from Google’s conversation-scanning AI — warning in a reaction post on X that it “creates infrastructure for on-device client-side scanning for far more purposes than this, which regulators and lawmakers will want to abuse.”
Privacy experts in Europe have particular cause for concern: The European Union has had a controversial message-scanning legislative proposal on the table since 2022, which critics — including the bloc’s own Data Protection Supervisor — warn represents a tipping point for democratic rights in the region, as it would force platforms to scan private messages by default.
While the current legislative proposal claims to be technology agnostic, it is widely expected that such a law would lead platforms to deploy client-side scanning in order to comply with a so-called detection order requiring them to spot both known and unknown CSAM, as well as pick up grooming activity in real time.
Earlier this month, hundreds of privacy and security experts penned an open letter warning the plan could lead to millions of false positives per day, as the client-side scanning technologies that platforms are likely to deploy in response to a legal order are unproven, deeply flawed and vulnerable to attack.
Google was contacted for a response to the concerns that its conversation-scanning AI could erode people’s privacy but had not responded by the time of publication.