Late Friday afternoon, a window of time companies usually reserve for unflattering disclosures, AI startup Hugging Face said its security team earlier this week detected “unauthorized access” to Spaces, Hugging Face’s platform for creating, sharing and hosting AI models and resources.
In a blog post, Hugging Face said the intrusion related to Spaces secrets, the private pieces of information that act as keys to unlock protected resources such as accounts, tools and developer environments, and that it suspects some secrets could have been accessed by an unauthorized third party.
As a precaution, Hugging Face has revoked a number of tokens contained in those secrets. (Tokens are used to verify identities.) Hugging Face says users whose tokens have been revoked have already received an email notification, and it recommends that all users “refresh any key or token” and consider switching to fine-grained access tokens, which Hugging Face says are more secure.
It was not immediately clear how many users or apps were affected by the potential breach.
“We are working with outside cyber security forensic specialists to investigate the issue as well as review our security policies and procedures. We have also reported this incident to law enforcement agencies and Data [sic] protection authorities,” Hugging Face wrote in the post. “We deeply regret the disruption this incident may have caused and understand the inconvenience it may have imposed on you. We pledge to use this as an opportunity to strengthen the security of our entire infrastructure.”
In an emailed statement, a representative for Hugging Face told TechCrunch:
“We’ve seen the number of cyberattacks increase significantly in recent months, probably because our usage has been growing significantly and AI is becoming more mainstream. It is technically difficult to know how many Spaces secrets have been compromised.”
The potential hack of Spaces comes as Hugging Face, which is one of the largest platforms for collaborative AI and data science projects with more than a million AI-powered models, datasets and applications, faces increasing scrutiny of its security practices.
In April, researchers at cloud security firm Wiz discovered a vulnerability, since fixed, that would allow attackers to execute arbitrary code during the build of a Hugging Face-hosted application, letting them examine network connections from their machines. Earlier in the year, security firm JFrog uncovered evidence that code uploaded to Hugging Face covertly installed backdoors and other types of malware on end-user machines. And security startup HiddenLayer identified ways in which Hugging Face’s ostensibly safer serialization format, Safetensors, could be abused to create compromised AI models.
Hugging Face recently said that it would partner with Wiz to use the company’s cloud configuration and vulnerability scanning tools “with the goal of improving security across our platform and the AI/ML ecosystem at large.”