Google on Thursday is issuing new guidelines for developers who build artificial intelligence apps distributed through Google Play, hoping to curb inappropriate and otherwise prohibited content. The company says apps that offer AI features should prevent the creation of restricted content — which includes sexual content, violence and more — and should offer a way for users to flag content they find offensive. In addition, Google says developers must “rigorously test” their AI tools and models to ensure they respect user security and privacy.
Google is also cracking down on apps whose marketing materials promote inappropriate use cases, such as apps that undress people or create non-consensual nudity. If the ad copy says the app can do this, it can be banned from Google Play, regardless of whether the app is actually capable of doing so.
The guidelines follow a growing plague of AI undressing apps that have been marketed on social media in recent months. An April report by 404 Media, for example, found that Instagram was hosting ads for apps that claimed to use AI to generate deepfake nudes. One app advertised itself using a photo of Kim Kardashian and the slogan “Undress any girl for free.” Apple and Google pulled the apps from their respective app stores, but the problem is still widespread.
US schools are also reporting problems with students passing around AI-generated fake nudes of other students (and sometimes teachers) for bullying and harassment, among other kinds of inappropriate AI content. Last month, a racist AI deepfake of a school principal led to an arrest in Baltimore. Worse, in some cases the problem is even affecting middle school students.
Google says its policies will help keep apps off Google Play that feature AI-generated content that may be inappropriate or harmful to users. It points to its existing AI-Generated Content Policy as the place to review the requirements for app approval on Google Play. The company says AI apps cannot allow the creation of any restricted content and must also give users a way to flag offensive and inappropriate content, as well as monitor and prioritize that feedback. The latter is especially important in apps where user interactions “shape the content and experience,” Google says — for example, apps where popular models rank higher or are featured more prominently.
Developers also cannot advertise that their app violates any of Google Play’s rules, per Google’s app promotion requirements. If an app advertises an inappropriate use case, it could be booted from the app store.
Additionally, developers are responsible for safeguarding their apps against prompts that could manipulate their AI features into generating harmful or offensive content. Google says developers can use its closed testing feature to share early versions of their apps with users and gather feedback. The company strongly suggests that developers not only test before launch, but also document those tests, as Google could ask to review them in the future.
The company is also publishing other resources and best practices, like its People + AI Guidebook, which aims to support developers building AI apps.