Google will require Android apps to let users report scammy or inappropriate AI-generated content if they want to stay on the Google Play Store, according to a policy update, amid fears that AI-based apps are being exploited by users to create NSFW and non-consensual content.
Starting January 31, apps on the Google Play Store that allow users to create AI content must include features that let users flag violative content to developers, under the new policy.
App developers will use the user reports to filter and moderate content on their apps themselves, the guideline says.
Text-based generative AI chatbots and AI image generators fall within the policy, while apps that merely host AI-generated content, without letting users create it, are exempt.
Examples of violative AI-generated content cited by Google include non-consensual sexual deepfakes (highly realistic fake AI images of real people), voice or video recordings of real people used to conduct scams, and false or deceptive election-related content.
Google also announced last month it will soon require election advertisers to make “clear and conspicuous” disclosures for advertisements containing AI-generated content.
Eric Schmidt, the former CEO of Google parent Alphabet, warned in June that the “2024 elections are going to be a mess because social media is not protecting us from false generative AI.” Schmidt noted cuts to content moderation roles at companies like Meta and X, formerly known as Twitter, were a “big issue.”