Elon Musk’s X is the latest social network to roll out a feature that labels edited images, at least if a post by Elon Musk is to be believed. However, the company has not specified how it will make that determination, or whether the label will cover images edited with traditional tools such as Adobe’s Photoshop.
So far, the only details about the new feature have come from a cryptic post on X by Elon Musk saying, “Edited optics warning,” in which he reposted an announcement of a new X feature from the anonymous X account DogeDesigner. This account is often used as a proxy for introducing new X features, as Musk will repost from it to share news.
However, details on the new system are scarce. DogeDesigner’s post claimed that X’s new feature will make it “harder for legacy media groups to spread misleading clips or images.” The account also claimed that the feature is new to X.
Before it was acquired and renamed X, the company then known as Twitter flagged tweets containing manipulated, deceptively altered, or fabricated media as an alternative to taking them down. Its policy wasn’t limited to artificial intelligence, but included things like “selective editing or cropping or slowing down or overdubbing or manipulating subtitles,” the site’s then head of integrity, Yoel Roth, said in 2020.
It’s unclear whether X still follows the same rules or has significantly changed them to address AI. Its help documentation currently says it has a policy against sharing inauthentic media, but it’s rarely enforced, as the recent debacle over users sharing nonconsensual nude images showed. Besides, even the White House now shares doctored images.
Deciding what counts as “manipulated media” or an “AI image” can be nuanced.
Since X is a playground for political propaganda, both domestic and foreign, the company should document how it defines what is “edited,” or perhaps AI-generated or AI-manipulated. Users should also know whether there is some sort of dispute process beyond X’s crowdsourced Community Notes.
As Meta discovered when it introduced its AI image label in 2024, it’s easy for detection systems to get it wrong. In Meta’s case, the company was found to be falsely tagging real photos as “Made with AI,” even though they weren’t created using generative AI.
This is because AI features are increasingly being incorporated into creative tools used by photographers and graphic designers. (Apple’s new Creator Studio suite, released today, is a recent example.)
As it turned out, these tools confused Meta’s detection. In one example, Adobe’s cropping tool flattened images before saving them as JPEGs, which triggered Meta’s AI detector. In another, Adobe’s Generative Fill, used to remove small objects such as wrinkles on a shirt or an unwanted reflection, also caused images to be labeled “Made with AI,” even when AI played only a minor role in the edit.
Eventually, Meta updated its label to say “AI info” so it wouldn’t tag images as “Made with AI” when they weren’t.
Today, there is a standards body for verifying the authenticity and provenance of digital content, known as C2PA (the Coalition for Content Provenance and Authenticity). There are also related initiatives, such as the Content Authenticity Initiative (CAI) and Project Origin, focused on adding tamper-evident provenance metadata to media content.
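For a sense of how provenance metadata travels with a file: C2PA manifests are typically embedded in JPEGs inside APP11 marker segments as JUMBF boxes. The sketch below is a rough heuristic that scans a JPEG’s marker segments for a `c2pa` label in an APP11 segment. It is not a validator, says nothing about how X might implement detection, and the sample bytes are a simplified synthetic stand-in for a real manifest.

```python
import struct

def has_c2pa_segment(jpeg_bytes: bytes) -> bool:
    """Rough heuristic: scan JPEG APP11 (0xFFEB) segments for a 'c2pa' label.

    Not a C2PA validator; real manifests live in JUMBF boxes and should be
    parsed and cryptographically verified with a proper library.
    """
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not at a marker; bail out of the naive scan
        marker = jpeg_bytes[i + 1]
        if marker == 0xD9:  # EOI
            break
        # Segment length is big-endian and includes its own two bytes
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        segment = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:
            return True
        i += 2 + length
    return False

# Synthetic JPEG: SOI + one APP11 segment with a fake 'c2pa' payload + EOI
payload = b"JP\x00\x00c2pa"  # simplified stand-in for a JUMBF box
fake = (b"\xff\xd8"
        + b"\xff\xeb" + struct.pack(">H", len(payload) + 2) + payload
        + b"\xff\xd9")
print(has_c2pa_segment(fake))  # → True
```

A real check would use a C2PA SDK to parse the manifest and verify its signatures, since the mere presence of the bytes proves nothing about the file’s history.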
Apparently, X’s implementation will rely on some sort of AI content detection process, but X owner Elon Musk didn’t say what that is. Nor did he clarify whether he’s talking specifically about AI images, or about anything other than a photo uploaded to X directly from your smartphone camera. It’s also not yet clear whether the feature is brand new, as DogeDesigner claims.
X isn’t the only platform struggling with manipulated media. Besides Meta, TikTok also labels AI content. Streaming services such as Deezer and Spotify are scaling up initiatives to detect and flag AI-generated music. Google Photos uses C2PA to indicate how photos were made on its platform. Microsoft, the BBC, Adobe, Arm, Intel, Sony, OpenAI, and others sit on the C2PA steering committee, while many more companies have joined as members.
X is not currently listed among the members, although we have contacted C2PA to see if that has changed recently. X doesn’t usually respond to requests for comment, but we asked anyway.
