AI-generated nude photos of Taylor Swift went viral on X, evading moderation and sparking controversy.

Nonconsensual sexually explicit deepfakes of Taylor Swift went viral on X on Wednesday, garnering more than 27 million views and more than 260,000 likes in roughly 19 hours before the account that posted the photos was suspended.

Deepfakes depicting Swift nude and in sexual scenarios continue to circulate on X, including reposts of the most popular images. Such pictures can be created with AI tools that generate entirely new, synthetic images, or by taking a real photo and “undressing” it with AI algorithms.

The images’ origin is unknown, but a watermark suggests they came from a years-old website notorious for publishing fake nude photos of celebrities. The website has a section labeled “AI deepfake.”

Reality Defender, an AI-detection software startup, analyzed the images and concluded that they were most likely created with AI technology.

The photos’ widespread circulation over nearly a full day underscores the alarming spread of AI-generated material and disinformation online. Even as the problem has intensified in recent months, tech platforms such as X, which have built their own generative-AI tools, have yet to deploy or discuss methods for identifying generative-AI content that violates their policies.

The most viewed and shared Swift deepfakes depict her nude in a football stadium. Swift has faced months of sexist abuse for attending NFL games to support her boyfriend, Kansas City Chiefs star Travis Kelce. Swift addressed the backlash in an interview with Time, saying, “I have no idea if I’m being shown too much and pissing off a few dads, Brads, and Chads.”

X did not immediately respond to a request for comment. A representative for Swift declined to comment on the record.

X prohibits manipulated media that could harm specific people, but it has repeatedly been slow to address, or has failed to address, sexually explicit deepfakes on the platform. In early January, a 17-year-old Marvel actor spoke out after discovering sexually explicit deepfakes of herself on X that she was unable to get removed. As of Thursday, NBC News had found similar material on X. In June 2023, an NBC News investigation found nonconsensual sexually explicit deepfakes of TikTok stars spreading on the app. After X was approached for comment, only a portion of the content was removed.

According to several Swift fans, the removal of the most prominent images was not the work of Swift or X; rather, it was the result of a mass-reporting campaign.

After “Taylor Swift AI” began trending on X, Swift’s fans flooded the hashtag with complimentary posts about her, according to Blackbird.AI, a startup that uses AI technology to protect organizations from narrative-driven cyberattacks. “Protect Taylor Swift” also trended on Thursday.

One of the people who claimed responsibility for the reporting effort provided NBC News with two screenshots of notices she received from X indicating that her reports had led to the suspension of two accounts that posted Swift deepfakes for violating X’s “abusive behavior” policy.

The woman who provided the screenshots, who spoke via direct message on the condition of anonymity, said she is increasingly concerned about the effects AI deepfake technology is already having on ordinary women and girls.

“They don’t take our suffering seriously, so now it’s up to us to mass report these people and have them suspended,” the woman who reported the Swift deepfakes said in a direct message.

In the United States, scores of high-school-aged girls have reported being targeted with deepfakes. There is currently no federal law in the United States governing the creation and distribution of nonconsensual sexually explicit deepfakes.

Rep. Joe Morelle, D-N.Y., who introduced a bill in May 2023 to outlaw nonconsensual sexually explicit deepfakes at the federal level, wrote on X about the Swift deepfakes: “Yet another example of the destruction deepfakes cause.” The bill has not moved forward, even though a prominent young deepfake victim advocated for it in early January.

Carrie Goldberg, a lawyer who has represented victims of deepfakes and other forms of nonconsensual sexually explicit material for more than a decade, said that even tech companies and platforms with anti-deepfake policies fail to prevent such content from being posted and spreading quickly across their services.

“Most human beings don’t have millions of fans who will go to bat for them if they’ve been victimized,” Goldberg told NBC News. “Even those platforms that do have deepfake policies, they’re not great at enforcing them, or especially if content has spread very quickly, it becomes the typical whack-a-mole scenario.”

“Just as technology is creating the problem, it’s also the obvious solution,” she said. “AI on these platforms can recognize and delete these photos. If a single picture is becoming more popular, it may be watermarked and recognized. So there is no excuse.”
