For all the pomp and circumstance surrounding AI innovation over the past few years, its meteoric rise in popularity has been accompanied by just as many pitfalls, some potential, some undeniable. From environmental impact to job loss, AI has opened various cans of worms the industry has yet to close.
Among these downsides is the rise of deepfakes: AI-generated videos or images that replicate someone’s likeness without their permission. Think of it as a more extreme version of comment catfishing. For instance, as the folks over at OnlineSportsBetting.net note in their nuts-and-bolts Lucky Rebel review, that service is prone to receiving contrived negative reviews mass-produced by other “sites promoting licensed sportsbooks that want to steer players away” from competitors.
Deepfakes are even more intricate. AI-generated images and videos can use a person’s likeness (and even voice) to sell products and spread disinformation. The practice has run rampant on social media platforms, including TikTok, X and Instagram. Naturally, the primary targets of these deepfakes are celebrities. Whether internet trolls are trying to disseminate inappropriate material, profit off an A-lister’s likeness or both, such incidents have become increasingly common.
The issue is so problematic, and so prevalent, that certain celebs have used their platforms to shine a spotlight on it. Below are just a few of the most prominent examples.
Tom Hanks Did NOT Promote a Random Dental Plan
Just as generative AI video began to take off a couple of years ago, Tom Hanks called out a company for using a deepfake of him…promoting some random dental plan.
Yes, you are reading this correctly.
“BEWARE!! There’s a video out there promoting some dental plan with an AI version of me,” the iconic actor posted on Instagram in response. “I have nothing to do with it.”
Welcome to contemporary image-and-likeness issues. Previously, brands were dinged by celebrities and companies for using their likenesses in still promotional materials or on products. Now, however, the problem has expanded to include videos realistic enough to fool people into thinking one of the greatest actors of his generation is out here peddling random orthodontia services.
Scarlett Johansson Had to Sue an AI Company

“Scarlett Johansson” Licensed Under CC BY-NC-ND 4.0
Don’t make the mistake of thinking that only individuals and companies selling physical products or in-person services try to capitalize on fake celebrity endorsements. AI companies looking to promote their tech’s ability to generate deepfakes have followed this path as well.
Just ask Scarlett Johansson.
“In November last year, the Avengers: Endgame actress sued the company Lisa AI after it created a promotional video using her likeness and voice without her permission,” writes Sarah Keenlyside of Style. “Seemingly speaking from the set of Marvel’s Black Widow, Johansson is shown explaining the benefits of the company’s avatar app.”
If you are wondering why there was a collective fist pump from plenty of actors, writers, artists and other creatives when OpenAI shut down its generative video platform Sora, this is one of the reasons. Yes, that service in particular was burning through money in an attempt to meet a user demand that didn’t exist. But the ease with which people could spit out videos of real people saying and doing things they had never said or done was alarming. It felt like a stream of lawsuits waiting to happen, kind of like the one Johansson herself filed.
Taylor Swift Deals with the Seediest Side of Deepfakes
Back when Taylor Swift’s relationship with NFL tight end Travis Kelce was just going public, an anonymous X user released sexually explicit deepfakes of the pop icon onto the platform, presumably generated with the company’s AI chatbot, Grok.
Though Swift herself didn’t draw further attention to them with a social media post of her own, she and her team were apparently weighing legal action. “These fake AI-generated images are abusive, offensive, exploitative and done without Taylor’s consent and/or knowledge,” a source told The Daily Mail. “The door needs to be shut on this.”
This is among the scandals that have sparked the most demands for better, more restrictive AI guardrails. The images of Swift in question were viewed over 45 million times before X removed them. There is no putting this genie back in the bottle with those kinds of engagement numbers.
Something clearly must be done. Here’s hoping Swift follows Johansson’s lead and sues. And then wins. Here’s also hoping instances like this do not become a normal occurrence.
