AI's growing presence in our day-to-day lives has taken on an eerie, dystopian feel. It has also made deepfakes a common part of our reality, and it can be difficult to tell what's real from what's fabricated. A podcaster from California turned to social media when she discovered her likeness was being used in an ad she never approved. Worse still, in the video, she watched herself using a product she has never touched.
Podcaster Arielle Lorre (@ariellelorre on TikTok) was stunned to find out that the skincare brand Skaind was using AI-generated deepfakes of her in its ads. She started receiving messages from fans about the ad, which depicted her answering interview questions, except it wasn't really Arielle. In her words, "The entire podcast interview of me promoting this product was digitally manipulated." As if things couldn't get any creepier or more realistic in the world of AI.
So, what did Arielle do after watching the disheartening video? She reached out to Skaind immediately to ask them to take it down, and she even went so far as to serve the brand with a cease-and-desist letter. However, the company stuck to its guns, claimed that it "used content through an artificial intelligence platform," and proceeded to block Arielle.
Arielle then reported the ad to Meta, thinking that would be the most effective route to get results. However, Meta declined to remove the video from the platform and acted as though its hands were tied. A frustrated Arielle said in her video, "Not only is it illegal, but it also dilutes my brand and affects the trust I've been building with my audience."
Fans in her comments section had a lot to say, especially about the use of AI. A dermatologist said, "This is happening to a lot of doctors on social media as well, using our images to advertise scam products. It's a huge problem." Another suggested, "Plz sue. This is going to be a huge issue in the future & we need as much case precedent as possible."
It turns out the AI-generated ad gets even worse, because a second person recognized himself in the video. Vegan podcaster Rich Roll realized he was the man "interviewing" Arielle in the deepfake. He said, "I'm the guy in the video—but it's not me. Arielle and I have never met. The account behind the video blocked me as well. It's a nightmare."
Imagine going through this total breach of privacy, knowing your exact likeness is being used to promote a product you know nothing about and want nothing to do with. Being in that situation can leave you feeling helpless, and it shows just how out of control AI has gotten. There is no doubt that many lawsuits will stem from scenarios like this in the future, and I hope the humans win.