For 86 years, Glamour has been advocating for fairer conditions for women, from our Pass Paid Leave campaign in the US to lobbying the UK government to criminalize the creation of non-consensual deepfake porn. Now, we're proud to join the United Nations’ 16 Days of Activism to call for basic human rights for women in the digital age. Each of our global titles is responding to the UN's 2025 theme—"Unite Against Digital Violence to Women"—with five essays. Below, read on for a look inside American women's fight to legislate against deepfakes.
Jessica Guistolise is a lot like you. She’s got plans with her sister this weekend. She’s teaching her new puppy how to play fetch. She has a million and one things on her calendar. And even though her story about deepfake porn might not seem relatable to you right now, it should matter. Because it could easily happen to any of us.
In June 2024, Guistolise was having dinner with friends when she received a call that changed everything. It was Samantha*, the ex-wife of Guistolise’s friend Ben, whom she had known for many years. Samantha said she needed to meet. Immediately.
When they spoke, Samantha shared with Guistolise that Ben had secretly been making deepfake pornographic images and videos of women, including Guistolise, using a “nudify” app, which takes real images and turns them into naked photos or videos using AI. There weren’t just a few images—there were hundreds, depicting 80 different women. And that’s just what Samantha was able to catalog and capture of what Ben had made over the course of a single week.
"It was me, but it wasn’t me,“ Guistolise tells Glamour about how she felt when she first saw the images. “It was so bizarre and dystopian and confusing.”
After learning about the deepfakes, Guistolise and her friends got to work identifying as many of the women as they could. Some of those women, in turn, helped identify others.
Each deepfake was created using screengrabs from the woman’s social media channels. Nudify apps like the one Ben used typically need only one image of a person’s face to create a deepfake. For Guistolise, the source images came from a photo taken at her goddaughter’s graduation and another from a family vacation, both of which she had posted to Facebook. This is a problem faced mostly by women, as nudify apps aren’t trained to create images of the male form.
A year later, the women still haven’t been able to identify every single person in the images. They also haven’t seen any justice—the police officers they contacted said there is little they can do, even though Ben admitted to creating the images. And that’s because the laws around deepfake porn remain murky at best.
“This was not really something that had been on our radar,” says Omny Miranda Martone, the founder and CEO of the Sexual Violence Prevention Association (SVPA). “We’d been working on child abuse, sexual violence, and rape. We had not been working on any digital sexual violence until about three years ago.”
That changed when a victim came to them, sharing a story of how a rejected date at the gym turned into a man creating a deepfake of her and sharing it with her entire fitness community. When Martone asked the organization’s lawyers if there was more the survivor could do to protect herself, their answer was a resounding, “Not really.”
According to the lawyers, the act of creating that image and showing it to others didn’t constitute defamation because the man hadn’t been trying to pass it off as real. The victim wasn’t underage. And there was no law yet requiring social media platforms to take it down if he decided to distribute it. Still, SVPA knew that this was sexual violence. So Martone, their organization, and a few lawmakers got to work.
In May, President Donald Trump signed the Take It Down Act, a bipartisan bill introduced by senators Ted Cruz (R-Tex.) and Amy Klobuchar (D-Minn.) and endorsed by organizations like Martone’s and the Rape, Abuse & Incest National Network (RAINN). The bill “prohibits the nonconsensual online publication of intimate visual depictions of individuals, both authentic and computer-generated,” and, critically, will require platforms like Instagram and Meta to take down the images within 48 hours of receiving notice of their existence.
“That one was a really big win,” Martone says of the bill, which goes into effect in 2026. Stefan Turkheimer, the vice president for public policy at RAINN, notes that the law protects everybody, whether the images are authentic or fake. It also doesn’t matter whether they were taken consensually or nonconsensually, as long as they were shared nonconsensually.
But this is where things get complicated: This law protects people against the dissemination of deepfakes or intimate images shared without their consent. There is little to nothing someone can do about a person—be it someone they know or a total stranger—making and keeping the deepfakes for themselves.
“It’s not enough,” says Guistolise of the Take It Down Act. While it’s certainly a step forward, Guistolise says it still requires people to find the content, which she likens to trying to “find a needle in a needle stack.” If and when victims do find it and report it, they must then wait up to 48 hours for it to be removed, and they are revictimized in the process.
“Searching for your own fake porn is not something, mental-health-wise, I’m willing to do,” Guistolise says. And again, this only applies if the perpetrator distributes the images in the first place. What Guistolise and many advocates want instead is to make the technology used to create these deepfake images illegal altogether.
“There are all kinds of things that have been said about it, like, ‘We don’t wanna block innovation,’” Guistolise says. “But women’s and children’s bodies are being sacrificed on the altar of capitalism. This is not innovation. This is exploitation. There is no reason for it to exist other than making money off of apps.”
It’s an issue that is absolutely exploding. In 2023, Time reported on an analysis by Graphika that found 24 million people had visited “undressing websites” in the month of September alone. These nudify apps have also been advertised everywhere from Google to Reddit and across social platforms. In July, Wired reported that the apps are worth about $36 million thanks to paid users and to free users, who pony up personal data that is later sold to the highest bidder.
There is, however, a little hope on the horizon. Representative Alexandria Ocasio-Cortez (D-N.Y.), along with Representative Laurel Lee (R-Fla.), Senator Richard J. Durbin (D-Ill.), and Senator Lindsey Graham (R-S.C.), has reintroduced another bill known as the Disrupt Explicit Forged Images and Nonconsensual Edits Act (DEFIANCE Act). If passed, it would give victims the right to bring a civil action against “individuals who knowingly produce, distribute, solicit, and receive or possess with the intent to distribute nonconsensual sexually explicit digital forgeries.” This means victims would have the right to sue for damages; the bill is supported by both RAINN and the SVPA.
“We are reintroducing the DEFIANCE Act to grant survivors and victims of nonconsensual deepfake pornography the legal right to pursue justice,” Ocasio-Cortez said in a statement. “I am proud to lead this legislation with Representative Lee, and senators Durbin and Graham, to provide victims with the federal protections they deserve.”
Elsewhere in the world, nations like Denmark are hoping to help protect people by passing legislation that gives everyone the copyright to their own likeness, including their voice. Our counterparts at Glamour UK lobbied tirelessly to change the law in the United Kingdom; it's now a criminal offense to create sexually explicit digital forgeries.
In Minnesota, Guistolise and the women in her group are working alongside State Senator Erin Maye Quade and the state’s Senate Judiciary and Public Safety Committee on a potential bill that would outlaw nudification, requiring AI companies to disable the function that allows their tools to create the images or face fines of up to $500,000 for each nonconsensual deepfake.
“For these AI-generated photos and videos, the harm begins at creation,” Maye Quade told MPR News. “Dissemination currently in Minnesota of nonconsensual, sexual deepfakes is illegal. But they can download these apps on their phones, and they’re doing that. They’re nudifying their teachers, their classmates, their siblings, friends.”
These laws, however common sense they may feel, will still face an uphill battle. In May the Elon Musk–owned platform X sued the state of Minnesota over its law banning the creation of deepfakes to influence an election, which it said violated free speech. In August, Musk won a similar lawsuit against the state of California for its deepfake ban.
For Guistolise, the ordeal has caused immeasurable pain. She’s lost trust in others. She’s afraid of how this may affect her future and her career. However, there is a “next” for her. She gets to go on being the sister and friend she’s always been. She’s excited to go to work tomorrow. She’s training her pit bull puppy, whom she appropriately named Olivia Benson, to give a high five. And despite it all, “I love humans,” she says, before pausing. “I guess I still do.”
Some Practical Steps to Take If You're a Victim of Deepfakes
It’s impossible to measure the toll and reach of deepfakes, as apps allow users to create them in mere moments. However, according to the experts we spoke to for this piece, there are practical steps to take if you find out you’re a victim.
“I recommend that somebody calls a friend to help them with this process,” Martone says, noting that it can be difficult for people to view the images over and over alone.
If you don’t feel comfortable talking with a friend or loved one, you can call RAINN’s National Sexual Assault Hotline at 800-656-HOPE (4673) or chat with RAINN online.
If you do want to move forward with any potential legal action, Martone recommends having a friend or loved one document all the instances of the deepfake so you have them easily available for lawyers or the police. (The one caveat is if the victim is underage: Do not screenshot any content. Instead, take your phone directly to your local police station for them to catalog it.)
The next step is to alert any social platforms where the deepfake may have been distributed. On Instagram, for example, you click on the image, then Report, then False Information, then Digitally Altered or Created.
If you’re comfortable telling work or school, or think your deepfake may spread to these places, it’s a good idea to speak to either HR or an administrator (especially if you’re the parent of an underage child).
Not all AI is bad. Take BitMind, for example, which specializes in “detecting AI-generated and synthetic media.” Its founder, Ken Jon Miyachi, tells Glamour that the program can identify both synthetic and semisynthetic media that may be hard for humans to detect, which can help people understand whether what they are seeing is real. Here are more expert tips on spotting misinformation.
Meghan Cutter, the chief of victim services at RAINN, isn’t just looking to policy makers for change, but to all of us as well. “How do we as a society have conversations about sexual violence and different forms of sexual violence, and how we can make communities safer places for survivors to speak up and ask for help and identify that they might need support?” she says. “[We want to] create that awareness, so that when this does happen, people know this isn’t okay, and there’s something that I can do.”
As Cutter says, “Just because someone hasn’t actually physically touched you, or maybe the image is your face, but not your body, that doesn’t mean this isn’t a form of violence. This is a form of assault. It is a form of sexual violence. I think it’s really important to be explicit about that, to help survivors understand that there are options available to them and to have words for their experience.”
*Editor’s note: Some names in this story have been changed.
