Opinion

Action must be taken to prevent sexually explicit AI deepfakes

Earlier this year, a middle school in Beverly Hills discovered that a group of students was creating and sharing sexually explicit photos of their peers. While little information about the incident is available, as both the victims and perpetrators were minors, one of the students told the press that people were scared to come to school for fear that explicit photos would be created of them, too. The images in question, called "deepfakes," were AI-generated and nonconsensual. They were also part of a larger problem: the sexualization and dehumanization of young girls.

The term "deepfake" was coined in December 2017, when a Reddit user began using AI to replace the faces of the subjects of pornographic videos with the faces of female celebrities. At the time, creating a deepfake was a complicated process that demanded significant time and effort from the creator.

However, that is no longer the case. The rise of new AI tools like DeepFaceLab, Synthesia, and Reface allows anyone, anywhere, to create deepfakes. While the technology has been put to good use, like raising awareness about malaria, creating interactive history lessons, and even allowing doctors to run realistic medical tests without the need for human patients, it has also dramatically increased the number of pornographic deepfakes being created. More than 85,000 people, mainly women and girls, have already been hurt by deepfake technology, and that number doubles every six months.

According to Berkeley High School senior and Sexual Health Information From Teens (SHIFT) member Sasha Spanier, the prevalence of nonconsensual, sexually explicit deepfakes, along with the victim-blaming that often accompanies them, is frightening. "The widespread availability of these sexually explicit deepfakes just adds to the paranoia of what it feels like to be a woman in the modern age … It makes me sick to my stomach," Spanier said.

The nonconsensual creation of hyper-realistic, sexually explicit images is yet another way to dehumanize and objectify women, especially teenage girls. The number of children and teenagers victimized by deepfakes increased by 117 percent from 2022 to 2023 and has continued to grow exponentially as deepfake technology becomes more widespread. On top of that, there is a lack of awareness about the potential harms of deepfake technology that must be addressed: only 71 percent of the world's population even knows what a deepfake is.

Shockingly few BHS students are aware of the dangers deepfake technology poses, leaving them unable to defend themselves against it. BHS must work with programs like SHIFT and its Title IX office to educate students about the harms deepfake technology causes. Schools must also show less tolerance for deepfakes and make clear to everyone that what is happening is not okay.

BHS students echoed this sentiment. "I think schools need to take it more seriously … Same goes for social media: (they need to be) taking it very seriously and being very fast to act on it," said BHS freshman Eve Eyal.

Bullying through the internet and social media is not a new problem for schools, but it is one with very few success stories. While AI, deepfakes, and the issues that come with them are relatively new, schools have had to deal with cyberbullying ever since the creation of the internet. As of 2024, one in four middle and high school students had been cyberbullied in the past month, more than double the rate in 2015.

As social media becomes more popular, and ever-younger children gain access to it, cyberbullying will continue to worsen, and with it deepfakes, one of its most terrifying forms. Schools, especially middle and high schools, must take decisive action against cyberbullying and deepfakes by educating their students on how to protect themselves. However, schools can't do it all. Sure, access to deepfake websites and social media can be, and often is, blocked on school WiFi. But what about when students get home?

The creation of deepfakes is not just a school-wide problem, or even a nationwide one. It is a problem that affects everyone, everywhere, and the only people who can truly stop it are the creators of deepfake websites. Sites like DeepFaceLab, DeepSwap, and Synthesia must crack down on the pornographic content created with their tools, especially DeepFaceLab, which is reportedly used to create 95 percent of all deepfakes.

Until then, students, schools, and governments must work together to protect the victims of AI deepfakes, whether by passing laws, staging protests, or creating new school policies.