In October 2023, Francesca Mani, a 14-year-old student at Westfield High School in New Jersey, became the victim of an emerging form of cybercrime involving artificial intelligence (AI) and deepfake technology. She learned of the incident only when school authorities summoned her to the principal's office, where she was told that a boy at the school had reportedly used AI to generate nude pictures of her and a number of other girls.
The perpetrator reportedly took images the girls had posted on social media sites such as Instagram and manipulated them into realistic-looking nude photos. The deepfake images were then shared among students via Snapchat. Francesca was appalled and saddened by the news, but seeing her classmates' reactions, some crying, some laughing, her sadness gave way to anger and determination.
"And that's when I realized I should stop crying and that I should be mad because this is unacceptable," Francesca told Good Morning America (GMA).
Motivated by a desire to spare others the humiliation they had experienced, Francesca and her mother, Dorota Mani, began lobbying for change.
Legal and institutional responses
The case raises vital questions about the legal landscape surrounding AI-generated content. New Jersey has stringent laws against sexually explicit depictions of minors, but AI-created images occupy a gray area: they may not fall under traditional legal definitions of child pornography. The case underscores the need for further legislation that explicitly addresses deepfake technology and the abuses it enables.
Mary Anne Franks, a law professor at George Washington University, told GMA:
“The fact that the perpetrators are minors might mean that there's leniency here or certain types of attempts to keep them out of the incarceration system, but that wouldn't mean that they could not be punished at all.”
After the incident, Francesca's mother filed a police report and informed the school authorities. Despite the seriousness of what had happened, the students involved faced few consequences: one boy received a temporary suspension and was later readmitted. This lack of accountability left Francesca feeling unsafe and uncomfortable around her peers.
"I just feel very uncomfortable and very scared. A lot of other girls agree with me. We just don't think it's right that he's walking the hallways," Francesca added.
In the wake of the incident, Francesca and her mother have taken their fight well beyond their community, lobbying state and federal lawmakers for updated laws and school policies on AI technology. According to Time, their advocacy has focused on stronger legal protections against AI-generated harassment, comprehensive school policies for handling such incidents, and greater awareness of consent and digital privacy.
Francesca's activism struck a chord with many young people who felt similarly helpless. She received messages from girls around the world who shared the same experience: schools had not done enough to protect them against the misuse of AI.
“Nudify” websites on the rise
The problem has been exacerbated by the proliferation of so-called "nudify" websites, which allow users to upload images and receive realistic nude versions in return. These sites are widely accessible and have been implicated in numerous incidents at schools across the United States. Reports indicate that the platforms are not hidden on the dark web but openly advertised online, making them particularly dangerous to minors.
60 Minutes found nearly 30 similar cases in U.S. schools over the preceding 20 months, along with others abroad.