UNICEF: At least 1.2 million children affected by AI-generated sexualized images in the past year
A study conducted across 11 countries has revealed that at least 1.2 million children reported being affected in the past year by the manipulation of their images into explicit sexual "deepfakes".
"In some countries, this figure represents 1 in every 25 children, equating to one child in a typical classroom," UNICEF stated in a press release, emphasizing that abuses perpetrated through AI-generated images "are still abuses."
The data comes from the second phase of the Disrupting Harm project, a research initiative led by Innocenti, UNICEF’s Office of Strategy and Data, ECPAT International, and Interpol, and funded by Safe Online.
This project examines how digital technologies can facilitate child sexual exploitation and abuse, while generating empirical data to strengthen national systems, policies, and responses to this complex issue.
"As this phase develops, national reports with findings from each country will be published throughout 2026," the organization explained.
The estimates presented in this initial report are based on nationally representative household surveys conducted by UNICEF and Ipsos in 11 countries.
The research was carried out in countries representing diverse regional contexts: "Each survey included one child aged between 12 and 17 and a mother, father, or caregiver, using a sampling design aimed at achieving broad national coverage (91%-100%)," they pointed out.
Given this background, UNICEF warned that the harm inflicted by deepfake abuse "is real and requires immediate action. Children cannot wait for legislation to catch up."
"We must be clear: sexualized images of minors created or manipulated by AI tools constitute depictions of child sexual abuse. Abuses committed through deepfakes remain abuses, and although the images may be false, the harm they cause is undeniably real," UNICEF representatives emphasized.
"Using a child’s image or identity directly makes them a victim. Even if no identifiable victim exists, AI-generated material depicting child sexual abuse normalizes child sexual exploitation, fuels demand for abusive content, and poses significant barriers for law enforcement in identifying and protecting children in need of help," the UN agency added.
Children Are Aware of the Danger
A crucial finding of the study is that children themselves are keenly aware of this danger.
"In some of the study countries, up to two-thirds of children expressed concern that AI could be used to fabricate explicit images or videos of them. Levels of concern vary widely between countries, highlighting the urgent need for increased awareness and for preventive and protective measures," UNICEF stated.
Urgent Measures Needed
To address this situation, UNICEF issued an urgent call to authorities in all countries to implement the following actions:
- All governments should broaden the definitions of what constitutes images of child sexual abuse to include AI-generated content, penalizing its creation, acquisition, possession, and distribution.
- AI developers must employ security-focused approaches from the design stage and implement robust safeguards to prevent the misuse of AI models.
- Digital companies must proactively prevent the circulation of images depicting child sexual abuse, rather than merely removing them after they have been shared, by investing in detection technologies that strengthen content moderation, so that such material is taken down immediately instead of days after a victim or their representative reports it.