The Voice of Africa

Deepfake abuse is abuse: UNICEF warns of a fast-growing threat to children online

By Maxine Ansah


UNICEF has raised an urgent alarm over the rapid rise in AI-generated sexualised images involving children, warning that deepfake technology is increasingly being used to facilitate sexual exploitation and abuse in digital spaces.

In a strong statement, UNICEF said it is seeing growing volumes of sexualised images created using artificial intelligence, including cases where real photographs of children are digitally manipulated and sexualised without their consent.

Deepfakes, defined as images, videos or audio generated or altered using AI to appear real, are now being widely misused to produce child sexual abuse material. One of the most disturbing practices is so-called nudification, where AI tools digitally remove or alter clothing in photographs to fabricate nude or sexualised images of children.

New evidence highlights the scale of the threat. A joint study conducted by UNICEF, ECPAT International and INTERPOL across 11 countries found that at least 1.2 million children disclosed that their images had been manipulated into sexually explicit deepfakes in the past year alone. In some countries, this equates to one in 25 children, roughly one child in a typical classroom.

Children themselves are acutely aware of the risk. In several of the countries surveyed, up to two thirds of children said they worry that AI could be used to create fake sexual images or videos of them. UNICEF noted that concern levels vary significantly between countries, pointing to uneven awareness and protection measures.

UNICEF was unequivocal in its assessment. Sexualised images of children generated or manipulated using AI tools are child sexual abuse material. Deepfake abuse is abuse, and there is nothing fake about the harm it causes.

The organisation stressed that when a child’s image or identity is used, that child is directly victimised. Even in cases where no identifiable child can be traced, AI-generated child sexual abuse material still normalises sexual exploitation, fuels demand for abusive content, and makes it harder for law enforcement to identify and protect children who need help.

UNICEF acknowledged efforts by some AI developers who are embedding safety-by-design approaches and stronger guardrails into their systems. However, it warned that safeguards across the sector remain inconsistent, and that risks are heightened when generative AI tools are integrated directly into social media platforms where manipulated images can spread rapidly.

In response, UNICEF called for urgent action. Governments were urged to expand legal definitions of child sexual abuse material to explicitly include AI-generated content, and to criminalise its creation, possession, distribution and procurement. AI developers were called on to strengthen safety-by-design measures, while digital platforms were urged to prevent the circulation of abusive material, rather than relying on delayed removal after harm has already occurred.

UNICEF also emphasised the need for stronger content moderation and greater investment in detection technologies to ensure abusive material can be removed immediately, not days after a victim or their representative reports it.

The statement reflects positions set out in UNICEF’s Guidance on AI and Children 3.0, published in December 2025, and draws on findings from Disrupting Harm Phase 2, a research project led by UNICEF’s Office of Strategy and Evidence Innocenti, ECPAT International and INTERPOL, with funding from Safe Online. National reports from the study are expected to be released throughout 2026.

“The harm from deepfake abuse is real and urgent,” UNICEF said. “Children cannot wait for the law to catch up.”

For Africa, where digital access is expanding rapidly alongside youthful populations, the warning carries particular weight. As African countries build their digital futures, the challenge is not only to embrace innovation but to ensure that technology does not deepen harm against the most vulnerable. Protecting children online must be treated as core digital infrastructure, not an afterthought, if the continent’s digital growth is to be both inclusive and safe.

