Report: Apple Significantly Undercounts Child Sex Abuse Materials on iCloud and iMessage

Bear and Terry

After years of controversy over plans to scan iCloud for child sexual abuse materials (CSAM), Apple abandoned those plans last year. Now, child safety experts accuse the tech giant of failing to flag CSAM on its services, including iCloud, iMessage, and FaceTime. They also allege that Apple is not reporting all the CSAM it does flag.

The UK’s National Society for the Prevention of Cruelty to Children (NSPCC) shared data with The Guardian showing that Apple is “vastly undercounting” CSAM found on its services globally. According to the NSPCC, police in the UK investigated more CSAM cases in 2023 alone than Apple reported worldwide for the entire year. Between April 2022 and March 2023, Apple was implicated in 337 recorded offenses involving child abuse images in England and Wales. Yet in 2023, Apple reported only 267 instances of CSAM to the National Center for Missing & Exploited Children (NCMEC), a figure meant to cover all the CSAM it detected across its platforms worldwide.

US tech companies are required to report CSAM to NCMEC when they find it, yet while Apple reports a few hundred cases annually, peers like Meta and Google report millions. Experts told The Guardian that Apple is “clearly” undercounting CSAM on its platforms.

Richard Collard, NSPCC’s head of child safety online policy, told The Guardian that Apple’s child safety efforts need major improvements. He highlighted a concerning discrepancy between the number of UK child abuse image crimes on Apple’s services and the global reports of abuse content made to authorities. Collard believes Apple lags behind its peers in tackling child sexual abuse and urges all tech firms to invest in safety ahead of the UK’s Online Safety Act rollout.

Other child safety experts share Collard’s concerns. Sarah Gardner, CEO of the Los Angeles-based Heat Initiative, considers Apple’s platforms a “black hole” obscuring CSAM. She warns that Apple’s AI integration could exacerbate the problem, making it easier to spread AI-generated CSAM with less enforcement. Gardner also criticized Apple for not investing in trust and safety teams to handle this issue, even as it rushes to introduce advanced AI features like ChatGPT integration into Siri, iOS, and macOS.


Spiking Sextortion and Surging AI-Generated CSAM

Last fall, Apple shifted its focus from detecting CSAM to supporting victims. Meanwhile, every state attorney general in the US urged Congress to study the harm caused by AI-generated CSAM.

Despite some legislative efforts, lawmakers have been slow to address the problem, worrying child safety experts. By January, US law enforcement warned of a “flood” of AI-generated CSAM, complicating real-world child abuse investigations. Human Rights Watch (HRW) researchers found that popular AI models were being trained on real photos of kids, even with strict privacy settings, increasing the likelihood that AI-generated CSAM might resemble real children.

The FBI reported a significant rise in child sextortion cases, where children and teens are coerced into sending explicit images online. This increase in sextortion raises the risk of CSAM spreading, with explicit images being used to generate more harmful content. HRW researchers fear that AI advancements could make it even more dangerous for kids to share content online, as explicit deepfakes could target any child.

The harms of AI-generated CSAM are real for the victims. Actual child abuse victims risk being re-traumatized as their abuse materials are repurposed. Victims of AI-generated CSAM report anxiety, depression, and feeling unsafe at school. The line between real CSAM and AI-generated CSAM has blurred, with both types causing significant trauma.

In Spain, a youth court sentenced 15 teenagers for creating naked AI images of their classmates, convicting them on charges of creating child sex abuse images. In the US, the Department of Justice declared that “CSAM generated by AI is still CSAM,” following the arrest of a man who used Stable Diffusion to create thousands of realistic images of minors and distributed them online.

As Apple faces backlash for not doing enough against CSAM, other tech companies are being urged to update their policies to address new AI threats. Meta’s oversight board, for example, is reviewing cases involving AI-generated sexualized images of female celebrities to assess the effectiveness of Meta’s policies and enforcement practices. The board has not yet reached a decision, a sign that without clear legal guidance, even experts struggle to address the harms of explicit AI-generated images spreading on platforms.
