Rising Online Risks to Children: The Alarming Increase in Self-Generated Child Sexual Abuse Material (CSAM) and Risky Interactions with Adults

Children today are facing unprecedented online risks, as highlighted in a recent report from Thorn, a technology nonprofit. This article aims to shed light on the concerning rise in certain online threats to children, including the surge in self-generated child sexual abuse material (CSAM) and risky interactions with adults. Additionally, we will explore the challenges posed by novel technologies employed by online predators, as well as the vital role of hashing and matching technology in combating these risks. Lastly, we’ll introduce Safer, a proactive CSAM detection tool developed by Thorn, offering a comprehensive database of known CSAM hash values.

Increase in Self-Generated CSAM

One distressing trend that has emerged is the growing number of minors taking and sharing sexual images of themselves, whether consensually or under coercion. This behavior has significant implications for the well-being of these children, as it not only exposes them to exploitation but can also have a lasting impact on their mental and emotional health. There was a 9% increase in cases of self-generated CSAM from 2021 to 2022, underscoring the urgent need for intervention and support for vulnerable youth.

Risky Online Interactions with Adults

The Thorn report also reveals that children are reporting a rise in risky online interactions with adults. These interactions can range from inappropriate conversations to grooming for sexual exploitation. The dangers involved in such encounters cannot be overstated, as they expose children to manipulation, coercion, and potential harm. It is crucial for parents, educators, and law enforcement agencies to remain vigilant and take active steps to protect children from these dangerous online engagements.

Surge in Reports of Child Sexual Abuse Material (CSAM)

The statistics from the National Center for Missing and Exploited Children (NCMEC) paint a distressing picture. Over the past five years, NCMEC has witnessed a staggering 329% increase in reported CSAM files. In 2022 alone, they received a shocking 88.3 million CSAM files. These alarming numbers underscore the urgent need for concerted efforts by tech companies, law enforcement, and policymakers to combat the proliferation of child sexual exploitation material.

Novel Technologies Used by Online Predators

Online predators are continuously evolving their tactics to ensnare vulnerable children. One concerning development is the deployment of chatbots to scale their enticement. These chatbots simulate real conversations, targeting unsuspecting children and luring them into dangerous situations. The scalability and effectiveness of these tactics pose a significant challenge, necessitating innovative solutions to counteract and neutralize these threats.

Rise in Reports of Online Enticement

The NCMEC report also reveals an alarming 82% increase in reports of online enticement of children for sexual acts between 2021 and 2022. This surge in predatory behavior further emphasizes the pressing need for enhanced preventive measures and decisive action to protect children online. Efforts must focus not only on identifying and apprehending predators but also on empowering children with the knowledge and skills to recognize and respond to potential threats effectively.

The Importance of Hashing and Matching Technology

In the fight against online exploitation, hashing and matching technology plays a crucial role. Tech companies can leverage this technology to protect users and platforms by identifying and blocking content that matches known CSAM hash values. By comparing uploaded content against a comprehensive database of known CSAM hashes, potential instances of exploitation can be flagged and addressed proactively. This technology serves as a critical tool in preventing the dissemination of explicit material and the re-victimization of the children depicted in it.
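To make the matching process concrete, here is a minimal sketch of exact-match hashing in Python. The database contents and function names are hypothetical, and production systems such as Safer typically also use perceptual hashing so that resized or re-encoded copies of an image still match; this example shows only the simpler exact-hash lookup described above.

```python
import hashlib

# Hypothetical stand-in for a database of known-CSAM hash values.
# Real deployments query a shared repository of verified hashes
# rather than a hard-coded set.
KNOWN_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",  # placeholder entry (MD5 of b"hello")
}

def file_hash(data: bytes) -> str:
    """Compute an MD5 digest of uploaded content (exact-match hashing)."""
    return hashlib.md5(data).hexdigest()

def is_known_match(data: bytes, known_hashes: set[str] = KNOWN_HASHES) -> bool:
    """Return True if the content's hash appears in the known-hash set."""
    return file_hash(data) in known_hashes
```

Because only hash values are stored and compared, platforms can detect known material without retaining or redistributing the images themselves.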

Safer: A Proactive Tool for CSAM Detection

Thorn’s Safer is a remarkable tool designed to proactively detect CSAM by providing access to a vast database aggregating over 29 million known CSAM hash values. By automatically scanning and comparing uploaded content against this extensive repository, Safer can swiftly identify and flag potentially harmful material. This technology offers a powerful resource for tech companies, law enforcement agencies, and other stakeholders committed to combating the exploitation of children online.

The rise in self-generated CSAM and risky online interactions with adults highlights the urgent need for heightened awareness, intervention, and preventive measures to protect children in the digital realm. The alarming increase in reports of CSAM and online enticement further underscores the necessity for multi-stakeholder collaboration and decisive action. Harnessing the potential of innovative technologies and tools like hashing and matching, as demonstrated by Safer, can pave the way for safer online spaces for children. It is our collective responsibility to prioritize the well-being and safeguard the innocence of our children by effectively addressing and combating these growing online risks.
