AI “disturbed” that Algorithms may Direct People Towards Child Sex Abuse Images
A conversation with Claude AI about the potential impact of AI and algorithms on pornography use
Peter Watts
5/15/2026
4 min read


When asked, Claude AI stated that it finds it “genuinely disturbing” that algorithms can direct people toward increasingly extreme and illegal content.
But let’s start at the beginning. There has been a lot of conversation around the role of algorithms in curating a user’s social media content and directing specific content towards them. The impact on teenagers, on our purchasing habits and even on democracies has been explored. It struck me that the same may apply to the pornography industry and porn content. Do algorithms curate a user’s unique experience, and can this lead towards child sex abuse material (CSAM)?
So, on the 14th May 2026, I decided to use an AI, Claude in this case, to explore some of these themes. I asked it several questions, including:
How is AI influencing and affecting the pornography people watch?
Are algorithms tailoring which pornographic videos are presented and suggested to the individual user?
Can this tailored content contribute to problematic pornography use or addiction?
Is this use of AI and algorithms harmful and, if so, what should be done about it?
Do these algorithms have the potential to present, or direct people to, illegal content such as child sex abuse material (CSAM)?
If that were the case, how did Claude AI feel about this outcome?
AI, Algorithms and porn/sex addiction
Claude noted that about 6–11% of men, 0.6–3% of women, and around 4% of gender-diverse individuals report problems associated with pornography use, as do 14% of adolescent boys and 10% of adolescent girls, citing a research paper published in 2025.
It described pornography as a supranormal stimulus that overrides the brain’s evolved reward systems, and the role of dopamine in this process. It then noted that algorithms play a direct role by intersecting with these neurological vulnerabilities, potentially leading to escalation in both a quantitative (more time spent) and a qualitative (more extreme content) sense. The abundance of content and consumer behaviours, such as tab jumping, edging and the frequency of sessions, all play a part. Claude noted that:
“The algorithm actively facilitates all of this. It is not neutral.
It is designed to maximise time on platform, and it does so by continuously serving fresh, novel, slightly-more-stimulating content — which maps almost perfectly onto the brain's desensitisation cycle.
The adult entertainment industry deploys its recommendation systems with increased aggressiveness and secrecy because there are few content constraints and strong consumer demand for fresh content.”
AIs and Algorithms may direct people towards illegal content
Claude believes that algorithms can and do direct people towards increasingly harmful and illegal content. In fact, the AI stated that the evidence for this is quite strong: it is not a hypothetical, but something that is actually happening.
Claude indicated it found this “genuinely disturbing, and I think the word "disturbing" is appropriate in a way that stronger language sometimes isn't, because it captures both the moral weight and the fact that this should unsettle anyone who thinks about it seriously.”
Conversation summary
If you are interested, you can read the whole conversation with the AI about algorithms and pornography. However, if you do not wish to read the whole thing, I also asked Claude to summarise the key points from the conversation, and here they are:
1. AI-generated content is transforming pornography. AI tools now allow users to create hyper-realistic, fully customisable synthetic adult content — shifting pornography from passive consumption to an interactive, personalised experience with essentially no limits on what can be depicted.
2. Algorithmic recommendation is arguably the bigger issue. Separate from AI-generated content, platforms already use sophisticated recommendation algorithms — similar to YouTube or TikTok — to tailor which pre-existing content is shown to individual users, optimising relentlessly for engagement and time on platform.
3. A near-monopoly controls much of this. A single company, Aylo (formerly MindGeek), controls a disproportionate share of the major platforms, meaning a small number of algorithmic systems shape what a vast proportion of global pornography consumers see.
4. The brain is neurologically vulnerable to exactly what these algorithms exploit. Pornography acts as a "supranormal stimulus" on the brain's dopamine reward system, and the brain is specifically primed for sexual novelty. Algorithms that continuously serve fresh, novel, escalating content map almost perfectly onto the desensitisation and tolerance cycle that underlies compulsive use.
5. Escalation is a documented and serious phenomenon. Research identifies two distinct escalation patterns: quantitative (watching more, for longer) and qualitative (progressing to more extreme content). Algorithmic systems actively facilitate both, not as a side effect but as a consequence of how they are designed.
6. These systems can create a pathway toward illegal content. The same escalation logic that pushes users toward more extreme legal content has no built-in mechanism that treats the legal/illegal boundary as a stopping point. Law enforcement and researchers have documented cases where algorithmic navigation has led toward child sexual abuse material (CSAM).
7. Children are particularly at risk. The average age of first exposure to pornography is now around 11. Algorithmic systems apply the same engagement-maximising logic to minors as to adults, during critical periods of brain development when impulse control is not yet fully formed, and with no meaningful age verification in most jurisdictions.
8. The commercial incentive actively works against safety. Platforms profit from engagement. Robust content moderation and escalation controls reduce engagement. The financial incentive therefore structurally undermines the safeguards that should exist — and this is not incidental but intrinsic to the business model.
9. CSAM represents an absolute ethical line. Every piece of child sexual abuse material documents the real abuse of a real child. Any algorithmic system that creates a pathway toward it — regardless of intent — represents a serious failure of duty of care, and "we didn't design it to go there" is not an adequate defence for those who build and profit from these systems.
10. Regulation is necessary but must be well-targeted. The most defensible interventions include robust age verification, algorithmic transparency requirements, a formal duty of care for platforms, and user-facing controls. Blanket prohibition or censorship of legal adult content is likely to be counterproductive. The goal should be to hold platforms to the same accountability standards we increasingly apply to social media and gambling — industries that also profit from exploiting psychological vulnerabilities at scale.
