Internet Watch Foundation (IWF)

Non-profit Organizations

Cambridge, Cambridgeshire · 20,430 followers

Leading tech charity working globally to eliminate child sexual abuse images & videos from the internet.

About us

Protecting children is at the heart of everything we do. For over 25 years, since the early days of the internet, our job has been to help child victims of sexual abuse by hunting down and removing any online record of the abuse.

How we do this:

Tech-for-good. We build cutting-edge tech tools designed to make it easier to identify and remove online images and videos of child sexual abuse. In short, tech to protect kids.

Our team of human analysts. Tech companies and law enforcement worldwide trust the assessments, experience and knowledge of our extraordinary team of people.

Working together. With international partners in government, law enforcement, reporting hotlines, charities and the tech community, we work to stop illegal images of children being circulated again and again. We share vital information that could lead to the rescue of a child from terrible abuse.

IWF Hotline. This gives people a safe and anonymous place to report suspected online images and videos. When we started in 1996, 18 per cent of child sexual abuse imagery online was hosted in the UK. Today, thanks to our Hotline, it’s less than one per cent. We’re proud of that.

Website
https://www.iwf.org.uk/
Industry
Non-profit Organizations
Company size
51-200 employees
Headquarters
Cambridge, Cambridgeshire
Type
Nonprofit
Founded
1996
Specialties
tech, AI, machine learning, child protection, and online protection

Locations

  • Primary

    Vision Park, Chivers Way

    Histon

    Cambridge, Cambridgeshire CB24 9ZR, GB

Updates

  • A British schoolgirl whose extreme sexual abuse was recorded and spread across the internet for years is now getting support after one of our analysts identified her, thanks to images of her school uniform. We're the only non-law enforcement body in the world with access to the Government’s CAID (Child Abuse Image Database), a secure archive of child sexual abuse imagery acquired by UK police and the National Crime Agency (NCA). We have a dedicated Taskforce team that painstakingly grades, assesses, and assigns digital fingerprints (or hashes) to imagery to prevent further distribution online. This difficult but important work is vital to making sure survivors do not continue to be revictimised by the repeated sharing of their sexual abuse and can begin to recover free of the spectre of knowing their images and videos may still be out there. Read more at https://lnkd.in/equyQ-3e and support our work.

  • From this Friday, the EU becomes the only jurisdiction in the world where technology companies lack legal certainty to detect child sexual abuse material on their platforms. Last week, the European Parliament voted not to extend the temporary law that provided that legal basis. The consequences will be stark: fewer children safeguarded, fewer perpetrators held accountable, and offenders displaced to harder-to-reach environments or re-establishing themselves on mainstream platforms. Today, the IWF – together with technology companies and platform owners – has called on EU legislators to urgently progress a permanent Child Sexual Abuse Regulation. The EU already hosts more child sexual abuse material than any other region in the world: in 2024, 62% of confirmed child sexual abuse webpages were traced to EU hosting services. Without a permanent framework, that will get worse, and the effects will not stop at EU borders. As our CEO Kerry Smith puts it: "the EU has opened the door for predators to target children without fear of reprisal. That door must be closed." We're calling on EU lawmakers to act. Read our full statement. Link in the comment section.

  • Harm without limits: AI child sexual abuse material through the eyes of our Analysts – a new report from the IWF. AI revictimises victims and survivors, perpetuates violence against women and girls, normalises sexual violence against children and causes operational harm to child protection systems. Whether images are created through traditional means or are AI-generated, the effect on children is the same: ongoing exploitation, loss of control and repeated trauma. We stand ready to collaborate with researchers, institutions, industry partners and other organisations to provide a strong, independent auditing function throughout every phase of tech development. Read the full report at iwf.org.uk/aireport. Get in touch for support. #AIAbuse #AIReport #DigitalViolence #ChildSafety

  • Last year, we took action on 312,030 reports where analysts confirmed the presence of child sexual abuse material. Our URL List is a vital tool in this battle. The tech companies that use our List to protect customers, staff and services trust our assessments, experience and knowledge. They’re partners in our mission to defend children online. Learn more about our tech-for-good and find out how we could help you: https://lnkd.in/dA2Zz5iq.

  • ⚠️ The scale and speed of AI child sexual abuse material online risk overwhelming already-stretched reporting mechanisms and law enforcement capacity. In 2025, our team identified 3,440 AI-generated child sexual abuse videos, an increase of 26,362% compared with the previous year. Of these, 65% (2,230 videos) were classified as the most serious Category A, encompassing offences such as rape, sexual torture, and bestiality. By comparison, only 43% of confirmed non-AI criminal videos were Category A. AI is enabling the creation of material that reflects offenders’ most extreme sexual interests with unprecedented speed and accessibility. We’re calling on the UK Government to introduce further measures to prevent AI technology from being exploited to create child sexual abuse material, in addition to the successful passage of measures in the Crime and Policing Bill: 1️⃣ Introduce and implement a ban on nudify apps and tools. 2️⃣ Ensure all AI chatbots are adequately regulated under the Online Safety Act. 3️⃣ Bring forward an AI Bill to ensure AI systems are safe by design, including stronger safeguards against AI-generated child sexual abuse material. Read more and share with your networks for awareness. Link in the comment section. #AIAbuse #AIReport #DigitalViolence #ChildSafety

  • Through partnership with the Internet Watch Foundation, organisations like yours are: ✅ Driving global action ✅ Supporting critical technology and research ✅ Protecting children every single day Our partners are more than supporters - they are changemakers. Learn more about our work in partnership and join us in our mission to eradicate child sexual abuse content online: https://lnkd.in/eqDcV-Vu.

  • Internet Watch Foundation (IWF) reposted this

    Catherine McShane, Internet Watch Foundation

    “Advances in technology should never come at the expense of a child’s safety and wellbeing. While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child’s life. This material is dangerous.” - Internet Watch Foundation (IWF) CEO Kerry Smith. A new report published March 24 by the IWF reveals the full scale of AI-generated child sexual abuse images and videos discovered online by analysts in 2025. The findings are stark, and it is only getting easier and quicker to create more extreme criminal content. The report also provides unsettling insight into how offenders delight at the ability of AI tools to recreate any cruel scenario imaginable. EU lawmakers are being urged to recognise the “wide-reaching harms” of AI child sexual abuse imagery and to ensure that it is criminalised across all EU member states. Read more below ⬇️ https://lnkd.in/e8tAB--h

  • Internet Watch Foundation (IWF) reposted this

    More from #TSSummit 👇

    🔹 Content Moderation & Moderator Wellness explored how law enforcement operations are evolving to meet digital risk, with Matt Turner (Roblox) and Skip G. (Global Internet Forum to Counter Terrorism, GIFCT) discussing the shift toward proactive, intelligence-led approaches, cross-border collaboration, and responding to AI-driven harms.

    🔹 Online Safety for Children & Vulnerable Users brought together leaders from across the ecosystem. Iain Drennan (WeProtect Global Alliance), Jess Lishak (Tech Coalition), Sarah Smith (The Lucy Faithfull Foundation), and Neil Prowse (Internet Watch Foundation, IWF) shared how collective action, leadership, and scalable tools are critical to future-proofing prevention against OCSEA and CSAM.

    🔹 Youth Safety focused on empowering users with trusted AI, with Massimo Belloni (Docplanner), Ariadna Ràfols (Synthesia), and Melanie Quandt (Highspring) discussing how to design enriching AI products that balance innovation with robust safety and trust.

    #TrustAndSafety #OnlineSafety #TrustAndSafetySummit #ChildSafety #AISafety #PlatformSafety #TSSummit

  • Internet Watch Foundation (IWF) reposted this

    Neil Prowse, Internet Watch Foundation

    Day 2 of the IQPC Trust & Safety Summit, London. Thank you Iain Drennan of WeProtect for expertly moderating the panel session I joined representing the Internet Watch Foundation (IWF) this afternoon, covering Collective Action for Child Safety: Leaders, Tools, and Future-Proofing Prevention. Some incredibly important insights from Jess Lishak (Tech Coalition) and Sarah Smith (The Lucy Faithfull Foundation) on collaboration in tackling CSAM.

