UK School Uses AI to Ban 200 Books Including Orwell's 1984 and Twilight
A secondary school in Greater Manchester, UK, has used AI to identify and remove approximately 200 books from its library, including George Orwell's 1984, Stephenie Meyer's Twilight, Michelle Obama's autobiography, and Nicholas Sparks' The Notebook. The school librarian who refused to comply was placed under a safeguarding investigation and subsequently resigned.
What Happened
- Senior staff used an AI chatbot to identify books deemed 'inappropriate' for students
- Removal criteria included books 'not written for children,' containing 'themes that could upset children,' or deemed to 'constitute a safeguarding risk'
- When the librarian refused to remove the books, she was reported to the local council as a safeguarding risk
- The library was closed as a 'temporary safeguarding measure'
The Irony
The AI-generated justification for removing 1984 — a novel about totalitarian censorship and thought control — warned that it contained 'themes of torture, violence, sexual coercion.' The book is taught in schools worldwide as a literary classic and as a warning against exactly the kind of unaccountable, authoritarian decision-making the school's process exemplifies.
Other Targeted Books
- Michelle Obama's autobiography — removed from shelves
- The Notebook by Nicholas Sparks — deemed inappropriate
- Men Who Hate Women by Laura Bates — an exposé of incel culture, initially targeted for removal
- Twilight by Stephenie Meyer — the young adult vampire romance
Index on Censorship Response
The freedom of expression charity Index on Censorship obtained a list of 193 removed books and confirmed that the school admitted in writing that the removal reasoning was generated by AI.
Why It Matters
This case exemplifies a growing trend of AI being used to make decisions about content access, and it raises fundamental questions:
- Delegation of judgment: Should AI determine what students can read?
- Safeguarding weaponization: Using child protection frameworks to justify content removal
- AI hallucination risk: AI-generated justifications may misrepresent a book's actual content
- Chilling effect: Librarians and educators may self-censor to avoid investigation
Broader Context
The incident comes amid the increasing use of AI in educational administration and growing debates over book bans in schools, particularly in the US and UK. Automating censorship decisions marks a significant escalation in that debate: it removes human judgment and accountability from choices that previously required a person willing to defend them.