Meta's AI Content Moderation Shows Modest Improvements as Company Shuts Down Metaverse Platform Horizon Worlds
Meta has begun rolling out AI-powered content moderation across its apps, claiming improvements in scam detection and fake profile identification. The announcement came in the same week Meta walked away from its metaverse ambitions by shutting down Horizon Worlds.
AI Moderation Results
| Area | Result |
|---|---|
| Password scam attempts | ~5,000 per day detected and mitigated (patterns Meta says are undetectable by human reviewers) |
| Fake celebrity profiles | User reports reduced by 80%+ |
| Adult sexual solicitation | Detection doubled |
| Scam ad views | Reduced by 7% |
Account Takeover Detection
Meta claims AI can detect account takeovers by noticing "suddenly accessed from a new location, password changed, profile edits made" — patterns that "look harmless to a person reviewing the account."
Critics note that enterprise security products have detected "impossible travel" patterns for years.
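The "impossible travel" heuristic those enterprise products use is straightforward: if two logins from distant locations imply a travel speed no human could achieve, flag the account. A minimal sketch (all names, coordinates, and the 900 km/h threshold are our illustrative assumptions, not Meta's or any vendor's implementation):

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0
MAX_PLAUSIBLE_SPEED_KMH = 900.0  # roughly commercial-jet cruise speed

@dataclass
class LoginEvent:
    lat: float          # latitude in degrees
    lon: float          # longitude in degrees
    timestamp_s: float  # Unix time in seconds

def haversine_km(a: LoginEvent, b: LoginEvent) -> float:
    """Great-circle distance between two login locations, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(h))

def is_impossible_travel(prev: LoginEvent, curr: LoginEvent) -> bool:
    """Flag a login pair whose implied speed exceeds any plausible traveler's."""
    hours = (curr.timestamp_s - prev.timestamp_s) / 3600.0
    if hours <= 0:
        return True  # simultaneous or out-of-order logins from two places
    return haversine_km(prev, curr) / hours > MAX_PLAUSIBLE_SPEED_KMH
```

For example, a New York login followed 30 minutes later by a London login implies a speed far above jet cruise speed and would be flagged, while New York followed by Boston five hours later would not. Real products combine this with the other signals Meta describes, such as password changes and profile edits, rather than relying on geography alone.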
End of the Metaverse
In the same week, Meta announced the shutdown of Horizon Worlds, its metaverse platform and the reason it changed its name from Facebook. The decision came after five years and more than $80 billion spent on metaverse ambitions.
Meta later partially walked back the shutdown but said it would not create new immersive environments. The company now plans "super experiences" — a new vision to replace the metaverse.
The Irony
Meta adopted its current name to reflect Zuckerberg's belief that the metaverse was "the next big thing." The metaverse push cost the company over $80 billion while content moderation on Facebook and Instagram was failing and children were being harmed by its products.
Source: The Register