AI-Generated Video Detection at Native Scale: 140K Videos, 15 Generators, New State-of-the-Art (ICLR 2026)

2026-04-07 · 2 min read

A new ICLR 2026 paper introduces the most comprehensive AI-generated video detection system to date, processing 140K+ videos from 15 generators without the information-destroying preprocessing that plagues existing methods.

The Problem with Current Detection

Existing AI video detectors suffer from a critical flaw: before a video ever reaches the model, it is resized and compressed down to a small fixed input, and that preprocessing wipes out the subtle pixel-level forgery artifacts the detector is supposed to find.

It's like examining a forged painting through a blurry lens — you're destroying the evidence you need to see.
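A quick way to see why resizing is lossy: measure how much of a frame's spectral energy sits at high spatial frequencies before and after average-pool downscaling. The synthetic "artifact" below is a hypothetical stand-in for a generator fingerprint, not real data:

```python
import numpy as np

def high_freq_energy(frame: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral power above a radial frequency cutoff (cycles/pixel)."""
    power = np.abs(np.fft.fft2(frame)) ** 2
    fy = np.fft.fftfreq(frame.shape[0])[:, None]
    fx = np.fft.fftfreq(frame.shape[1])[None, :]
    radius = np.sqrt(fy ** 2 + fx ** 2)
    return float(power[radius > cutoff].sum() / power.sum())

def box_downscale(frame: np.ndarray, factor: int = 4) -> np.ndarray:
    """Average-pool downscale, standing in for a fixed-resolution resize."""
    h, w = frame.shape
    return frame.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Synthetic 256x256 frame: smooth "scene" plus a faint pixel-level artifact.
y, x = np.mgrid[0:256, 0:256]
scene = np.sin(2 * np.pi * x / 128)     # low-frequency content, survives resizing
artifact = 0.1 * ((x + y) % 2)          # Nyquist-rate checkerboard trace, does not
frame = scene + artifact

before = high_freq_energy(frame)
after = high_freq_energy(box_downscale(frame))
print(f"high-freq energy: native={before:.4f}  downscaled={after:.2e}")
```

After a 4x downscale, the pixel-level checkerboard averages out to a constant, so the high-frequency fraction collapses to numerical noise; whatever evidence lived at that scale is gone before the detector sees the frame.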

The Solution: Native-Scale Processing

The researchers built a detection framework on the Qwen2.5-VL Vision Transformer that operates at each video's native resolution, rather than resizing every input down to a small fixed grid.
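Qwen2.5-VL's ViT handles variable input sizes by cutting frames into 14x14 patches and merging 2x2 groups of patch tokens; the sketch below estimates the resulting token grid at native scale. The patch and merge sizes follow Qwen2.5-VL's published configuration, but the rounding rule is a simplification of the real preprocessor, not the paper's code:

```python
def native_patch_grid(height: int, width: int, patch: int = 14, merge: int = 2):
    """Approximate ViT token grid for a frame kept at native resolution.

    patch=14 and merge=2 follow Qwen2.5-VL's ViT configuration; the rounding
    to the nearest merge-aligned size is a simplified stand-in for the model's
    actual resize logic.
    """
    unit = patch * merge  # spatial footprint of one merged token (28 px)
    grid_h = max(unit, round(height / unit) * unit) // patch
    grid_w = max(unit, round(width / unit) * unit) // patch
    merged_tokens = (grid_h // merge) * (grid_w // merge)
    return grid_h, grid_w, merged_tokens

# A 1080p frame keeps ~2.7k merged tokens at native scale...
print(native_patch_grid(1080, 1920))
# ...versus the 64 a fixed 224x224 resize would leave.
print(native_patch_grid(224, 224))
```

The point of the comparison is the token budget: native-scale processing gives the transformer dozens of times more spatial detail per frame than a fixed small resize, which is where the fragile forgery signals live.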

The Dataset

| Component | Specification |
| --- | --- |
| Videos | 140,000+ |
| Generators | 15 (open-source + commercial) |
| Benchmark | Magic Videos (ultra-realistic synthetic content) |
| Architecture | Qwen2.5-VL Vision Transformer |

Key Innovations

  1. Preserving forgery artifacts — Working at native scale keeps subtle detection signals intact
  2. Spatiotemporal inconsistencies — Captures timing and spatial errors that generators make
  3. Comprehensive benchmark — Most extensive evaluation of AI video detection to date
  4. Modern generators — Includes latest state-of-the-art video synthesis models
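Innovation 2 targets timing errors. As a toy illustration (not the paper's method), a simple frame-to-frame residual already separates a smoothly panning clip from one with per-frame texture that fails to persist across frames:

```python
import numpy as np

def temporal_residual(frames: np.ndarray) -> float:
    """Mean absolute frame-to-frame change: a crude proxy for temporal
    inconsistency. Real detectors learn such cues rather than hand-coding them."""
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
    return float(diffs.mean())

rng = np.random.default_rng(0)
y, x = np.mgrid[0:64, 0:64]
base = np.sin(2 * np.pi * x / 32)

# "Real" clip: the scene pans smoothly, one pixel per frame.
smooth = np.stack([np.roll(base, t, axis=1) for t in range(8)])
# "Generated" clip: same pan, plus texture that is re-rolled every frame.
flicker = smooth + 0.3 * rng.normal(size=smooth.shape)

print(f"smooth pan: {temporal_residual(smooth):.3f}  "
      f"flickering: {temporal_residual(flicker):.3f}")
```

The flickering clip scores markedly higher because its high-frequency detail is uncorrelated between frames, exactly the kind of spatiotemporal inconsistency that survives only when frames are inspected at full detail.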

ICLR 2026 Camera Ready

The paper has been accepted at ICLR 2026 and is in camera-ready form, meaning it has passed peer review.

Why It Matters

If subtle forgery artifacts only survive at native resolution, then every detector that resizes its input is discarding the evidence it needs. As the generators in this benchmark produce increasingly realistic output, native-scale detection may be what keeps detectors from falling behind.
