RACE: Fine-Grained AI Text Detection That Distinguishes Human-Written, LLM-Polished, and Humanized AI Content

2026-04-07 · 1 min read

An ACL 2026 accepted paper introduces RACE (Rhetorical Analysis for Creator-Editor Modeling), a method that goes beyond binary "human vs AI" classification to detect fine-grained types of LLM involvement in text. This matters for policy enforcement, where LLM-polished human text and humanized LLM text carry different regulatory implications.

The Problem

Existing AI text detectors perform only binary (human vs. AI) or, at best, ternary classification. This is insufficient: real-world text increasingly mixes human and LLM contributions, and different mixtures carry different policy implications.

The Four Classes

  1. Pure human text — Written entirely by humans
  2. LLM-generated text — Written entirely by LLMs
  3. LLM-polished human text — Human foundation, AI style editing
  4. Humanized LLM text — AI generation, human editing

How RACE Works
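At a high level, RACE treats detection as distinguishing who *created* the text and who *edited* it, yielding the four classes above. The paper's architecture isn't reproduced here, but the output side can be sketched as a four-way classifier head: given per-class logits from some trained model (hypothetical in this sketch; the label names follow the taxonomy above, everything else is illustrative), pick the most probable class.

```python
import math

# The four involvement classes from the RACE taxonomy.
LABELS = [
    "pure human",
    "LLM-generated",
    "LLM-polished human",
    "humanized LLM",
]

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits):
    """Return the top class label and its probability.

    `logits` would come from a trained 4-way classifier head;
    this sketch only shows the decision rule, not RACE's model.
    """
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

# Illustrative logits, not real model output.
label, prob = predict([0.2, 0.1, 2.3, 0.4])
print(label)  # → LLM-polished human
```

The point of the four-way head, versus a binary one, is that "LLM-polished human" and "humanized LLM" get their own probability mass instead of being forced into "human" or "AI".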

Why It Matters

As AI writing tools become ubiquitous, regulation needs precision. A student using Grammarly-style polishing shouldn't be treated the same as someone submitting fully AI-generated content. RACE provides the granularity needed for fair enforcement.
