Flow Map Language Models: Generate Coherent Text in a Single Forward Pass with 8x Speedup

2026-04-07 · 2 min read

Researchers have demonstrated that continuous flow-based language models can generate coherent, high-quality text in a single forward pass — achieving 8x speedup over the best distilled discrete diffusion baselines.

The Problem

Autoregressive LLMs (ChatGPT, Claude) generate text one token at a time, so latency grows linearly with sequence length.

Discrete diffusion models promised parallel generation, but their quality collapses when the number of sampling steps is pushed down to one or two.

The Solution: Continuous Flows

The key insight: replace discrete jumps with continuous flows on the probability simplex.
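To make the insight concrete, here is a minimal sketch of a continuous path on the probability simplex. The linear interpolation path is an assumption on our part (it is the path commonly used in flow-matching setups; the paper's exact construction is not given in this summary), and the token id and vocabulary size are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
V = 8  # toy vocabulary size (assumption)

x0 = rng.dirichlet(np.ones(V))  # noise: a random point on the simplex
x1 = np.eye(V)[3]               # target: one-hot distribution for token id 3

def flow_path(t):
    """Linear path from noise to a one-hot token distribution.

    For every t in [0, 1] the entries stay non-negative and sum to 1,
    so each intermediate state is a valid vocabulary distribution --
    a continuous flow rather than a discrete jump between tokens.
    """
    return (1 - t) * x0 + t * x1

for t in (0.0, 0.5, 1.0):
    p = flow_path(t)
    assert np.isclose(p.sum(), 1.0) and (p >= 0).all()
```

The point of the sketch is that the state space never leaves the simplex, which is what lets the model learn a smooth map from noise to text instead of a sequence of discrete denoising jumps.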

| Approach | Steps Needed | Quality at 1 Step | Speedup |
| --- | --- | --- | --- |
| Autoregressive | N steps | N/A (sequential) | 1x |
| Discrete diffusion | 50-1000 steps | Poor | Varies |
| Continuous flow (FMLM) | 1 step | High quality | 8x |

How It Works

  1. Start from random noise on the probability simplex (vocabulary distribution)
  2. One forward pass through the model transforms noise → coherent text
  3. Classification, not regression — The denoiser predicts discrete tokens, not continuous values
  4. Flow maps act as "teleportation" from noise to the final answer
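The four steps above can be sketched as code. Everything model-specific here is a placeholder assumption: the real flow-map denoiser is a trained network, which we stand in for with a random matrix, and the vocabulary and sequence sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, seq_len = 1000, 16  # toy sizes (assumptions)

# Placeholder for the trained flow-map denoiser: a single linear map
# standing in for one forward pass of the real network.
W = rng.normal(0, 0.02, (vocab, vocab))

def sample_simplex_noise(seq_len, vocab):
    """Step 1: random starting points on the probability simplex
    (one vocabulary distribution per position, via Dirichlet noise)."""
    return rng.dirichlet(np.ones(vocab), size=seq_len)

def flow_map_generate(x0):
    """Steps 2-4: one forward pass 'teleports' noise to the answer.

    Framed as classification, not regression: the model outputs
    logits over discrete tokens and we take the argmax, rather than
    predicting continuous values.
    """
    logits = x0 @ W              # stand-in for the real network
    return logits.argmax(axis=-1)

x0 = sample_simplex_noise(seq_len, vocab)
tokens = flow_map_generate(x0)   # one call, no iterative refinement
```

The contrast with discrete diffusion is that there is no loop here: generation is a single function evaluation, which is where the claimed 8x speedup over distilled baselines comes from.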

Technical Innovation

Results

Why It Matters

If this approach scales to large models, single-pass generation could sidestep the token-by-token bottleneck that dominates autoregressive inference cost.

Original source · 2026-04-07