DeepSeek Releases V3-0322: Open-Source Model Matching GPT-4.5 on Key Benchmarks

2026-03-22 · 1 min read
DeepSeek released V3-0322, an open-source MoE model with 671B total / 37B active parameters that matches GPT-4.5 on key benchmarks while remaining fully self-hostable under the MIT license.

Chinese AI lab DeepSeek has released V3-0322, an updated version of its open-source language model that matches or exceeds GPT-4.5 performance on several key benchmarks while remaining fully open-weight and runnable on consumer hardware.

The Model

Key specifications:

  - Architecture: Mixture-of-Experts (MoE)
  - Parameters: 671B total, 37B active per token
  - License: MIT, with open weights available for self-hosting
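The MoE figures quoted in the announcement (671B total, 37B active per token) are worth a quick back-of-envelope check: only a small fraction of the network runs per token, but all weights must still be held in memory. A minimal sketch, with byte-per-parameter figures as assumptions rather than anything DeepSeek has published:

```python
# Rough arithmetic on the MoE numbers from the announcement.
# Assumptions (not from the article): 1 byte/param at FP8, 0.5 at INT4,
# and no accounting for KV cache or activations.

TOTAL_PARAMS = 671e9   # total parameters across all experts
ACTIVE_PARAMS = 37e9   # parameters activated per token

active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"Active per token: {active_fraction:.1%}")  # ~5.5%

def weight_memory_gib(params: float, bytes_per_param: float) -> float:
    """GiB needed just to hold the weights at a given precision."""
    return params * bytes_per_param / 2**30

for label, bpp in [("FP8", 1.0), ("INT4", 0.5)]:
    print(f"{label}: ~{weight_memory_gib(TOTAL_PARAMS, bpp):.0f} GiB of weights")
```

The gap between the two numbers is the point of the MoE design: per-token compute scales with the 37B active parameters, while memory footprint scales with the full 671B.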

Benchmark Performance

V3-0322 shows significant improvements:

What Changed

From the previous V3:

  1. Training data: Updated with more recent data through early 2026
  2. Alignment: Improved RLHF and instruction tuning
  3. Efficiency: Better inference optimization for consumer hardware
  4. Safety: Enhanced guardrails and refusal accuracy
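Since the weights are open and the article emphasizes self-hosting, a typical deployment path is an OpenAI-compatible local server. The sketch below uses vLLM; the model identifier and parallelism flag are assumptions for illustration, not details from the announcement:

```shell
# Sketch only: the Hugging Face model id and GPU count are hypothetical.
# vLLM exposes an OpenAI-compatible HTTP API once the weights are downloaded.
vllm serve deepseek-ai/DeepSeek-V3-0322 \
  --tensor-parallel-size 8   # shard weights across 8 GPUs; adjust to your hardware

# Query the local endpoint with the standard chat-completions schema:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-ai/DeepSeek-V3-0322",
       "messages": [{"role": "user", "content": "Hello"}]}'
```

Because the API surface mirrors OpenAI's, existing client code can usually be pointed at the local server by changing only the base URL.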

Why It Matters

The release has significant implications:

Industry Reaction

The AI community response:

Source: DeepSeek Official Announcement
