Google Releases TimesFM 2.5: 200M-Parameter Time Series Foundation Model with 16K Context

2026-03-31 · 1 min read
Google Research has released TimesFM 2.5, the latest version of its time series foundation model. Despite being significantly smaller than its predecessor (200M vs 500M parameters), the new model delivers substantially improved capabilities.

What's New in TimesFM 2.5

| Feature | TimesFM 2.5 | TimesFM 2.0 |
| --- | --- | --- |
| Parameters | 200M | 500M |
| Context length | 16,000 | 2,048 |
| Quantile forecasting | Up to 1,000-step horizon (optional 30M head) | Limited |
| Frequency indicator | Removed | Required |
| Covariate support | Yes (via XReg) | Limited |

Technical Highlights

Smaller but Smarter: The parameter reduction from 500M to 200M comes with a nearly 8x increase in context window (2,048 → 16,000), enabling the model to capture much longer temporal patterns.
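To put that context jump in concrete terms, a quick back-of-the-envelope sketch (plain Python, no model involved) of how much history each context length covers at a few common sampling rates:

```python
# History covered by the old 2,048-step context vs. the new 16,000-step one.
OLD_CONTEXT = 2_048
NEW_CONTEXT = 16_000

steps_per_day = {
    "15-minute": 96,
    "hourly": 24,
    "daily": 1,
}

for freq, per_day in steps_per_day.items():
    old_days = OLD_CONTEXT / per_day
    new_days = NEW_CONTEXT / per_day
    print(f"{freq:>10}: {old_days:8.1f} days -> {new_days:8.1f} days")
```

At hourly resolution, for example, the window grows from roughly 85 days to well over a year and a half, which is what makes multi-seasonal patterns (weekly plus annual) visible to the model at once.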

Continuous Quantile Forecasting: An optional 30M quantile head enables probabilistic forecasting up to 1,000 time steps ahead, crucial for risk-sensitive applications like financial planning and supply chain management.
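Quantile heads like this are conventionally trained with the quantile (pinball) loss, which is minimized exactly when the prediction equals the true q-th quantile. A minimal self-contained illustration (not TimesFM code):

```python
import numpy as np

def pinball_loss(y_true: np.ndarray, y_pred: float, q: float) -> float:
    """Quantile (pinball) loss: penalizes under- and over-prediction
    asymmetrically, with weights q and (1 - q) respectively."""
    diff = y_true - y_pred
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

rng = np.random.default_rng(0)
y = rng.normal(loc=100.0, scale=10.0, size=10_000)

# For q = 0.5 the loss is minimized at the median, so shifting the
# prediction away from the median strictly increases it.
print(pinball_loss(y, float(np.median(y)), q=0.5))
print(pinball_loss(y, float(np.median(y)) + 5.0, q=0.5))
```

Training one head per target quantile (or one head emitting all quantiles) is what turns a point forecaster into a probabilistic one, which is exactly the role of the optional 30M head.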

Production Integration: TimesFM is available as an official Google product through BigQuery, making it directly usable in enterprise data pipelines without self-hosting.

Getting Started

git clone https://github.com/google-research/timesfm.git
cd timesfm
uv venv && source .venv/bin/activate
uv pip install -e ".[torch]"  # or ".[flax]" for the JAX backend (quotes keep brackets from being glob-expanded)

The model is also available on Hugging Face as part of the TimesFM Collection.
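Schematically, a point-plus-quantile forecaster like this consumes a context array and returns a (horizon, quantiles) array. The toy below mimics that input/output contract with a seasonal-naive stand-in; the function and variable names are illustrative only, not the actual timesfm API:

```python
import numpy as np

CONTEXT, HORIZON = 16_000, 1_000
QUANTILES = (0.1, 0.5, 0.9)

def toy_forecast(context: np.ndarray, horizon: int,
                 quantiles=QUANTILES) -> np.ndarray:
    """Stand-in model: repeat the last observed daily cycle,
    with uncertainty bands that widen over the horizon."""
    period = 24
    base = np.tile(context[-period:], horizon // period + 1)[:horizon]
    spread = np.std(context[-period:]) * np.linspace(1.0, 2.0, horizon)
    z = {0.1: -1.28, 0.5: 0.0, 0.9: 1.28}  # normal-approximation z-scores
    return np.stack([base + z[q] * spread for q in quantiles], axis=-1)

# Synthetic hourly series: daily sinusoid plus noise.
t = np.arange(CONTEXT)
rng = np.random.default_rng(1)
series = 10 + np.sin(2 * np.pi * t / 24) + 0.1 * rng.normal(size=CONTEXT)

preds = toy_forecast(series, HORIZON)
print(preds.shape)  # (1000, 3): one column per requested quantile
```

Whatever the real API looks like, the shapes are the part to internalize: up to 16,000 points of context in, up to 1,000 steps of quantile bands out.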

TimesFM 2.5 demonstrates the trend toward more efficient foundation models — fewer parameters, longer contexts, and better task-specific heads. For organizations already invested in the Google Cloud ecosystem, the BigQuery integration makes deployment frictionless.
