ChatGPT Gives All Wrong Recommendations When Asked What WIRED Reviewers Recommend

2026-04-04 · 1 min read

Experiment Reveals Gap Between AI Confidence and Factual Accuracy in Product Recommendations

A WIRED experiment asking ChatGPT for product recommendations based on WIRED reviewer picks revealed that the AI chatbot produced entirely wrong answers, highlighting the reliability gap between confident AI responses and actual expert opinions.

The Experiment

WIRED asked ChatGPT what its reviewers would recommend across various product categories. The results were uniformly incorrect — the AI confidently recommended products that WIRED reviewers had never endorsed.

Why It Happened

Several factors contribute to this kind of AI hallucination in product recommendations.

The Bigger Problem

This is not just about getting product recommendations wrong. The experiment reveals a fundamental challenge: an AI chatbot can deliver answers with full confidence that are entirely disconnected from the expert opinions it claims to represent.

Implications for AI Shopping Assistants

As retailers and tech companies push AI-powered shopping assistants, this experiment serves as a warning about trusting AI-generated recommendations attributed to named expert sources.

Source: WIRED https://www.wired.com/story/i-asked-chatgpt-what-wired-reviewers-recommend-its-answers-were-all-wrong/
