OpenAI's New AI Models Hallucinate More, Causes Unknown

News summary

OpenAI's latest AI models, o3 and o4-mini, have shown a surprising increase in hallucinations, instances where the models generate false or misleading information, compared with their predecessors. On OpenAI's internal PersonQA benchmark, o3 hallucinated in 33% of responses and o4-mini in 48%, whereas the older o1 and o3-mini models had rates of 16% and 14.8%, respectively. The trend is unexpected, since newer models are generally anticipated to be more reliable and less prone to such errors. OpenAI has acknowledged the results but has not identified a clear cause for the increase, noting that further research is needed. Some reports also indicate that o3 not only hallucinates more but sometimes defends its false claims when challenged. OpenAI says it is actively working to improve the accuracy and reliability of its models.
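PersonQA is an internal OpenAI benchmark of factual questions about people, and OpenAI has not published its exact scoring method. As a rough illustration of what a figure like "33% of responses" means, here is a minimal Python sketch of how a hallucination rate is typically computed over a labeled set of model answers; the BenchmarkResponse type, its field names, and the sample data are hypothetical, not OpenAI's.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResponse:
    """One model answer to a factual question, judged against ground truth."""
    question: str
    answer: str
    is_hallucination: bool  # True if the answer contains a false claim

def hallucination_rate(responses: list[BenchmarkResponse]) -> float:
    """Fraction of responses flagged as containing a hallucination."""
    if not responses:
        raise ValueError("need at least one response to compute a rate")
    flagged = sum(r.is_hallucination for r in responses)
    return flagged / len(responses)

# Illustrative data only: at the reported rates (o3 at 33%, o4-mini at 48%),
# roughly one in three to one in two benchmark answers would be flagged.
sample = [
    BenchmarkResponse("Where was Ada Lovelace born?", "London", False),
    BenchmarkResponse("Where was Ada Lovelace born?", "Paris", True),
    BenchmarkResponse("Who wrote 'Dune'?", "Frank Herbert", False),
]
print(f"hallucination rate: {hallucination_rate(sample):.0%}")  # prints 33%
```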
