"OpenAI Confirms GPT-4.5 Hallucinates: Insights Inside!"

Key Points:

  • OpenAI’s new model, GPT-4.5, reportedly hallucinates or provides inaccurate information 37% of the time, according to the company’s own benchmarking tool, SimpleQA.
  • The article contrasts this with human relationships, where such frequent fabrication would be unacceptable, highlighting a gap in expectations for AI reliability.
  • The prevalence of hallucinations raises concerns about the trustworthiness of AI outputs, particularly in light of OpenAI’s substantial market valuation.

References:

  • OpenAI’s admission regarding GPT-4.5’s performance was discussed in a recent Futurism article.

Executive Summary:
OpenAI has disclosed that its latest model, GPT-4.5, generates inaccurate information 37% of the time, as measured by its internal benchmark, SimpleQA. This high rate of “hallucinations,” or fabrications presented as fact, underscores ongoing trust issues with AI systems and contrasts sharply with societal norms around honesty in personal relationships. The admission is particularly notable given OpenAI’s substantial valuation in the tech industry.

12ft.io Link: https://12ft.io/https://futurism.com/openai-admits-gpt45-hallucinates
Archive.org Link: OpenAI Admits That Its New Model Still Hallucinates More Than a Third of the Time

Original Link: https://futurism.com/openai-admits-gpt45-hallucinates

User Message: OpenAI Admits That Its New Model Still Hallucinates More Than a Third of the Time

For more, see the post on bypassing methods.