Fivearts.org

Library => Miscellaneous => Topic started by: Under10Gods on September 17, 2025, 11:09:53 PM

Title: AI makes mistakes. And very often.
Post by: Under10Gods on September 17, 2025, 11:09:53 PM
@smsek
And for other non-professionals: the authors of the article coyly call AI errors "hallucinations." In reality it's rather different. If the AI doesn't know the answer, it will pick the most popular one. Internally, the chain of candidate answers is built on features, and on the frequencies with which each answer has occurred given those features. Even if your particular feature was never seen, you will still get an answer, but it will be wrong. I don't know how many people have been, and will yet be, disappointed by blind faith in AI. But that's no longer my karma :).
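The point above can be sketched as a toy in Python. This is not how a real language model works internally, just a hypothetical illustration of the failure mode: a lookup that maps features to the most frequent answer seen in training, and for an unseen feature falls back to the globally most popular answer instead of admitting it does not know.

```python
from collections import Counter

def train(examples):
    """examples: list of (feature, answer) pairs from 'training data'."""
    by_feature = {}
    overall = Counter()
    for feature, answer in examples:
        by_feature.setdefault(feature, Counter())[answer] += 1
        overall[answer] += 1
    return by_feature, overall

def predict(model, feature):
    """Always returns an answer, even for a feature it has never seen."""
    by_feature, overall = model
    # Unseen feature? Fall back to global popularity -- no "I don't know".
    counts = by_feature.get(feature, overall)
    return counts.most_common(1)[0][0]

model = train([("fever", "flu"), ("fever", "flu"), ("rash", "allergy")])
print(predict(model, "fever"))    # "flu" -- supported by the data
print(predict(model, "vertigo"))  # "flu" -- confidently wrong: never seen this feature
```

The second call is the hallucination in miniature: the system produces the most statistically popular answer with the same confidence as a grounded one.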

A link to the research. But we all know no one will go there :)

https://arxiv.org/abs/2509.04664


Title: Re: AI makes mistakes. And very often.
Post by: 2noBody on November 03, 2025, 05:14:11 PM
Reddit users have noticed a change in policy for OpenAI's popular chatbot, ChatGPT. The model no longer provides personalized medical or legal advice to users, nor does it analyze medical images, including MRIs, X-rays, and photos of skin lesions.
Previous workarounds, such as asking the model to consider a hypothetical situation in order to extract the desired information, reportedly no longer work. Instead, a protection mechanism is triggered and ChatGPT offers only general advice, recommending that the user contact a specialist for a more detailed examination of the problem.

OpenAI representatives have not commented on the changes to ChatGPT's policy, but they were likely made to avoid litigation. Statistics show that more and more people are turning to the chatbot for medical and legal advice, with unpredictable results. Meanwhile, the use of neural networks for such tasks is still poorly regulated, which creates risks for both developers and users.

Consultations on other topics are similarly problematic: no one guarantees the accuracy or reliability of AI responses.