Reddit users have noticed a change in policy for OpenAI's popular chatbot, ChatGPT. The model no longer provides personalized medical or legal advice, nor does it analyze medical images such as MRIs, X-rays, and photos of skin lesions.
Users report that earlier workarounds, such as framing a request as a hypothetical scenario to extract the desired information, no longer work. Instead, a safeguard is triggered and ChatGPT offers only general guidance, recommending that users consult a specialist for a closer examination of the problem.
OpenAI representatives have not commented on the policy change, but it was likely made to reduce the risk of litigation. Statistics show that growing numbers of people are turning to the chatbot for medical and legal advice, with unpredictable results. At the same time, the use of neural networks for such tasks remains poorly regulated, which creates risks for both developers and users.
Consultations on other topics are similarly problematic: no one guarantees the accuracy or reliability of AI responses.