That change is primarily due to *safety, ethical, and legal concerns*. Here's why:
1. *Accuracy and liability*: Health and legal issues are serious and often require context, nuance, and professional expertise. If ChatGPT gives incorrect advice in these areas, it could lead to harmful decisions. To avoid causing harm and incurring potential legal liability, OpenAI restricts such guidance.
2. *Not a licensed professional*: ChatGPT is not a doctor or a lawyer. It can't assess personal conditions, review legal history, or provide tailored professional advice. That's why it is now limited to explaining general concepts.
3. *Encouraging proper support*: The goal is to steer users toward *real, qualified professionals* who can offer accurate, contextual, and responsible help, rather than letting them rely on AI alone.
So, ChatGPT can still help explain medical or legal terms, processes, and general principles, but it won't give personal diagnoses, treatment suggestions, or legal strategies.
