Some regulators are proposing that AI systems should be restricted from answering questions about medicine, law, engineering, and other professional fields. I understand the concern.
But the way these proposals are framed reveals a deeper misunderstanding of how AI systems actually work. And ironically, trying to ban knowledge from AI may increase the very problem regulators are trying to prevent: hallucinations.
You can’t really “ban knowledge” from a trained model.
Large language models are trained on enormous datasets. Once the statistical patterns of that data are embedded in the model's weights, guardrails and rules can guide responses, but they don't magically erase entire domains of knowledge. If you force the model to avoid certain topics entirely, the system doesn't suddenly become safer. In many cases it just becomes less grounded.
And that’s where hallucinations come in.
AI systems generate responses by predicting the most likely next word based on context. When the system has strong anchors like reliable sources, clear information, or retrieved documents, its responses tend to stay closer to reality. But when those anchors are removed and the model is still expected to generate an answer, it ends up guessing across a wider range of possibilities.
In information theory terms, you’ve increased entropy. There’s more uncertainty in the system, which means a higher chance of fabricated or distorted outputs.
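To make the entropy point concrete, here is a small sketch. The probability values are purely illustrative (they are not taken from any real model): the idea is that a grounded response concentrates probability mass on a few well-supported tokens, while an ungrounded one spreads it across many plausible-sounding alternatives.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a next-token distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical next-token distributions (illustrative values only).
# With a retrieved source anchoring the answer, probability mass
# concentrates on a few well-supported continuations:
grounded = [0.85, 0.10, 0.05]

# With the topic blocked but an answer still demanded, the mass
# spreads across many equally plausible-sounding alternatives:
ungrounded = [0.125] * 8

print(round(shannon_entropy(grounded), 2))    # ~0.75 bits
print(round(shannon_entropy(ungrounded), 2))  # 3.0 bits
```

Higher entropy means the sampler is choosing among more near-equally-likely continuations, which is exactly the regime where fabricated details slip in.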
Ironically, restricting access to knowledge can sometimes make misinformation more likely rather than less.
A better approach would be to ground AI systems in verified information when sensitive topics appear.
One widely used method is retrieval-augmented generation (RAG). Instead of relying only on what the model learned during training, the system retrieves information from trusted knowledge bases and uses that material to construct its response.
A safer architecture for sensitive domains might look something like this:
User asks a question
→ The system detects that the topic is medical, legal, or another high-stakes domain
→ The AI retrieves information from vetted sources
→ The response summarizes that information and cites where it came from
→ The system clearly states that the information is educational context, not professional advice
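The flow above can be sketched in code. Everything here is a hypothetical stand-in: `detect_domain`, `retrieve`, and `summarize` are placeholders for a real topic classifier, a search over vetted knowledge bases, and a generation step constrained to the retrieved material.

```python
SENSITIVE_DOMAINS = {"medical", "legal"}

def detect_domain(question: str) -> str:
    """Toy stand-in for a topic classifier."""
    keywords = {
        "medical": ["symptom", "dosage", "diagnosis"],
        "legal": ["contract", "liability", "statute"],
    }
    q = question.lower()
    for domain, words in keywords.items():
        if any(w in q for w in words):
            return domain
    return "general"

def retrieve(question: str, domain: str) -> list[dict]:
    """Stand-in for retrieval from a vetted knowledge base."""
    return [{"text": "(vetted passage)", "source": "trusted-kb/article-1"}]

def summarize(question: str, docs: list[dict]) -> str:
    """Stand-in for generation constrained to the retrieved docs."""
    return "Summary grounded in: " + docs[0]["text"]

def generate(question: str) -> str:
    """Stand-in for the normal, unconstrained generation path."""
    return "(model answer)"

def answer(question: str) -> str:
    domain = detect_domain(question)
    if domain not in SENSITIVE_DOMAINS:
        return generate(question)
    docs = retrieve(question, domain)
    citations = ", ".join(d["source"] for d in docs)
    return (
        summarize(question, docs)
        + f"\n\nSources: {citations}"
        + "\nThis is educational context, not professional advice."
    )
```

The key design choice is that for high-stakes topics the model never answers from its weights alone: the response is built from retrieved passages, cites them, and carries the educational-context disclaimer.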
In other words, the AI becomes more like a research assistant than a professional authority.
Another concern that often comes up is protecting professional fields. Medicine, law, and engineering involve years of training and carry real responsibility. AI should not replace licensed professionals in those roles.
But restricting AI from explaining publicly available information doesn’t actually protect expertise. That knowledge already exists in textbooks, research papers, and educational resources across the internet. AI systems are simply becoming new interfaces for navigating that information.
People will still need doctors to diagnose and treat illness. They will still need lawyers to interpret the law and represent them in court. Professionals bring judgment, experience, and accountability that AI systems do not have.
What AI changes is the information layer around those professions.
People may use AI to understand terminology before a doctor’s appointment. They may ask questions about legal concepts before speaking with an attorney. In many cases this could actually lead to better conversations between people and experts, because individuals arrive with a clearer understanding of the basics.
Trying to suppress knowledge flows isn’t the right solution.
The real challenge is designing AI systems that anchor their responses in reliable information and clearly communicate the limits of what they are doing.
We’re still early in figuring out how society should govern AI systems. But the solutions will need to reflect how these systems actually work, not how we imagine they work.
The goal of AI governance shouldn’t be to make systems know less.
It should be to make systems more grounded in reality.