Hey all —
I use AI on cases. Every day. It helps me think through complex problems, cross-reference research, and catch things I might miss. It's a genuinely powerful tool, and I'm not here to tell you to stop using it.
But I need you to know something AI will never tell you on its own: it has no built-in mechanism for knowing when it's making something up. It will invent drug interactions, fabricate biochemical pathways, and confidently tell you that you have toe cancer, or that your doctor is wrong — all in the same calm, authoritative tone it uses when it's giving you real information.
I just published a DEEP dive on this — co-written (ironically) with Claude AI. The full article covers the actual mechanics, real examples where I caught AI inventing biochemistry mid-conversation, and prompts you can paste into any AI to test it yourself. I think it's important for you to understand the hows and whys, so you can use these tools as safely as possible.
These systems don't look up answers — they predict the most likely-sounding response, one word at a time. And during training, they were literally penalized for saying "I don't know" — so confident wrong answers got rewarded over honest uncertainty. The article shows you exactly what that looks like when it hits your important healthcare questions.
Use AI. Benefit from it. But verify it — because it will not verify itself.
Here's the quick version of what you can do about it:
| Rule | How to Apply It |
|---|---|
| Force the Source | Ask: "How confident are you, and what's your actual source?" No citation? It's a guess. |
| Partner, Not Doctor | Use AI to brainstorm and prep questions for your visit. Since it confabulates what it doesn't know, don't treat it as a second opinion. Bring the prompts you asked to your visit, not just the output. |
| Challenge It | Say: "I think your assertions are incorrect. Review external sources and tell me why or why not." Do this especially when it gets specific. |
| Click Every Link | Open every citation. A broken URL is a fabrication red flag. |
| Relevant to You? | Press the AI to confirm that its answer actually addresses your question. It can get lost in minutiae that don't apply to you. |
| Pit AIs Against Each Other | Ask multiple systems the same question and compare, or feed one system's answer into another and ask whether it's correct and why. |
| Protect Your Privacy | Don't put your name or medical details into a public chatbot. |
This one's already making the rounds in my physician community. If someone you know is leaning on AI for health decisions, send it their way.
Hope to hear from you soon,
Doc Sandford & crew
P.S. If you were forwarded this, CLICK HERE to receive these emails directly.


