Healthcare, AI, and the Second Pathway
I watched an interview at Davos with Jensen Huang, CEO of NVIDIA, about the future of AI. Few people have a clearer view of how AI is being adopted across industries. When he spoke about healthcare, one point caught my attention.
“Citing real-world anecdotes, he noted the increasing use of AI in radiology and nursing, which has automated tasks such as charting, scans, and note transcription, making radiologists and nurses more productive and giving them more time to focus on patients.”
At a macro level, he’s right. Leveraging AI to handle charting and documentation so doctors and nurses can spend more time with patients sounds ideal.
But efficiency is not neutral.
When we remove friction from human work, we may also remove the very mechanisms that build doctors' and nurses' judgment, memory, and intuition.
I thought about this more deeply because I have a close colleague who is a surgeon. We’ve talked about AI in healthcare before, and I asked her what happens when charting is fully turned over to AI.
When a doctor or nurse writes a chart, they are not just recording facts. They are re‑processing what just happened. They decide what matters. They interpret. They summarize. That act creates a second pathway in the brain, a reinforcement loop that deepens memory and understanding of the patient.
A clinician may walk into the next room with only a fleeting recollection of what just occurred. The act of charting anchors that experience. It is cognition, not just clerical work.
Humans and AI do not learn in the same way.
Every time a human re‑encodes an experience, by writing, explaining, or reflecting, new neural pathways are formed. Learning is biological and continuous. It is also self‑directed. The doctor decides what to notice and what to refine.
AI, by contrast, is trained. It does not change itself through experience. It produces outputs based on statistical patterns learned during training and later adjusted through costly, centralized processes.
This difference has real consequences.
A doctor learns instantly. If a stitch breaks, she changes her technique on the very next stitch. The cost of that change is zero. Her mind updates in real time.
An AI system only “learns” when someone far away notices a pattern of errors, curates examples, retrains or fine‑tunes a model, validates it, and redeploys it. That process costs millions in compute and time. The feedback loop is delayed, indirect, and administrative.
Now bring that back to healthcare.
A clinician who does not participate in charting may not fully encode a patient’s unique condition. The information exists, but only in the system. In a critical moment, when seconds matter, the difference between having read and having internalized can be life‑altering.
My colleague was quick to point out that many doctors are already using AI to supplement their charting to great effect. They still review and sign off. That distinction matters.
AI that assists cognition strengthens the human. AI that replaces cognition quietly weakens it.
IBM’s Enterprise in 2030 research highlights agentic AI automating clinical coding, managing patient waitlists, and streamlining discharge procedures. Those are excellent uses of AI automation. They remove friction without replacing human judgment.
The lesson here extends far beyond healthcare.
In any domain where human judgment, reaction, and situational awareness matter (medicine, aviation, security, finance, emergency response), organizations must ask not only:
“Is this faster?”
“Is this cheaper?”
But also:
“What human capability does this replace?”
“What learning loop does this remove?”
“What risks does this add, even if they surface only in life-altering moments?”
One final implication flows from this.
The future of responsible AI is not just bigger models. It is appropriate models. Smaller, specialized models in domains like healthcare can be more accurate, more transparent, and more frequently updated. They shorten the distance between error and correction. They behave more like tools than oracles.
In the end, what AI needs to succeed in the real world is, dare I say, human after all.
Not because humans are faster.
But because humans are the only ones who truly learn from doing.
Links to information mentioned in this post:
Nvidia CEO Jensen Huang on how AI is becoming the next great infrastructure build

