
As artificial intelligence rapidly embeds itself into every corner of health care, from reading X-rays to drafting medical notes, a new National Academy of Medicine report warns that the technology could just as easily deepen the very problems it promises to solve. While AI tools are being hailed as a fix for clinician burnout, rising costs and inequities in access to care, they also risk amplifying bias, eroding trust and widening digital divides.
The Academy’s proposed solution is laid out in that report, An AI Code of Conduct for Health and Medicine: Essential Guidance for Aligned Action (AICC). It sets out six simple but sweeping commitments: advancing humanity, ensuring equity, engaging affected individuals, improving workforce well-being, monitoring performance and fostering innovation. Together, they are meant to help the nation harness AI’s benefits without sacrificing ethics, safety or fairness.
LDI Senior Fellow and University of Pennsylvania Professor Kevin B. Johnson, M.D., M.S., was one of the 21 authors of the 206-page national report.
“The same thing that happened with electronic health records is happening again with AI,” said Johnson. “Everyone’s building tools, but there isn’t a shared playbook to make sure they’re safe, fair and actually useful. This report was needed to bring some order to the chaos. It gives us a national framework so AI in health care can be developed and used responsibly, with transparency and trust at the center.”
