The Missing Link: Can the ‘Additive Tree’ Expand Machine Learning in Medicine?

The additive tree combines elements of two widely used prediction models, producing a model that is both highly predictive and easy to interpret

By Frank Otto

When health care providers order a test or prescribe a medicine, they want to be 100 percent confident in their decision. That means being able to explain the decision and revisit it depending on how a patient responds. As artificial intelligence’s footprint in medicine grows, that ability to check work and follow the path of a decision can become muddied. That’s why the discovery of a once-hidden through-line between two popular predictive models used in artificial intelligence opens the door to spreading machine learning more confidently throughout health care. The discovery of the linking algorithm and the subsequent creation of the “additive tree” are now detailed in the Proceedings of the National Academy of Sciences (PNAS).

Lyle Ungar

“In medicine, the cost of a wrong decision can be very high,” said one of the study’s authors, Lyle Ungar, PhD, a professor of Computer and Information Science at Penn. “In other industries, for example, if a company is deciding which advertisement to show its consumers, they likely don’t need to double-check why the computer selected a given ad. But in health care, since it’s possible to harm someone with a wrong decision, it’s best to know exactly how and why a decision was made.”

The team, led by Jose Marcio Luna, PhD, a research associate in Radiation Oncology and member of the Computational Biomarker Imaging Group (CBIG) at Penn Medicine, and Gilmer Valdes, PhD, an assistant professor of Radiation Oncology at the University of California, San Francisco, uncovered an algorithm with a tuning parameter that runs on a scale from zero to one. When a predictive model is set to zero on that scale, its predictions are most accurate but also most difficult to decipher, similar to “gradient boosting” models. When a model is set to one, it is easier to interpret, though its predictions are less accurate, like “classification and regression trees” (CARTs). Luna and his co-authors then built their decision tree at an intermediate point on the scale, balancing the two.
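To make that zero-to-one scale concrete, here is a minimal sketch, not the authors’ published algorithm (whose details are in the PNAS paper), of a toy regression tree with a single blending parameter nu. Setting nu = 1 makes every node predict its local average, as a classic CART does; smaller values make each node inherit its parent’s prediction and apply only a partial correction, in the spirit of a gradient-boosting stage with a small learning rate. The function names, defaults, and synthetic data are illustrative assumptions, and the sketch only shows the blending mechanism, not the accuracy-versus-interpretability trade-off measured in the paper.

```python
import numpy as np

def build_tree(X, y, parent_pred, nu, depth=0, max_depth=4, min_leaf=10):
    """Toy regression tree with a CART-vs-boosting blending parameter.

    Each node predicts parent_pred + nu * (mean residual at the node).
    nu = 1.0 reproduces plain CART behaviour (each node predicts its own
    local mean); small nu only partially corrects the inherited prediction,
    the way a gradient-boosting stage with a small learning rate does.
    """
    node_pred = parent_pred + nu * float(np.mean(y - parent_pred))
    if depth >= max_depth or len(y) < 2 * min_leaf:
        return {"pred": node_pred}

    # Greedy search for the single split that most reduces squared error.
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= t
            if left.sum() < min_leaf or (~left).sum() < min_leaf:
                continue
            sse = (((y[left] - y[left].mean()) ** 2).sum()
                   + ((y[~left] - y[~left].mean()) ** 2).sum())
            if best is None or sse < best[0]:
                best = (sse, j, t, left)
    if best is None:
        return {"pred": node_pred}

    _, j, t, left = best
    return {
        "pred": node_pred, "feature": j, "threshold": t,
        "left": build_tree(X[left], y[left], node_pred, nu,
                           depth + 1, max_depth, min_leaf),
        "right": build_tree(X[~left], y[~left], node_pred, nu,
                            depth + 1, max_depth, min_leaf),
    }

def predict_one(tree, x):
    """Follow splits down to a leaf and return that leaf's prediction."""
    while "feature" in tree:
        tree = tree["left"] if x[tree["feature"]] <= tree["threshold"] else tree["right"]
    return tree["pred"]

# Illustrative use on synthetic data: nu = 1.0 behaves like CART,
# smaller nu applies shrunken, boosting-style corrections at each level.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=300)
for nu in (1.0, 0.5, 0.1):
    tree = build_tree(X, y, parent_pred=float(y.mean()), nu=nu)
    print(f"nu={nu:.1f}  prediction for first sample: {predict_one(tree, X[0]):.3f}")
```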

Continue reading at Penn Medicine News.
