As some of you may know, I do a fair amount of clinical research developing and evaluating artificial intelligence models, particularly machine learning algorithms that predict clinical outcomes.
And there’s this thorny issue that comes up as algorithms have gotten more complicated — it’s called “explainability”. The problem is that AI can be a black box. Even if you have a model that is very accurate at predicting death, clinicians don’t trust it unless you can explain how it makes its predictions — how it works. “It just works” is not good enough to build trust.
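To make this concrete, here's a minimal sketch of one common way researchers try to open the black box: asking which inputs actually drive a model's predictions. This isn't from the original post, and the patient features, data, and model here are all hypothetical; it just illustrates the kind of "explanation" clinicians are asking for, using permutation importance from scikit-learn.

```python
# A toy mortality-prediction model with a rough global "explanation".
# All feature names and data are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical patient features
feature_names = ["age", "creatinine", "systolic_bp", "lactate"]
X = rng.normal(size=(1000, 4))
# Synthetic outcome: "death" driven mostly by age and lactate
y = (0.8 * X[:, 0] + 1.2 * X[:, 3] + rng.normal(scale=0.5, size=1000)) > 1.0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much the model's accuracy drops. A big drop means the model relies
# heavily on that feature -- a crude answer to "how does it work?"
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

Permutation importance is one of the simpler, global techniques; in clinical work people often also reach for per-patient attribution methods like SHAP, since "why did the model flag *this* patient?" is usually the question a clinician actually cares about.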