Andrew Gonzalez, M.D., J.D., M.P.H., discusses the best uses for AI in healthcare delivery.
Transcript:
Artificial intelligence can contribute to healthcare in a variety of ways, handling tasks that are not that challenging for anyone with a little healthcare knowledge but are basically tedious and not directly involved in the delivery of care. These would be a lot of administrative tasks: searching, documenting, finding this, finding that, putting it in a format that people want to see. Artificial intelligence is really good at those sorts of tasks. The challenge is that you have to ensure the systems are safe, accurate and reliable.
Second, there are the narrow but superhuman tasks, things like tracking a large number of variables and then coming up with an answer. When a clinician does this, we call it “clinical gestalt.” They read through, say, 50 pages of the medical record, think about it and then come up with a decision.
But an artificial intelligence system has the ability to track all of those variables. Think of it like adding numbers on a calculator. The calculator has essentially unlimited memory for numbers, but even the smartest human with the best memory, if you just keep giving them two plus seven plus nine plus fifteen, at some point their working memory is going to give out, and they’re not going to be able to complete that task as fast as a calculator would.
Dr. Gonzalez: AI solutions in healthcare must be equitable, accurate and safe.
Transcript:
In deploying artificial intelligence-based systems in real clinical settings, there’s a variety of challenges. The one the National Academy is particularly focused on is equity. There’s equity in the sense of making sure that everyone has access to this technology, and there’s equity in the sense of making sure that the data on which these systems are trained represent the patient populations in which they might be used.
But outside of these equity concerns, there’s the accuracy concern, particularly with generative models, which have the ability to create new text or new data. What we want to ensure is that the output of the model is accurate, not something the model made up just to answer the question.
The third big area is safety. Let’s assume the model puts out something that is completely accurate, in the format a human clinician would want to deal with. As these systems come into wider use, we want to make sure we have a separate system in place that evaluates the safety of the overall healthcare system.
There’s concern that using these systems may lead to a lot of unintended consequences. One of the big things we want to ensure is that institutions have a framework for identifying problems as they come up, because some existing problems are going to be exacerbated, but there are also going to be wholly new problems that weren’t issues until an artificial intelligence-based system was used.