Executive Update
March 16, 2020

From Peter Embí, CEO: Regenstrief partnering to guide the future of AI in healthcare

Dr. Peter J. Embi

As we enter a new decade, the role of artificial intelligence (AI) in healthcare will continue to grow. AI offers exciting opportunities to improve care, but many questions remain unanswered: which applications are appropriate, how they will affect workflow and staffing, and how to maximize benefit while minimizing harm, among other unknowns.

For decades, Regenstrief research scientists have been pioneers in the use of AI — machine learning, natural language processing, computerized decision support and other innovative uses of big data — to improve healthcare delivery. From the institute’s early days, our innovators have been designing, developing, testing and applying original concepts and novel approaches to enable computers to analyze large amounts of data, draw conclusions and recommend actions based on the results.

While we have learned that the technologies that comprise AI hold immense potential for benefit, we also know that we must be cautious, thoughtful and wise in mitigating risks as we move forward.

Regenstrief is playing a key role in the current, very active discussion of AI and healthcare. Vice President for Research Development Eneida Mendonca, M.D., PhD, is a co-author of a seminal report released late last year by the National Academy of Medicine (NAM) exploring opportunities, issues and concerns related to AI and its role in improving human health. The report, “Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril,” also features input from experts affiliated with institutions such as Harvard University, the Mayo Clinic, Johns Hopkins, Stanford University, Columbia University and the Gates Foundation, in addition to Regenstrief.

Dr. Mendonca is also partnering with Harvard University to organize an AI conference on March 24 in Cambridge, Massachusetts. In addition, she and Shaun Grannis, M.D., Regenstrief vice president for data and analytics, are spearheading an AI conference hosted by Regenstrief and planned for June.

Renowned for her work and expertise in machine learning, predictive analytics and AI adoption, Dr. Mendonca co-authored chapters of the NAM report that consider the potential tradeoffs and unintended consequences of AI and explore how to deploy AI in clinical settings. For more information about the report, visit https://www.regenstrief.org/article/mendonca-nam-ai-report/. The press release includes a link to download the special report at no cost.

Another distinguished Regenstrief expert, Dr. Grannis, is a global thought leader in health and data who recently presented in Washington, D.C., on the FDA’s real-world evidence (RWE) initiative and works closely with policymakers to advance a national patient identity management strategy. He is also involved in numerous AI projects, including work with fellow Regenstrief investigators Joshua R. Vest, PhD, Nir Menachemi, PhD, and Suranga Kasthurirathne, PhD, on the highly successful Uppstroms, a machine learning app that helps at-risk populations by anticipating their needs outside of clinical care.

At a recent Regenstrief Entrepreneurial Ecosystem Forum (REEF), Dr. Grannis succinctly communicated to a group of physician entrepreneurs, “We know there is value in healthcare AI. Excitement and attention are growing, but often we get distracted by the tremendous potential.”

I echo the sentiment that as we harness the immense potential of AI, we should focus on the whole picture in real-world situations, being realistic about applications and expecting unexpected outcomes. Because AI systems learn from data generated by humans, they can amplify existing behaviors, including bias.
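To make that last point concrete, here is a minimal, purely illustrative sketch (not drawn from the NAM report or any Regenstrief system; all names and numbers are hypothetical) showing how a model trained on human-generated labels reproduces the disparity baked into those labels:

```python
# Illustrative only: a toy model trained on historically biased referral labels
# learns to reproduce that bias, even though true need is identical across groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)             # 0 = group A, 1 = group B (hypothetical)
need = rng.normal(size=n)                 # true clinical need, same distribution in both groups

# Historical labels: humans under-referred group B at the same level of need.
referred = (need + np.where(group == 1, -0.8, 0.0) + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([group, need])
model = LogisticRegression().fit(X, referred)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical referral rate = {referred[group == g].mean():.2f}, "
          f"model referral rate = {pred[group == g].mean():.2f}")
# Both groups have identical true need, yet the model refers group B less often,
# mirroring the disparity present in its human-generated training data.
```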

That is why algorithmovigilance is so important. We must be vigilant to ensure these new tools are providing the positive impact for which they were designed while not causing unanticipated harm. If harm is occurring, we must act decisively to alleviate the problem. Ongoing monitoring also builds trust in AI. Many in healthcare are wary of this new technology, so we must provide evidence of its positive impact, or at the very least, that it is not causing negative outcomes. Dr. Mendonca, Dr. Grannis, Chief Information Officer Umberto Tachinardi, M.D., and I discussed this important topic at the American Medical Informatics Association’s Health Informatics Policy Forum in early December. We must also be cognizant of safety concerns, including the potential for hacking.
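For readers who want a feel for what such vigilance might look like in practice, the following is a minimal sketch, under assumed thresholds and field names, of a periodic check that compares a deployed model’s recent performance, overall and by subgroup, against a pre-deployment baseline. It does not describe any Regenstrief system.

```python
# Illustrative "algorithmovigilance" check: flag degradation in a deployed model's
# sensitivity, overall and within subgroups, relative to a validation baseline.
import numpy as np

BASELINE_SENSITIVITY = 0.85   # assumed value measured during pre-deployment validation
ALERT_MARGIN = 0.05           # assumed tolerated drop before raising an alert

def sensitivity(y_true, y_pred):
    """Fraction of true positives the model catches; NaN if no positives observed."""
    positives = y_true == 1
    return (y_pred[positives] == 1).mean() if positives.any() else float("nan")

def vigilance_report(y_true, y_pred, subgroup):
    """Return alerts for overall or subgroup-level degradation in this monitoring period."""
    alerts = []
    overall = sensitivity(y_true, y_pred)
    if overall < BASELINE_SENSITIVITY - ALERT_MARGIN:
        alerts.append(f"overall sensitivity dropped to {overall:.2f}")
    for g in np.unique(subgroup):
        s = sensitivity(y_true[subgroup == g], y_pred[subgroup == g])
        if s < BASELINE_SENSITIVITY - ALERT_MARGIN:
            alerts.append(f"subgroup {g!r} sensitivity dropped to {s:.2f}")
    return alerts or ["no degradation detected this monitoring period"]

# Example run on synthetic monitoring data (hypothetical labels and subgroups).
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 500)
y_pred = np.where(rng.random(500) < 0.9, y_true, 1 - y_true)   # ~90% agreement with truth
subgroup = rng.choice(["A", "B"], 500)
print(vigilance_report(y_true, y_pred, subgroup))
```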

The use of AI will alter the healthcare workforce. While machines will not replace humans, they will likely change current roles. The ideal use of AI will free clinicians and others in the health fields to do more good for people, with computers managing administrative tasks and assisting with prioritizing resources.

Educational opportunity available

As a follow-up to the NAM and partners special report, Regenstrief is hosting an 11-video course based on the publication on the institute website. NAM, in collaboration with key partners (Regenstrief, the Stanford University Institute for Human-Centered Artificial Intelligence, the Gordon and Betty Moore Foundation, the American Medical Association, Sutter Health and Vanderbilt University), led a digital learning collaborative at which the publication’s authors and other AI experts discussed key considerations from the special publication. Videos of the discussions are now available online at learning.regenstrief.org for CME credit through the Indiana University School of Medicine’s Division of Continuing Medical Education.

We are in the early developmental stages, with many practical, scientific, technological, legal and ethical hurdles to overcome. But AI clearly holds immense potential in the healthcare arena for both individual and population health. As we leverage AI, clinical safety and effectiveness, along with stakeholder and user engagement, are of paramount importance, as are continual monitoring and evaluation. We at Regenstrief are well positioned and eager to continue the institute’s leadership, alongside many of the other highly regarded institutions mentioned above, in pragmatically developing and wisely implementing AI as we prepare for both the expected and unexpected effects it will have.

Humans will drive AI, and they must do so wisely. I look forward to a future in which we prudently employ the tools of AI for the benefit of humankind.
