Case Study
February 5, 2026

Evaluating AI-Enabled Trial Screening in Real-World Oncology Workflows


Regenstrief, Triomics, and Indiana University Health, supported by an independent grant from Lilly

The Question

Clinical trial enrollment in oncology has not kept pace with the growing number and complexity of available trials—not because patients are reluctant, but because identifying eligibility across fragmented data sources is difficult to sustain in routine care. As trial protocols grow more complex and clinical workflows more distributed, health systems face a critical question:

Can AI-enabled trial screening improve identification of eligible patients while integrating seamlessly into routine care delivery?

For Indiana University (IU) Health and the IU Melvin and Bren Simon Comprehensive Cancer Center (IUSCCC), this is not a theoretical question. Leaders need to decide whether an AI-enabled screening approach can be trusted enough to scale across the system. For Triomics, the question is whether its tool can stand up to real-world testing, delivering reliable performance and measurable impact in routine operational use, beyond pilots and demonstrations. For a life sciences sponsor, the question is whether AI-assisted screening can responsibly expand trial access, with performance, safety, and equity validated in real-world oncology settings.

Existing evidence cannot fully answer these questions. Vendor claims and retrospective analyses do not capture how AI tools behave inside live workflows, and small pilots, while promising, offer limited insight into real-world adoption, trust, workflow integration, and sustainability. This is where implementation science is essential: it provides the methods to evaluate not just accuracy, but feasibility, fidelity, acceptability, and long-term impact in routine care.

We set out to evaluate the tool where it matters most: inside real oncology clinics, with real clinicians, making real decisions.

 

The Analysis

The Regenstrief Institute is currently leading an evaluation of Triomics Prism, an AI-enabled clinical trial screening tool, in partnership with IU Health, IUSCCC, and Triomics, supported by a healthcare improvement grant from Eli Lilly and Company.

Rather than immediately deploying the tool system-wide, IU Health and Regenstrief chose to begin with two sites within the IU Health system. These sites were selected to reflect the realities of oncology care, where variation in staffing, documentation, and site workflows is most informative for rapid iteration and design cycles.

The goal is to determine whether AI-enabled screening can be trusted to scale within a learning health system, before broader adoption.

We are studying clinical trial AI matching not as a standalone technology, but as a system intervention embedded into routine oncology practice. The evaluation is designed as a hybrid effectiveness–implementation trial, focused on generating decision-relevant evidence early—before scaling decisions are made.

Two complementary research efforts anchor this work, supported by clinical and operational leadership:

  • Dr. David Haggstrom, a nationally recognized implementation scientist, leads the implementation evaluation, examining adoption, fidelity, clinician burden, and sustainability using established evaluation frameworks.
  • Dr. Jiang Bian, Regenstrief’s Chief Data Scientist, leads the technical audit, evaluating how AI-enabled screening performs across heterogeneous real-world clinical data, including unstructured oncology documentation and evolving eligibility criteria.
  • Dr. Tim Lautenschlaeger (IU Radiation Oncology; Medical Director, IU Simon Comprehensive Cancer Center Clinical Trials Office) provides organizational and clinical leadership for the workflow change, including in-service sessions for clinical trial nurses and physicians.
  • The IU Health implementation team (co-leads: Emily Webber, MD, CMIO; including CTO, ODSRI, IT, and EDW teams) leads the health-system rollout of the AI clinical trial-matching platform: defining technical scope, build plans, and trial cohorts; completing data governance reviews; configuring EDW integrations; deploying within IU Health's HIPAA-compliant environment with role-based access; training CTO teams; and supporting go-live and ongoing performance monitoring.

Because Regenstrief operates within an integrated learning health system environment, we can conduct this work inside live IU Health oncology workflows—conditions that cannot be replicated in synthetic datasets or retrospective simulations.

Importantly, this evaluation does not assume that AI adoption is inherently beneficial. Instead, we are asking: Under what conditions does this intervention fit—and where does it not?

 

The Answer

Because this is a live study, the value lies not in final outcomes, but in what the evaluation is already enabling.

First, it reduces deployment risk. By evaluating AI-enabled screening in two satellite sites first, IU Health can assess trust, workflow fit, and operational impact before committing to system-wide scale. This approach replaces assumption-driven adoption with evidence-informed decision-making.

Second, it enables rapid learning and iteration. By combining quantitative performance data with continuous qualitative feedback from patients, clinicians, and operational teams, the evaluation supports timely refinement of workflows and tool configurations before broader scale-up.

Third, it establishes a reusable model for responsible AI evaluation. For life sciences sponsors and health systems alike, this collaboration demonstrates how AI interventions can be evaluated rigorously before scale—through partnerships that align vendors, delivery systems, and independent researchers around shared evidence needs.

This work reflects how we approach clinical AI/ML at Regenstrief: by evaluating AI interventions inside the real-world clinical systems where trials of life-saving therapies succeed or fail.
