Case Study
February 5, 2026

Evaluating AI-Enabled Trial Screening in Real Oncology Workflows


(Triomics × IU Health × Regenstrief, with support from Eli Lilly and Company)

The Question

Clinical trial enrollment in oncology continues to fall short—not because patients are unwilling, but because eligibility screening is labor-intensive, fragmented, and difficult to sustain in routine care. As trial protocols grow more complex and staffing constraints persist, health systems face a critical question:

Can AI-enabled trial screening improve identification of eligible patients without increasing clinician burden or disrupting care delivery?

For IU Health Oncology, this was not a theoretical question. Leaders needed to decide whether an AI-enabled screening approach could be trusted enough to scale across the system. For Triomics, the question was whether their tool could perform reliably outside of pilots and demonstrations. For the life sciences sponsor, the question was whether AI-assisted screening could responsibly support trial access in real-world oncology settings.

Existing evidence could not fully answer these questions. Vendor claims and retrospective analyses did not capture how AI tools behave inside live workflows. Small pilots offered promise but little insight into adoption, trust, or sustainability.

We set out to evaluate the tool where it matters most: inside real oncology clinics, with real clinicians, making real decisions.

The Analysis

We are currently leading a sponsor-funded evaluation of Triomics PRISM, an AI-enabled clinical trial screening tool, in partnership with IU Health Oncology, Triomics, and Eli Lilly and Company.

Rather than immediately deploying the tool system-wide, IU Health and Regenstrief agreed to begin with two oncology satellite sites within the IU Health system. These sites were selected to reflect the realities of community-based oncology care—where staffing constraints, documentation variability, and competing clinical priorities are most acute.

Our goal is to determine whether AI-enabled screening can be trusted to scale within IU Health specifically, before broader adoption.

We are studying PRISM not as a standalone technology, but as a workflow intervention embedded in routine oncology practice. The evaluation is designed as a hybrid effectiveness–implementation study, focused on generating decision-relevant evidence early—before scaling decisions are made.

Two complementary research efforts anchor this work:

  • Dr. Jiang Bian, Regenstrief’s Chief Data Scientist, leads the technical audit, evaluating how AI-enabled screening performs across heterogeneous real-world clinical data, including unstructured oncology documentation and evolving eligibility criteria.
  • Dr. David Haggstrom, a nationally recognized implementation scientist, leads the implementation evaluation, examining adoption, fidelity, clinician burden, and sustainability using established implementation science frameworks.

Because Regenstrief operates within a deeply integrated learning health system environment, we are able to conduct this work inside live IU Health oncology workflows—conditions that cannot be replicated in synthetic datasets or retrospective simulations.

Importantly, this evaluation does not assume that AI adoption is inherently beneficial. Instead, we are asking: Under what conditions does this intervention fit—and where does it not?

The Answer

Because this is a live study, the value lies not in final outcomes but in what the evaluation is already enabling.

  • First, it reduces deployment risk.
    By evaluating AI-enabled screening in two satellite sites first, IU Health can assess trust, workflow fit, and operational impact before committing to system-wide scale. This approach replaces assumption-driven adoption with evidence-informed decision-making.
  • Second, it tests whether AI screening can extend trial access in real care settings.
    Automated prescreening has the potential to support trial identification beyond academic centers. This study examines whether that promise holds in community oncology environments—without increasing clinician burden or disrupting care.
  • Third, it establishes a reusable model for responsible AI evaluation.
    For life sciences sponsors and health systems alike, this collaboration demonstrates how AI interventions can be evaluated rigorously before scale—through partnerships that align vendors, delivery systems, and independent researchers around shared evidence needs.

This work reflects how we approach clinical AI at Regenstrief: not as a product success story, but as a structured way to turn uncertainty into credible evidence.

This is what it looks like to evaluate AI interventions inside the real clinical systems where trials—and ultimately therapies—succeed or fail.