Why Traditional Feasibility Fails (and What Actually Predicts Enrollment Success)
Clinical trial feasibility is intended to answer a straightforward question: Can this study realistically enroll patients?
Yet across therapeutic areas and trial phases, studies that pass feasibility with confidence routinely struggle to meet enrollment targets—or fail altogether.
This pattern is well documented in academic literature. Analyses of terminated clinical trials show that low participant accrual is one of the most common reasons studies are delayed, extended, or stopped early (Desai et al., 2020). The issue is not isolated execution failure. It reflects a deeper limitation in how feasibility is traditionally defined and measured.
What Traditional Feasibility Actually Measures
Most feasibility assessments rely on a familiar set of signals:
- Site-reported experience with similar studies
- Historical enrollment performance
- Patient counts derived from structured EHR fields
- Investigator confidence in protocol fit
These inputs are useful for assessing interest and capability. However, research consistently shows they function as proxies, not predictors, of enrollment success.
Reviews of recruitment practices note that feasibility approaches are often inconsistent and poorly standardized, limiting their ability to accurately forecast real-world recruitment outcomes (NIH, NCBI Bookshelf).
Why These Signals Break Down in Practice
Structured data lacks clinical context.
Structured EHR fields provide scalable counts, but they rarely capture disease severity, progression, comorbidities, or care pathways. Peer-reviewed studies have shown that relying on coded data alone can significantly misrepresent how many patients are truly available and appropriate for a given protocol (Idnay et al., 2023).
As a result, feasibility estimates based solely on structured data often overstate real enrollment potential.
Eligibility is treated as static.
Traditional feasibility assumes eligibility is a fixed attribute. In reality, eligibility is temporal. Patients move in and out of eligibility windows as conditions evolve, treatments change, or care settings shift.
Large retrospective analyses of registered trials demonstrate that initial enrollment projections frequently fail to account for these dynamics, contributing to persistent recruitment shortfalls (Desai et al., 2020).
Historical performance is misapplied.
Past enrollment success is commonly used as a signal for future performance. However, academic literature shows that protocol complexity, eligibility burden, and real-world care patterns have a greater influence on enrollment outcomes than site history alone (NCBI Bookshelf).
Historical performance appears to lose predictive value when protocols diverge meaningfully from prior studies or when patient populations differ in clinically relevant ways.
Protocol assumptions don’t reflect real-world care.
Protocols are often designed around idealized patient populations. Analyses published in major medical journals consistently show that trial populations differ substantially from patients seen in routine clinical practice, limiting both generalizability and enrollability (NEJM perspective summarized in NCBI).
When these assumptions go unchallenged, feasibility projections become overly optimistic.
The Feasibility-to-Enrollment Gap
Feasibility and enrollment are typically treated as sequential steps rather than a continuous process. This separation creates a gap between early expectations and real-world execution.
Public analyses of ClinicalTrials.gov data show that a large proportion of trials fail to meet original enrollment targets or timelines, even after feasibility assessments are completed (PMC review).
When feasibility relies on static proxies instead of dynamic evidence, enrollment risk is often identified too late to correct.
What Actually Predicts Enrollment Success
Evidence from academic and public research suggests that enrollment success aligns more closely with contextual and longitudinal signals than with raw patient counts.
More reliable predictors include:
- Longitudinal patient availability rather than point-in-time estimates
- Alignment between protocol intent and real-world care patterns
- Temporal eligibility tied to clinical events
- Operational readiness at the site level
- Continuity between feasibility assumptions and recruitment execution
Reviews of completed trials indicate that design complexity and real-world patient flow are stronger indicators of enrollment performance than historical site metrics alone (Idnay et al., 2023).
How Leading Teams Rethink Feasibility
Organizations with more consistent enrollment outcomes tend to approach feasibility as an evidence-based discipline rather than a one-time checkpoint:
- From static counts to longitudinal insight
- From coded data to clinical context
- From isolated feasibility exercises to continuous evaluation
This shift does not eliminate uncertainty, but it reduces reliance on assumptions that academic research has repeatedly shown to be unreliable.
Rethinking Feasibility
Feasibility should not function as a confidence exercise. It should operate as an evidence-based decision discipline grounded in how care is delivered today. When feasibility reflects context, timing, and continuity, enrollment outcomes become more predictable—not guaranteed, but materially better aligned with reality.
Frequently Asked Questions
Why does traditional feasibility fail to predict enrollment?
Because it relies heavily on proxies such as historical performance and structured data counts that do not reflect real-time patient availability or clinical context (NCBI Bookshelf).
Is enrollment failure a common reason trials stop early?
Yes. Multiple analyses of terminated trials identify low accrual as one of the leading causes of early termination and prolonged timelines (Desai et al., 2020).
Can feasibility be improved?
Feasibility becomes more predictive when it incorporates longitudinal data, clinical nuance, and operational realities rather than static snapshots (Idnay et al., 2023).
Read More
American Hospital Association: How AI Is Transforming Clinical Trials
Top AI-Driven Clinical Trial Patient Identification Tools: What Research Teams Should Look for in 2026
Beyond the Buzzword: What Real AI Looks Like in Clinical Research
Human-in-the-Loop AI in Clinical Trials: Why the Future of Recruitment Isn’t Fully Automated
The Rise of the Digital-Ready Site: How Technology Readiness Wins More Studies