  • Writer: David Jones
  • Apr 15
  • 2 min read

5 Reasons Why Clinical Trials Fail Due to Design


Poor methodology costs billions and delays life-saving treatments, yet the same structural errors appear, time and again, before a single patient is enrolled.


50% of trials fail to meet their primary endpoint

~$2B average cost to bring a new drug to market

40% of failures linked to poor protocol design


Clinical trials represent one of the most resource-intensive endeavours in medicine, with a single Phase III study routinely costing upwards of $300 million. Yet an estimated 50% of trials fail to meet their primary endpoints, and many of those failures are not due to the therapy itself, but to flaws embedded in the design long before the first participant signs a consent form. Understanding where design goes wrong is the first step to building studies that succeed.



The Five Design Failures



1. Small Sample Sizes

Enrolling too few participants is among the most preventable errors. When a trial lacks adequate statistical power, it cannot reliably detect a real treatment effect: even a genuinely efficacious therapy will appear to fail. Rigorous power calculations, incorporating realistic effect sizes and expected dropout rates, must precede recruitment, not follow it.
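To make the power calculation concrete, here is a minimal sketch of the standard normal-approximation formula for a two-arm comparison of means, inflated for expected dropout. The function name, default values, and the worked figures are illustrative assumptions, not taken from any specific trial.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(effect_size: float, alpha: float = 0.05,
                        power: float = 0.8, dropout: float = 0.0) -> int:
    """Approximate participants per arm for a two-sample comparison of
    means (normal approximation), inflated for expected dropout."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # quantile for the desired power
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return math.ceil(n / (1 - dropout))  # inflate for expected attrition

# A standardised effect size of 0.5 at 80% power, with 15% expected dropout:
print(sample_size_per_arm(0.5, dropout=0.15))  # → 74 per arm
```

Note how the dropout adjustment alone adds roughly a dozen participants per arm; running this before recruitment, rather than after, is exactly the discipline the paragraph above calls for.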



2. Poorly Defined Endpoints

Vague or clinically irrelevant endpoints undermine an otherwise sound trial. Surrogate markers may be convenient to measure but fail to translate into meaningful patient outcomes, causing regulatory bodies to reject promising data. Endpoints should be specified, and locked, prior to any unblinding, with clear thresholds for success defined in advance.



3. Inadequate Inclusion and Exclusion Criteria

Overly broad criteria introduce heterogeneity that obscures a real signal; overly narrow criteria produce findings that cannot be generalised to the real patient population. The tension between internal validity and external relevance must be resolved deliberately. Criteria that exclude entire demographics also risk producing data that regulators and clinicians cannot apply equitably.



4. Insufficient Control of Confounding Variables

Failing to randomise adequately, or to stratify by known prognostic factors, allows confounders to contaminate results. Adaptive designs and stratified randomisation can mitigate this risk, but they require foresight. Retrospective adjustment is no substitute for prospective control: once the data are collected with imbalanced arms, the damage to causal inference is largely irreversible.
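As an illustration of what prospective control looks like in practice, here is a minimal sketch of permuted-block randomisation within strata, one common implementation of the stratified randomisation mentioned above. The stratum labels and block size are hypothetical.

```python
import random

def stratified_assignments(patients, block_size=4, seed=42):
    """Permuted-block randomisation within each stratum: every block of
    `block_size` contains equal numbers of treatment and control, so the
    arms stay balanced within each prognostic stratum as enrolment proceeds."""
    rng = random.Random(seed)
    blocks = {}   # stratum -> remaining assignments in the current block
    out = {}
    for pid, stratum in patients:
        if not blocks.get(stratum):
            block = ["treatment", "control"] * (block_size // 2)
            rng.shuffle(block)           # randomise order within the block
            blocks[stratum] = block
        out[pid] = blocks[stratum].pop()
    return out

# Patients stratified by a hypothetical prognostic factor (disease stage):
cohort = [(i, "early" if i % 2 else "late") for i in range(8)]
print(stratified_assignments(cohort))
```

Because balance is enforced block by block inside each stratum, a known prognostic factor can never drift disproportionately into one arm, which is precisely the imbalance that retrospective adjustment cannot repair.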



5. Unrealistic Timelines and Operational Assumptions

Design failures are not always statistical. Trials that assume implausible enrolment rates, underestimate site activation times, or overlook data management complexity collapse during execution. Operational feasibility assessments should be as rigorous as the statistical design, with scenario planning built into the protocol from the outset to anticipate and absorb real-world delays.



Additional Listening


Learn from Joab Williamson, a clinical operations leader at Faron Pharmaceuticals with direct experience in immunotherapy and combination strategies for blood cancers, who shared a sponsor’s perspective on understanding and addressing clinical trial failures in HR-MDS. The discussion focused on practical, sponsor-led approaches to study design, operational controls, and lessons from the “graveyard” of past Phase 2 and 3 trials.


Joab Williamson presenting at COG UK 2026


