Background
Comparative effectiveness research (CER) is a crucial methodology in healthcare, designed to evaluate the benefits and harms of different treatment options, interventions, and healthcare delivery methods. Unlike traditional clinical trials that often compare a new treatment to a placebo or no treatment, CER directly compares existing, active treatments or standard care approaches. This approach is particularly relevant in real-world healthcare settings where clinicians and patients need to make informed decisions between available options.
The significance of CER arises from the understanding that demonstrating a treatment’s efficacy against a placebo, while important, may not always translate to its effectiveness compared to other real-world treatments. In many clinical scenarios, multiple treatments are available, each with its own profile of benefits, risks, and costs. CER provides valuable insights by directly comparing these active treatments, helping to identify which interventions work best for specific patient populations and under typical practice conditions.
This becomes especially pertinent when considering healthcare interventions beyond pharmaceuticals and devices, such as behavioral therapies, surgical procedures, or changes in healthcare delivery systems. While regulatory pathways often mandate rigorous placebo-controlled trials for new drugs and devices, many other healthcare interventions lack such stringent pre-implementation evidence requirements. However, healthcare professionals and patients rightly expect that treatments offered are appropriate and effective, even when robust research evidence is lacking. This expectation is amplified in publicly funded healthcare systems, where accountability for delivering effective and efficient care is paramount.
In situations where a treatment is already integrated into usual care and widely accepted, conducting placebo-controlled trials can raise ethical and practical concerns. Withdrawing established care to create a no-treatment control group may be perceived as unethical and may hinder patient recruitment. In these contexts, CER offers a more ethically sound and practically feasible approach to generating valuable evidence about treatment effectiveness, understanding disease mechanisms, and improving healthcare efficiency. Furthermore, some researchers argue that placebo-controlled designs are sometimes inappropriately used when CER designs would be more suitable and informative.
This article proposes a decision-making framework to guide researchers in determining when CER is the most appropriate research design. This framework is particularly relevant for interventions without strict regulatory requirements and aims to promote the broader adoption of CER to generate clinically relevant and translatable real-world evidence. The selection of a comparator in clinical trials has profound implications, not only for the scientific conclusions drawn but also for the experiences of patients and healthcare providers, and ultimately for healthcare policy and practice. Therefore, a carefully considered methodological approach is essential for clinical researchers.
Framework
To assist clinician researchers in choosing the most appropriate study design, particularly when evaluating interventions within existing healthcare practices, we have developed a decision-making framework presented as a decision tree (Fig. 1). This framework helps determine when a comparative effectiveness study is justified, especially for services outside stringent regulatory oversight. The framework poses questions (within ovals) that lead to decision nodes (rectangles) and finally to decision points (diamonds). Each question is discussed with clinical examples to clarify its relevance.
Figure 1. Comparative effectiveness research decision-making framework. Treatment A represents any treatment for a particular condition, which may or may not be a component of usual care to manage that condition. Treatment B represents the treatment of interest. Where the response is unknown, the user should choose the NO response.
In this framework, Treatment A represents any existing treatment for a condition, whether or not it’s part of standard usual care. Treatment B represents the new treatment or intervention being investigated. The framework guides researchers to one of three recommendations: (i) a study comparing Treatment B to no active intervention, (ii) a study comparing Treatment A, Treatment B, and no active intervention, or (iii) a comparative effectiveness study directly comparing Treatment A and Treatment B.
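To make the pathway concrete, the sketch below expresses this decision logic as a small Python function. It is a minimal, illustrative rendering only: the function and argument names are our own (they do not appear in the framework), the routing simplifies Fig. 1, and, per the figure caption, an unknown answer is treated as NO.

```python
# Illustrative sketch of the Fig. 1 decision pathway. Names and exact routing are
# simplifications introduced here; the figure remains the authoritative framework.

def recommend_design(usual_care_treatment_exists: bool,
                     a_effective_vs_no_treatment: bool,
                     a_benefits_outweigh_side_effects: bool,
                     a_net_benefit_justifies_cost: bool,
                     presenting_to_health_service: bool) -> str:
    """Suggest a study design for evaluating Treatment B (unknown answers -> False)."""
    if not usual_care_treatment_exists:
        # Exit 1: no Treatment A in usual care (consider adding A if it is standard care elsewhere).
        return "Treatment B vs no active intervention"
    if not a_effective_vs_no_treatment:
        # Limited evidence for A: either design may be defensible (see Question 2 discussion).
        return "Treatment A vs Treatment B, or Treatment B vs no active intervention"
    if not a_benefits_outweigh_side_effects:
        # Exit 2; uncertain risk profiles may instead justify a three-arm trial (Exit 3).
        return "Treatment B vs no active intervention"
    if not a_net_benefit_justifies_cost:
        # Exit 4: Treatment A is not a realistic or sustainable comparator.
        return "Treatment B vs no active intervention"
    if presenting_to_health_service:
        # Patients presenting for care expect treatment; withholding usual care is problematic.
        return "Comparative effectiveness study: Treatment A vs Treatment B"
    return "Treatment A vs Treatment B, or Treatment B vs no active intervention"

print(recommend_design(True, True, True, True, True))
# Comparative effectiveness study: Treatment A vs Treatment B
```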
Level 1 Questions
Question 1: Is the condition of interest currently being managed by any treatment as part of usual care, either locally or internationally?
The initial step for researchers is to identify the current standard treatments for the target patient population. This is critical to decide between conducting CER (Treatment A vs. B) or a trial comparing Treatment B to an inactive control. It’s important to recognize that “usual care” can vary significantly across different healthcare settings and regions. What is considered standard practice in one location might not be in another. Therefore, researchers must thoroughly investigate usual care practices in their local context and more broadly.
If no usual care treatment exists, then a study design comparing Treatment B to no active treatment is a reasonable approach (Fig. 1, Exit 1). However, even if there is no local usual care, strong evidence for the effectiveness, safety, and cost-effectiveness of Treatment A (where it is not locally implemented) should prompt consideration of including Treatment A in the study. This situation often arises because of the well-documented lag in translating research findings into clinical practice, with research indicating it can take an average of 17 years to implement only a small fraction of evidence-based care. While a Treatment B versus no active treatment design might seem more straightforward, the clinical value of such research is limited compared to CER of Treatment A versus Treatment B, particularly if Treatment A represents a recognized standard of care elsewhere. If the condition is already managed under usual care, researchers should proceed to the next Level 1 question.
Example: Fall prevention in hospitals is a universal safety priority, and most healthcare systems have established fall prevention strategies as part of usual care. Evaluating the effectiveness of different fall prevention programs within a hospital setting would typically necessitate a comparative design. Using a non-active treatment control in this scenario would mean removing a service deemed essential, a governmental health priority, and already embedded in the healthcare system—a scenario that is often ethically and practically problematic.
Question 2: Is there strong evidence of Treatment A’s effectiveness compared to no active intervention beyond usual care?
If there is substantial evidence demonstrating Treatment A’s effectiveness compared to placebo or no active treatment, the framework progresses to Question 3. However, if the evidence for Treatment A is limited, a comparative design of Treatment A versus Treatment B can still be considered. This comparison generates locally relevant evidence (is Treatment B superior to current usual care, Treatment A?) and provides valuable data for other settings that use Treatment A as usual care. This design is particularly useful when the research focus is on a specific local population and broad generalization of findings is less critical.
For instance, chronic disease management programs (Treatment A) implemented in different Indigenous communities have shown varying success rates, heavily influenced by unique local characteristics, cultures, and traditions. Transplanting such a program (Treatment A) to an urban, non-Indigenous setting without careful adaptation may render it ineffective. Including Treatment A as a comparator can also be valuable when the condition under study has an unclear cause and the treatments being compared address different underlying biological mechanisms. However, if Treatment A’s applicability is restricted to the research location and broader generalizability isn’t a primary goal, a Treatment B versus no active control design might be more appropriate.
Key considerations for clinical researchers at this stage:
- How commonly Treatment A is used as part of usual care.
- The success of established treatments in specific, localized, or unique population groups.
- The strength and consistency of evidence for Treatment A’s effectiveness compared to placebo or no active treatment.
Level 2 Question
Question 3: Do the benefits of Treatment A outweigh its side effects when compared to no active intervention beyond usual care?
When Treatment A is known to be effective but also associated with side effects, the severity, frequency, and duration of these side effects must be carefully weighed before using Treatment A as a comparator for Treatment B. If the risks or potential severity of Treatment A’s side effects are unacceptably high or uncertain, and no other suitable comparative treatments are available, a study design comparing Treatment B to no active intervention should be chosen (Fig. 1, Exit 2). The ethical implications of continuing to use Treatment A as part of usual care should also be re-evaluated in light of its side effect profile. Even if the side effects of Treatment A are deemed acceptable, CER may still be warranted to find a safer or more tolerable alternative.
Clinical researchers may face challenges when the risks of both Treatment A and Treatment B are unknown, or when one treatment is only marginally riskier than the other. When the relative risks of the treatments cannot be clearly compared, the framework treats the answer as uncertain. This may justify a Treatment A versus Treatment B design, a Treatment B versus no intervention design, or even a three-arm trial investigating Treatment A, Treatment B, and no intervention (Fig. 1, Exit 3) to thoroughly explore the risk-benefit profiles of all options.
Example: Exercise programs incorporating walking training offer numerous health benefits for older adults and have demonstrated fall prevention benefits. However, vigorous walking programs for individuals at high risk of falls can paradoxically increase the incidence of falls. In such cases, a pragmatic CER approach focusing on risk and comparative effectiveness could more effectively demonstrate the net effect than a placebo (no active treatment) trial, which may not adequately capture the nuances of risk associated with active interventions.
Key considerations for clinical researchers:
- The potential risks of treatment side effects, including serious adverse events.
- The acceptability of the risk-benefit profile for all treatments being considered in the study design.
Level 3 Question
Question 4: Does Treatment A have a sufficient overall net benefit, considering all costs and consequences, to be deemed superior to a ‘no active intervention beyond usual care’ condition?
Demonstrated effectiveness and acceptable side effects are not sufficient justification for Treatment A to be automatically considered the standard comparator. If the cost of providing Treatment A is prohibitively high relative to its benefits, or if Treatment A has been shown not to be cost-effective (for example, its cost per unit of benefit exceeds accepted cost-effectiveness thresholds), then Treatment A is not a realistic or sustainable comparator. Health economics literature often cites a cost-effectiveness threshold of $50,000 per quality-adjusted life year (QALY) gained as a benchmark, although this is debated and may vary across different healthcare systems and societal values. Considering these economic factors is crucial to reassess whether Treatment A should remain part of usual care. If no other potential comparative treatments exist, a study design comparing Treatment B to no active intervention is recommended (Fig. 1, Exit 4).
Conversely, if Treatment A has established efficacy, safety, and cost-effectiveness compared to no active treatment, it becomes ethically problematic to conduct a study comparing Treatment B to no active intervention. Enrolling patients and asking them to consent to potentially forgo a safe and effective treatment they would otherwise receive raises ethical concerns and may lead to poor recruitment rates. In such scenarios, including Treatment A as a comparator is more ethical and clinically relevant, especially if Treatment A is readily accessible or can be provided within the trial context.
Example: The methodological design of a diabetic foot wound study highlights the importance of health economics in CER. A study comparing Treatment A (non-surgical sharps debridement) to Treatment B (low-frequency ultrasonic debridement) must consider not only clinical outcomes but also costs. Evidence supports the necessity of wound care, as non-intervention would put patients at risk of wound deterioration, potentially leading to limb loss or death. The economic analysis should account for consumable expenses and short-term time demands versus longer-term time demands and potential cost savings associated with each treatment. Furthermore, the value of information gained from the research should be weighed against the opportunity cost of using research funds for other purposes, considering the existing evidence base for Treatment A’s cost-effectiveness.
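As a worked illustration of the threshold comparison referred to above, the short sketch below computes a hypothetical incremental cost-effectiveness ratio (ICER) and checks it against the commonly cited (and debated) $50,000-per-QALY benchmark. The cost and QALY figures are invented for the example and are not drawn from the diabetic foot wound study.

```python
# Hypothetical ICER calculation; all figures below are invented for illustration.

def icer(cost_new: float, cost_comparator: float,
         qalys_new: float, qalys_comparator: float) -> float:
    """Incremental cost per QALY gained by the new treatment over the comparator."""
    return (cost_new - cost_comparator) / (qalys_new - qalys_comparator)

THRESHOLD = 50_000  # commonly cited cost-per-QALY benchmark (contested, system-dependent)

ratio = icer(cost_new=12_000, cost_comparator=7_000,
             qalys_new=1.30, qalys_comparator=1.15)
print(f"ICER: ${ratio:,.0f} per QALY gained")   # ICER: $33,333 per QALY gained
print("Within threshold:", ratio <= THRESHOLD)  # Within threshold: True
```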
Key considerations for clinical researchers:
- Comprehensive economic evaluation and its influence on treatment value.
- Understanding the health economics of treatments, alongside their effectiveness, to guide clinical practice and research design.
- Recognizing that not all treatment costs are immediately apparent; establishing them is crucial for evidence-based practice and informed research design.
Level 4 Question
Question 5: Is the patient (potential participant) presenting to a health service or to a university- or research-administered clinic?
If Treatment A is not currently part of usual care, researchers might consider three options: (i) CER of Treatment B added to usual care versus usual care alone, (ii) introducing Treatment A into usual care for the trial and comparing it to Treatment B added to usual care, or (iii) a trial of Treatment B versus no active control. If option (i) is considered, usual care itself becomes Treatment A, and the researcher should revisit Question 2 in the framework.
There is increasing emphasis on health research conducted by clinicians within healthcare service settings, as distinct from university-based academic research. Patients presenting to health services generally expect to receive treatment for their condition, unlike individuals responding to research trial advertisements who understand they might not receive active treatment. In healthcare service settings, option (ii) – introducing Treatment A if it is not already usual care and then comparing it with Treatment B – becomes particularly relevant if Treatment A represents a recognized standard of care elsewhere.
Using research designs (option iii) comparing Treatment B to no active control within a health service setting presents ethical and practical challenges for clinical staff. They must address the ethical implications of enrolling patients in a study where they might not receive active treatment (Fig. 1, Exit 4). This is not to suggest that non-active controls are inherently unethical. When there is a genuine lack of evidence for treatment effectiveness, a no-active control arm can be ethically justified. However, this needs to be considered in light of the other framework questions about treatment risks and the role of treatment in usual care. Clinicians must balance the need to establish treatment effectiveness, safety, and cost-effectiveness with their primary concern for patient well-being and the possibility of withholding potentially beneficial treatment. This ethical equipoise, or genuine uncertainty about the preferred treatment, is a critical consideration.
Patients have a right to access publicly available health interventions, regardless of trial participation. Comparing Treatment B to no active control may be inappropriate if it means withholding established usual care. However, if there’s insufficient evidence for the effectiveness of usual care, or if there’s evidence of potential harm, prohibitive implementation costs, or significant intervention costs, a sham or placebo-based trial might be justified.
Example: A CER study evaluating different treatment options for heel pain within a community health service highlighted the impact of the research setting. Children with heel pain attending the health service for treatment were recruited for the study. Upon enrollment, children and parents were asked if they would participate if there was a chance of being assigned to a ‘no-intervention’ group. Of 124 participants, only 7% (n=9) agreed to participate if placed in a no-treatment group. This demonstrates the significant challenges of using no-active control groups in healthcare settings where patients present seeking treatment.
Key considerations for clinical researchers:
- The research setting fundamentally influences the feasibility and ethics of research design.
- Clinical equipoise presents unique challenges for clinicians recruiting patients in healthcare settings.
- Patients seek treatment when entering a healthcare service; participation in a clinical trial is not their primary motivation.
Conclusion
This framework offers a structured decision-making process for selecting comparators in CER, based on current interventions, treatment risks, economic considerations, and the research setting. While scientific rigor remains paramount, researchers in clinical contexts must also navigate practical and ethical considerations related to existing practice, patient safety, and real-world outcomes. We propose that in healthcare settings, CER designs should often be the preferred methodology over placebo-based trials, provided that evidence for treatment options, risks, economic factors, and the specific setting are carefully evaluated. By systematically considering these factors, researchers can design more ethical, relevant, and impactful clinical trials that contribute to evidence-based healthcare and improve patient outcomes.
Authors’ contributions
CMW and TPH drafted the framework and manuscript. All authors critically reviewed and revised the framework and manuscript and approved the final version of the manuscript.
Competing interests
The authors declare that they have no competing interests.
Contributor Information
Cylie M. Williams, Phone: +61 3 9784 8100, Email: [email protected]
Elizabeth H. Skinner, Email: [email protected]
Alicia M. James, Email: [email protected]
Jill L. Cook, Email: [email protected]
Steven M. McPhail, Email: [email protected]
Terry P. Haines, Email: [email protected]