Warren Stevens & Gemma Shields, PAREXEL
The methods used to estimate the cost-effectiveness (CE) of health technologies are constantly evolving. Early approaches to the economic evaluation of health technologies relied primarily on evidence from a randomized controlled trial (RCT), on the basis that an RCT provides the highest quality of evidence (Hoch 2002). As CE has grown in importance to decision makers, the desire to produce the most relevant evidence for the question at hand has become more pressing. One consequence has been the acceptance that, for certain variables within a CE model, real-world data, or real-world evidence (RWE), may be more appropriate for answering the CE question than RCT data.
The goals of an RCT and a CE evaluation are fundamentally different. The primary aim of an RCT is internal validity: ensuring that the measured variance in the primary outcome between the intervention and control arms is attributable purely to the intervention being tested. An RCT is designed to evaluate the efficacy of an intervention in a specific, tightly defined population. The value of randomization is that it reduces selection bias and the effects of confounding. Assuming random treatment allocation, an appropriate sample size can be determined on the basis of probability theory, and inclusion and exclusion criteria precisely define the study population.
The goal of a cost-effectiveness analysis (CEA) is different. Its primary aim is to evaluate the long-term cost-effectiveness of an intervention in normal clinical practice, that is, for the population who might reasonably be expected to use the intervention. It is less concerned with the potential efficacy of an intervention than with its actual effectiveness in patient populations treated in everyday practice. The vast majority of CEA models are built within a decision analytic framework, and as a result there has been growing interest in incorporating RWE to bridge this internal-external validity gap, and as a means of complementing short-term RCT results with longer-term observational data. In part, this interest reflects a greater acceptance that patients enrolled in RCTs differ from real-world populations. Globally, major clinical and research organizations, including ISPOR and the Royal Society of Medicine (Weinstein 2003; Rawlins 2008), and most recently the UK’s NICE, have outlined the need to use RWE when estimating absolute treatment effects (Faria 2015).
Numerous studies have compared cost-effectiveness models using RCT data and RWE, and all have shown significant differences in results. One comparison of CE models showed that CE was 30% more favorable using RWE than in RCT-based studies (Campbell 2014). Another reviewed 25 studies and showed that, on average, RWE-based studies were twice as likely to be considered ‘cost-effective’ as RCT-based studies (Dilokthornsakul 2017). A recent comparison of six separate studies of PCSK-9 inhibitor drugs showed that the RCT-based group of models estimated around twice the cost per QALY gained compared with the RWE-based group (Toth 2017).
Ultimately, the question is not whether one set of CE studies is right or wrong, but rather: what population are we interested in? If the prescribing of new therapies should be limited solely to the subset of patients included in RCTs, then RCT-based studies provide an accurate depiction of the value of the therapy tested. However, if we accept that RCTs are simply the first step in assessing the potential clinical value of new therapies in the full population of people who could benefit from them, then we should consider that incorporating RWE to estimate risk is as important a source of truth as RCT data is for estimating efficacy.
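The sensitivity of the incremental cost-effectiveness ratio (ICER) to the source of the effectiveness input can be illustrated with a minimal sketch. The numbers below are purely hypothetical and the `icer` function is our own illustration, not taken from any of the cited models; the point is only that a larger QALY gain, as might be observed in a higher-risk real-world population, mechanically lowers the cost per QALY for the same incremental cost.

```python
def icer(delta_cost: float, delta_qaly: float) -> float:
    """Incremental cost-effectiveness ratio: incremental cost per QALY gained."""
    return delta_cost / delta_qaly

# Hypothetical inputs, chosen only to mirror the roughly twofold
# difference between RCT-based and RWE-based estimates noted above.
delta_cost = 10_000.0       # incremental cost of the new therapy vs comparator
rct_delta_qaly = 0.125      # QALY gain under trial (efficacy) conditions
rwe_delta_qaly = 0.25       # QALY gain in a higher-risk real-world population

print(icer(delta_cost, rct_delta_qaly))  # 80000.0 per QALY gained
print(icer(delta_cost, rwe_delta_qaly))  # 40000.0 per QALY gained
```

With the same incremental cost, doubling the effectiveness estimate halves the ICER, which is why the choice between RCT and RWE inputs can move a therapy across a willingness-to-pay threshold.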
Figure 1: Effect of source of data on ICER
Figure 2: Quote:
“It is impossible to design a trial which can detect efficacy and effectiveness simultaneously. The RCTs describe what we may expect in health care, while the PCTs describe what we really observe.” (Porzsolt et al. 2015)
Hoch JS, Briggs AH, Willan AR. Something old, something new, something borrowed, something blue: a framework for the marriage of health econometrics and cost-effectiveness analysis. Health Econ 2002;11:415-430
Porzsolt F, Rocha NG, Toledo-Arruda AC, et al. Efficacy and effectiveness trials have different goals, use different tools, and generate different messages. Pragmat Obs Res 2015;6:47
Weinstein MC, O’Brien B, Hornberger J, et al. Principles of good practice for decision analytic modeling in health-care evaluation: report of the ISPOR Task Force on Good Research Practices–Modeling Studies. Value Health 2003;6:9-17
Rawlins M. De testimonio: on the evidence for decisions about the use of therapeutic interventions. Lancet 2008;372:2152-61
Faria R, Hernandez Alava M, Manca A, et al. NICE DSU technical support document 17: the use of observational data to inform estimates of treatment effectiveness for technology appraisal: methods for comparative individual patient data. Sheffield: NICE Decision Support Unit; 2015
Campbell JD, McQueen RB, Briggs A. The “e” in cost-effectiveness analyses. A case study of omalizumab efficacy and effectiveness for cost-effectiveness analysis evidence. Ann Am Thorac Soc 2014;11:S105-S111
Dilokthornsakul P, Chaiyakunapruk N, Campbell JD. Does the use of efficacy or effectiveness evidence in cost-effectiveness analysis matter? J Asthma 2017;54:17-23
Toth PP, Stevens W, Chou JW. Why published studies of the cost-effectiveness of PCSK-9 inhibitors yielded such markedly different results. J Med Econ 2017;20:749-751