We summarise a paper, published in the Journal of Comparative Effectiveness Research, that explored the replication of randomised controlled trials (RCTs) using real-world evidence (RWE).
RCTs and RWE
Regulators and other key decision makers have primarily relied on evidence from RCTs to determine drug effectiveness. RCTs are a straightforward approach: they minimise bias between treatment groups and provide tightly controlled measurements to maximise data quality.
Evidence generated from observational studies of real-world data (RWD) is seen as inferior because of non-random treatment assignment and less rigorous data collection. Most importantly, results from observational studies do not always agree with those of RCTs, and these inconsistencies generate uncertainty about this type of research.
While the authors noted that we may always need RCT results, RWE can provide substantial evidence of treatment effect. Additionally, it can help us better understand how treatment works within a typical care setting. Currently, the use of RWE to inform drug effectiveness decisions is limited to supporting approval in rare diseases and oncology and the comparative effectiveness of preventative vaccines.
The role of RWE as a complement to RCT
There are several ongoing efforts that aim to replicate the results of RCTs using observational studies. These efforts evaluate the comparability of RCT and observational study results based on prespecified measures of agreement (regulatory and estimate agreement). The authors referenced ‘RCT DUPLICATE’ as a prominent example. This effort aims to replicate 30 completed Phase III or IV trials and predict the results of seven ongoing Phase IV trials using Medicare and commercial claims data. The aim of these exercises is to identify the clinical scenarios, study designs and analytical approaches that lend themselves to valid study implementation with RWD.
Interpreting replication results
While these observational studies are rigorously designed to mimic the target RCTs, some level of variation in the results is expected. It is important to consider the reasons why RCTs and observational studies may not agree. Previous comparisons have primarily attributed discrepancies in results to bias and confounding. However, other factors can also be responsible. These include challenges with emulating the target RCT, differences in healthcare settings, inclusion of more vulnerable or diverse patients, differences in effect measures and data analysis, and the efficacy-effectiveness gap. Additionally, the reasons for a failure to replicate can shed light on the design and validity of the RCTs themselves.
Recommendations
The authors highlighted that disentangling the reasons for these differences is a challenge in itself. They encourage researchers to build upon the foundation of current RCT replication efforts and to conduct and publish similar replication exercises. This will involve careful consideration of the data and methods, as well as describing agreement and where results are discrepant. Moreover, they recommend that future efforts should include not only health insurance claims but also electronic health record data, registries, linked data sources and other clinically rich data sources. Finally, they emphasised that it is important to consider other approaches, in addition to RCT replication efforts, to address FDA concerns about establishing causality within observational studies.
Conclusion
RCTs and RWE are complementary; both contribute valuable information about patient outcomes. The use of observational studies to support regulatory decisions will be of considerable value, so efforts to replicate RCTs are important for supporting the credibility of observational studies. Nonetheless, researchers must interpret the results carefully and not simply assume the observational study was flawed.
The authors argued that replication of RCTs is not an end goal but rather an intermediate step on the way to making statements about the efficacy-effectiveness gap, the optimal use of medical products in real-world settings, heterogeneity of treatment effects and long-term outcomes. In addition, they predict that within 5–10 years researchers will use observational study designs more frequently to provide evidence of product effectiveness.
Image credit: By metamorworks – canva.com