“The pandemic has brought out a lot of good, bad, and ugly.” – Cynthia Girman, Pharmaceutical Executive, Merck (on the use of RWD).
We provide an overview of a recent article, published in PNAS, that explored how the ongoing COVID-19 pandemic has prompted the widespread use and misuse of real-world data (RWD).
COVID-19, since its emergence late last year, has swept across the world, overwhelming healthcare systems and raising questions about how best to control and treat the virus. Researchers have risen to the challenge, launching a series of randomised trials to uncover interventions that could prevent or lessen the severity of the disease. However, getting these results takes time, something frontline doctors did not have during the early months of the pandemic. As a result, researchers, pharmaceutical companies and government agencies turned to real-world data (RWD) sources. By analysing trends within COVID-19 datasets, the research community was able to rapidly fill knowledge gaps and identify treatments that could potentially make an impact. Nonetheless, harnessing RWD is difficult: it requires high-quality data collection and careful methodological consideration.
While there are guidelines on how best to plan, execute and report observational studies, researchers sometimes neglect them. The pandemic is a prime example: a public health emergency has spawned some suspect research practices in the rush to publish. The pandemic has presented an unprecedented opportunity to leverage diverse RWD sources, but there are concerns that the need for speed may come at the expense of methodological rigour and detail.
Randomised clinical trials have long been the gold standard for gathering robust evidence. However, trial populations do not necessarily reflect the patients seen in routine practice, which limits generalisability. As a result, drug developers have often relied on RWD analyses to extend the relevance of clinical trial results. Yet while RWD applications are booming, the science around them remains in its infancy. Regulatory reviewers are struggling under the weight of real-world submissions, with mixed results. This has been evident during the COVID-19 pandemic, which has seen an influx of observational studies addressing the efficacy of various interventions.
An example of the good, the bad and the ugly of RWD can be seen in the story of hydroxychloroquine, an antimalarial drug. The good: despite early reports of its promise, analyses of hospital records revealed that the drug failed to reduce the risk of ventilation or death. The bad: this realisation did not come quickly enough. Infectious disease specialists had already started widely prescribing the drug, and the FDA issued an emergency use authorisation based only on a preliminary observational study involving 36 patients. The ugly: a data analytics firm reported an increased mortality risk among patients who took hydroxychloroquine, a finding that temporarily halted randomised testing. The articles were quickly retracted, and trials restarted, after critics challenged the study’s methods and the legitimacy of the company’s dataset. Many experts believe this incident damaged public trust and delayed scientific progress; some believe the authors engaged in deliberate fraud.
It is important that researchers apply RWD studies to appropriate questions. Confounding, selection bias and other sources of error are ubiquitous in observational analyses. Some believe that only randomised trials can produce reliable findings and ensure patient safety, while others argue that it is the methods, not the observational approach itself, that are the issue.
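To make the confounding problem concrete, here is a minimal, purely hypothetical simulation (the severity confounder, event rates and sample size are all invented for illustration, and are not taken from any study discussed here). Sicker patients are both more likely to receive a drug and more likely to die, so a naive comparison makes a harmless drug look dangerous, while stratifying by severity removes the spurious effect:

```python
import random

random.seed(0)

# Hypothetical simulation (all numbers invented for illustration):
# disease severity confounds the treatment-outcome relationship.
# Sicker patients are more likely to receive the drug AND more
# likely to die, so a naive comparison makes a harmless drug
# look dangerous.
patients = []
for _ in range(100_000):
    severe = random.random() < 0.5
    # Treatment assignment depends on severity -> confounding.
    treated = random.random() < (0.8 if severe else 0.2)
    # Outcome depends ONLY on severity, not on treatment.
    died = random.random() < (0.30 if severe else 0.05)
    patients.append((severe, treated, died))

def death_rate(group):
    return sum(died for _, _, died in group) / len(group)

on_drug = [p for p in patients if p[1]]
off_drug = [p for p in patients if not p[1]]
# Naive comparison: the (harmless) drug appears far deadlier.
print(f"naive: {death_rate(on_drug):.3f} vs {death_rate(off_drug):.3f}")

# Stratifying by the confounder removes the spurious difference.
for severe in (False, True):
    t = [p for p in patients if p[1] and p[0] == severe]
    u = [p for p in patients if not p[1] and p[0] == severe]
    print(f"severe={severe}: {death_rate(t):.3f} vs {death_rate(u):.3f}")
```

Randomisation breaks the link between severity and treatment assignment by design. In observational RWD, the analyst must measure and adjust for such confounders instead, and any confounder that goes unmeasured cannot be corrected this way.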
There is a clear need for more rigorous analyses and for community standards. Despite the data’s many imperfections, patient information can offer important insights when analysed correctly. For example, at the start of the pandemic some doctors suggested that patients stop taking ACE inhibitors and angiotensin receptor blockers. However, when researchers examined records from thousands of individuals, they found that these medications did not increase the risk of hospitalisation; if anything, they appeared to be protective.
The pandemic has emphasised the need for well-designed clinical trials, even in uncertain times. Still, there is a lot we can learn from observational studies when they are done correctly. Observational studies and RWD analytics can help researchers understand how risk factors drive outcomes, evaluate safety, and help confirm the results of randomised controlled trials regarding toxicity and clinical benefit. While RWD can serve as evidence for decision making, proper methods and careful consideration of biases in the data are essential before drawing any causal inferences.
Image credit: By freepik – www.freepik.com