At the Festival of Genomics and Biodata 2021, we were joined by Paul Agapow (Health Informatics Director, AstraZeneca), Victor Neduva (Senior Principal Scientist, MSD) and Natalie Gavrielov (Director of Medical Writing, BioForum) to discuss all things machine learning and how it can unlock the power of real-world evidence.
Real-world evidence (RWE) is any kind of evidence derived from real-world data (RWD). RWD is collected as a by-product of processes that run irrespective of any specific scientific investigation, e.g. wearable devices or electronic health data. The concept of the ‘real world’ has been around for over 50 years; however, only within the last two decades has it gained significant attention.
Recently, there have been several drug/device approvals, mostly in oncology, using RWE and RWD. Even now with the COVID-19 pandemic, we can see the importance of using RWD to help us speed up the approval of therapies.
One of the benefits of using RWD is its longitudinal aspect, or, as Agapow put it, “a great window into looking at the course of disease.” RWD allows us to look at disease progression and long-term outcomes. We often rely on this data to understand a disease before a patient presents with it in a clinical trial.
Nonetheless, several limitations remain. The panel pointed to a lack of standardisation: data is often stored in different ways, with different formats and assumptions. There are also many gaps within the data, making it difficult to create a carefully annotated dataset for benchmarking machine-learning models. Neduva emphasised that creating a gold-standard dataset will be important for testing across the entire algorithmic landscape. There are also issues regarding possible confounders and biased data; therefore, knowing where the data is sourced from is critical.
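As a toy illustration of the gaps the panel described, a minimal pandas sketch might quantify missingness per field before any modelling is attempted. The dataset and column names here are entirely invented for illustration, not drawn from anything discussed by the panel:

```python
import numpy as np
import pandas as pd

# Hypothetical electronic-health-record extract; all values are invented.
records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5],
    "age":        [54, 61, np.nan, 47, 70],
    "hba1c":      [6.8, np.nan, np.nan, 7.2, 6.1],
    "smoker":     ["yes", "no", None, "no", None],
})

# Fraction of missing values per column - a basic first check before
# training any machine-learning model on real-world data.
missing = records.drop(columns="patient_id").isna().mean()
print(missing)
```

Even this simple check makes the panel's point concrete: fields collected as a by-product of care (here, 40% of hba1c values are absent) are rarely complete enough to serve directly as a gold-standard benchmark.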
When discussing what machine learning consists of, Agapow argued that it is very unclear where machine learning starts and traditional statistical and mathematical modelling ends. However, he noted that machine learning tends towards models with fewer built-in assumptions and sits on a continuum with traditional methods (while offering certain advantages).
Gavrielov considered scalability one of the main advantages of using machine learning to unlock the power of RWE. Machine-learning algorithms can incorporate a vast number of features, she explained, which allows a model to be more precise and accurate. She also noted that the defining characteristic of RWE is its sheer volume, which is something machine learning is well equipped to handle.
A key issue the panel discussed was the accessibility of data. Gavrielov acknowledged that while access to data is growing, the sources are not interoperable; data silos are a major issue and diminish the value of the available data. Agapow added that capturing the diversity within this data will also be important, as most data currently comes from the US and Europe.
The panel also touched upon the ethical implications of using machine learning on RWE. Gavrielov noted that the more data we have, the easier it becomes to reconstruct an individual’s identity. She emphasised the importance of patients owning their digital identity and being able to decide to what extent, and where, their data is used.
Machine learning hype problem
Although we are starting to identify the best areas in which to apply machine learning, significant progress on implementation has yet to materialise. For example, Agapow noted that new papers continuously emerge claiming that AI can interpret radiological images better than a radiologist, yet in practice this rarely happens because the problem is more complicated: real-world data is far messier than the data used to train these models.
Nonetheless, the panel noted several areas where machine learning is showing promise, including identifying responders versus non-responders in clinical trials, analysing images in digital pathology and mapping out disease trajectories.
The panel went on to discuss the husky vs wolf problem, which refers to a machine-learning algorithm trained to distinguish huskies from wolves in a set of images. Once the model was built, it turned out that it wasn’t actually looking at the animals but at the background, to check whether there was snow. Many fear that without understanding our systems we may be building what Agapow referred to as “snow detectors”: you can train an algorithm on a dataset, but it is difficult to explain how its predictions are actually made.
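The husky-vs-wolf failure mode can be sketched in a few lines of NumPy. Everything below is invented for illustration: a weakly informative "animal" feature and a spurious "snow" background feature that happens to track the label almost perfectly, which is the signal a naive model would latch onto:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Labels: 1 = wolf, 0 = husky (synthetic, for illustration only).
is_wolf = rng.integers(0, 2, n)

# A genuine animal feature, but noisy and only weakly informative.
ear_shape = is_wolf + rng.normal(0, 2.0, n)

# The spurious background feature: in this dataset, wolves were
# photographed in snow, so "snow" tracks the label almost perfectly.
snow = is_wolf + rng.normal(0, 0.1, n)

# Correlation with the label approximates what a simple linear
# classifier would weight most heavily.
corr_ear = abs(np.corrcoef(ear_shape, is_wolf)[0, 1])
corr_snow = abs(np.corrcoef(snow, is_wolf)[0, 1])
print(f"ear_shape vs label: {corr_ear:.2f}")
print(f"snow vs label:      {corr_snow:.2f}")
```

The strongest signal is the background, not the animal: a model fitted to this data would be a "snow detector" despite scoring well on held-out images drawn from the same biased collection.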
Gavrielov expanded on this issue, discussing the problem of clinicians relying on models they do not understand: there is a lack of trust. While we want to make the leap to the next generation of machine-learning models, she emphasised, we cannot detach ourselves from them; we have to understand the model itself, which means we cannot simply use black-box algorithms. Addressing this would involve training algorithms with clinical input. We want algorithms to be more complex, Gavrielov noted, but we don’t trust that complexity, so resolving this tension is critical. The panel emphasised that dealing with trust is important, because if these algorithms fail, the effects can be long-lasting.
Registration for on-demand access to watch this talk and all our other talks from the Festival will end on February 12th. Register now.