Researchers have carried out a study investigating whether deep neural networks can represent associations between gene expression, histology and medical imaging features. This work could help identify several types of cancer at earlier stages, allowing doctors to develop personalised treatments that increase long-term survival.
Combining molecular information and medical imaging data
Researchers have made considerable progress in detecting several cancers at earlier stages, allowing doctors to provide treatments that increase long-term survival. Much of this progress is due to integrated diagnosis, an approach to patient care that combines molecular information and medical imaging data to diagnose the cancer and, eventually, predict treatment outcomes. This combined approach is commonly referred to as ‘radiogenomics’.
Researchers carry out radiogenomic studies in an attempt to integrate two complementary data types, explaining tumour imaging patterns using molecular information and vice versa. These radiogenomic studies support the derivation of a tumour’s biological state from non-invasive imaging, and the correlation of molecular information with imaging phenotypes to better understand cancer heterogeneity. However, radiogenomic studies are often limited by the high dimensionality of the data and the lack of validation datasets.
Deep learning for radiogenomic studies
Deep learning methods have been used on molecular and imaging datasets because they can handle high-dimensional inputs without feature engineering and can represent nonlinear, hierarchical relationships between model inputs and outputs. Several radiogenomic studies have previously used deep learning models such as convolutional neural networks and generative adversarial networks. However, although these works report accurate predictions of imaging phenotypes from genomic data, they do not attempt to provide a biological interpretation of what the model has learned. While classification accuracy is important, it is vital to interrogate the model’s findings in order to validate the learned radiogenomic associations.
Using deep neural networks in radiogenomic studies
In this study, the researchers used deep feedforward neural networks to model transcriptomes, drawing on two similarly derived radiogenomic datasets recently published in non-small cell lung cancer (NSCLC). Because these are among the relatively few publicly released radiogenomic datasets, they enabled the study to report radiogenomic associations and provide a basis for comparison. The team, led by William Hsu, investigated whether deep neural networks could represent associations between gene expression, histology and CT-derived image features. It was found that the network used in this study could not only reproduce previously reported associations but also identify new ones.
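The basic setup described above — a feedforward network that takes a patient's gene expression profile as input and predicts imaging features — can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' actual pipeline: the architecture, library choice and reduced gene count are all assumptions made for the demo.

```python
# Hypothetical sketch: a feedforward regression network mapping gene
# expression profiles to CT-derived image features. Dimensions loosely
# mirror the study (262 patients, 101 image features), but the gene count
# is reduced and the data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_patients, n_genes, n_features = 262, 500, 101
X = rng.normal(size=(n_patients, n_genes))   # stand-in gene expression matrix
W = rng.normal(size=(n_genes, n_features))
y = np.tanh(X @ W)                           # synthetic nonlinear imaging features

# Two hidden layers chosen arbitrarily; the study's architecture may differ.
model = MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=200, random_state=0)
model.fit(X, y)

pred = model.predict(X[:1])
print(pred.shape)  # one 101-dimensional image-feature vector per patient
```

A trained model of this shape can then be evaluated on a held-out cohort, as the study did with its independent 89-patient dataset.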
Datasets used to develop the deep neural network
The researchers used a dataset of 262 patients to train their neural networks to predict 101 image features from an extensive collection of 21,766 gene expressions. The team then tested the network’s predictive ability on an independent dataset of 89 patients, while comparing its performance against that of other models within the training dataset. Finally, they applied the method known as gene masking to determine the learned associations between subsets of genes and the type of lung cancer.
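Gene masking, as described above, probes a trained model by suppressing subsets of input genes and measuring how much the predictions change: a large change suggests the model relies on those genes. The sketch below illustrates the idea with a toy linear "model" in place of a trained network; the function names, the choice of zero as the mask value and the scoring metric are all illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of gene masking: zero out a subset of input genes and
# measure the mean absolute change in the model's output.
import numpy as np

rng = np.random.default_rng(1)
n_genes = 100

# Toy stand-in for a trained network: only the first 10 genes matter.
true_w = np.zeros(n_genes)
true_w[:10] = 1.0

def predict(X):
    """Stand-in for a trained network's forward pass."""
    return X @ true_w

def masking_score(X, gene_idx):
    """Mean absolute change in output when the given genes are masked."""
    X_masked = X.copy()
    X_masked[:, gene_idx] = 0.0  # mask = replace expression with zero
    return np.mean(np.abs(predict(X) - predict(X_masked)))

X = rng.normal(size=(50, n_genes))
relevant = masking_score(X, np.arange(0, 10))     # genes the model uses
irrelevant = masking_score(X, np.arange(50, 60))  # genes it ignores

print(relevant, irrelevant)
```

Masking the genes the model depends on yields a large score, while masking ignored genes leaves the output unchanged — which is how a masking analysis links each predicted imaging feature to a gene expression profile.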
The team found that the overall performance of the neural networks at representing these datasets was better than that of the other models they compared against. Moreover, their model was found to generalise to datasets from other populations. Additionally, the results of gene masking suggested that the prediction of each imaging feature was related to a unique gene expression profile governed by biological processes. Hsu was encouraged by these findings, saying, ‘while radiogenomic associations have previously been shown to accurately risk stratify patients, we are excited by the prospect that our model can better identify and understand the significance of these associations. We hope that this approach increases radiologists’ confidence in assessing the types of lung cancer seen on CT scans’. The team also believes that the information generated by their model will be highly beneficial in informing individualised treatment planning, thereby helping to pave the way for personalised medicines for lung cancer patients.
Limitations of this deep neural network
The model developed in this study has several limitations. Firstly, the retrospective datasets used in this analysis came from two different sources with varying imaging protocols, which can add variance to radiomic feature values. Secondly, a majority of patients had early-stage cancers, and late-stage patients were therefore left out of the radiogenomic analysis. Additionally, the tumour tissue samples used for transcriptome profiling were limited in that only one sample was acquired per patient, which may not fully capture tumour heterogeneity.
These factors make it challenging to validate the relationships in radiogenomic models. The researchers therefore stressed that the reported findings should be interpreted as possible associations that require validation in clinical or animal studies.
This study investigated whether deep neural networks can represent associations between gene expression, histology and CT-derived image features for the early diagnosis of lung cancer. The researchers found that the network could not only reproduce previously reported associations but also identify new ones. Although the associations identified in this study need further work to validate them, it is hoped that the ability to better diagnose patients will help pave the way for personalised treatment plans for lung cancer patients.
Image credit: rawpixel – FreePik