NextGen radiology powered by AI

Radiomics, which integrates AI into radiology, offers great promise to accelerate precision medicine

Radiomics, the application of artificial intelligence (AI) to radiology, may well be the trailblazer that other healthcare specialisations have been waiting for. In November 2018, M*Modal announced a cloud-based version of its radiology reporting solution, designed with the help of Microsoft and Aligned Imaging Solutions, a radiology company focused on X-rays. In March 2018, GE Healthcare introduced the LOGIQ E10, its next-generation radiology ultrasound system, which integrates artificial intelligence, cloud connectivity and advanced algorithms to gather and reconstruct imaging data faster than previous systems.

The progress radiology has made since Wilhelm Roentgen’s discovery of X-rays in 1895 can now be carried into the next century if AI is applied with good practice guidelines and validated biomarkers. Radiologists are not new to AI: pioneering work in the field dates back to 1985 (Krupinski EA, Academic Radiology, 2003), when symbolic interpretations of medical images, built on human decision rules, were used for high-level assessments (Matsuyama T, Comput Vision Graph, 1989). This approach relied on simple processes, for example binarising or thresholding geometric structures in an image and then applying a set of logical rules to reach a diagnosis. It involved strong human participation, since every decision rested on encoded medical knowledge, but it did not prove to be a successful decision-support system. A second approach, probabilistic interpretation of medical images, was driven by combinatorial statistical models. It combined human decision-making expertise with labelled parameters from a reference dataset and used probabilistic methods to identify the most likely solutions. This approach has clear strengths, such as aggregating information across populations, incorporating expert knowledge and producing human-understandable models. However, choosing the right statistical methods and building appropriate models from a representative reference dataset have proved to be major challenges.

Data-driven approach
The limitation of the above methods lies in their dependence on expert human knowledge, and converting that knowledge into a model can be challenging, especially when it is incomplete. ‘Radiomics’ is a data-driven, model-free approach in which characteristic appearances of organs, either labelled (supervised) or unlabelled (unsupervised), are used for training. In both learning modes, large sets of image features are extracted automatically from each image. Combining these machine-learning approaches with statistical tools such as logistic regression, support vector machines and decision trees yields a better, feature-based separation between normal and diseased conditions (Cortes C, Vapnik V. Mach Learn 1995), as sketched below.
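
A minimal sketch of this feature-based separation, assuming radiomic features have already been extracted into a table of patients by features; the synthetic data, feature counts and model settings below are illustrative placeholders, not a published pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))  # 200 cases x 30 radiomic features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# The three classical learners mentioned in the text, each behind the same
# standardisation step so that differing feature scales do not dominate the fit.
models = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "support_vector_machine": make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)),
    "decision_tree": DecisionTreeClassifier(max_depth=4, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.2f}")
```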

In radiology, data-driven approaches work with specific features designed to reflect properties of the data, such as density, tumour heterogeneity and shape. Newer approaches based on deep learning (Chartrand G, et al, Radiographics 2017) improve on these feature-based methods by using artificial neural networks (ANNs). ANNs introduce a hierarchy of non-linear, multi-layer nodes that operate directly on inputs such as the pixel values of an image. Thousands of these nodes, linked by millions of connections, allow the algorithms to be trained to respond to new inputs for diagnosis. This moves us away from a hypothesis-based approach to a data-driven model, which is more powerful and can lead to novel discoveries. The first sets of features, called engineered features, are specific characteristics of diseased tissue defined by domain experts. When data are scarce, a pre-trained network can be used for transfer learning. For any deep learning approach, data normalisation is an essential preprocessing step: it improves numerical stability and speeds convergence to the required output. Normalisation can be achieved by principal component analysis (PCA) or by sample-wise or feature-wise normalisation, while keeping “internal covariate shift” in mind. This shift can be mitigated using batch normalisation (Ioffe S and Szegedy C. Ithaca (NY): Cornell University; 2015) or layer normalisation (Ba JL et al. Ithaca (NY): Cornell University; 2016).
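
A minimal sketch of these preprocessing and normalisation steps, assuming the same kind of tabular radiomic features as above; PCA and feature-wise standardisation use scikit-learn, and the small network with batch normalisation layers uses PyTorch. All sizes and layer widths are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30)).astype(np.float32)  # synthetic radiomic features

# Feature-wise normalisation followed by PCA to decorrelate and reduce the inputs.
X_std = StandardScaler().fit_transform(X)
X_pca = PCA(n_components=10).fit_transform(X_std).astype(np.float32)

# A small multi-layer network: a batch normalisation layer after each linear
# layer re-centres the activations during training, the standard remedy for
# internal covariate shift cited in the text.
model = nn.Sequential(
    nn.Linear(10, 32), nn.BatchNorm1d(32), nn.ReLU(),
    nn.Linear(32, 16), nn.BatchNorm1d(16), nn.ReLU(),
    nn.Linear(16, 1),  # single logit for a binary normal-vs-disease output
)

logits = model(torch.from_numpy(X_pca))
print(logits.shape)  # torch.Size([200, 1])
```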

Overcoming overfitting
In deep learning, prediction performance is influenced by several parameters and architectural choices, such as dimensionality and feature extraction. The choice of architecture may depend on the size of the data, its statistical properties and the scope of the analysis. One of the major challenges is overfitting, which degrades the network’s ability to generalise to unseen data; treating the network as a black box and paying too little attention to these technical requirements can lead to undue complications. Simply using shallower networks to avoid overfitting, however, may result in underfitting because the training algorithm learns too little. In such scenarios, data augmentation by image transformations and other regularisations, such as dropout, can be used; these reduce the reliance on any individual parameter or node and hence increase the robustness of the network.
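
A minimal sketch of these two regularisation ideas, assuming a simple image-classification setting: random image transformations for augmentation (via torchvision) and dropout layers that randomly zero hidden activations during training. The image size, dropout rate and augmentation choices are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import transforms

# Data augmentation: each training image is randomly transformed on the fly,
# so the network never sees exactly the same input twice.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ToTensor(),
])

# Dropout: during training, each hidden unit is dropped with probability 0.3,
# discouraging the network from relying on any single node.
classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 128), nn.ReLU(), nn.Dropout(p=0.3),
    nn.Linear(128, 2),
)

x = torch.rand(8, 1, 64, 64)   # a synthetic batch of 8 single-channel images
print(classifier(x).shape)     # torch.Size([8, 2])
```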

Similarly, penalising large parameter weights can improve how well a network generalises (Pereira F et al. Advances in Neural Information Processing Systems; 2012 and Srivastava N et al. SIGKDD Explor Newsl; 2007). When identifying the best biomarkers, a cross-validation-based early-stopping approach, used alongside the steps above, can further reduce overfitting (Orr GB et al. Neural Networks: Tricks of the Trade. Berlin/Heidelberg (Germany): Springer; 1998).
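
A minimal sketch of these two safeguards in scikit-learn: an L2 penalty on the network weights (the alpha parameter) and early stopping against a held-out validation split, evaluated with cross-validation. The data are synthetic placeholders in the same style as the earlier sketches.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 30))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(
        hidden_layer_sizes=(32,),
        alpha=1e-3,              # L2 penalty on the weights
        early_stopping=True,     # stop when the validation score stops improving
        validation_fraction=0.2,
        n_iter_no_change=10,
        max_iter=1000,
        random_state=0,
    ),
)

# 5-fold cross-validated AUC gives a less optimistic estimate than a single split.
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```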

Biomarker validation using machine learning and deep learning models needs more than just avoiding overfitting and data leakage. The approach must include locked validation cohorts that remain blinded during training and hyperparameter tuning. Classifier performance must be evaluated across multiple metrics, such as AUC (area under the curve), sensitivity, specificity, positive predictive value and negative predictive value. In addition, when multiple tests are performed, or hundreds of features are tested, corrections such as Bonferroni (Bonferroni CE. Pubblicazioni del R Istituto Superiore di Scienze Economiche e Commerciali di Firenze; 1936) and Benjamini-Hochberg (Benjamini Y, Hochberg Y. J R Stat Soc Series B Stat Methodol; 1995) must be applied. To understand the true clinical value of new biomarkers, it is also important to compare them statistically with current standard markers and quantify the additive value they bring to computational models.
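
A minimal sketch of these evaluation steps: computing AUC, sensitivity, specificity, positive and negative predictive values on a held-out set, and applying Bonferroni and Benjamini-Hochberg corrections when many features are tested. The predictions and p-values below are synthetic placeholders, not results from a real cohort.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_score = np.clip(y_true * 0.6 + rng.normal(scale=0.3, size=200), 0, 1)
y_pred = (y_score >= 0.5).astype(int)

# Performance metrics derived from the confusion matrix and the score ranking.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"AUC         = {roc_auc_score(y_true, y_score):.2f}")
print(f"sensitivity = {tp / (tp + fn):.2f}")
print(f"specificity = {tn / (tn + fp):.2f}")
print(f"PPV         = {tp / (tp + fp):.2f}")
print(f"NPV         = {tn / (tn + fn):.2f}")

# Multiple-testing correction over, say, 100 per-feature p-values.
p_values = rng.uniform(size=100)
reject_bonf, _, _, _ = multipletests(p_values, alpha=0.05, method="bonferroni")
reject_bh, _, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(f"significant after Bonferroni: {reject_bonf.sum()}, "
      f"after Benjamini-Hochberg: {reject_bh.sum()}")
```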

Data science and big data are going to play a major role in healthcare applications and will have a global impact in both industrial and academic settings. Radiology committees from professional colleges and societies must draw on this data and knowledge base to create frameworks and define the steps required to move forward. Individual radiologists play a pivotal role, as integrating machine-learning workflows into practice will need their undivided attention and can help them improve clinical outcomes. A mindful approach to radiomic analysis of imaging data can yield patient-specific insights and so move us towards precision medicine.
