The growing field of Deep Learning (DL) has major implications for critical and even life-saving practices such as medical imaging. Learn what medical imaging is, how DL can help with a range of applications, and the role a 3D Convolutional Neural Network (CNN) plays in processing images.
What Is Medical Imaging?
Medical imaging is the practice of creating visual representations of the interior of a body for medical analysis and intervention or to assess how organs or tissues function. Practitioners use medical imaging to reveal internal structures so they can diagnose and treat diseases. Medical imaging also contributes to a database representing normal anatomy and physiology, facilitating the identification of abnormalities.
Medical imaging generally refers to noninvasive techniques for producing such images. Common techniques include radiography (X-ray), magnetic resonance imaging (MRI), ultrasonography (ultrasound), and, in a broader sense, endoscopy.
In addition to imaging devices like X-ray machines, doctors can use advanced computer technologies to see inside patients without invasive surgery. For example, Computed Tomography (CT) combines many X-ray projections taken from different angles to reconstruct a virtual cross-sectional model of tissue.
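To make the reconstruction idea concrete, here is a minimal sketch using scikit-image's Radon transform utilities: simulated X-ray projections from many angles are combined back into a cross-sectional image with filtered back-projection. The phantom image and the number of angles are illustrative choices, not a description of a clinical scanner pipeline.

```python
# Minimal sketch of tomographic reconstruction: many 1D X-ray projections
# taken from different angles are combined (filtered back-projection)
# into a 2D cross-sectional image. Requires scikit-image.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = resize(shepp_logan_phantom(), (128, 128))       # synthetic "tissue slice"
angles = np.linspace(0.0, 180.0, 120, endpoint=False)   # projection angles in degrees

sinogram = radon(image, theta=angles)                   # simulate the X-ray projections
reconstruction = iradon(sinogram, theta=angles)         # filtered back-projection

print("mean reconstruction error:", np.abs(reconstruction - image).mean())
```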
The need for deep learning in medical imaging
Medical imaging generates large volumes of data, accounting for over 90 percent of all medical data. The number of medical images that emergency room radiologists have to analyze can be overwhelming, with a single study comprising up to 3,000 images and taking up around 250 GB of data. Radiologists can make use of Deep Learning (DL) to help sift through the data and analyze medical exams more efficiently. DL networks can extract and process data at a speed and scale that is not humanly possible.
The recent advances in deep learning frameworks have enabled faster and more accurate detection, while the increased CPU and GPU processing power available allows radiologists to scale their diagnostic efforts. Time is a critical factor for medical diagnosis, and early detection can potentially add years to the life of a patient.
Current Deep Learning Applications in Medical Imaging
There are many applications for DL in medical imaging, ranging from tumor detection and tracking to blood flow quantification and visualization. In particular, deep learning for computer vision can help medical practitioners make sense of large amounts of data. Below are some examples of real-world deep learning applications:
Deep learning cancer detection
One of the main uses of medical imaging is cancer detection. Some of the deadliest forms of cancer, such as melanoma and breast cancer, are highly curable if diagnosed early. Thus, cancer treatment can benefit from the speed and accuracy of DL-assisted diagnostics. DL algorithms can be used to detect the characteristics of metastatic cancer with higher accuracy than a human radiologist.
One common use of deep learning for medical imaging is breast cancer detection. Breast cancer screening typically involves comparing two mammogram images to identify points of abnormal breast tissue. This process can be facilitated by a trained Convolutional Neural Network (CNN).
For example, a specialized deep neural network developed by IBM Research-Haifa in Israel uses identical subnetworks to compare image analyses. This can help detect and localize masses in breast mammography images.
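The sketch below (PyTorch) illustrates the general idea behind such comparison networks, not the IBM model itself: two subnetworks with shared weights encode two mammogram views, and the combined features feed a classifier. All layer sizes and the two-class output are assumptions made for the example.

```python
# Illustrative Siamese-style comparison network: one encoder with shared
# weights ("identical subnetworks") is applied to both mammogram views,
# and the concatenated features are classified. Not the IBM Research model.
import torch
import torch.nn as nn

class SiameseMammoNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder applied to both images
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(32 * 2, 2)  # e.g. normal vs. suspicious

    def forward(self, view_a, view_b):
        feat_a = self.encoder(view_a)           # same weights for both views
        feat_b = self.encoder(view_b)
        return self.classifier(torch.cat([feat_a, feat_b], dim=1))

model = SiameseMammoNet()
a = torch.randn(1, 1, 256, 256)  # placeholder grayscale mammogram views
b = torch.randn(1, 1, 256, 256)
print(model(a, b).shape)         # torch.Size([1, 2])
```

Sharing the encoder weights is what makes the comparison consistent: both views are mapped into the same feature space before being compared.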
Another example is the LYmph Node Assistant (LYNA), which was developed by researchers at Google. LYNA was trained on datasets of pathology slides. Once trained, it was able to halve the time spent reviewing each slide and could recognize characteristics of tumors in as little as one minute. LYNA has a 99 percent accuracy rate for identifying metastatic cancer and can locate even small metastases that a human pathologist might miss.
Deep learning medical image analysis – accelerating MRI image processing
High-quality interpretation is needed to extract value from medical imaging, but human interpretation is limited and prone to errors. While medical imaging such as MRI is typically used alongside computer analysis, standard MRI analysis requires hours of computing time. To align two MRI scans, a computer has to sort millions of voxels (3D pixels), and it is time-consuming to scale this up and analyze data from a large number of patients.
However, neural networks can be trained to pick up the patterns that indicate the same anatomy or disease across scans. The data is fed into one end of the network and passes through multiple layers of nodes to produce the desired output. This allows radiologists to accelerate MRI image processing.
For example, the VoxelMorph system, developed by researchers at MIT, is trained with 7,000 MRI brain scans, allowing it to identify common anatomical patterns, represented by groups of voxels. VoxelMorph can process an MRI analysis in two minutes using a regular CPU, or in less than a second if run on a GPU.
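The sketch below (PyTorch) illustrates the general idea behind learning-based registration, not the actual VoxelMorph implementation: a small network takes a fixed and a moving scan, predicts a dense displacement field, and warps the moving scan with that field. The tiny architecture and the treatment of the displacement field as offsets in normalized grid coordinates are simplifying assumptions for the example.

```python
# Simplified learning-based registration sketch: a CNN predicts a dense
# 3D displacement field, and the moving volume is warped with it.
# Illustrative only; not the VoxelMorph codebase.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRegistrationNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Fixed and moving volumes are stacked on the channel axis; the
        # output has 3 channels: a (dx, dy, dz) displacement per voxel.
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 3, 3, padding=1),
        )

    def forward(self, fixed, moving):
        flow = self.net(torch.cat([fixed, moving], dim=1))
        n = fixed.shape[0]
        # Identity sampling grid, then add the predicted displacement
        # (treated here as offsets in normalized [-1, 1] grid coordinates).
        identity = F.affine_grid(
            torch.eye(3, 4).unsqueeze(0).repeat(n, 1, 1),
            fixed.shape, align_corners=False,
        )
        warped = F.grid_sample(
            moving, identity + flow.permute(0, 2, 3, 4, 1), align_corners=False
        )
        return warped, flow

fixed = torch.randn(1, 1, 32, 32, 32)    # placeholder MRI volumes
moving = torch.randn(1, 1, 32, 32, 32)
warped, flow = TinyRegistrationNet()(fixed, moving)
print(warped.shape, flow.shape)          # (1, 1, 32, 32, 32) and (1, 3, 32, 32, 32)
```

Training such a network to minimize the difference between the warped moving scan and the fixed scan is what removes the need for the slow per-pair voxel optimization described above.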
Retinal blood vessel segmentation
In retinal images, blood vessels cover only a small fraction of pixels compared with the background, so segmentation is needed to bring out their shape. A human specialist can use deep learning to improve the efficiency of the segmentation process, in an approach known as human-in-the-loop AI.
For example, Structured Analysis of the Retina (STARE) is a popular public dataset containing 20 annotated images with a resolution of 700 × 605. The bolder vessels are segmented by the network without noise, allowing the human practitioner to save time. The human expert only has to draw a few hairline vessels with a “polyline” tool and correct the network's predictions, which is faster and easier than annotating manually from scratch.
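As a rough illustration of the automated part of this workflow, the sketch below (PyTorch) shows a tiny fully convolutional network producing a per-pixel vessel probability map that a human reviewer could then correct. The layer sizes and the 0.5 threshold are assumptions for the example, not a published STARE baseline.

```python
# Minimal per-pixel vessel segmentation sketch: a small fully convolutional
# network outputs a vessel-probability map; thresholding it gives a draft
# mask for human review and correction.
import torch
import torch.nn as nn

vessel_net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),                         # one logit per pixel: vessel vs. background
)

retina = torch.rand(1, 3, 605, 700)              # placeholder RGB fundus image
prob_map = torch.sigmoid(vessel_net(retina))     # vessel probability per pixel
draft_mask = prob_map > 0.5                      # draft mask the expert then corrects
print(draft_mask.float().mean())                 # fraction of pixels marked as vessel
```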
Tracking tumor development
CNNs can be trained to identify tissue abnormality based on a relatively small number of clinical trials. This helps reduce the need for invasive measures to monitor and treat diseases. For example, a DL algorithm can help predict tumor proliferation using a tumor probability heatmap, which classifies the tumor probability of overlapping tissue patches.
The images produced reveal informative data on tumor features such as location, shape, area, and density, facilitating the tracking of tumor changes. Deep learning can also potentially enable the automation of progress monitoring.
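The heatmap idea can be sketched as a sliding window: a classifier scores overlapping tissue patches, and the scores are assembled into a coarse tumor probability map. In the illustrative example below (PyTorch), the patch size, stride, and the stand-in classifier are all assumptions made for the sketch.

```python
# Sliding-window sketch of a tumor probability heatmap: score overlapping
# patches with a classifier and assemble the scores into a coarse map.
import torch
import torch.nn as nn

patch_classifier = nn.Sequential(          # stand-in for a trained patch CNN
    nn.Flatten(), nn.Linear(3 * 64 * 64, 1), nn.Sigmoid()
)

slide = torch.rand(3, 512, 512)            # placeholder tissue image (C, H, W)
patch, stride = 64, 32                     # overlapping 64x64 patches, 50% overlap
rows = (slide.shape[1] - patch) // stride + 1
cols = (slide.shape[2] - patch) // stride + 1
heatmap = torch.zeros(rows, cols)

with torch.no_grad():
    for i in range(rows):
        for j in range(cols):
            tile = slide[:, i * stride:i * stride + patch, j * stride:j * stride + patch]
            heatmap[i, j] = patch_classifier(tile.unsqueeze(0)).item()  # tumor probability

print(heatmap.shape)                       # coarse tumor-probability heatmap, e.g. 15 x 15
```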
Using a 3D CNN for medical imaging
For medical imaging to be effective, the images produced must undergo a process of segmentation, which delineates the contours between different types of tissue. Medical image segmentation has applications ranging from quantitative studies, computational modeling, and population-based analysis to diagnosis and treatment development. It can take hours to segment a scan volume manually, so automation is a necessity.
A properly trained CNN can automatically provide fast and accurate segmentation, sometimes within seconds, and lower the costs of medical imaging. Segmentation CNNs can use 2D or 3D convolution kernels to predict the segmentation map. 2D CNNs segment slices one by one and stack them to build a full volume, while 3D CNNs use voxel information to predict segmentation maps for volumetric patches. While this requires more computation, the use of inter-slice context can improve performance.
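The difference between the two kernel choices can be sketched as follows (PyTorch): a 2D network segments each slice independently and the predictions are stacked, while a 3D network convolves across neighboring slices and therefore uses inter-slice context directly. The channel counts and two-class output are illustrative assumptions, not a recommended architecture.

```python
# Contrast between 2D (per-slice) and 3D (volumetric) segmentation kernels.
import torch
import torch.nn as nn

seg_2d = nn.Sequential(                      # per-slice segmentation
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 1),                     # 2 classes per pixel
)

seg_3d = nn.Sequential(                      # volumetric segmentation
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 2, 1),                     # 2 classes per voxel
)

volume = torch.rand(1, 1, 32, 128, 128)      # (N, C, depth, H, W) scan volume

# 2D route: run each slice independently, then stack the predictions.
slices = volume[0, 0]                                    # (depth, H, W)
per_slice = seg_2d(slices.unsqueeze(1))                  # treat depth as the batch axis
stacked_2d = per_slice.permute(1, 0, 2, 3).unsqueeze(0)  # (1, 2, depth, H, W)

# 3D route: predict the whole volumetric patch in one pass, using
# neighboring slices (inter-slice context) in every convolution.
pred_3d = seg_3d(volume)                                 # (1, 2, depth, H, W)
print(stacked_2d.shape, pred_3d.shape)
```

The 3D kernels process roughly a slice-depth more activations per layer, which is the extra computation the article refers to, but they see context the 2D route discards.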