From the early days of CT scanners and mammography devices, medical imaging has come a long way. With 3D medical imaging, healthcare professionals can now access new resolutions, angles, and details that help them gain a better understanding of their patients, all while cutting back on the dosage of radiation used to gather the images.
Traditional medical imaging systems provide 2D visual representations of human organs, while more advanced digital systems (e.g., X-ray CT) can create both 2D and, in many cases, 3D images.
Medical image registration is a common technique that involves overlaying two images, such as magnetic resonance imaging (MRI) scans, to compare and analyze anatomical differences in great detail. If a patient has a brain tumor, for instance, doctors can overlay a brain scan from several months ago onto a more recent scan to analyze small changes in the tumor’s progression.
However, this process can take two hours or more, as traditional systems must accurately align each of potentially a million pixels in the combined scans.
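To make that cost concrete, the toy sketch below (an illustration, not the researchers’ method) scores a brute-force search over small shifts: every candidate alignment must be compared voxel by voxel across the entire scan, which is why richer deformations and finer searches quickly become slow.

```python
# A deliberately simplified sketch of why classical registration is costly:
# every candidate alignment is rescored across all of the scan's voxels.
import numpy as np

def mse(fixed, moving):
    """Mean squared voxel difference between two same-sized 3D scans."""
    return float(np.mean((fixed - moving) ** 2))

def register_by_shift(fixed, moving, max_shift=2):
    """Brute-force search over small integer shifts along each axis.
    Real systems optimize far richer deformations, which is why they are slow."""
    best_shift, best_score = (0, 0, 0), np.inf
    for dz in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                candidate = np.roll(moving, shift=(dz, dy, dx), axis=(0, 1, 2))
                score = mse(fixed, candidate)  # touches every voxel
                if score < best_score:
                    best_score, best_shift = score, (dz, dy, dx)
    return best_shift, best_score

# Example: two synthetic 64^3 "scans" offset by one voxel along each axis.
fixed = np.random.rand(64, 64, 64)
moving = np.roll(fixed, shift=(1, 1, 1), axis=(0, 1, 2))
print(register_by_shift(fixed, moving))  # -> ((-1, -1, -1), 0.0): undoes the offset
```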
Recently, researchers at MIT described a machine-learning algorithm that can register brain scans and other 3D images more than 1,000 times faster using novel learning techniques.
The algorithm works by ‘learning’ while registering thousands of pairs of images. In this way, it gathers information about how to align images and estimates optimal alignment parameters. After ‘training’, it uses those parameters to map all the pixels of one image to another, all at once. This reduces registration time to a minute or two on a normal computer, or to less than a second on a GPU, with accuracy comparable to state-of-the-art systems.
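The “all at once” step can be pictured as applying a predicted displacement field to the moving scan in a single pass. The sketch below is an assumption-laden illustration in PyTorch (the function name `warp` and the field layout are ours, not the published VoxelMorph code): given a per-voxel offset field, one call resamples every voxel of the moving scan onto the fixed scan’s grid.

```python
# A minimal sketch: warp every voxel of a moving scan in one pass, given a
# dense displacement field such as a learned network might predict.
import torch
import torch.nn.functional as F

def warp(moving, flow):
    """moving: (N, 1, D, H, W) intensities.
    flow: (N, 3, D, H, W) per-voxel offsets, channels ordered (x, y, z)."""
    n, _, d, h, w = moving.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    zs, ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, d), torch.linspace(-1, 1, h),
        torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack((xs, ys, zs), dim=-1).unsqueeze(0).expand(n, -1, -1, -1, -1)
    # Convert voxel displacements into the same normalized coordinate range.
    scale = torch.tensor([2.0 / (w - 1), 2.0 / (h - 1), 2.0 / (d - 1)])
    disp = flow.permute(0, 2, 3, 4, 1) * scale  # (N, D, H, W, 3)
    # One resampling call aligns all voxels at once (trilinear interpolation).
    return F.grid_sample(moving, grid + disp, align_corners=True)
```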
The researchers’ algorithm, called “VoxelMorph,” is powered by a convolutional neural network (CNN), a machine-learning approach commonly used for image processing. These networks consist of many nodes that process image data and other information across several layers of computation.
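As a rough picture of what such a network looks like, here is a deliberately tiny stand-in (the name `TinyRegistrationNet`, the layer sizes, and the activations are illustrative assumptions, not the actual VoxelMorph architecture): stacked 3D convolutions take the fixed and moving scans together and emit a three-channel offset per voxel.

```python
# A toy stand-in for a registration CNN: it maps a pair of 3D scans to a
# dense displacement field with one (x, y, z) offset per voxel.
import torch
import torch.nn as nn

class TinyRegistrationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1),   # fixed + moving stacked
            nn.LeakyReLU(0.2),
            nn.Conv3d(16, 16, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(16, 3, kernel_size=3, padding=1),    # x, y, z offset per voxel
        )

    def forward(self, fixed, moving):
        # Each filter looks at a small neighborhood of voxels; stacking layers
        # lets the network respond to larger anatomical shapes.
        return self.layers(torch.cat((fixed, moving), dim=1))
```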
During training, pairs of brain scans are fed into the algorithm, which captures similarities between voxels in the two scans. In doing so, it learns information about groups of voxels, such as anatomical shapes common to both scans, which it uses to calculate optimized parameters. When fed two new scans, the algorithm uses those optimized parameters to rapidly calculate the exact alignment of every voxel in both scans.
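A hedged sketch of that training idea, reusing the `warp` and `TinyRegistrationNet` stand-ins from above (the loss terms and weights here are illustrative choices, not the published method): each step warps the moving scan with the predicted field, scores voxel-wise similarity against the fixed scan, and updates the network’s parameters accordingly.

```python
# Illustrative training step: learn parameters that make warped moving scans
# look like their paired fixed scans, without any ground-truth alignments.
import torch

model = TinyRegistrationNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(fixed, moving):
    optimizer.zero_grad()
    flow = model(fixed, moving)                 # predicted displacement field
    warped = warp(moving, flow)                 # moving scan aligned onto fixed
    similarity = torch.mean((warped - fixed) ** 2)  # voxel-wise similarity score
    penalty = torch.mean(flow ** 2)             # keep displacements small
    loss = similarity + 0.01 * penalty          # weights are illustrative
    loss.backward()
    optimizer.step()                            # update the learned parameters
    return loss.item()

# Example with random 32^3 stand-ins for a pair of brain scans.
fixed = torch.rand(1, 1, 32, 32, 32)
moving = torch.rand(1, 1, 32, 32, 32)
print(train_step(fixed, moving))
```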
“The tasks of aligning a brain MRI shouldn’t be that different when you’re aligning one pair of brain MRIs or another,” says co-author Guha Balakrishnan, a graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Department of Electrical Engineering and Computer Science (EECS). “There is information you should be able to carry over in how you do the alignment. If you’re able to learn something from previous image registration, you can do a new task much faster and with the same accuracy.”
For a piece of medical equipment to accurately ‘learn’ from the thousands of images it gathers, the 3D system must be built with a high-precision lens. Universe Optics will work with your design team to engineer and craft the lens you require to deliver the necessary images.
The algorithm could also pave the way for image registration during operations, where scans of varying quality and speed are often captured.
According to the researchers, the new algorithm could register those scans in near real time, giving surgeons a much clearer picture of their progress. Today, doctors can’t realistically overlay images during surgery, because registration would take two hours; if it takes only a second, it becomes feasible.