How AI is changing medical imaging

That doctors could peer into the human body without making a single incision once seemed like a miraculous concept. But medical imaging in radiology has come a long way, and the latest techniques based on artificial intelligence (AI) go much further: harnessing the massive computational capabilities of AI and machine learning to scan bodies for differences that even the human eye can miss.

Imaging in medicine now involves sophisticated ways of analyzing each data point to distinguish disease from health and signal from noise. If the first decades of radiology were about refining the resolution of images taken from the body, the next few decades will be spent interpreting this data to ensure that nothing is overlooked.

Imaging is also evolving beyond its original purpose – the diagnosis of medical conditions – to play a vital role in treatment, especially in cancer. Doctors are starting to rely on imaging to help them monitor tumors and the spread of cancer cells, giving them a better, faster way to tell whether therapies are working. This new role will transform the types of treatment patients receive and dramatically improve the feedback doctors get on their work, so they can ultimately make better choices about the treatments patients need.

“Over the next five years, we’ll see functional imaging become part of care,” says Dr. Basak Dogan, associate professor of radiology at the University of Texas Southwestern Medical Center. “We don’t see current standard imaging answering real clinical questions. But functional techniques will be the answer for patients who want greater precision in their care so they can make more informed decisions.”

Detecting problems earlier

The first hurdle to getting the most out of what images have to offer – whether X-rays, computed tomography (CT), magnetic resonance imaging (MRI) or ultrasound – is to automate as much of the reading as possible, which saves radiologists valuable time. Computer-aided algorithms have proven successful here, as massive computing power has made it possible to train computers to distinguish abnormal results from normal ones. Computer scientists and radiologists have been working together for years to develop these algorithms: radiologists feed programs their findings on tens of thousands of normal and abnormal images, teaching the computer to recognize when an image contains elements that fall outside normal parameters. The more images the computer has to compare and learn from, the better it can refine those distinctions.
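
For readers curious what that training process looks like in code, here is a minimal sketch in Python using PyTorch and torchvision. The folder layout, model choice and every parameter are illustrative assumptions, not details of any FDA-cleared product:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical folder layout, mirroring radiologist-labeled findings:
#   scans/normal/*.png   scans/abnormal/*.png
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # single-channel scans
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("scans/", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a generic pretrained network and replace its final layer
# with a two-way "normal vs. abnormal" classifier head.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # more labeled images -> finer distinctions
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```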

For the US Food and Drug Administration (FDA) to approve an imaging algorithm, it must be accurate 80-90% of the time. So far, the FDA has approved about 420 such algorithms for various diseases (mainly cancers). The agency still requires a human to be the ultimate arbiter of what the machine learning algorithm finds, but these techniques are essential for flagging images that might contain suspicious results for doctors to review, ultimately providing faster answers to patients.

At Mass General Brigham, doctors use about 50 such algorithms to aid in patient care, ranging from detecting aneurysms and cancers to spotting embolisms and signs of stroke in ER patients, many of whom experience the general symptoms these conditions share. About half have been approved by the FDA, and the rest are being tested in patient care.

“The goal is to find things early. In some cases, it can take humans days to reach an accurate diagnosis, whereas computers can run continuously without sleep and immediately find patients who need care,” says Dr. Keith Dreyer, director of data science and vice president of radiology at Mass General Brigham. “If we can use computers to do that, then that patient will be treated much faster.”

More in-depth patient follow-up

While computer-assisted triage is the first step in integrating AI-based support into medicine, machine learning is also becoming a powerful way to monitor patients and track even the smallest changes in their condition. This is especially critical in cancer, where the tedious task of determining whether a person’s tumor is growing, shrinking or staying the same is key to judging whether treatments are working. “We struggle to understand what happens to the tumor when patients undergo chemotherapy,” says Dogan. “Our standard imaging techniques unfortunately cannot detect any changes until halfway through chemotherapy” – a course that can take months – “when some kind of shrinkage starts to occur.”

Imaging can be helpful in these situations by detecting changes in tumors that are unrelated to their size or anatomy. “In the very early stages of chemotherapy, most changes in a tumor aren’t quite at the level of cell death,” says Dogan. “The changes are linked to altered interactions between the body’s immune cells and cancer cells.” And in many cases, the cancer does not shrink in a predictable way from the outside in. Instead, pockets of cancer cells within a tumor may die off while others continue to thrive, leaving the overall lump more pockmarked, like a moth-eaten sweater. In fact, because some of this cell death is related to inflammation, the tumor may even grow in some cases, although this does not necessarily indicate increased cancer cell growth. Currently, standard imaging cannot distinguish how much of a tumor is still alive and how much is dead. The most commonly used breast cancer imaging techniques, mammography and ultrasound, are instead designed to detect anatomical features.

At UT Southwestern, Dogan is testing two ways to use imaging to track functional changes in breast cancer patients. In one study, with funding from the National Institutes of Health, she images breast cancer patients after a cycle of chemotherapy, injecting gas microbubbles to detect slight pressure changes around the tumor. Ultrasound measures the pressure changes of these bubbles, which tend to collect around tumors, since growing cancers develop more blood vessels to support their expansion than other tissues do.

In another study, Dogan is testing optoacoustic imaging, which turns light into sound signals. Lasers shine on the breast tissue, causing the cells to oscillate, which creates sound waves that are captured and analyzed. This technique is well suited to measuring the oxygen levels of tumors, because cancer cells tend to need more oxygen than normal cells to continue growing. Changes in sound waves can reveal which parts of the tumor are still growing and which are not. “Just by imaging the tumor, we can tell which are most likely to metastasize to the lymph nodes and which are not,” says Dogan. Currently, clinicians cannot tell which cancers will spread to the lymph nodes and which will not. “This could give us insight into how the tumor will behave and potentially save patients unnecessary lymph node surgeries that are now part of standard care.”

The technique could also help find early signs of cancer cells that have spread to other parts of the body, long before they show up in visual scans and without the need for invasive biopsies. Focusing on organs to which cancer cells typically spread, such as bones, liver and lungs, could give doctors a head start in catching these new deposits of cancer cells.

Identifying invisible anomalies

Given enough data and images, these algorithms could even find outliers for any condition that no human could detect, Dreyer says. His team is also working on an algorithm that measures certain biomarkers in the human body, whether anatomical or functional, so that it can flag changes in those parameters that might suggest a person is at risk of a stroke, fracture, heart attack or other adverse event. It’s the holy grail of imaging, says Dreyer, and although it’s a few years away, “it’s the kind of stuff that’s going to transform healthcare for AI.”
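
As a rough illustration of what such outlier-flagging might look like, here is a toy Python sketch using scikit-learn’s IsolationForest. The biomarkers, values and thresholds are all invented for the example; Dreyer’s actual system has not been published:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Reference population: rows are patients, columns are imaging-derived
# biomarkers, e.g. [aortic_diameter_mm, bone_density_tscore, calcium_score].
# Synthetic stand-in data for illustration only.
rng = np.random.default_rng(0)
reference = rng.normal(loc=[30.0, 0.0, 100.0],
                       scale=[3.0, 1.0, 40.0],
                       size=(10_000, 3))

# Learn what "normal" looks like; ~1% of cases assumed anomalous.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(reference)

# A hypothetical new patient whose measurements sit far outside the norm.
new_patient = np.array([[41.5, -2.8, 310.0]])
if detector.predict(new_patient)[0] == -1:  # -1 means "outlier"
    print("Flag for clinician review: biomarkers outside expected range")
```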

Getting there will require tons and tons of data from hundreds of thousands of patients. But America’s siloed health care systems mean sharing that information is difficult. Federated learning, in which scientists develop algorithms that are applied to anonymized databases of patient information from different institutions, is one solution. This way, confidentiality is preserved and institutions do not have to compromise their secure systems.
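
Here is a toy sketch of the core federated-learning idea in Python: each institution trains on its own private data and shares only model parameters, which a coordinator averages. The three-hospital setup, the simple logistic-regression model and all numbers are assumptions for illustration; real deployments add safeguards such as secure aggregation:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=100):
    # One hospital's local training: logistic regression fit by gradient
    # descent on data that never leaves the institution.
    w = weights.copy()
    for _ in range(steps):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        w -= lr * X.T @ (preds - y) / len(y)   # cross-entropy gradient
    return w

rng = np.random.default_rng(0)
# Three hospitals, each holding private (features, labels) locally.
hospitals = [
    (rng.normal(size=(200, 5)), rng.integers(0, 2, 200).astype(float))
    for _ in range(3)
]

global_weights = np.zeros(5)
for round_num in range(10):
    # Each site refines the shared model on its own patients...
    local_weights = [local_update(global_weights, X, y) for X, y in hospitals]
    # ...and only the averaged parameters travel back to the coordinator.
    global_weights = np.mean(local_weights, axis=0)

print("Federated model weights:", global_weights)
```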

If more of these models are validated, through federated learning or otherwise, AI-based imaging could even begin to help patients at home. As COVID-19 has made self-testing and telehealth more routine, people may eventually be able to obtain imaging information through portable ultrasound delivered via a smartphone app, for example.

“The real change in healthcare that is going to happen with AI is that it will bring many solutions to patients themselves, or before they become patients, so they can stay healthy,” says Dreyer. This may be the most effective way to optimize imaging: by enabling patients to learn and make the most informed decisions possible to protect their health.

Write to Andrew D. Johnson at andrew.johnson@time.com.
