Artificial intelligence (AI)—the ability of computers to take in information and make decisions—is making its way into many aspects of life, from self-driving cars to medical decision-making.

For some applications in radiology, it's becoming increasingly clear that AI algorithms may match—and in some cases even beat—human doctors, with great potential to boost radiologists’ speed and accuracy for the benefit of their patients. Now Duke’s Maciej Mazurowski, Ph.D., an Associate Professor of Radiology, Electrical and Computer Engineering, and Biostatistics and Bioinformatics, and Walter Wiggins, M.D., Ph.D., Assistant Professor of Radiology, have launched a new Duke Center for Artificial Intelligence in Radiology (DAIR) that aims to explore and develop AI algorithms for interpreting medical images of various types, with the goal of seeing them through all the way to use in the clinic.

“DAIR will advance our existing efforts in the research and clinical implementation of AI algorithms in radiology,” said Mazurowski, the center’s Scientific Director.

Mazurowski and Wiggins explain that DAIR will serve as a resource for the Department of Radiology and the broader Duke community, providing the needed resources and expertise in everything related to AI and its use in medical imaging. That includes data collection, technical aspects of image acquisition, data processing, development of deep learning algorithms, algorithm validation, and ultimately clinical implementation. The center will consolidate expertise along the whole AI development pipeline, while never losing sight of the ultimate goal of improving medicine for both doctors and patients.

“One of the important things that we’re emphasizing is to make sure that model development is informed at every step—from conception to testing and implementation—by the clinical problem it’s trying to solve,” said Wiggins, who will serve as the Clinical Director.

Wiggins explains that some of the early work in AI as applied in radiology has been done in a bit of a vacuum. Researchers have chosen problems that they think might be amenable to AI-driven decision-making. But they haven’t always thought enough about how those computer-based algorithms might function and make an important difference in the real world of the clinic.

“What’s important in model design and how that gets implemented is to ask yourself the entire way along: How am I best going to use this tool in clinical practice to make it as useful as possible?” Wiggins asked. “How do we justify the use of AI by demonstrating that it leads to improved outcomes for patients at the end of the day?”


The AI field has made important leaps in the last decade based on an approach called deep learning, which is modeled on the inner workings of the human brain. “Deep learning is a type of machine learning in which data are processed by huge networks of interconnected units to make a decision,” explains Mazurowski.

In the case of radiology, those data come in the form of images. Using imaging data from tens of thousands of cases or more, a machine “learns” to make decisions about whether an image it hasn’t seen before indicates, for example, cancer or not. The training process is notable in that it requires no pre-existing assumptions about what’s important for making that diagnosis.

“The networks are trained to make decisions using data, so they are not arbitrarily pre-programmed in that way,” Mazurowski says. “Unlike some other AI approaches, in deep learning, there are no a priori ‘rules’.”
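The idea described above—a network that learns a decision boundary purely from labeled examples, with no hand-coded rules—can be illustrated with a toy sketch. The example below is a didactic stand-in, not the actual radiology models: a tiny two-layer neural network, trained by gradient descent in plain NumPy, learns to separate two classes of synthetic two-dimensional points (standing in for image features) from the data alone.

```python
import numpy as np

# Didactic sketch of "learning from data": a tiny two-layer network
# learns to classify synthetic 2-D points with no pre-programmed rules.
rng = np.random.default_rng(0)

# Synthetic data: class 0 clusters near (0, 0), class 1 near (2, 2).
X = np.vstack([rng.normal(0.0, 0.5, size=(100, 2)),
               rng.normal(2.0, 0.5, size=(100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Two-layer network: 2 inputs -> 8 hidden units (ReLU) -> 1 output (sigmoid).
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for epoch in range(500):
    # Forward pass: hidden activations, then predicted probability.
    h = np.maximum(X @ W1 + b1, 0.0)
    p = sigmoid(h @ W2 + b2).ravel()

    # Backward pass: gradients of the binary cross-entropy loss.
    grad_out = (p - y)[:, None] / len(y)
    gW2 = h.T @ grad_out;            gb2 = grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (h > 0)
    gW1 = X.T @ grad_h;              gb1 = grad_h.sum(axis=0)

    # Gradient descent update: the "learning" step.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# After training, check accuracy on the training data.
acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Nothing in the code says "class 1 lies near (2, 2)"—that regularity is discovered from the examples, which is the point Mazurowski makes about deep learning having no a priori rules. Real radiology models differ mainly in scale: millions of parameters, convolutional layers, and tens of thousands of images rather than 200 toy points.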

In principle, researchers can apply essentially the same deep learning approach to build unique algorithms for different—and perhaps just about any—medical imaging task. As one example, a study reported last year in Radiology by Mazurowski and his colleagues explored the use of a deep learning algorithm for deciding which patients with thyroid nodules should undergo a biopsy and which could safely avoid the invasive procedure. It’s an important question because doctors vary widely in the way they manage patients with thyroid nodules. It’s also an area in which overdiagnosis has been an issue.

Their study of 1,377 thyroid nodules in 1,230 patients showed that a computer could provide biopsy recommendations on the basis of two images. Once trained, the computer correctly spotted cancerous nodules 87 percent of the time. That’s as good as expert consensus could do, the researchers reported, and better than most expert radiologists individually.


Thyroid nodules are just one of many examples where AI holds promise in radiology. Among many other projects, the researchers say they are making headway on an algorithm to assess the severity of osteoarthritis, for use in deciding whether a patient needs a full knee replacement. The Duke team is now focused primarily on developing and validating algorithms as they look ahead to implementation in the clinic. To make sure they’re concentrating on the right problems, they work closely with the very physicians who might one day put those algorithms to work for the benefit of their patients.

Another important goal for DAIR, Mazurowski and Wiggins say, is to partner actively with the many important AI efforts at Duke, including AI Health and Duke Forge. “We’re convinced AI will be an important part of radiology—I think that’s clear,” Mazurowski said. “First, we need great algorithms and that is what we’re working to develop. We want to provide this unique set of expertise and, at the same time, we want to integrate ourselves into the bigger AI ecosystem at Duke with many new and exciting initiatives.”

To ensure they’ll have the most impact, Wiggins says they’ll take care not to lose sight of how an algorithm ultimately will be put to use in clinical practice. “It’s a lot of work to develop these models, so we’re really trying to make sure there’s a good balance between the computer science and clinical expertise at every step. It’s critical to ensure success in getting these out into clinical practice,” where they can really make a difference for patients.