
IBM tests the use of artificial intelligence for breast cancer screenings

The research has uncovered how AI can help radiologists improve the overall accuracy of mammogram screenings.
Written by Aimee Chanthadavong, Contributor

A recent study by IBM Research, together with Sage Bionetworks, Kaiser Permanente Washington Health Research Institute, and the University of Washington School of Medicine, has uncovered how combining machine learning algorithms and assessments by radiologists could improve the overall accuracy of breast cancer screenings.
 
Mammogram screenings, commonly used for the early detection of breast cancer, rely on a radiologist's expertise to visually identify signs of cancer, which, according to IBM researcher Stefan Harrer, is not always accurate.
 
"Through the current state of human interpretation of mammography images, two things happen: Misdiagnosis in terms of missing the cancer and also diagnosing cancer when it's not there," Harrer told ZDNet.
 
"Both cases are highly undesirable -- you never want to miss a cancer when it's there, but also if you're diagnosing a cancer and it's not there, it creates enormous pressure on patients, on the healthcare system, that could be avoided.
 
"That is exactly where we aim to improve things through the incorporation of AI (artificial intelligence) to decrease the rate of false positives, which is the misdiagnosis of cancer, and also to decrease missing the cancer when there is one."


Comparison between a scanned film mammogram image from the DDSM dataset (left) and a digital mammogram image from the DM Challenge dataset provided by KPW (right)

Screenshot: JAMA Network Open

The research used more than 600,000 de-identified mammograms and clinical data from Kaiser Permanente Washington (KPWA) and the Karolinska Institute (KI) in Sweden. Of the combined datasets, KI contributed examinations from 68,000 women, of whom 780 were cancer positive, while KPWA provided examinations from 85,500 women, of whom 941 were cancer positive.
 
"We had hundreds of thousands of mammograms that were annotated. That means medical practitioners looked at them and placed a label on the piece of information that said, 'Yep, there is tumour', or 'No, there is not' … and what we did was we took a portion of that data -- or what we called training data -- and used that data and we trained the algorithms on recognising tumours," Harrer said.
 
Harrer highlighted that while using AI to interpret mammograms is not new, the study was significant due to its size.
 
"What we did here was create a benchmark of the most advanced algorithm against by far the largest dataset of its kind," he said. 
 
"We expect this study is the start of any future work … the algorithms from this study will be publicly available for research purposes and can be used by anyone."


Harrer added that the research also enabled the team to develop a secure ecosystem, giving researchers access to datasets that were previously unavailable for research activities. 
 
"What we've done is create an ecosystem that allows us to keep that dataset … behind a secure firewall ... to allow researchers to build models and submit these models to us, as the organisers of this ecosystem," he said. 

"These models can then come to the data and be tested, trained, and validated inside this secure environment by us, and then the performance of these models be returned back to the researchers and they can keep on running and improving the model."

He also took the opportunity to debunk the myth that AI could take over jobs.
 
"AI will not replace all doctors. AI will replace doctors who don't use AI," Harrer said, acknowledging that the technology would "lead to a change in the field of radiology".
 
The study came off the back of results from the Digital Mammography (DM) DREAM Challenge, a crowdsourced competition held in 2016 that engaged the international scientific community in assessing whether AI algorithms could meet or beat radiologists' interpretive accuracy.
