The paper, authored by Stanford University researchers and published under the title "MURA Dataset: Towards Radiologist-Level Abnormality Detection in Musculoskeletal Radiographs," examined a large dataset containing more than 40,000 images drawn from almost 15,000 studies. The researchers fed the images through a 169-layer densely connected convolutional network, training it to detect and localize abnormalities and ultimately label each study as either normal or abnormal.
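For readers curious about what such a setup looks like in practice, the sketch below is a rough, illustrative approximation rather than the authors' actual code: it assumes PyTorch is available and uses torchvision's densenet169 as a stand-in for the 169-layer densely connected network, with a single sigmoid output for the normal/abnormal label.

```python
# Illustrative sketch (not the authors' implementation): a 169-layer DenseNet
# adapted for binary normal/abnormal classification, assuming PyTorch + torchvision.
import torch
import torch.nn as nn
from torchvision import models


class AbnormalityClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # torchvision's densenet169 stands in for the 169-layer densely
        # connected network described in the paper (an assumption).
        self.backbone = models.densenet169()
        # Replace the 1000-class ImageNet head with a single logit.
        in_features = self.backbone.classifier.in_features
        self.backbone.classifier = nn.Linear(in_features, 1)

    def forward(self, x):
        # Sigmoid output: estimated probability that the image is abnormal.
        return torch.sigmoid(self.backbone(x))


model = AbnormalityClassifier()
criterion = nn.BCELoss()  # binary cross-entropy suits a normal/abnormal label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```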
According to the researchers, the availability of large, high-quality datasets has played a pivotal role in the ongoing progress of fields where deep learning methods are applied. To that end, they created one of the largest public radiographic image datasets ever, which they used to train the densely connected convolutional network to detect abnormalities in patients' radiographs.
What the researchers found after sifting through and tabulating all of the data is striking: their model, that is, the deep learning network, performed comparably to radiologists, and on some tasks even beat the best of them. As they state in their research notes, "We compare the performance of our model and radiologists, and find that our model achieves performance comparable to that of radiologists. Model performance is higher than the best radiologist's performance in detecting abnormalities in finger studies, and equivalent in wrist studies."
But while the model showed excellent performance in detecting abnormalities in those two categories, the researchers found that there was still room for improvement elsewhere. In their own words, "However, model performance is lower than best radiologist's performance in detecting abnormalities on elbow, forearm, hand, humerus, and shoulder studies."
To encourage future studies that build on the results they have presented, the researchers have made the large dataset they used freely available to the public. They uploaded the data to a Github repository along with research notes containing information to help those who plan to download and work with it. Their work could serve as the cornerstone of future research into further uses of the trained model, and similar models could also be built from the same data.
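As an illustration of how downloaded studies might be run through a model like the one sketched above, the following snippet is purely hypothetical: the directory layout, file format, and preprocessing steps are assumptions, and the researchers' own notes should be consulted for the actual structure.

```python
# Hypothetical helper for scoring one downloaded study with the model above.
# Directory layout and ".png" file format are assumptions, not the dataset's
# documented structure.
from pathlib import Path

import torch
from PIL import Image
from torchvision import transforms

# Preprocessing choices here are illustrative; the actual pipeline may differ.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # radiographs are grayscale
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])


def predict_study(model, study_dir):
    """Average per-image abnormality probabilities over one study."""
    image_paths = sorted(Path(study_dir).glob("*.png"))  # assumed file format
    batch = torch.stack([preprocess(Image.open(p)) for p in image_paths])
    model.eval()
    with torch.no_grad():
        probs = model(batch)
    # One convention: call the study abnormal if the mean probability exceeds 0.5.
    return probs.mean().item()
```

A study-level threshold of 0.5 is just one reasonable convention; the exact aggregation and evaluation procedure should be taken from the researchers' notes themselves.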
The results of this research could have huge implications for the world of radiology, particularly for radiologists, who are expected not only to analyze radiographic images for abnormalities but also to train the students who will become the radiologists of the future. With a model like the one the researchers built, much of that human work could eventually become unnecessary.
Radiologists are among the highest-paid professionals in the healthcare and medical field, so the prospect of deep learning networks replacing them could have broad societal consequences. And that may be just the beginning of the changes that could occur once deep learning and AI come into prominence. Geoffrey Hinton, a prominent AI researcher, once commented that medical schools "should stop training radiologists now," arguing that deep learning will soon outperform them. Whatever happens, it would mean great progress for the field of AI and deep learning research, but possibly a dark time for thousands upon thousands of individuals in the healthcare profession.
Read more about the future of AI research at Robots.news.