Deep learning discrimination of rheumatoid arthritis from osteoarthritis on hand radiography.
To develop a deep learning model to distinguish rheumatoid arthritis (RA) from osteoarthritis (OA) using hand radiographs and to evaluate the effects of changing pretraining and training parameters on model performance.
A convolutional neural network was retrospectively trained on 9714 hand radiograph exams from 8387 patients obtained from 2017 to 2021 at seven hospitals within an integrated healthcare network. Performance was assessed using an independent test set of 250 exams from 146 patients. Binary discriminatory capacity (no arthritis versus arthritis; RA versus not RA) and three-way classification (no arthritis versus OA versus RA) were evaluated. The effects of additional pretraining using musculoskeletal radiographs, using all views as opposed to only the posteroanterior view, and varying image resolution on model performance were also investigated. Area under the receiver operating characteristic curve (AUC) and Cohen's kappa coefficient were used to evaluate diagnostic performance.
For no arthritis versus arthritis, the model achieved an AUC of 0.975 (95% CI: 0.957, 0.989). For RA versus not RA, the model achieved an AUC of 0.955 (95% CI: 0.919, 0.983). For three-way classification, the model achieved a kappa of 0.806 (95% CI: 0.742, 0.866) and accuracy of 87.2% (95% CI: 83.2%, 91.2%) on the test set. Increasing image resolution increased performance up to 1024 × 1024 pixels. Additional pretraining on musculoskeletal radiographs and using all views did not significantly affect performance.
A deep learning model can be used to distinguish no arthritis, OA, and RA on hand radiographs with high performance.
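The evaluation described above relies on AUC for the binary tasks and Cohen's kappa plus accuracy for the three-way task. A minimal sketch of computing these metrics with scikit-learn, using made-up labels and predictions rather than the study's data:

```python
# Illustrative metric computation: AUC for a binary task (e.g., RA vs. not RA)
# and Cohen's kappa + accuracy for a three-way task (no arthritis / OA / RA).
# All arrays are synthetic stand-ins, not the paper's results.
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score, accuracy_score

rng = np.random.default_rng(0)

# Binary task: true labels and predicted probabilities for the positive class.
y_true_bin = rng.integers(0, 2, size=200)
y_score = np.clip(y_true_bin * 0.6 + rng.normal(0.2, 0.2, size=200), 0.0, 1.0)
auc = roc_auc_score(y_true_bin, y_score)

# Three-way task: encode no arthritis = 0, OA = 1, RA = 2.
y_true = rng.integers(0, 3, size=200)
# Simulate a classifier that is right ~85% of the time.
y_pred = np.where(rng.random(200) < 0.85, y_true, rng.integers(0, 3, size=200))
kappa = cohen_kappa_score(y_true, y_pred)
acc = accuracy_score(y_true, y_pred)
```

Kappa is preferred over raw accuracy for the three-way comparison because it discounts agreement expected by chance across the three classes.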
Ma Y, Pan I, Kim SY, Wieschhoff GG, Andriole KP, Mandell JC, et al.
Automated Classification of Rheumatoid Arthritis, Osteoarthritis, and Normal Hand Radiographs with Deep Learning Methods.
Rheumatoid arthritis and hand osteoarthritis are two different forms of arthritis that cause pain, functional limitation, and permanent joint damage in the hands. Plain hand radiographs are the most commonly used imaging method for the diagnosis, differential diagnosis, and monitoring of rheumatoid arthritis and osteoarthritis. In this retrospective study, the You Only Look Once (YOLO) algorithm was used to extract hand images from the original radiographs without data loss, and classification was performed by applying transfer learning with a pretrained VGG-16 network. Data augmentation was applied during training. The results were evaluated with performance metrics such as accuracy, sensitivity, specificity, and precision calculated from the confusion matrix, and the AUC (area under the ROC curve) calculated from the ROC (receiver operating characteristic) curve. In the classification of rheumatoid arthritis versus normal hand radiographs, the model achieved an accuracy of 90.7%, sensitivity of 92.6%, specificity of 88.7%, precision of 89.3%, and AUC of 0.97; in the classification of osteoarthritis versus normal hand radiographs, it achieved an accuracy of 90.8%, sensitivity of 91.4%, specificity of 90.2%, precision of 91.4%, and AUC of 0.96. In the three-way classification of rheumatoid arthritis, osteoarthritis, and normal hand radiographs, an accuracy of 80.6% was obtained. To develop an end-to-end computerized method, the YOLOv4 algorithm was used for object detection and a pretrained VGG-16 network for the classification of hand radiographs. This computer-aided diagnosis method can assist clinicians in interpreting hand radiographs, particularly for rheumatoid arthritis and osteoarthritis.
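The abstract reports accuracy, sensitivity, specificity, and precision derived from the confusion matrix. A short sketch of those derivations with NumPy, using invented counts rather than the study's confusion matrix:

```python
# Deriving accuracy, sensitivity, specificity, and precision from a 2x2
# confusion matrix. The counts here are illustrative, not the paper's data.
import numpy as np

# Rows = true class (0 = normal, 1 = RA), columns = predicted class.
cm = np.array([[88, 11],   # normal: 88 true negatives, 11 false positives
               [ 8, 93]])  # RA:      8 false negatives, 93 true positives
tn, fp = cm[0]
fn, tp = cm[1]

accuracy = (tp + tn) / cm.sum()        # fraction of all cases classified correctly
sensitivity = tp / (tp + fn)           # recall for the RA class
specificity = tn / (tn + fp)           # recall for the normal class
precision = tp / (tp + fp)             # fraction of RA calls that are correct
```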
Üreten K, Maraş HH
Radiograph-based rheumatoid arthritis diagnosis via convolutional neural network.
Rheumatoid arthritis (RA) is a severe and common autoimmune disease. Conventional diagnostic methods are often subjective, error-prone, and repetitive. There is an urgent need for a method to detect RA accurately. This study therefore aims to develop an automatic deep learning-based diagnostic system for recognizing and staging RA from radiographs, to assist physicians in diagnosing RA quickly and accurately.
We develop a CNN-based, fully automated RA diagnostic model, exploring five popular CNN architectures on two clinical applications. The model is trained on a dataset of 240 hand radiographs, of which 39 are normal and 201 are RA across five stages. For evaluation, we use 104 hand radiographs, of which 13 are normal and 91 are RA across five stages.
The CNN models achieve good performance in RA diagnosis from hand radiographs. For RA recognition, all models achieve an AUC above 90% with a sensitivity over 98%. In particular, the GoogLeNet-based model achieves an AUC of 97.80% with a sensitivity of 100.0%. For RA staging, all models achieve an AUC over 77% with a sensitivity over 80%. Specifically, the VGG16-based model achieves an AUC of 83.36% with a sensitivity of 92.67%.
The presented GoogLeNet-based and VGG16-based models have the best AUC and sensitivity for RA recognition and staging, respectively. The experimental results demonstrate the feasibility and applicability of CNNs in radiograph-based RA diagnosis. This approach therefore has important clinical significance, especially for resource-limited areas and inexperienced physicians.
Peng Y, Huang X, Gan M, Zhang K, Chen Y, et al.
《BMC MEDICAL IMAGING》