
General Orthopaedics

DETECTING MECHANICAL LOOSENING OF TOTAL HIP ARTHROPLASTY USING DEEP CONVOLUTIONAL NEURAL NETWORK

International Society for Technology in Arthroplasty (ISTA) meeting, 32nd Annual Congress, Toronto, Canada, October 2019. Part 1 of 2.



Abstract

INTRODUCTION

Mechanical loosening of total hip replacement (THR) is primarily diagnosed using radiographs, which are diagnostically challenging to interpret and require review by experienced radiologists and orthopaedic surgeons. Automated tools that assist less-experienced clinicians and mitigate human error can reduce the risk of missed or delayed diagnosis. The purposes of this study were therefore to: 1) develop an automated tool to detect mechanical loosening of THR by training a deep convolutional neural network (CNN) on THR x-rays, and 2) visualize the CNN training process to interpret how it functions.

METHODS

A retrospective study was conducted using previously collected imaging data from a single institution with IRB approval. Twenty-three patients with cementless primary THR who underwent revision surgery due to mechanical loosening (of the stem and/or the acetabular component) had their hip x-rays evaluated immediately prior to revision surgery (32 “loose” x-rays). The comparison group comprised 23 patients who underwent primary cementless THR surgery, with x-rays obtained immediately after their primary surgery (31 “not loose” x-rays). Fig. 1 shows examples of “not loose” and “loose” THR x-rays. A DenseNet201 CNN was used, with the top layer replaced by a binary classifier, and evaluated with a 90:10 split-validation [1]. A CNN pre-trained on ImageNet [2] and a non-pre-trained CNN (initial zero weights) were implemented to compare the results. Saliency maps were generated to indicate the importance of each pixel of a given x-ray to the CNN's performance [3].
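The abstract does not provide implementation code; the following is a minimal sketch in Python with TensorFlow/Keras of the described setup (DenseNet201 backbone, binary classification head, 90:10 split, with and without ImageNet pre-training). Input size, optimizer, batch size, and the placeholder arrays x and y are illustrative assumptions, and the non-pre-trained case uses Keras's default random initialization rather than the zero initialization reported in the study.

# Minimal sketch (assumptions noted above): DenseNet201 backbone with the
# top layer swapped for a binary classifier, trained with or without
# ImageNet pre-training.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201

def build_model(pretrained: bool) -> tf.keras.Model:
    # Backbone without its original classification head.
    base = DenseNet201(
        weights="imagenet" if pretrained else None,  # None = random init (assumption)
        include_top=False,
        input_shape=(224, 224, 3),                   # assumed input size
        pooling="avg",
    )
    # Binary classifier head: P(x-ray shows a loose implant).
    out = layers.Dense(1, activation="sigmoid")(base.output)
    model = models.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Placeholder data standing in for the 63 study x-rays (32 loose, 31 not loose);
# real images would need loading and DenseNet preprocessing.
x = np.random.rand(63, 224, 224, 3).astype("float32")
y = np.array([1] * 32 + [0] * 31, dtype="float32")   # 1 = "loose", 0 = "not loose"

model = build_model(pretrained=True)
model.fit(x, y, epochs=10, batch_size=8, validation_split=0.1)  # 90:10 split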

RESULTS

Fig. 2 shows the saliency maps for an example x-ray and the corresponding accuracy of the CNN on the entire validation dataset at different stages of training for both the pre-trained (Fig. 2a) and non-pre-trained (Fig. 2b) CNNs. Colored regions in the saliency maps, where red denotes higher relative influence than blue, indicate the regions most influential on the CNN's performance. The pre-trained CNN achieved higher accuracy (87%) on the validation x-rays than the non-pre-trained CNN (62%) after 10 epochs. The pre-trained CNN's saliency map at 10 epochs showed significant influence of the bone-implant interaction regions on the CNN's performance, indicating that the CNN is ‘looking’ at the clinically relevant features in the x-rays. The saliency maps also demonstrated that the pre-trained CNN quickly learned where to ‘look’, while the non-pre-trained CNN struggled.
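The specific saliency-map formulation of [3] is not reproduced here; the sketch below shows one common gradient-based variant (pixel-wise gradient magnitude of the predicted probability), assuming the Keras model and placeholder images from the previous sketch.

# Sketch (assumption): gradient-based saliency for one x-ray. Each pixel's
# saliency is the magnitude of the gradient of the predicted "loose"
# probability with respect to that pixel.
import numpy as np
import tensorflow as tf

def saliency_map(model: tf.keras.Model, image: np.ndarray) -> np.ndarray:
    img = tf.convert_to_tensor(image[np.newaxis, ...])  # add batch dimension
    with tf.GradientTape() as tape:
        tape.watch(img)
        prob = model(img, training=False)[:, 0]         # P("loose")
    grads = tape.gradient(prob, img)                     # d prob / d pixel
    # Max over colour channels, then normalise to [0, 1] for display.
    sal = tf.reduce_max(tf.abs(grads), axis=-1)[0].numpy()
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

heatmap = saliency_map(model, x[0])  # overlay on the x-ray to inspect influential regions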

DISCUSSION

An automated tool to detect mechanical loosening of THR was developed that can potentially assist clinicians with accurate diagnosis. By visualizing the regions of the x-ray that influence CNN performance, this study shed light on the CNN's learning process and demonstrated that the CNN is ‘looking’ at clinically relevant features to classify the x-rays. Such visualization is crucial for building trust in the automated system by interpreting how it functions, increasing confidence in the application of artificial intelligence to the field of orthopaedics. This study also demonstrated that pre-training the CNN can accelerate the learning process and achieve high accuracy even on a small dataset.

For any figures or tables, please contact the authors directly.