Orthopaedic Proceedings

THE USE OF DIGITISED RADIOGRAPHS IN DETERMINING THE CONSISTENCY OF THE AO AND FRYKMAN CLASSIFICATIONS OF FRACTURES OF THE DISTAL RADIUS



Abstract

For any fracture classification, a high level of intraobserver reproducibility and interobserver reliability is desirable. We compared the consistency of the AO and Frykman classifications of distal radius fractures using digitised radiographs of 100 fractures, assessed by 15 orthopaedic surgeons and 5 radiologists using a Picture Archiving and Communications System (PACS). The process was repeated one month later. Reproducibility was moderate for both the AO and Frykman systems, and reliability was only fair for both; in each case reproducibility using the Frykman system was slightly greater. The assessor's level of experience and specialty was not seen to influence accuracy. The ability to manipulate images electronically does not appear to improve reliability compared with the use of traditional hard copies, and the sole use of these classifications in describing these injuries is not recommended.

Fractures of the distal radius are common, accounting for approximately one sixth of all fractures, and are the most commonly occurring fractures in adults. Their multitude of eponyms hints at the difficulty of formulating a comprehensive and useable classification system. The Frykman classification is the most popular, but is limited: it does not quantify displacement, shortening or the extent of comminution. The more comprehensive AO system is limited by its complexity, with 27 possible subdivisions. Computerised tomography has been shown to give only a marginal improvement in the consistency of classification.

Radiographs of 100 fractures were selected, with an anteroposterior and a lateral view for each. 15 orthopaedic surgeons and 5 radiologists were recruited as assessors, including 5 specialist registrars. Each was given a printed description of the Frykman and AO classifications. The radiographs could be manipulated digitally. Intraobserver and interobserver reproducibility were analysed, and reproducibility was compared between radiologists and surgeons, and between consultant orthopaedic surgeons and trainees. Statistical methods: the analysis adjusts the observed proportion of agreement between observers by correcting for the proportion of agreement that could have occurred by chance. Kappa coefficients for these groups were compared using Student's t test, incorporating the standard errors of kappa.
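The chance-corrected agreement described above corresponds to the kappa statistic: kappa = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement and p_e the agreement expected by chance from each observer's marginal category frequencies. A minimal sketch of the calculation for two observers, with purely illustrative data (not taken from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same cases.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion
    of agreement and p_e the proportion expected by chance, derived from
    each rater's marginal category frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of cases classified identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of the two raters' marginal proportions,
    # summed over all categories.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two observers assigning Frykman types (I-VIII,
# coded 1-8) to ten fractures. The data are invented for illustration.
obs1 = [1, 2, 2, 3, 4, 4, 5, 6, 7, 8]
obs2 = [1, 2, 3, 3, 4, 5, 5, 6, 8, 8]
print(round(cohens_kappa(obs1, obs2), 2))  # → 0.66
```

Here the raw agreement is 0.70, but after correcting for the 0.12 agreement expected by chance, kappa falls to about 0.66; this correction is why kappa values in the study are lower than the raw percentage agreement would suggest.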

Median interobserver reliability was fair for both the AO (kappa = 0.31, range 0.2 to 0.38) and Frykman (kappa = 0.36, range 0.30 to 0.43) systems. Median intraobserver reproducibility was moderate for both the AO (kappa = 0.45, range 0.42 to 0.48) and Frykman (kappa = 0.55, range 0.51 to 0.57) systems. In each case the Frykman system was statistically more accurate (p < 0.01). Level of experience or specialty was not seen to influence accuracy (p < 0.01).

Our results demonstrate that the use of these classifications in isolation to determine treatment, or to compare results following treatment, cannot be recommended.

Correspondence should be addressed to: EFORT Central Office, Technoparkstrasse 1, CH – 8005 Zürich, Switzerland. Email: office@efort.org