
Feature

Differences in clinician versus patient recording of comorbidities in PROMs

Small changes, big impact




Methodology

A data request was made to the Health and Social Care Information Centre (HSCIC) for patients who had undergone a primary total knee replacement (TKR) at the Norfolk and Norwich University Hospital in 2014. In total, 576 patients had received post-operative PROMs questionnaires in 2014. Complete information was available for 195 patients, and these form the basis of this analysis. The patient letters and the pre-operative assessment documentation on our electronic system (Bluespier) were then reviewed. The comorbidities that the clinician felt applied to each patient were recorded from the list provided in the Oxford Knee Score (OKS) questionnaire and compared with what the patients had recorded.

In total, there were 189 additional comorbidities identified from our notes review. Of these, 95 would alter the predicted OKS score in 77 patients. There was a significant change in average predicted OKS score from 33.7 ± 3.9 to 32.3 ± 4.0 (p = 0.02) in the 77 patients who had additional OKS-altering comorbidities. When looking at the case-mix adjustment, the original mean adjustment was -0.83 (± 1.1). After adjusting for clinician-reported comorbidities, there was a significant change in the mean to -1.40 (± 1.4) (p < 0.0001). After the relevant recalculations were carried out, the adjusted average health gain went from 15.254 to 15.907. This is an improvement of 0.653.

The small change of ensuring accurate comorbidity recording can have an impact on the adjusted average health gain for a hospital. This is important information: patients report comorbidities differently to clinicians, and often overrate their health. Despite the limitations of these comorbidity data, hospital performance data, which are publicly available, are based on this case-mix and comorbidity adjustment. Care clearly needs to be taken in the interpretation of these case mix-adjusted scores.

Introduction

The PROMs (Patient-Reported Outcome Measures) programme, embedded within the National Joint Registry (NJR), is an evaluation of surgical outcomes based on questionnaires completed by patients before and after their surgery. Eligible patients are those treated by or on behalf of the English NHS for the following procedures: hip replacement, knee replacement, varicose vein surgery and groin hernia surgery.

The increasing use of PROMs acknowledges the patient’s perspective as a marker of quality and effectiveness by placing patients at the centre of decision making. It enables comparison of health services, identifies strengths and weaknesses of health care delivery, drives quality improvement, informs commissioning, and promotes choice. However, it relies on patients filling out forms and returning them. Efforts have had to be made to develop shorter, more reliable tools in order to increase patient participation, particularly among underrepresented patient populations. Widespread use of PROMs may also be limited by the costly and time-consuming process of collecting, analysing and presenting the data. The internet opens up opportunities but is not currently in widespread use.1

Clinician-reported outcomes are based on a clinician’s observations or interpretations of the patient’s global level of functioning. A certain level of knowledge is required to perform this assessment, but it can offer an educated insight into the patient’s condition. However, it may fail to comprehensively capture the patient’s interpretation of their own function, and there is a risk of overestimating recovery status.2

PROMs data and analyses, including those from Hospital Episode Statistics (HES)-PROMs linked data, are published each month by the Health and Social Care Information Centre. Publications include monthly summary statistics, a more detailed quarterly set of statistics, including extensive reusable datasets, and either an analysis of a topic of interest from the datasets or a detailed annual report of the latest finalised annual data.

PROMs in use in arthroplasty surgery comprise both generic and disease-specific scores. The EQ-5D Index collates responses given in five broad areas (mobility, self-care, usual activities, pain/discomfort, and anxiety/depression) and combines them into a single value. The EQ VAS is a simple, easily understood ‘thermometer’-style measure on which patients score their own general health and quality of life on the day they complete the questionnaire. It therefore reflects general health rather than only the condition for which surgery was undertaken, and may be influenced by factors other than health care. The OKS questionnaire is a disease-specific score that combines a number of disease-specific questions into a single, validated score.

The PROMs outputs supplement mortality and complications data, and allow the Department of Health (DoH) and NHS England to monitor progress towards their strategic objectives, such as those specified in the NHS Outcomes Framework (NHSOF). The aim is to allow local commissioners and service providers to assess, from the perspective of the patient, the results of treatment and care, with the goal of improving overall quality of care. It also enables patients and clinicians to make an informed decision on the choice of treatment provider.

Adjusted average health gains are calculated using statistical models which account for the fact that each provider organisation deals with a different case mix of patients. The objective of the case-mix or risk adjustment process is to adjust the reported PROMs health status data for variables such as patient age, sex and comorbidities, amongst others, across the country. These variables are beyond the control of the provider, and the adjustment allows comparison between providers on a like-for-like basis.3 Because of random variation between patients, small differences in averages, even when case-mix adjusted, may not be statistically significant. ‘Control limits’ are therefore calculated, representing the boundaries of expected variation. Providers falling outside these limits may be stated, with statistical validity, to be significantly better (if above the upper limit) or significantly worse (if below the lower limit) than England as a whole.

In order to allow for this case-mix adjustment, patients are expected to recall their comorbidities and list them on the form. There is no formal guidance on how to do this, which may represent a problem. It is unclear how well patients can recall all of their medical problems, and as a result they may fill in the form incorrectly. A common example is hypertension: most patients feel that, while taking medication, their blood pressure is controlled and hence that they do not have “high blood pressure”.

The patient’s predicted post-operative functional score is calculated from several variables. The case-mix adjustment model sets out these variables for the various scores that make up PROMs. Each variable has a coefficient that affects the predicted score in either a positive or a negative direction; this attempts to quantify the impact that different comorbidities may have on the outcome of the individual patient. The different variables have different impacts on the various scores, and the OKS is the most relevant of these to this study. Table 1 shows the list of comorbidities collected in the PROMs booklet. Only some of these have been shown to affect the predicted OKS, and only these have a coefficient listed.4

Table 1.

Comorbidities recorded in the NJR PROMs programme and coefficients of those that affect the OKS predicted score

Comorbidity	Coefficient
Heart Disease	−0.908
Stroke	−1.161
Poor Circulation	−2.858
Lung Disease	−0.675
Diabetes	−1.176
Depression	−2.187
Hypertension	–
Kidney Disease	–
Nervous System	–
Liver Disease	–
Cancer	–
Arthritis	–

A patient’s predicted score is calculated using a mathematical formula that incorporates a multitude of variables based on their characteristics, including their pre-operative OKS, age, ethnicity, disability status and comorbidities, amongst others. If a patient has a comorbidity listed in Table 1, the predicted score is adjusted by the value of the coefficient. For example, if the patient has heart disease, their predicted score is reduced by 0.908 points. As additional comorbidities are reported, the predicted score decreases further.4 Although these look like relatively small changes to the overall predicted score, they may have far-reaching effects on the adjusted average health gains for a hospital.
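As an illustration, the following is a minimal sketch, in Python, of how the Table 1 coefficients might be applied. The baseline prediction and the patient’s comorbidity list are hypothetical, and the other model variables (pre-operative score, age, ethnicity and so on) are assumed to be already folded into that baseline.

```python
# Coefficients from Table 1: the comorbidities shown to affect the predicted OKS.
OKS_COEFFICIENTS = {
    "heart disease": -0.908,
    "stroke": -1.161,
    "poor circulation": -2.858,
    "lung disease": -0.675,
    "diabetes": -1.176,
    "depression": -2.187,
}


def adjust_predicted_oks(baseline_prediction, comorbidities):
    """Apply the (negative) coefficient of each reported comorbidity to the baseline prediction.

    Comorbidities without a listed coefficient (e.g. hypertension) leave the prediction unchanged.
    """
    adjustment = sum(OKS_COEFFICIENTS.get(c.lower(), 0.0) for c in comorbidities)
    return baseline_prediction + adjustment


# Hypothetical patient: baseline predicted OKS of 34.0, with heart disease and diabetes reported.
print(adjust_predicted_oks(34.0, ["heart disease", "diabetes"]))  # 34.0 - 0.908 - 1.176 = 31.916
```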

This is due to how the adjusted average health gain for a provider is calculated. It starts with the generation of the predicted OKS score using the appropriate coefficients. A ratio is then calculated at individual patient level of the actual reported post-operative health status relative to their predicted health status. This is called the Relative Performance Factor (RPF) per patient.

RPF = Actual post-operative OKS / Predicted post-operative OKS

A ratio of 1.4 would mean that the patient-reported post-operative score is 40% better than might be expected given their characteristics. Conversely, a ratio of 0.9 suggests that the patient reported only 90% of the predicted post-operative health score.

The RPF for the provider is then calculated by averaging the RPFs of all of its patients, and the adjusted average post-operative OKS score for the provider is calculated by multiplying the provider RPF by the national average post-operative OKS score.

The adjusted average health gain for a provider is finally calculated using the following equation:

Adjusted Average Health Gain = Adjusted Average Post-Op OKS score (Provider) − National Average Pre-Op OKS score

This score is then used to compare healthcare providers and if it is more than a defined number of standard deviations beyond the mean, the hospital is flagged as significantly better or worse than the national average. The individual RPF may also be important in future for remuneration if the commissioning system becomes dependent on patients hitting their predicted PROMs score before payment occurs.5
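To tie these steps together, here is a minimal sketch of the calculation chain described above, written in Python. The per-patient scores are invented for illustration; the national averages used in the call are those quoted later in the Results (35.399 post-operative and 19.256 pre-operative).

```python
# Sketch of the provider-level calculation described above (illustrative patient values only).

def relative_performance_factor(actual_postop_oks, predicted_postop_oks):
    """Per-patient RPF: actual post-operative OKS relative to predicted post-operative OKS."""
    return actual_postop_oks / predicted_postop_oks


def adjusted_average_health_gain(patients, national_avg_postop_oks, national_avg_preop_oks):
    """Average the per-patient RPFs, scale the national average post-operative OKS by the
    provider RPF, then subtract the national average pre-operative OKS."""
    provider_rpf = sum(relative_performance_factor(a, p) for a, p in patients) / len(patients)
    adjusted_avg_postop = provider_rpf * national_avg_postop_oks
    return adjusted_avg_postop - national_avg_preop_oks


# Hypothetical patients as (actual post-op OKS, predicted post-op OKS) pairs.
patients = [(36.0, 33.5), (30.0, 32.0), (40.0, 34.5)]
print(adjusted_average_health_gain(patients, 35.399, 19.256))
```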

The aim of this paper was to establish if there is a difference in how patients and clinicians record comorbidities on the PROMs form, what effect this may have on predicted scores, and if this has implications for the hospital in terms of adjusted average health gain.

The objective of the paper was to review clinically documented comorbidities (patient letters and pre-admission information) and compare them with those that the patients had reported. Further objectives were to interpret the data and calculate the resulting difference in adjusted average health gain.

Methodology

A data request was made to the Health and Social Care Information Centre (HSCIC) for patients who had undergone a primary total knee replacement (TKR) at the Norfolk and Norwich University Hospital in 2014. In total, 576 patients had received post-operative PROMs questionnaires in 2014. All of these patients would therefore have six-month post-operative PROMs available to compare with their pre-operative PROMs.

Several filters were placed on the database provided by the HSCIC; these are summarised in Figure 1. The NHS number was required in order to check the hospital records, as there is no other way of identifying patients in the data returned. According to the HSCIC PROMs data dictionary, “Complete” refers to Q1 (pre-operative questionnaire) and Q2 (post-operative questionnaire) both being completed,6 meaning that both the pre- and post-operative OKS should be available. “Episode Matched” indicates whether Q1 has been linked to a HES inpatient episode. Further data validation was undertaken and cases were excluded when pre-operative or post-operative scores were incomplete, leaving an initial 229 scores available for review. However, as the operation note for each patient was reviewed, it became apparent that some revision TKRs had been miscoded in the dataset provided by the HSCIC. Thirty-four patients were excluded as revisions, leaving a final 195 patients for interpretation (Fig. 1). This represents 33.85% of the original data sent by the HSCIC being suitable for review.
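For orientation, a rough sketch of this filtering step is shown below, assuming the extract has been loaded into a pandas DataFrame; the file name and column names are hypothetical and will differ from the actual HSCIC field names.

```python
import pandas as pd

# Hypothetical file and column names; the real HSCIC extract uses its own field names.
df = pd.read_csv("hscic_proms_extract.csv")

filtered = df[
    df["nhs_number"].notna()            # needed to identify patients in hospital records
    & (df["complete"] == "Y")           # Q1 and Q2 both completed
    & (df["episode_matched"] == "Y")    # Q1 linked to a HES inpatient episode
    & df["preop_oks"].notna()           # pre-operative OKS present
    & df["postop_oks"].notna()          # post-operative OKS present
]

# Revision TKRs miscoded as primaries were then excluded after a manual review of operation notes.
```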

Table 2.

Numbers of patients with additional comorbidities ranging from 1 to 4

Number of additional comorbidities Number of patients
1 71
2 34
3 14
4 2
Fig. 1.

Flow chart demonstrating the filters placed on the data set and the number of remaining cases available for review.

The electronic records of patient letters and the pre-operative assessment documentation available on our electronic system (Bluespier, Droitwich, UK) were then reviewed. The electronically recorded pre-operative assessment form contains specific systemic enquiry questions and free text covering aspects of the patient’s comorbidities, and additional information was gathered from the clinic letters. The clinicians used their judgement to decide whether particular conditions fell into the listed categories; for example, atrial fibrillation constitutes heart disease and a transient ischaemic attack constitutes cerebrovascular disease.

The comorbidities documented in the electronic record that applied to each patient were recorded from the list provided in the OKS (as shown in Table 1) and compared with the patient’s self-reported record. The predicted OKS calculated by the HSCIC for each patient was then amended using the relevant coefficients. The data were interpreted using Microsoft Excel (Microsoft, Redmond, Washington) and SPSS 22.0 software (IBM, Armonk, New York). Paired t-tests were used to assess for significance, and a p value of < 0.05 was considered significant.
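As an indication of the comparison performed, a minimal sketch of the paired test is given below, in Python with SciPy rather than SPSS and with invented values; it assumes the original and clinician-adjusted predicted OKS values are held in two equal-length lists for the same patients.

```python
from scipy import stats

# Invented predicted OKS values for the same patients, before and after
# adding clinician-documented comorbidities (same patient order in both lists).
original_predicted = [34.1, 33.0, 35.2, 31.8, 36.4]
adjusted_predicted = [33.2, 31.8, 35.2, 29.0, 36.4]

# Paired t-test, with p < 0.05 taken as significant (as in the study).
t_stat, p_value = stats.ttest_rel(original_predicted, adjusted_predicted)
print(t_stat, p_value)
```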

Results

Of the total of 229 patients, 34 were revisions and were excluded. Amongst the remaining 195 patients, 121 had additional comorbidities documented in the notes compared with their self-reports. The median number of additional comorbidities was one (Table 2).

In total, there were 189 additional comorbidities identified from the notes review. Of these, 95 would alter the predicted OKS score in 77 patients (heart disease, stroke, circulation, lung disease, diabetes and depression), as shown in Table 3.

Table 3.

Numbers of each individual additional comorbidity

Additional Comorbidity Number
Heart Disease 37
Stroke 14
Poor Circulation 8
Lung Disease 25
Diabetes 3
Depression 8
Hypertension 37
Kidney Disease 7
Nervous System 7
Liver Disease 4
Cancer 8
Arthritis 31

In patients with additional comorbidities, the predicted OKS score was recalculated. There was a significant change in average predicted OKS score from 33.7 ± 3.9 to 32.3 ± 4.0 (p = 0.02) in the 77 patients who had additional OKS-altering comorbidities. When all 195 patients were included, the average predicted OKS dropped from 33.7 ± 3.9 to 33.3 ± 3.9 (p = 0.30). This smaller drop is expected as the patients with no additional problems are now included in the calculation. The maximum change was a lowering of the predicted score by 4.94 in a single patient.

Based on the method previously described, the adjusted average health gain for the provider was calculated. At the time of this study, the national average post-operative OKS score was 35.399 and the national average pre-operative OKS score was 19.256, based on the HSCIC April 2014 to March 2015 provisional data published on 12 May 2016.7

Taking all 195 patients into account, the RPF before adjustment of the OKS was 0.975, and after adjustment became 0.993 (p = 0.55). Therefore, the adjusted average health gain went from 15.254 to 15.907. This is an improvement of 0.653.
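These figures can be reproduced directly from the equations given in the Introduction; a quick arithmetic check using the reported RPFs and national averages is sketched below (the small differences from the quoted 15.254 and 15.907 reflect rounding of the RPFs).

```python
national_avg_postop = 35.399
national_avg_preop = 19.256

# Adjusted average health gain before and after adding clinician-reported comorbidities.
for rpf in (0.975, 0.993):
    health_gain = rpf * national_avg_postop - national_avg_preop
    print(round(health_gain, 3))  # prints 15.258 and 15.895
```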

When looking at the case-mix adjustment, the original mean adjustment was -0.83 (± 1.1). After adjusting for clinician-reported comorbidities, there was a significant change in the mean to -1.40 (± 1.4) (p = 0.000008) (Fig. 2).

Fig. 2.

Box-and-whisker chart showing the original adjustment coefficient scores compared against the clinician-adjusted coefficient scores. Original mean -0.83, adjusted mean -1.40.

Discussion

Multiple variables affect the predicted OKS score in PROMs. Though comorbidities may only account for a relatively small portion of these variables, they do have an impact on the predicted scores. These relatively small changes in the predicted scores may then have consequences for the adjusted average health gain for a provider.

In our study, we have demonstrated that, for a group of 195 patients with clinician-documented comorbidities, the adjusted average health gain went up by 0.653. At the time of this study, and based on the data from the HSCIC website, the 95% lower and upper control limits for a hospital with 192 respondents were 14.937 and 17.349, respectively.7 The number of patients in the modelled records is relevant because providers are charted on a funnel plot and, as the numbers increase, the confidence intervals narrow. Given that these control limits are narrow in high-volume providers, it is easy to imagine a hospital moving from below the lower 95% limit into the normal range on the funnel plot if its adjusted average health gain increased by 0.653. As a worked example, we took an anonymised hospital below the lower 95% confidence limit with 177 PROMs responses, close to our sample of 195 patients. The lower 95% limit was 14.887 and the hospital score was 14.259. An increase of 0.653 would give a score of 14.912, so this hospital would no longer lie outside the 95% confidence limit and would no longer be an outlier.7
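The arithmetic of this worked example is trivial but worth making explicit (values as quoted above):

```python
lower_control_limit = 14.887   # 95% lower limit for the anonymised hospital
hospital_score = 14.259        # its reported adjusted average health gain
improvement = 0.653            # the change observed in our cohort

new_score = hospital_score + improvement   # 14.912
print(new_score >= lower_control_limit)    # True: no longer below the lower limit
```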

Given the fallible nature of patient-documented PROMs data, there is a strong argument that PROMs should be interpreted with caution, especially when attempting to compare different hospitals. It is known that patients’ self-reporting of general health status varies with social class and ethnicity, which could result in the magnitude of this effect differing between hospitals. There is also a relatively poor questionnaire return rate, which will affect the quality of the overall data and may leave it vulnerable to selection bias. As we have shown, data from revisions may also be erroneously included, and this is likely to affect the final scores as revisions are more complex operations with poorer outcomes. With the push to provide revision surgery predominantly in tertiary centres, these units might well be expected to suffer from worse PROMs outcomes. The data allow for further investigation of a hospital that falls within the lower confidence interval, but they do not necessarily indicate a problem. The risk, however, is that publicly available data can have an impact on a hospital’s workload, with patients able to alter their choice of hospital based on the data presented. Small changes such as ensuring accurate comorbidity recording and excluding revision cases can help to address this.

There is likely to be an element of underreporting of comorbidities by all patients across the country and this is likely to be reflected in the adjusted average health gains. However, independent providers and some hospitals may be able to provide more support to patients when the forms are filled out pre-operatively, for example, with the help of nursing or medical staff. If this additional assistance results in more accurate recording of comorbidities, it may have an impact on the final health gains as discussed above.

On an individual patient basis, the recording of additional comorbidities is unlikely to have any clinical significance as it only affects their predicted OKS which is derived from the average. But as a whole, this may become relevant in the future if payment relies upon patients achieving their predicted OKS. At this point, it is in everyone’s best interests to ensure that the predicted score is as accurate as possible to ensure good quality statistics to adequately inform decision making processes.

In conclusion, the small change of ensuring accurate comorbidity recording can have an impact on the adjusted average health gain for a hospital. This is important, despite the limitations of the data, as this information is publicly available, and being marked as an outlier at the lower end can be damaging to the reputation of a unit. It may also be enough to move a unit from being an outlier into the normal range, and to protect against loss of earnings if payment by results is introduced.



References

1. Weldring T, Smith SM. Patient-reported outcomes (PROs) and patient-reported outcome measures (PROMs). Health Serv Insights 2013;6:61-68.

2. Shultz S, Olszewski A, Ramsey O, Schmitz M, Wyatt V, Cook C. A systematic review of outcome tools used to measure lower leg conditions. Int J Sports Phys Ther 2013;8:838-848.

3. Coles J. PROMs risk adjustment methodology guide for general surgery and orthopaedic procedures. Northgate Public Services, September 2010. https://www.england.nhs.uk/statistics/wp-content/uploads/sites/2/2013/07/proms-ris-adj-meth-sur-orth.pdf (date last accessed 13 February 2017).

4. NHS England. Patient Reported Outcome Measures (PROMs) in England: update to reporting and case-mix adjusting hip and knee procedure data. https://www.england.nhs.uk/statistics/wp-content/uploads/sites/2/2013/10/proms-meth-prim-revis.pdf (date last accessed 13 February 2017).

5. Department of Health. Patient Reported Outcome Measures (PROMs) in England: a methodology for identifying potential outliers. July 2011. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/216651/dh_128447.pdf (date last accessed 13 February 2017).

6. NHS Digital. Patient Reported Outcome Measures in England: data dictionary, version 3.4. http://content.digital.nhs.uk/media/1361/HES-Hospital-Episode-Statistics-PROMS-DataDictionary/pdf/Proms_Data_Dictionary.pdf (date last accessed 15 June 2016).

7. NHS Digital. Provisional quarterly Patient Reported Outcome Measures (PROMs) in England: April 2014 to March 2015, May 2016 release. http://content.digital.nhs.uk/catalogue/PUB20623/prov-proms-eng-apr14-mar15-data-qual-note.pdf (date last accessed 16 June 2016).