Tuesday, July 26, 2016

Why is measuring worker outcomes so challenging for workers’ compensation systems?

Those who make laws and policy do so to achieve certain outcomes.  Unfortunately, meaningful measurement of those outcomes in workers’ compensation is lacking.

Workers’ compensation outcomes seem straightforward enough.  Safe workplaces.  Fair compensation for work injuries.   Effective treatment and rehabilitation.   Safe, timely and durable return to work.   There are others, but even these basic worker outcomes (or indicators related to them) are rarely reported. 

The most obvious barrier to reporting worker outcomes is that they are hard to measure.  Unlike “dollars spent” or “new claims received”, which are very objective input and process measures, worker outcome measures require a great deal of time and effort to develop, track, and report.  

Measuring “safe, timely and durable return to work”, “impact of disability on future earnings”, or “worker satisfaction with the claim process”, for example, requires specific definitions for these important terms and a mechanism for consistently assessing cases and reporting outcomes on a timely basis.

Often, the only way to get the data is through interviews well after the last temporary disability payment has been sent and the claim closed.  Many organizations are just not willing to put the time, staff resources and money into getting the data needed to produce a credible outcome measure on a timely basis.

One way to overcome this barrier is to be selective in the population under study.  Focusing on a few injury types, industries, and occupations can make the process easier.  Sampling techniques can reduce the number of cases (and related time, effort, resources and costs) needed to get a representative result.  Often, the lower precision of the outcome measure is an acceptable price to pay for increased timeliness of the analysis. 
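As a rough illustration of that trade-off, here is a minimal sketch (hypothetical numbers, using the standard sample-size formula for estimating a proportion) of how quickly the required number of interviewed cases shrinks when a wider margin of error is acceptable:

```python
import math

def sample_size_for_proportion(margin_of_error, confidence_z=1.96, expected_p=0.5):
    """Minimum simple-random-sample size to estimate a proportion
    (e.g., the share of claims with a durable return to work) within
    +/- margin_of_error at roughly 95% confidence (z = 1.96)."""
    n = (confidence_z ** 2) * expected_p * (1 - expected_p) / margin_of_error ** 2
    return math.ceil(n)

# Hypothetical precision targets: relaxing the margin from +/-2 to +/-5
# percentage points cuts the interview burden by a factor of about six.
print(sample_size_for_proportion(0.02))  # ~2401 cases
print(sample_size_for_proportion(0.05))  # ~385 cases
```

That is often exactly the kind of precision-versus-timeliness bargain described above: a somewhat wider confidence interval in exchange for a study that can actually be completed and reported while the results still matter.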

The second barrier is a little less obvious but vitally important.  Worker outcome measures are meaningless without an appropriate basis on which to assess the reported result.  Even if you put the money, time and effort into developing and reporting on a given worker outcome, how is a policy maker or other stakeholder to know if the reported outcome is good or bad?  Without a credible parallel measure to compare with, an outcome measure may only provide year-over-year change data.  Comparator data is notoriously hard to come by.

No matter what population you decide to focus your outcome measure on, you are going to need something to compare your result against.  Where are you going to find that data?  You can’t just use another jurisdiction’s data without first adjusting for factors that might otherwise impact the outcome—factors like age, gender, industry, and occupation.  Not only that, privacy rules are likely to add to the data acquisition challenge.

Consider “duration of temporary disability”—arguably an important outcome for injured workers (who suffer the financial and physical losses while away from work) and employers (who pay the claim costs and consequences of worker absence including backfilling costs, lost productivity, etc.).  One would expect cases of similar work-related injuries in similar occupations and industries to have similar outcomes, assuming other factors are also similar.  Differences in outcomes across jurisdictions selected for comparison can be a fabulous starting point for exploring policy and practice impacts on worker outcomes.  But I can tell you from experience, getting timely comparable data from jurisdictions outside your own is a Herculean task.
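To make the adjustment idea concrete, here is a minimal sketch of what a case-mix-adjusted comparison of time-loss duration might look like, assuming a pooled claim-level extract with hypothetical file and column names (duration_days, jurisdiction, age, gender, injury_type, industry); this is not the method of any particular study:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pooled claim-level extract; the file and columns are assumptions.
claims = pd.read_csv("claims_extract.csv")

# Regress time-loss duration on jurisdiction while controlling for case-mix
# factors; the jurisdiction coefficients then approximate adjusted differences
# in duration relative to the reference jurisdiction.
model = smf.ols(
    "duration_days ~ C(jurisdiction) + age + C(gender) + C(injury_type) + C(industry)",
    data=claims,
).fit()
print(model.summary())
```

In practice, researchers typically use log-transformed durations or survival models because time-loss distributions are heavily skewed; the ordinary least-squares form here is only meant to show where the case-mix controls enter the comparison.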

This example raises a third barrier, the “Do I really want to know?” challenge.  Outcome measurements that are rigorously developed and reported with appropriate comparators start discussions and raise questions that some may not want to consider. Fear that the results of outcome measurement will make a jurisdiction “look bad” may be the biggest unspoken reason for avoiding the whole process, or the real reason behind stated objections to involvement (funding, providing data) in worker outcome research.  This barrier applies both to the jurisdiction developing the measure and any other jurisdictions approached to participate as a comparator.

At this point, you can understand why outcome measures are rarely reported in workers’ compensation.  Yet, if you are interested in improving workers’ compensation systems, outcome data across jurisdictions is essential.  When you come across well-developed outcome measures from multiple jurisdictions, it is like finding a vein of pure policy gold in the mountains of workers’ comp statistics, reports and data out there.

A couple of recent studies demonstrate how the commitment of participating jurisdictions and the dedication of researchers have overcome these barriers.  These research products provide credible, useful outcome measures and analysis that policy makers and stakeholders can use to evaluate system performance and improve workers’ compensation.  Each study involves very large sample sizes and matched data sets that control for variations in many factors (such as injury type, industry mix, age, and gender).
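Before turning to the studies themselves, here is a toy sketch of the matching idea (not either study’s actual method): compare outcomes only within case-mix cells that exist in both jurisdictions, so that differences in the mix of claims do not drive the result. The file and column names below are assumptions for illustration.

```python
import pandas as pd

# Hypothetical claim extracts from two jurisdictions; columns are assumed:
# injury_type, industry, age_band, gender, duration_days
a = pd.read_csv("jurisdiction_a.csv")
b = pd.read_csv("jurisdiction_b.csv")

keys = ["injury_type", "industry", "age_band", "gender"]

# Mean duration within each case-mix cell, kept only where both jurisdictions
# have claims, so the comparison is like-for-like rather than driven by mix.
cell_a = a.groupby(keys)["duration_days"].mean().rename("a_days")
cell_b = b.groupby(keys)["duration_days"].mean().rename("b_days")
matched = pd.concat([cell_a, cell_b], axis=1, join="inner")
print((matched["a_days"] - matched["b_days"]).describe())
```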

Alex Collie, Tyler J. Lane, Behrooz Hassani-Mahmooei, Jason Thompson, and Chris McLeod, “Does time off work after injury vary by jurisdiction? A comparative study of eight Australian workers' compensation systems”, BMJ Open 2016;6:e010910, doi:10.1136/bmjopen-2015-010910

This study examines more than 90,000 claims and controls for demographic, worker, and employer factors; it shows conclusively that the jurisdiction in which an injured worker makes a compensation claim has a significant and independent impact on the duration of time loss.  (Free online article.)

Bogdan Savych and Vennela Thumula, “Comparing Outcomes for Injured Workers in …”, WCRI, May 2016

This study (or, more accurately, a series of parallel studies) examines worker outcomes in each of 15 states (Arkansas, Connecticut, Florida, Georgia, Indiana, Iowa, Kentucky, Michigan, Massachusetts, Minnesota, North Carolina, Pennsylvania, Tennessee, Virginia, and Wisconsin) using claim and interview data from very large samples in each jurisdiction. Each study controls for the mix of industry and the financial severity of the claim.  In the “Data Book” supplements for each jurisdiction, the authors provide worker outcome data unadjusted for case mix and additional detail on the return-to-work accommodations provided in both successful and unsuccessful cases. (Limited free viewing and free policy-maker registration for the webinar; low cost for others.)

Neither of these examples identifies the specific policy features that may account for the outcome differences—that was never their purpose.  System features such as the presence (and length) of the waiting period, the rate of compensation, mandatory reinstatement laws, specific vocational rehabilitation programs, and insurance arrangements (exclusive state fund, competitive state fund, or private provision) are a few candidates for stakeholders and policy makers to consider.

Another series of studies, which does attempt to evaluate the impact of legislative changes on the specific worker outcomes of post-injury earnings and the adequacy of compensation, is highlighted in a recent research summary:

Emile Tompa, R. Saunders, C. Mustard, and Q. Liao, “Measuring the adequacy of workers’ compensation benefits in Ontario: An update”, IWH Issue Briefing, March 2016

This summary updates the analysis of benefits adequacy in Ontario by looking at more recent cohorts of permanently impaired workers’ compensation beneficiaries following the 1998 changes to Ontario’s workers’ compensation legislation.  The update does not directly compare other jurisdictions, although its methods build on prior research; it does demonstrate the complexity of the analysis required in outcome work.  Prior research in this series used data from British Columbia and Ontario as well as comparable data from taxation data sets, and previous work by RAND and other research groups has examined data from California, New Mexico, and Washington state, among others.  (Issue briefings and previous IWH research are available for free online viewing.)
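As a simplified sketch of how benefit adequacy is often framed in this line of research (hypothetical figures; the actual studies build multi-year earnings projections from matched comparison workers and taxation records):

```python
def benefit_adequacy(projected_earnings, actual_earnings, benefits_paid):
    """Ratio of benefits received to the earnings loss attributable to injury,
    where the loss is projected earnings absent injury minus actual earnings.
    A value of 1.0 would mean benefits fully replace the lost earnings."""
    earnings_loss = projected_earnings - actual_earnings
    return benefits_paid / earnings_loss

# Hypothetical worker: $50,000 projected earnings, $30,000 actually earned,
# $12,000 in wage-loss benefits -> 60% of the $20,000 earnings loss is replaced.
print(benefit_adequacy(50_000, 30_000, 12_000))  # 0.6
```

Even this toy version hints at the hard part: the “projected earnings absent injury” term is unobservable and must be estimated, which is precisely why these studies need long follow-up periods and carefully matched comparison data.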

By the way, although the results of these high-quality, peer-reviewed research efforts may be “free” to stakeholders and policy makers everywhere, the research itself has significant costs.  Not all workers’ compensation jurisdictions financially support workers’ compensation research, but thanks to those that do, every system can benefit from research findings. 

Getting those findings takes qualified researchers familiar with the data, jurisdictions committed to providing data and data definitions, and hours of work by analysts, not to mention the infrastructure necessary to secure and maintain the integrity of the data, review the research, publish it, and transfer that knowledge to those who can act on it.  Understanding workers’ compensation is not a trivial undertaking for anyone, including skilled and capable researchers.  It is doubtful that any of this research could have been undertaken without the knowledge base and experience with workers’ comp data evident among the researchers leading these studies.


Bottom line:  Worker outcome measures are challenging but essential to assessing and improving workers’ compensation systems.  A few jurisdictions have invested their time and resources to demonstrate just how valuable this sort of research can be.  Every jurisdiction should actively pursue worker outcome research by contributing data to comparative efforts, funding workers’ compensation research, and developing research talent in workers’ compensation.
