Bad measures make bad data

Bad measures make bad data. And bad data is a poor premise for making decisions about expenditure and delivery in social programmes.
9 Aug, 2016

Blog written by Gen Maitland Hudson – Head of Evaluation and Impact Assessment


I was interviewed about the problems of bad data in the context of the Troubled Families (TF) programme for Newsnight on Monday. TF, which is now under review following a damning independent evaluation, is an example of just how unhelpful unreliable data can be.

The government data process for the programme demonstrated the worst combination of poorly aligned Payment by Results incentives, inadequate research methodology and unreliable reporting. This produced bafflingly high success rates at a time of rising need for the most disadvantaged, and more generalised local authority spending cuts.

Success rates of 100% within social programmes should always be treated with caution. In the context of TF, their regularity is a large, waving red flag.

These astonishing success rates are not corroborated by data held by individual local authorities and released under the Freedom of Information Act. FOI data should itself be treated with heavy caveats since it presents considerable interpretative variation, but in the case of TF the discrepancies between FOI information and government data releases are so consistently high that they clearly point to doubtful record keeping somewhere along the line.

The simple fact is that the government data releases for the programme tell us nothing about how good, bad or indifferent it was in different local areas. They cannot help us to direct funding; as distorted, poor-quality information, they leave us worse off when making budgetary and policy decisions.

If TF got it so wrong, how could we start getting it right?

One way is to make data collection a matter of low-stakes rather than high-stakes accountability. Ensuring that payment is not fully dependent on a narrow set of results reduces the incentive to game the data. Another is to ensure that the measures you introduce offer a good reflection of the programme's aims and scope, are embedded directly into delivery, and are regularly sense-checked with service providers and users. This will reduce the likelihood of wildly unrealistic outcomes data.

At Power to Change we are aiming to do these things by supporting community businesses to collect systematic, reliable information from their customers, clients and members that will support their business decisions. The primary aim of our data collection is usefulness at the frontline rather than impressing commissioners with glowing aggregate figures. When considered in the light of robust financial and governance information, this should start to give us a reasonably accurate picture of the ways in which community businesses contribute to places that are valued by their residents. Better measures, we hope, for better data.

An analysis of the TF dataset compared to FOI data is available for download here.