Radical improvements could be made in how data and technology are used to provide smarter services, according to the independent government advisory body, the Service Transformation Challenge Panel.
In other words, it is not just digital technology, but the data it generates, that can boost services in austere times.
While public sector data is often in the news – as open data, big data or personal data left lying about in skips – it is only recently that the potential of data analysis to improve services has become better understood.
Part of the power of data lies in combining multiple sources – open data, internal data and data held by partner organisations – to help target services, says Tom Smith, a member of the Cabinet Office’s Open Data User Group and co-founder of data research firm Oxford Consultants for Social Inclusion.
Census information, for example, which is open data, can show a council which areas have the highest numbers of children living in workless households, says Mr Smith. “So you can ask: are our library services reaching enough kids in need? Open data checked or matched against service data can give you that comparison.”
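The comparison Mr Smith describes can be sketched in a few lines: an open census figure per area set against a council’s own service data for the same areas. The area codes and counts below are invented for illustration, not real statistics.

```python
# Sketch: matching open census data against a council's own service data.
# Area codes and figures are illustrative, not real statistics.

# Open data: children in workless households per area (e.g. from census tables)
workless_children = {"Area-A": 420, "Area-B": 150, "Area-C": 610}

# Internal service data: children registered at council libraries per area
library_members = {"Area-A": 300, "Area-B": 140, "Area-C": 90}

def coverage(area):
    """Share of children in workless households reached by library services."""
    return library_members[area] / workless_children[area]

# Rank areas from worst- to best-served: the first is a candidate for targeting
gaps = sorted(workless_children, key=coverage)
print(gaps[0])  # → Area-C
```

In practice the join key would be a standard geography such as an output area code, and the service data would come from the council’s own management systems.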
Such analysis does cost money, but maintaining a research capacity in-house is “absolutely critical”, he says. “There is a real challenge for all public agencies to bring in or keep enough research skills in the organisation, when good researchers and data scientists have increasing opportunities elsewhere.
“It is much cheaper to pull out data and do the analysis than it is to target services in the wrong place,” says Mr Smith. As a striking illustration, he cites the ten-yearly UK census, the biggest government data source. This costs some £400 million each decade, but has an estimated return on investment of ten times that figure, from helping locate expensive public services, such as schools, to use by retail firms in deciding where to open shops.
Another new area of data use is to predict and mitigate future need. “One of the ways to address cuts is to look for early intervention that is cheaper and has a bigger impact down the line,” he says. “Data can be put together to find out which kids are most at risk of going into care, for example, or which older people are most in need of social care. Then councils can ask: can we use that information to put in place smaller, cheaper support that helps people get on with their lives for longer without requiring more expensive intervention?”
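The kind of prioritisation Mr Smith describes is often a simple scoring exercise before it is anything more sophisticated. The factors and weights below are illustrative assumptions, not a validated risk model.

```python
# Sketch: combining data sources into a simple risk score to prioritise
# early intervention. Factors and weights are illustrative assumptions,
# not a validated model.

WEIGHTS = {"workless_household": 2, "school_absence": 3, "prior_referral": 5}

def risk_score(record):
    """Sum the weights of the risk factors present in a record."""
    return sum(w for factor, w in WEIGHTS.items() if record.get(factor))

children = [
    {"id": 1, "workless_household": True,  "school_absence": False, "prior_referral": False},
    {"id": 2, "workless_household": True,  "school_absence": True,  "prior_referral": True},
    {"id": 3, "workless_household": False, "school_absence": True,  "prior_referral": False},
]

# Highest-risk first: candidates for cheaper, earlier support
ranked = sorted(children, key=risk_score, reverse=True)
print([c["id"] for c in ranked])  # → [2, 3, 1]
```

Real systems would calibrate such weights against historical outcomes rather than set them by hand, but the shape of the exercise is the same.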
WHICH HALF WORKS?
The final area of data value is to measure the impact of services once they have been deployed. Mr Smith says: “There’s that classic business saying – I know half of my advertising spend works, I just don’t know which half. Local authorities are in a similar position: they know some of their programmes work, but they don’t know which ones.”
Evaluation that does take place is often cursory, and can also be hampered by the fact that the impact of one public body’s service is often best measured with data held by another.
“For example, a local authority running a programme to get people back to work would have to survey users afterwards every few months, which is expensive, and some of them will move away. But the Department for Work and Pensions [DWP] already knows exactly who has a job or who has come off benefits,” he says.
“If you could just send 100 National Insurance numbers to DWP and ask how many of these are still on Jobseeker’s Allowance, you could see the impact you were having.”
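The check Mr Smith imagines is, in data terms, a set intersection that returns only an aggregate count. The NI numbers and the “DWP register” below are invented purely for illustration.

```python
# Sketch of the cross-agency check described above: a council sends a batch
# of National Insurance numbers and gets back only an aggregate count.
# All NI numbers and the "DWP register" here are invented for illustration.

programme_participants = {"QQ123456A", "QQ234567B", "QQ345678C", "QQ456789D"}

# Held by the other agency: who is still claiming Jobseeker's Allowance
still_on_jsa = {"QQ234567B", "QQ456789D", "QQ999999Z"}

def still_claiming(ni_numbers, register):
    """Return only the count, not the individuals, preserving privacy."""
    return len(ni_numbers & register)

print(still_claiming(programme_participants, still_on_jsa))  # → 2
```

Returning a count rather than a list of matches is what makes this kind of exchange palatable: the council learns its impact without either body revealing individual records to the other.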
There are a few scattered examples of this kind of collaboration, such as the Ministry of Justice project, Justice Data Lab, which helps other bodies track the effectiveness of rehabilitation programmes by reporting, as anonymised percentages, how many participants re-enter the justice system. “This is fantastic, but we should be getting sharper at it,” says Mr Smith.
GOOD FOR HEALTH
Another service area on the brink of data-driven revolution is healthcare.
With the expansion of electronic health records, telemedicine and mobile health devices, an ever-growing data pool is waiting to be exploited, says Charles Lowe, president of the Royal Society of Medicine’s telemedicine council.
The data can be valuable at a general as well as an individual level, says Mr Lowe. At the service level, anonymised data can be used to establish connections between activities or outcomes that can point to new cures and treatments for disease. “For example, if you know someone wakes up in the night a lot and two or three weeks later something more serious happens, that is going to direct your research to a wider pattern, and help you intervene more effectively the next time,” he says.
Meanwhile for the individual patient, remote monitoring can help services react the moment alarms are triggered. And the tools will soon be commonplace: the next generation of smartphones will feature sensors that can gather a wide range of health data with very little input from their owners.
“They will measure blood pressure, blood oxygen content, pulse rate, respiration, even body temperature, because when you make a phone call it looks into your ear,” says Mr Lowe.
Add to this specialist accessories to measure signs such as blood sugar level and we are entering a new world of data-driven care. But there are also concerns. With health data being highly sensitive, “people worry it is going to be given away and used to sell them stuff – or increase the cost of health insurance”, he says.
Unfortunately, there is no simple way to restrict the use of data entirely to the purposes patients desire, Mr Lowe says, since data is so easy to replicate. So how best to tackle such a sensitive issue?
New work is emerging that provides a practical route to extracting public value from data while preserving privacy, says Jeni Tennison, technical director at the non-profit Open Data Institute (ODI).
The starting point is a “privacy impact assessment” framework created by the Information Commissioner’s Office (ICO), Ms Tennison says, and new practical guidance on how to anonymise public service data is also due out in the spring from the UK Anonymisation Network, a consortium comprising the ODI, the universities of Manchester and Southampton, the ICO and the Office for National Statistics.
“This will be a little more ‘how to’. Anonymisation materials that are already available tend to discuss the problem, but are not oriented towards what you should do,” she says. “It will be a step-by-step guide on how you should deal with issues, such as looking at how data that is already available could be matched up with yours, and how this might reveal details of individuals.”
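One concrete form of the matching risk Ms Tennison describes can be tested before release: check whether combinations of seemingly harmless fields single out individuals. The sketch below is a minimal k-anonymity check on invented records, not the forthcoming guidance itself.

```python
# Sketch of one step in assessing anonymised data: checking whether
# quasi-identifiers could single out individuals when matched against
# other datasets. Records here are invented; this is a minimal
# k-anonymity check, not the UK Anonymisation Network's guidance.

from collections import Counter

records = [
    {"age_band": "30-39", "postcode_area": "OX1", "condition": "asthma"},
    {"age_band": "30-39", "postcode_area": "OX1", "condition": "diabetes"},
    {"age_band": "70-79", "postcode_area": "OX2", "condition": "asthma"},
]

QUASI_IDENTIFIERS = ("age_band", "postcode_area")

def min_group_size(rows):
    """Smallest number of records sharing the same quasi-identifier values."""
    groups = Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in rows)
    return min(groups.values())

# k = 1 means at least one person is unique on these fields and could be
# re-identified by matching against another dataset holding age and postcode.
print(min_group_size(records))  # → 1
```

A publisher finding k = 1 would typically coarsen the fields, for instance widening age bands or dropping the postcode area, until every combination covers several people.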
Most government departments and agencies accept it is right to release their data, though they can be overcautious, Ms Tennison says.
“In general, there is not much resistance among civil servants to the idea of being open because they are public servants and they get that,” she says. “The culture you run into is traditional risk aversion, where if you can be less risky by not publishing then you don’t publish, even when actual risk or potential impact is low.
“The civil service is right to be cautious, but there is a limit to that caution, and thinking it through step by step should help make sure we get to a good compromise between openness and privacy.”