‘What the budget numbers tell us, and what they don’t’, an RSS getstats parliamentary panel event, took place on 19 March. Read on for a brief account of the discussion.
Budgets are ‘political’ and interpretation of the numbers they present will always be ‘pluralist’, the RSS getstats panel audience was told - the event taking place a day before Chancellor George Osborne did his best to prove the point.
Doctors need at least a minimum understanding of basic statistics; without it, they put patients and their professional integrity at risk. That surely is a lesson from the report of the Mid-Staffs inquiry and now the enforced closure of a children’s heart unit at Leeds.
Doctors will complain their training curriculum is already crowded – they must not only master medicine but also acquire personal and business skills and (if they are to play a part in the brave new world created by the Health and Social Care Act 2012) learn to be managers too. But without stats, how can they know whether a therapy or an intervention (or the performance of surgical colleagues) is effective and safe?
We’ve been here before, but that doesn’t make the pain of statistical abuse any lighter.
A government, down in the polling dumps, gets anxious to extol its policies. It seizes eagerly on any sign they are working. Temptation looms, in the shape of exaggerating or, as some would say, actively misinterpreting data.
Statisticians may sometimes seem over-meticulous and detail-obsessed, but if anyone can see the wood for the trees, it’s them. By checking through detail, they are really just bringing everything together so that they can look at the big picture.
At the weekend, in an interview on Radio 4, the ONS’s chief economist Joe Grice said that the ‘did it/didn’t it?’ debate around whether the UK had suffered a double-dip recession was “counter-productive”, and that we’d all do better to worry less about shorter-term figures – “whether one particular quarter was up a smidgen or down a smidgen” – and instead stay focused on the big picture and the general direction of trends in the economy over time.
Last year there was a surge of measles in England and Wales, and already this year health authorities in South Wales and the north east of England are reporting spikes in cases of a disease that had been on its way out – thanks, says NHS Choices, to the success of the MMR vaccine.
A causal link can’t definitively be made between the Wakefield case in 1998 and the way, then and since, certain media – notoriously the Daily Mail – have campaigned against immunization. But rates of vaccination did slip, probably because parents had read and believed reports linking MMR to autism. There’s more here about the disease.
This morning’s Radio 4 Today programme included an interview with Nate Silver, the statistician and analyst renowned for predicting the most recent US election results via models based on electoral history, demographics and polling. He correctly called all 50 states in the US Presidential election. His stock is now very high and he is viewed by many as the go-to predictor in the political arena. But he is determined not to be misunderstood or considered infallible. Indeed, he very humbly forecast his own future ‘unravelling’. As numbers go up and down and failure follows success, he could also see that, at some point in the future, he will, undoubtedly, get things wrong, maybe even very wrong.
The interview focused quite a bit on the distinction between predicting and forecasting. Nate encouraged us not to trust anybody who is too confident in their predictions, especially long-term predictions. Predicting the economy more than 3-6 months ahead is “nearly impossible”. Political pundits have been found to have no more than a 50-50 chance of turning out to be correct: most do no better at predicting election results than if they had just flipped a coin.
We’re not all statisticians but we are all, to some extent, programmed to reason statistically. In a bid to make sense of the world around us, we compare, contrast, look for patterns and are drawn to a statistical technique called correlation, a way of measuring the extent to which a change in one measurable thing – a ‘variable’ – is associated with the change in another measurable thing.
Indeed, you can calculate the correlation between pretty much any two things which can be quantified, counted and measured. But statistics doesn’t operate as a bare set of techniques; its value is in providing insight into a problem. So if you are going to calculate correlations, it makes sense for there to be some reason for doing it, i.e. because you want to take action of some kind. So there’s no point in looking for correlation between things which aren’t measurable, such as eye colour and personality traits, or between things which are clearly connected through a shared underlying variable, e.g. breast cancer and wearing skirts (both largely reflecting sex). Or between entirely unrelated things, such as how many tattoos someone has and the amount of jam they eat each week. Any connection, mirroring or linearity found there has to be down to chance, and ‘findings’ of that kind won’t tell you anything useful.
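To make the point concrete, here is a minimal sketch in Python of what correlation measures. The `pearson_r` function and the tattoos/jam variable names are illustrative inventions for this example, not part of any real dataset: with genuinely unrelated random quantities, the correlation coefficient typically comes out close to zero.

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

random.seed(0)
# Two entirely unrelated, made-up 'variables': tattoo counts and
# weekly jam consumption in grams, each drawn independently at random.
tattoos = [random.randint(0, 5) for _ in range(1000)]
jam = [random.uniform(0, 500) for _ in range(1000)]

# With independent data, r hovers near zero; any apparent association
# in a sample like this is down to chance.
print(round(pearson_r(tattoos, jam), 3))
```

By contrast, two variables that move exactly in step (say, a quantity and twice that quantity) give r = 1, the other extreme of the scale running from -1 to +1.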