Resolve to make the world more statistically savvy

Written by Web News Editor. Posted in Features

Happy 2013! An opportunity for a fresh start, a new era. A time when most of us (let’s admit it) have given some thought to New Year resolutions. But which resolutions? And how good are we at keeping them?
 
The top 10 most common resolutions* are listed at Statistic Brain and we can reveal that… at number 1 we have… ‘Lose weight’ and at number 10 we have ‘Spend more time with family’. (So, not much emphasis on improving society or the environment then?) However, ‘Learn something new’ is on the list too, and all the evidence is that we are more likely to succeed with education-focused self-improvement than we are to lose the pounds.
 
So if education looks to be the way to go and you are thinking of strengthening your stats know-how and skills, why not start by taking our stats quiz and see how you do?
 
Be inspired by Statistics 2013, the International Year of Statistics. This statistics awareness and engagement initiative is supported by over a thousand organisations worldwide, united in their ‘resolve’ to strengthen statistical understanding and good use of stats.
 
Looking for more ideas? Reading ‘Some Statistical Habits to add or subtract in 2013’, and ‘Tips for a Statistically Savvy 2013’ by Carl Bialik for the Wall Street Journal, both on developing statistical resolutions, should help.
 
And to help us further fine-tune our resolutions, please let getstats know about any statistical bugbears – in the media, in the workplace, in school, in parliament or in any other aspect of life – you would like to see eradicated in 2013.
 
___________________________
 
*Although based on (not so recent!) US research, the list’s ‘weight loss, get fit, spend less/save more’ refrain is timeless, universal and uncomfortably familiar, and so, ‘just for fun’, we took a closer look. The Statistic Brain listings were published in December 2012, citing the University of Scranton, Journal of Clinical Psychology as the source. The underpinning research, ‘Auld Lang Syne: Success Predictors, Change Processes, and Self-Reported Outcomes of New Year’s Resolvers and Nonresolvers’ by Dr John Norcross and colleagues, was originally published in 2002. The research is based on a study of 282 ‘resolvers’ and ‘non-resolvers’ carried out over 1995–96.
 

Health reporting may be on the mend

Written by Web News Editor. Posted in Features

NHS Choices reckons health reporting has been improving, with ‘wonder cures’ hitting the headlines less often and peer-reviewed medical reports covered more responsibly. But one paper, the Daily Express, is a ‘dishonourable exception’ to the trend, according to the Department of Health-supported information site, which monitors how reports of new therapies and treatments are handled by the media.
 
However, ‘headlines can often give a different impression’. Rely on them and you may get a sensationalist and mistaken sense of the news.
 
It’s not that news media have stopped churning out stories about miracle cures. Take two examples from last year: that curry might help stave off dementia and that exercise could change your DNA. In both there’s a kernel of truth: the former reported animal studies and the latter was based on informed speculation about gene expression. But the facts did not lend themselves to either the headlines or the ‘story’ as presented.
 
Journalists still cannot be trusted with the evidence, especially when it is a matter of probability and the subtle distinction between absolute and relative risk. Last June the media ran with a story linking men’s tea consumption and prostate cancer. There is a link, but one so tenuous as to make the story a piece of scaremongering.
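To see why that distinction matters, here is a minimal Python sketch using purely illustrative figures (not those from the tea study): a headline-friendly ‘50% higher risk’ can correspond to a very small change in absolute risk.

```python
# Illustrative (hypothetical) figures only -- not taken from the tea study.
baseline_risk = 0.02        # assumed risk in the comparison group: 2%
relative_increase = 0.50    # the headline: "50% higher risk"

exposed_risk = baseline_risk * (1 + relative_increase)
absolute_increase = exposed_risk - baseline_risk

print(f"Relative increase: {relative_increase:.0%}")                  # 50%
print(f"Risk moves from {baseline_risk:.0%} to {exposed_risk:.0%}")   # 2% to 3%
print(f"Absolute increase: {absolute_increase:.1%}")                  # one percentage point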
 
During the same month another statistical story ran, along the lines that we are 14% more likely to die on our birthday than on any other day of the year. Unpack the figures and it turns out to be a lot less mysterious.
 
Unfortunately for bloggers, Tweeters and journalists, the message in 2013 remains the same as during last year: being accurate may mean junking the story, unless you are prepared to educate your readers in the subtleties of risk and probability.
 
 

Chances are we all get probability

Written by Web News Editor. Posted in Features

With advances in technologies like cancer screening, we need to be as clear as possible when stating results in terms of probabilities.
 
It’s not just patients who sometimes find risks and probabilities difficult to understand. Doctors can be challenged by them too.
 
In an experiment in 2004, psychologist Professor Gigerenzer and colleagues at the Max Planck Institute for Human Development set a group of experienced doctors the following problem:
 
About 1% of women have breast cancer, and a cancer screening method can detect 80% of genuine cancers but also has a false positive (or false alarm) rate of 10%. What is the probability that women whose test produces a positive result actually do have breast cancer? Most of the doctors thought it was 70%.
 
Another set of doctors were asked the same question, but this time they were given the data as whole numbers. They were told that 10 in every 1,000 women have breast cancer and that, of these 10, 8 will give a positive screening result… while of the 990 who do not have cancer, 99 will produce a false positive result. Asked to estimate the probability that women with a positive result have cancer, most of the doctors could see that it was 8/(99+8), so roughly 7.5%.
 
Changing raw probabilities into hard numbers helps make things much clearer. Indeed, simply changing ‘8% of people’ to ‘80 people out of every 1,000’ can make a big difference to understanding.
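The same sum can be written out explicitly. The short Python sketch below reproduces the figures quoted above, first via Bayes’ theorem on the raw probabilities and then via the natural-frequency (‘counting heads’) route; both arrive at roughly 7.5%.

```python
# Figures from the screening example above
prevalence = 0.01             # 1% of women have breast cancer
sensitivity = 0.80            # 80% of genuine cancers are detected
false_positive_rate = 0.10    # 10% of healthy women get a false alarm

# Route 1: Bayes' theorem on the raw probabilities
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive_rate
p_cancer_given_positive = prevalence * sensitivity / p_positive
print(f"P(cancer | positive) = {p_cancer_given_positive:.3f}")   # about 0.075

# Route 2: the same calculation as natural frequencies, per 1,000 women
women = 1000
with_cancer = women * prevalence                                 # 10 women
true_positives = with_cancer * sensitivity                       # 8 of them test positive
false_positives = (women - with_cancer) * false_positive_rate    # 99 false alarms
print(f"{true_positives:.0f} / ({true_positives:.0f} + {false_positives:.0f})"
      f" = {true_positives / (true_positives + false_positives):.3f}")
```

Either way the answer is the same; only the second presentation makes it obvious.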
 
Knowing how the human mind computes clearly helps when it comes to deciding how best to communicate probability. Percentages depend on your base figure, and context doesn’t always make the base figure clear. The same information presented as ‘counting heads’ clearly spells out the group being referred to at each stage, and so avoids the problem.
 
So chances are we do all get probability; it’s how it is explained and communicated that matters most. And when it comes to worrying about having or not having a serious condition, that can make the difference between reassurance and added anxiety.
 
See the Understanding Uncertainty site for a great animation on screening.
 

Drive for better stats consumption and production in the voluntary sector

Written by Web News Editor. Posted in Features

The push is on to help charities and the voluntary sector to be better consumers and producers of statistics. At a recent meeting convened at the Royal Statistical Society, representatives of the National Council for Voluntary Organisations, the UK Statistics Authority, the Third Sector Research Centre and major players in the sector such as the Joseph Rowntree Foundation resolved to work together to improve uptake and training.
 
For its part, the government, in the shape of the Office for Civil Society (part of the Cabinet Office), is mounting a new community survey to provide data on volunteering and charitable activities, a partial replacement for the Citizenship Survey, which the Communities and Local Government Department decided to end two years ago.
 
The third or voluntary sector is data rich. Delivering services depends on good knowledge of people and social conditions; charity trustees and grant givers need to measure the effectiveness of their work; voluntary bodies produce data about the people and causes they serve.
 
But this ‘sector’ is highly differentiated. Many charities are tiny and intensely local. They may lack the time, energy and capacity to conduct reliable sample surveys, read spreadsheets or advance much beyond story-telling and subjective impressions. ‘Evidence is the plural of anecdote,’ the RSS meeting was told. Few charities have the wherewithal to mount a randomised controlled trial, the gold standard for assessing the effectiveness of social interventions.
 
Official statistics are not always useful, because they cannot be broken down to the local areas, communities and estates where charities operate.
 
Can norms around the use of data, statistics and evidence be changed? Only if a concerted effort were to be made to educate trustees and staff, and to provide consultancy and external assistance – which is where RSS getstats comes in. But ‘capacity building’ funds of the kind that were available under the previous government have been cut.
 
In a report last year the UK Statistics Authority saw the need for ‘ongoing support and closer engagement with experts’. Umbrella groups such as the National Council for Voluntary Organisations (NCVO) should get closer to those producing statistics, and shape their decisions about what data to collect.
 

When no change is big news: RPI announcement

Written by Web News Editor. Posted in Features

After a short review, the Office for National Statistics has decided to leave the Retail Prices Index (RPI) unchanged. Instead, a new additional index of inflation – RPIJ (the ‘J’ stands for Jevons, the geometric formula it uses) – will be brought in by March 2013.
 
It’s a very important issue. The RSS statement on the announcement spells it out: “how inflation measures are calculated is not just a technical issue for professional users but one of widespread public importance affecting millions of UK citizens”. RPI and Consumer Prices Index (CPI) figures affect government outgoings such as pensions and benefits, as well as taxation levels.
 
The National Statistician’s decision to leave the RPI broadly as it is (with some tweaking to the measurement of private housing rents) means that index-linked final salary schemes are safe for now – these would have been affected if the RPI had been reduced. Train fares, student loans, some utility bills and more might also have been affected, but stay the same for now… the list goes on.
 
What are the RPI and CPI? The RPI has been around since the late 1940s and is a measure of the change in the cost of a basket of retail goods and services. The CPI was taken up in the 1990s and measures inflation in the prices of consumer goods and services purchased by UK households. They sound the same but they are different. The RPI doesn’t include households in the top 4% income bracket, or pensioners who rely on state benefits for at least 75% of their income. The RPI tracks owner-occupier housing costs, including mortgage interest costs and council tax. The CPI doesn’t.
 
Perhaps most importantly, the RPI and CPI use different types of averages to reduce 180,000 individual price quotations – a monthly ‘basket’ of goods and services – into fewer individual item indices. The RPI uses arithmetic averages, whereas the CPI generally uses another kind of average, the geometric mean. The latter is always less than or equal to the arithmetic mean. Generally, this makes the CPI around one percentage point below the RPI.
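A toy calculation (with made-up price relatives, not ONS data) shows the direction of the effect: for any set of positive numbers the geometric mean never exceeds the arithmetic mean, so an index built on the Jevons formula tends to come out a little lower.

```python
import math

# Hypothetical price relatives (new price / old price) for a handful of items
price_relatives = [1.02, 0.95, 1.10, 1.04, 0.98]

# Arithmetic mean of the relatives (the kind of average the RPI uses)
arithmetic_mean = sum(price_relatives) / len(price_relatives)

# Geometric mean of the relatives (the Jevons formula, as generally used in the CPI)
geometric_mean = math.prod(price_relatives) ** (1 / len(price_relatives))

print(f"Arithmetic mean: {arithmetic_mean:.4f}")   # 1.0180
print(f"Geometric mean:  {geometric_mean:.4f}")    # 1.0167 -- never above the arithmetic mean
```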
 
Statisticians have long been concerned that these statistical treatments should generate such a substantial difference between the two indices – other countries do not seem to have the same issues. There can be no case for a continued ‘pick ‘n’ mix’ approach, using one or the other index in different situations according to preference. The ONS review was prompted by a desire to address the ‘formula effect’ – the gap between the RPI and CPI – and to find out more about how these differences arise.
 
Watch this space. The new RPIJ measure will be published in March 2013. In the meantime, the ONS is continuing to pursue its research programme in the area of consumer price statistics and to work with users to maintain their quality.
 
 

It all depends what you mean by average

Written by Web News Editor. Posted in Features

Statisticians often make quite a fuss about the various ways of measuring the average – and that’s because averages used wrongly can give a very misleading impression.
 
The following story, based on a survey of 2,000 drivers, appeared in the Metro newspaper, and it raises quite a lot of questions.
 
 
A typical driver will jump 87 red lights, spend 99 days stationary on gridlocked roads and share 680 kisses during a lifetime behind the wheel. Motorists will get stuck in traffic 10,000 times, make 1,992 phone calls and check for emails or texts more than 1,000 times.  The average driver will cover 269,296 miles. And the typical adult will have sex 4 times in the car from the age of 17.
 
 
The best place to start is with the numbers. How reliable are they? Does anyone really know the number of red lights they jump, or the number of kisses they share? Why is the number of phone calls (1,992) so precise, while the number of times stuck in traffic (10,000) is a round number which sounds like a guess? And how can it possibly be justified to quote the distance travelled by the average driver (269,296 miles) to six significant figures?
 
Then there are the issues to do with the different types of average. It is not clear whether the figures are means, medians or modes – or a mixture of all three. Perhaps the most common number of red lights to jump is 87, in which case it is the mode. Or perhaps 50% of motorists jump fewer than 87 red lights and 50% jump more, in which case it’s the median. And if the total number of red lights jumped divided by the total number of motorists comes to 87 it’s the mean.
 
But does this distinction matter? Yes! Look at the last sentence in the story. My guess would be that motorists are divided into two groups: those who do and those who do not. And I would further guess that those who do not are in the majority. So the modal number of times for a motorist to have sex in a car is, I imagine, zero. And then for those who do, both the mean and median will be much higher than 4.
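A quick Python sketch with invented numbers (nothing to do with the actual survey) shows how far the three averages can pull apart when two very different subgroups are lumped together.

```python
from statistics import mean, median, mode

# Invented data: 70 motorists answer 0, 30 give assorted non-zero answers
responses = [0] * 70 + [4, 5, 6, 8, 10] * 6

print(f"Mode:   {mode(responses)}")      # 0 -- the most common answer
print(f"Median: {median(responses)}")    # 0 -- more than half answered 0
print(f"Mean:   {mean(responses):.1f}")  # 2.0 -- pulled up by the non-zero minority
```

Quote just ‘the average’ from data like these and each of the three figures tells a very different story.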
 
Combining two quite distinct subgroups can give a completely misleading impression. It’s not just a matter of saying what sort of average you are using, but also thinking carefully about whether any sort of average is appropriate at all.
 
goodStats (communicating the information in numbers)
 
Some great and some not-so-great stats and thoughts on how bad stats can be made good
 

Neil Sheldon has taught at The Manchester Grammar School for 40 years. He is a Chartered Statistician and Fellow of the Royal Statistical Society. He has been an RSS Guy Lecturer since 2007. He is also course leader for the Certificate in Teaching Statistics offered by the RSS Centre for Statistical Education.

 

New school league tables - A user's guide

Written by Web News Editor. Posted in Features

Scotland, Wales and Northern Ireland have taken the decision not to publish school performance indicators. This is not the case in England. Last week saw the release of the DfE’s school league tables based on the results of Summer 2012 exams in 4,000+ state and independent secondary schools.
 
During the same week, the British Academy’s Policy Centre published ‘School League Tables: a short guide for head teachers and governors’. The guide’s author, Harvey Goldstein FBA, Professor of Social Statistics at the University of Bristol, warns that school league tables seem to offer an easy way of seeing which schools are doing well compared to others, but that they are too simplistic a measure to adequately examine the relationship between the quality of what schools provide and the results of tests and exams.
 
The guide will make head teachers and school governors (and – in our view – parents, press and policy makers too) more confident in their knowledge of what school league tables can and cannot tell them. getstats encourages anyone who might benefit from the guide to take a read.
 
For more on measuring school performance, including a review of the available evidence to determine the benefits and the problems associated with use of school league tables, see ‘Measuring Success’, a report written by Professor Goldstein and Beth Foley and published by the BA’s Policy Centre in March 2012.
 
For further data presentations and visualisations around school performance, see the Guardian’s DataBlog.
 
