Addressing the deficits in statistical thinking and data analysis skills across academia should be a priority, says RSS fellow Darren Dahly, principal statistician and senior lecturer in research methods at HRB Ireland Clinical Research Facility Cork. Here he explains why.
We see the impact of poor research methods everywhere around us. Reviews focusing on the appropriate use and reporting of research methods are almost universally depressing. Chalmers and Glasziou have opened our eyes to massive research waste. Ioannidis tells us that most research findings are false. Reproducibility, or lack thereof, is on many of our minds. And of course there is the late Doug Altman warning us of 'the scandal of poor medical research', way back in 1994. Had we only listened.
We are now to the point where the enemies of science, those who fear informed citizens and decision-making, are using these problems to undermine the public’s trust in our work (not that we aren’t doing a pretty good job of that ourselves). Make no mistake, science is pointless without trust, and given the serious threats we are facing as a species, it’s not a tool we want to do without.
I am not the first person to point out these issues, or even the thousandth. Yet there seems to be a substantial proportion of stakeholders who don’t take them seriously. It is of course understandable that some individual scientists might not be interested in reform, particularly if their career has thrived without it. It is harder to understand the relative lack of effort on the part of universities and funders to address these problems — and by 'address', I mean with actual money that isn’t many orders of magnitude less than that spent on research itself.
Perhaps the relative lack of action is because it’s hard to know where to start. At the risk of sounding self-serving, I would like to offer the following suggestion: many of the problems just described are substantially driven by deficits in statistical thinking and data skills that are common across the sciences. Below I hope to justify this position, and offer ways that these deficits might be addressed.
More statisticians? It’s not that simple.
An understanding of statistics* is required to properly design, analyse and interpret scientific studies. While most studies would thus benefit from the involvement of an experienced statistician*, almost everyone seems to agree that there aren’t enough of them to meet this need. So shouldn’t we just fund more statisticians? To unpack this question, I think it’s important to first clarify what kinds of statisticians we probably need more of, and to consider the relative merits of consultation vs collaboration.
Wait, there are different kinds of statisticians?
There is a common misconception among researchers about what it is that academic statisticians do. Many seem to believe that they are sitting around, waiting to be asked to contribute to a project where their input would be helpful, perhaps eager to be rewarded with authorship. Yet, at the same time, researchers also frequently commiserate on how hard it is to get one of these statisticians to help. So with apologies to anyone who already understands this, please allow me to clarify: academic statisticians tend to want to conduct their own research, just like any other academic. It’s just that the usual topic of their research is statistics itself. We’ll call these people theoretical-statisticians. Every so often, the theoretical-statistician’s methodological interests will overlap with a substantive research question, and a fruitful, even long-term, collaboration may thus emerge. However, I think this is the exception, not the rule. So while you might work at a university with a large statistics department (or even more than one), they are never going to be the long-term solution to the systemic deficits we are concerned with.
In my opinion, what most researchers need is access to an experienced applied-statistician. There is of course no hard line separating the applied- from the theoretical-statistician, and many statisticians will occupy both roles; but generally we can think of an applied-statistician as someone who is keenly interested in the correct application of statistical methods to substantive research questions. Importantly, this is not a job that requires a PhD in statistics, though of course many applied statisticians have them (or PhDs in other fields). There are many highly regarded applied statisticians with master’s or bachelor’s degrees working in industry, and in my opinion, universities should mimic this practice.
Consultancy vs collaboration
So where exactly would these applied-statisticians sit every day? You might be tempted to house them all in a room on the outskirts of campus, and place a sign out front that reads “Statistical Consultancy Here.” In fact, many universities have tried this approach, but in my opinion it is doomed to fail. This is because it misses the important distinction between consultancy and collaboration. On the one hand, statisticians tend to use a fairly compact tool kit to handle most analyses, perhaps giving the impression that applying their skills across different research projects is a rote task. However, I find the following quote from JM Hammersley (via Sir David Cox) telling:
"There are no routine statistical questions, only questionable statistical routines".
The point I think Hammersley was trying to make is this: the correct application of statistics is substantially informed by the analyst’s understanding of the research topic at hand. As an applied statistician gains experience working in a particular research area (eg oncology vs cardiology) they will become more adept at analysing those types of data. Two excellent examples would be understanding how particular outcomes behave, and how to best use covariate information for drawing both causal and statistical inferences.
However, because life is a zero-sum game, the statistician is not likely to surpass the knowledge of the subject-matter expert, eg a clinical investigator — just as the investigator is unlikely to learn more statistical theory and/or tools than the statistician. Thus the optimal situation is a close collaborative relationship between the investigator and the statistician — the kind of relationship that tends to be built over time. The idea that an investigator might walk into an office and solve their analytical challenges with a relatively brief statistical consult is a fantasy held by those who think a statistician’s job is to simply 't and p the data'. Following from this point, consultancy is also doomed because it pretends that the statistician’s job is to analyse data, full stop. Critically, this fails to recognise the need for their input into the design of scientific studies, as well as other important tasks such as designing data collection instruments and properly reporting research results.
Following from these points, I would recommend that individual labs, departments, and research institutions fund long-term, full-time posts for one or more applied statisticians to work closely with their investigators. Importantly, we shouldn’t expect that these strategically deployed statisticians will be numerous enough to handle every analysis. Thus their ability to provide continued training to the scientists they work with, supervise their analyses, and develop quality systems for their institution will all be important aspects of their work. Consequently, when these statisticians hold academic posts, it will be important to ensure that their career advancement isn’t hampered by applying traditional academic criteria to their promotion prospects. Conversely, when they don’t hold academic posts, it will be important that they are still recognised as scientists, and not service providers. Finally, universities and research institutions must recognise that the data and analytical skills possessed by many statisticians are in high demand in other industries, and be prepared to offer competitive salaries.
A new vision for the statistical training of scientists
Even if every research institution on earth were to follow the advice above, most scientists will still need to analyse their own data (though hopefully with more guidance and support). Thus scientists will continue to need statistical training. However, this training, in its current form, is typically limited in the following ways:
- It centres on a statistical toolkit from which researchers select the appropriate test or procedure based on the characteristics of the data they are using. While this toolkit is often sufficient for the analysis of well-conducted, relatively simple experiments, it lacks the tools needed for common statistical challenges that occur in the wild, such as missing data, clustered observations, overfitting, measurement error, and model selection
- The rationale for different statistical methods is often left unexplored, leaving students ill-equipped to apply them critically
- There is little training, if any, in statistical programming, data management, manipulation, or visualisation
- Statistics is typically presented as something you do to data once they are collected, so that important links between study design and statistical methods are obscured
- Because almost all scientists receive some training in statistics, the mathematical underpinnings are often omitted or glossed over to accommodate the wide range of learners’ previous maths training and quantitative aptitude
- Students are frequently left with the impression that statistics is a monolith, firmly rooted in mathematics and thus somehow pure — while in truth the subject is highly contentious and as closely aligned to philosophy as it is to maths.
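To make one of these "in the wild" challenges concrete, here is a minimal, illustrative Python sketch (my own example, not from the article) of overfitting and model selection: a flexible model can fit a small training sample almost perfectly while predicting new data from the same process far worse than a simple model that matches the truth.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(n):
    # The true relationship is a simple straight line plus noise
    x = rng.uniform(0, 1, n)
    y = 2 * x + rng.normal(0, 0.3, n)
    return x, y

x_train, y_train = simulate(10)    # small training sample
x_test, y_test = simulate(1000)    # fresh data from the same process

def train_test_mse(degree):
    # Fit a polynomial of the given degree to the training data,
    # then compute mean squared error on training and test data
    coefs = np.polyfit(x_train, y_train, degree)
    mse = lambda x, y: np.mean((y - np.polyval(coefs, x)) ** 2)
    return mse(x_train, y_train), mse(x_test, y_test)

simple_train, simple_test = train_test_mse(1)      # matches the truth
flexible_train, flexible_test = train_test_mse(9)  # near-interpolates the 10 points

# The flexible model "wins" on the data used to fit it,
# but loses on new data — picking models by training fit misleads.
print(f"simple:   train MSE {simple_train:.4f}, test MSE {simple_test:.4f}")
print(f"flexible: train MSE {flexible_train:.4f}, test MSE {flexible_test:.4f}")
```

A toolkit of off-the-shelf tests offers no protection here; recognising and handling this kind of problem (eg with held-out data or cross-validation) is exactly the sort of statistical thinking the current training leaves out.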
Ultimately, statistical training for researchers is too shallow, and devoid of training in what many of us refer to as statistical thinking. In my opinion, researchers would be better off with training in the foundational principles of both frequentist and Bayesian statistics, more training in causal inference, explicit consideration of how study design and statistics are intertwined, and greater emphasis on data management, manipulation, and basic statistical programming. Unfortunately, this will require a complete overhaul of the vast majority of introductory statistics courses — but what we have now simply isn’t fit for purpose. To improve science, we must change how we train scientists to approach data and statistics, and there is no time like the present to do so.
Funding research on how to best support researchers
I hope that the suggestions above make sense to you. But if we are being true to our scientific selves, I must point out that they are just that — suggestions. Despite the fundamental role statistics should play in science, there is virtually no evidence to inform us about the best way to support researchers and research institutions, or to train scientists in statistics. So I would encourage governments, charities, and other research funders to start spending money on developing an evidence base for these crucially important questions.
Science tends to be presented as a firmly established epistemological method. But it isn’t. Science, on the grand scale of things, is new. This is the dawn. So while science has made many amazing contributions to human progress, we shouldn’t dismiss the idea that it can be made better — or ignore the risk that it might be taken from us completely. Consequently, addressing the deficits in statistical thinking and data analysis skills across academia should be a priority. This means more applied statisticians working in academic research centres, improved statistical training for researchers, and more research on how best to deliver these reforms.
Statistics: a data science for the 21st century, Peter Diggle, J. R. Statist. Soc. A 178:4 (2015), 1–18
Cargo-cult statistics and scientific crisis, Philip B Stark and Andrea Saltelli, Significance (2018)
The professionalization of the ‘shoe clerk’, Andrew P Grieve, J. R. Statist. Soc. A 168:4 (2005), 639–656
Beyond calculations: a course in statistical thinking, E Ashley Steel, Martin Liermann and Peter Guttorp, The American Statistician 73 (2019), 392–401
Mathematics: governess or handmaiden, Stephen Senn, J. R. Statist. Soc. D 47:2 (2002), 251–9
For more discussion, please see this topic on DataMethods.org.
*By statistics I refer to the broad set of methods used to process and make sense of data. Similarly, any use of the term statistician refers to any person with the expertise needed to properly employ one or more of these methods, regardless of title, degree, or other expertise.
Darren has organised two sessions at our forthcoming International Conference; one on Risks and rewards of social media use by statisticians and another on Data FAIRification using R/RStudio workflows.
This article was originally published on Darren's Medium blog and is reproduced with permission.