Journal’s ban on null hypothesis significance testing: reactions from the statistical arena

Written by Oz Flanagan on . Posted in Opinion

The decision of one academic journal to ban null hypothesis significance testing procedures (NHSTP) has caused a stir in the statistical community. It makes you wonder if the editors of Basic and Applied Social Psychology, David Trafimow and Michael Marks, realised what a nerve they would hit when they published their editorial.

The journal’s new rule has opened up a debate about the wider issue of how statistics is used in scientific analysis. It raises the question of what conclusions one can draw from the result of a statistical test, and how definitively that result verifies them. It also makes you ponder what Ronald Fisher, Jerzy Neyman and Egon Pearson would have made of the way modern science employs the statistical theories they developed.

Care.data 2.0: why its 'Big Data' approach should be reconsidered

Written by Emmanuel Lazaridis on . Posted in Opinion

A public outcry followed the announcement last year that personal data from across the NHS were to be comprehensively collected, linked together and distributed. This is an important initiative, but it is floundering and needs a major rethink. An approach that elicits greater public participation - based on sampling of the general population, plus registries for rare conditions and events - would enhance personal privacy and result in better medical science.

Where should government start with Big Data?

Written by Eddie Copeland on . Posted in Opinion

Where a complaint comes from.
The value of the property.
The building’s square footage.
The age of the building.
The timeliness of its tax records.
Whether its utility bills are in arrears.

Taken together - with some clever maths applied - these are the predictive indicators for identifying some of the most dangerous buildings in New York City.
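To make the idea concrete, here is a minimal sketch of how indicators like these might be combined into a single risk score. The feature names, weights, and baseline below are illustrative assumptions for the sake of the example; they are not the actual model used by New York City.

```python
import math

# Illustrative weights (assumed, not NYC's real coefficients):
# positive values push the risk score up.
WEIGHTS = {
    "complaint_flag": 1.2,       # 1 if a complaint was filed, else 0
    "low_value_flag": 0.8,       # 1 if the property value is unusually low
    "age_decades": 0.3,          # building age, in decades
    "late_tax_flag": 0.9,        # 1 if tax records are not up to date
    "utility_arrears_flag": 1.1, # 1 if utility bills are in arrears
}
BIAS = -3.0  # baseline log-odds for an unremarkable building

def risk_score(features):
    """Combine indicator features into a 0-1 risk score (logistic model)."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# A hypothetical ninety-year-old building with every warning sign present
# scores near 1; an average building with no flags scores near 0.
building = {
    "complaint_flag": 1,
    "low_value_flag": 1,
    "age_decades": 9,
    "late_tax_flag": 1,
    "utility_arrears_flag": 1,
}
print(round(risk_score(building), 3))
```

In practice the weights would be fitted to historical inspection outcomes rather than chosen by hand, but the "clever maths" amounts to exactly this kind of weighted combination of simple signals.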