Coupled with the increasing availability of data was the sense that this election was ‘too close to call’. Of course, post-exit poll hindsight tells us this wasn’t the case, but during the election this was the perceived landscape. In an attempt to understand the battle between the main parties, and the rise of the smaller ones, journalists and academics turned to data sources to find who was in the ascendancy.
Deborah Mattinson (co-founder of Britain Thinks) was the first speaker, and her presentation took an unashamed small data approach, asking the important question: is small data less accurate? In conjunction with The Guardian, her team initiated the Battleground Britain project, examining five constituencies across the nation through the thoughts of a panel of voters picked from each area.
The small data approach actually proved very insightful, with the importance of the economy shining through in voters’ minds, coupled with Labour’s troubled and muddled brand.
Rachel Gibson, professor of political science at the University of Manchester, then tried to answer the question of whether we had arrived at a ‘big data’ election this year. She set the background for this through Barack Obama’s re-election campaign in 2012. The Obama campaign set a benchmark for fully embracing data as an outreach tool, and this was credited as a major factor in defeating his Republican challenger.
While British political parties have come a long way from their early attempts at digital campaigning, their efforts this year still only reached a small percentage of the voting age population. However, Rachel’s analysis did show some interesting differences in how Labour and the Tories used social media and pointed to how this kind of campaigning will grow in the future.
While this event was organised before May 7, John Curtice (professor of politics at Strathclyde University) felt obliged to address the elephant in the room of the pre-election polls. He gave his personal take on the possible reasons for the inaccuracy of the polls and made the point that ‘big data’ may be a great resource, but if it’s the wrong data, it’s useless.
Megan Lucero, data journalism editor at The Times, was next up and she gave an overview of how their team made use of the range of data sources to tell the story of this election. While other media outlets were attempting seat level forecasts for the election outcome, Megan said her team was not confident enough in the quality of the data to do this. Instead they took data that they did trust to look at the factors that would determine the vote.
Next, Carl Miller (research director of the Centre for the Analysis of Social Media at Demos) gave his view on how the election played out on Twitter. The social media platform is hosting an increasing amount of political debate, albeit between the more politically engaged ends of the population. The analysis his team produced showed some interesting reactions to the individual performances in the leaders’ debates, especially the positive reception to Nicola Sturgeon’s entrance on to the national stage. He also presented a ‘Twitter galaxy’ that depicted the information flows between the different political factions, with highly partisan clusters forming around the different parties.
Finally, Federica Cocco (now at The Times) gave an overview of how the Daily Mirror covered the election through their ampp3d website. The concept behind ampp3d was to introduce statistics and data analysis to a tabloid audience in a fun and accessible way.