Event report - Cathie Marsh Memorial Lecture 2013

Written by Chris Kershaw

This annual memorial lecture is jointly organised by the Royal Statistical Society and the Social Research Association. Cathie Marsh was an inspirational social scientist involved with both organisations, and the lecture is seen as a fitting way of remembering her.

The question under discussion this year was: what can RCTs bring to social policy evaluation?

The two very experienced speakers covered the major themes of this important debate. Jeremy Hardie (LSE), author, economist and lecturer, posed the questions: what are RCTs for, and what is good evidence? The Maryland scale, which ranks evidence from weak to strong with RCTs at the top, is not the full picture. RCTs can quite brilliantly allow one to conclude that an intervention ‘worked’ with a certain population at a certain time, but they do not show that it would work generally, or how it worked. The history of replication is extremely disappointing, with pilots working but roll-outs not. Roll-out problems are often seen as implementation failure, but the context for roll-out may simply not be appropriate. What has been seen to work in RCTs may itself not be well defined (for example, what exactly are ‘stroke units’ in hospitals?). Ben Goldacre has mentioned the importance of ‘professional judgement’ in tailoring an intervention to its context, but this is very hard to pin down.

Jeremy concluded that he wasn’t against RCTs, but that people also need to test logic models and to collect the information that supports roll-out.

Leon Feinstein, director of evidence at the Early Intervention Foundation, started by saying he wouldn’t view RCTs as the answer to all questions. Mixed methods research is important, but RCTs provide a good framework for evaluation and enable people to be held to account for policy. They support a culture change that improves the quality of evaluation more generally. There is a challenge to improve the quality of evaluation in central government; he cited, for example, a lack of evidence for ‘early years’ interventions.

Leon felt that more progress was being made with the quality of local evaluations, where there is less demand for immediate ‘results’. Local authorities, in the context of resource cuts, need to target preventative services well and to value evidence. Leon mentioned that evidence on the effectiveness of youth clubs indicates that some can be positively harmful. It is important that we have a culture of experimentation. RCTs are not always possible, but he also cited some natural experiments that can provide good evidence.

Some very interesting points were then raised in discussion. There was a concern about the proliferation of ‘packaged’ interventions on the back of RCTs; these can be expensive to implement and may not adapt well enough to the context in which they are placed. The issue of ethics came up, and with it the huge barrier placed in the way of RCTs by professionals who assume that their interventions cannot do harm. The pain of implementing RCTs was also raised, along with the view that the decision to stop experimenting should come only once there is a good enough understanding of the logic model for an intervention.

The lecture is available to watch here.
