The Royal Statistical Society's Primary Healthcare study group hosted its penultimate event on 30 June 2016, ahead of its planned merger into the Society's Medical Section in early 2017.
The topic of the afternoon's presentations and discussions was internal pilot studies and early-phase designs, and there was a full house in the Council Chamber of the Royal Statistical Society's headquarters on Errol Street. David Gillespie, a research associate in statistics at the Centre for Trials Research at Cardiff University, gave a statistician's perspective on internal pilot studies for NIHR-funded studies. Nigel Stallard, Professor of Medical Statistics at Warwick Medical School, then spoke about optimal sample sizes for pilot studies to obtain evidence of efficacy. Lastly, Gareth McCray, a research associate at Lancaster University, talked about sample size re-estimation in paired comparative diagnostic accuracy studies with a binary response.
Following these highly varied presentations, the discussions covered a broad range of subjects.
Those with David Gillespie discussed the merits of internal pilot trials and what we can learn from them. These include the ability to explore complex outcomes, filter out unnecessary data collection and potentially identify other relevant data. There are also opportunities to improve case report forms, update the protocol and refine outcomes using feedback from those involved. In addition, having multiple success criteria across various outcomes lessens the risk of a poor future trial. David also cleared up the differences and similarities between pilot and feasibility trials: whilst it is possible to look at whether an intervention works in a pilot trial, this is not the role of a feasibility trial. Pilot trials also often have feasibility-based outcomes, hence the confusion in separating the two.
Nigel Stallard's group discussed topics raised in his presentation around sample sizes for pilot trials. The application of Nigel's work requires Bayesian methods and, Bayesian inference being as ever a hot topic, members of the group asked whether such methods are likely to give correct answers. On the matter of publishing, the group noted that there is no pressure to publish interim analyses and that unsuccessful pilot trials often go unpublished. There were questions on whether an independent data monitoring committee is required for pilot trials; Nigel suggested that it largely depends on whether safety is a concern. Regarding sample size, there were discussions about recruitment: if recruitment becomes a concern, the statistician can advise accepting a lower power. The numbers for a Simon's design trial were questioned, with Nigel recommending more than 10 to 15 patients because effect sizes tend to be smaller than anticipated. Funders were also discussed: whilst generally lenient, it was questioned why they are more concerned with numbers than with efficacy. Lastly, it was noted that pilot trials give an excellent opportunity to adapt things that have gone wrong.
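As background to the discussion of numbers in a Simon's design trial, the operating characteristics of any two-stage design can be worked out from binomial probabilities. The sketch below is not from the presentation; it is a minimal illustration using one design tabulated by Simon (1989) for a null response rate of 0.05 against an alternative of 0.25 (targeting roughly 5% type I error and 80% power).

```python
from math import comb

def binom_cdf(r, n, p):
    """P(X <= r) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r + 1))

def simon_two_stage(r1, n1, r, n, p):
    """Operating characteristics of a Simon two-stage design.

    Stage 1: treat n1 patients; stop for futility if <= r1 responses.
    Stage 2: otherwise treat n - n1 more; declare the treatment not
    promising if total responses are <= r.
    Returns (probability of early termination, probability of declaring
    the treatment not promising, expected sample size) at true rate p.
    """
    pet = binom_cdf(r1, n1, p)  # probability of stopping at stage 1
    accept_h0 = pet
    # Continue with x stage-1 responses (r1 < x <= r); H0 is then
    # accepted only if stage-2 responses bring the total to <= r.
    for x in range(r1 + 1, min(r, n1) + 1):
        p_x1 = comb(n1, x) * p**x * (1 - p)**(n1 - x)
        accept_h0 += p_x1 * binom_cdf(r - x, n - n1, p)
    expected_n = n1 + (1 - pet) * (n - n1)
    return pet, accept_h0, expected_n

# Design r1/n1 = 0/9, r/n = 2/17 for p0 = 0.05 vs p1 = 0.25:
pet0, acc0, en0 = simon_two_stage(0, 9, 2, 17, 0.05)
print(f"Under p0: PET = {pet0:.2f}, alpha = {1 - acc0:.3f}, E[N] = {en0:.1f}")
pet1, acc1, en1 = simon_two_stage(0, 9, 2, 17, 0.25)
print(f"Under p1: power = {1 - acc1:.2f}")
```

Even this small design expects around 12 patients under the null, which illustrates why sample sizes well above 10 to 15 are usually needed once effect sizes shrink.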
Those with Gareth McCray continued with the themes he had presented. To undertake the methods shown, good estimates of sensitivity, specificity and prevalence in your population are needed, and a good gold standard is required for these diagnostic accuracy tests. However, the association will not be known beforehand, and it is suitable to re-estimate it throughout. If the prevalence in the population drops due to societal issues, if only certain subgroups are recruited, or if an intervention has been put in place, the prevalence can be re-estimated using the Alonzo method. It is also possible to check the prevalence in your internal pilot to make sure it is not different from your initial estimate.
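The Alonzo re-estimation method itself is not reproduced here, but a simpler sketch shows why prevalence matters so much for diagnostic accuracy studies. Using the standard precision-based calculation (Buderer, 1996), the required total sample size scales inversely with prevalence, so an internal pilot that reveals a lower-than-assumed prevalence can substantially change the target; the figures below are illustrative assumptions, not from the talk.

```python
from math import ceil
from statistics import NormalDist

def diagnostic_accuracy_n(sens, spec, prevalence, precision=0.05, alpha=0.05):
    """Total sample size so that sensitivity and specificity are each
    estimated to within +/- precision (normal approximation)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n_cases = z**2 * sens * (1 - sens) / precision**2      # diseased needed
    n_controls = z**2 * spec * (1 - spec) / precision**2   # non-diseased needed
    # Scale each up by how often that group appears in the population.
    return ceil(max(n_cases / prevalence, n_controls / (1 - prevalence)))

# Planning stage: assumed sensitivity 0.90, specificity 0.85, prevalence 0.20.
n_planned = diagnostic_accuracy_n(0.90, 0.85, prevalence=0.20)

# Internal pilot: only 18 cases among 150 participants, so the prevalence
# estimate drops and the required total sample size is recalculated.
prev_hat = 18 / 150
n_revised = diagnostic_accuracy_n(0.90, 0.85, prevalence=prev_hat)
print(n_planned, n_revised)  # the revised total is markedly larger
```

Here a fall in estimated prevalence from 0.20 to 0.12 nearly doubles the required sample size, which is exactly the kind of mid-study correction an internal pilot makes possible.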
Presentations from the session are available on the PRIMSTAT website.