The RSS Medical Section held a meeting on Serious Adverse Events on 23 March 2015 at the Royal Statistical Society in Errol Street, with three speakers, each providing a different perspective on the challenge of accurately identifying adverse events arising from clinical interventions.
Professor Yoon Loke (University of East Anglia) questioned the assurance often given in trial reports that there was 'no evidence of harm'. While studies are usually carefully designed to ensure a sample size sufficient to detect the benefit of a treatment, there is often little prior thought given to numbers required to detect uncommon serious adverse events. As these are thankfully rare and may not be apparent in the relatively short-term follow-up of most trials, the sample size may be too small to show any statistically significant incidence. Furthermore, there is a large number of different possible types of events, many of which may not have been foreseen and therefore not pre-defined in the trial protocol.
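The sample-size point can be made concrete with a short calculation. The sketch below (hypothetical figures, not from any trial discussed at the meeting) shows how unlikely a typical trial is to observe even one genuinely rare event, together with the well-known 'rule of three' for bounding a rate when no events are seen.

```python
# Illustrative sketch (hypothetical figures): the chance that a trial of n
# patients observes at least one adverse event whose true incidence is p.
def prob_at_least_one_event(n: int, p: float) -> float:
    """P(at least 1 event among n patients) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p) ** n

# An event affecting 1 in 10,000 patients will usually be missed entirely
# by a 500-patient trial:
print(round(prob_at_least_one_event(500, 1 / 10_000), 3))  # ~0.049

# The 'rule of three': if zero events are seen in n patients, an approximate
# upper 95% confidence bound on the true event rate is 3/n.
n = 500
print(3 / n)  # 0.006 -- the rate could still be as high as 6 per 1,000
```

So 'no events observed' in a trial of this size is entirely consistent with a clinically important level of harm, which is the crux of Professor Loke's argument.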
This led to a plea to avoid reliance on reported p values to ‘prove’ the absence of harm due to an intervention, and instead to report effect estimates such as hazard ratios with confidence limits: the point estimate may suggest a harmful effect even where the sample size is insufficient for statistical significance, and the confidence interval makes the level of uncertainty explicit. Subsequent discussion also raised the point that, particularly for non-drug trials, the reporting of adverse events is not harmonised, making events of any specific type difficult to identify in meta-analyses.
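The distinction between 'no significant difference' and 'no evidence of harm' can be illustrated with a minimal sketch. The counts below are hypothetical, and a risk ratio with a log-scale (Katz) confidence interval stands in for the hazard ratio, purely to keep the arithmetic self-contained.

```python
import math

def risk_ratio_ci(a, n1, c, n2, z=1.96):
    """Risk ratio with an approximate 95% CI on the log scale (Katz method).
    a events among n1 treated patients; c events among n2 controls."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical small trial: 4/1000 events on treatment vs 2/1000 on control.
rr, lo, hi = risk_ratio_ci(4, 1000, 2, 1000)
print(f"RR = {rr:.1f}, 95% CI {lo:.2f} to {hi:.2f}")
# The point estimate (RR = 2.0) points towards harm, but the wide interval
# spanning 1 shows the trial can neither confirm nor exclude it -- which is
# why a non-significant p value here is not evidence of safety.
```

Presenting the interval, rather than a bare p value, lets the reader see both the direction of the estimate and how little the data constrain it.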
The next speaker, Professor Julian Higgins (University of Bristol), picked up on the difficulties of detecting harms in randomised controlled trials (RCTs) which tend to identify only relatively common and short-term adverse events. On the other hand, non-randomised studies (NRS) can be valuable in detecting rarer and longer-term events, so although these studies cannot provide the same level of evidence as RCTs, there is a place for taking their results into account in systematic reviews and meta-analyses.
To this end, a Cochrane working group has developed a tool to aid reviewers in assessing the risk of bias in these studies. ACROBAT-NRSI (A Cochrane Risk of Bias Assessment Tool for Non-Randomised Studies of Interventions) requires the reviewer to consider a hypothetical randomised trial that the NRS mimics, referred to as the ‘target trial’. Seven domains of bias are then considered. Three of these occur before or during intervention and highlight aspects of the study that differ most from RCTs: confounding, selection of participants and measurement of interventions. The remaining four domains are similar to those that may occur in RCTs: departures from intended intervention, missing data, measurement of outcomes and selective reporting. It is hoped this tool will be used by reviewers to decide in a transparent and reproducible way the extent to which results from NRS should inform conclusions of a systematic review.
Finally, Dr Harry Southworth (Data Clarity Consulting) focused on the reporting of adverse events for drugs through Development Safety Update Reports. The regulator requires manufacturers to submit these reports annually for each of their drugs, with data drawn both from trials and from spontaneous reports by patients taking the drug. The current system leaves the regulator swamped by vast amounts of paperwork each month. Dr Southworth presented work that aimed to provide a ‘statistically-guided clinical data review’. He explained the current coding system, which converts a free-text description of an adverse event into a series of ‘preferred terms’ (PTs), chosen from some 17,000 terms grouped into 26 classes.
The new method is essentially a simple point estimate, presenting PTs ranked by relative risk. He compared this to the more complex Berry-Berry model, which is popular but has some problems: for example, it treats PTs within a class as exchangeable even when they may be clinically unrelated. The simple point estimate approach proved comparable to the more complex model and identified many of the same drug safety concerns. As the quality of data on adverse reactions is so low, a fact noted by both earlier speakers, Dr Southworth concluded that there was only so much that could be done with it, no matter how sophisticated the statistical methodology. He also stressed that this assessment of the drug formed only a small part of the overall picture, as much other clinical and scientific data is collected and assessed. On this basis, a simple method that would be widely used is preferable to a more complex approach.
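A ranking of this kind can be sketched in a few lines. The PT names, counts and field layout below are entirely hypothetical, intended only to show the shape of a simple relative-risk screen that floats unusual terms to the top for clinical review.

```python
# Minimal sketch (hypothetical data): rank preferred terms (PTs) by the
# relative risk of each event on drug versus comparator.
counts = {
    # PT: (events_drug, n_drug, events_comparator, n_comparator)
    "Headache":       (50, 2000, 45, 2000),
    "Hepatic injury": (12, 2000,  2, 2000),
    "Nausea":         (80, 2000, 75, 2000),
    "Rash":           (30, 2000, 10, 2000),
}

def relative_risk(a, n1, c, n2):
    """Point estimate of relative risk: (a/n1) / (c/n2)."""
    return (a / n1) / (c / n2)

ranked = sorted(
    ((pt, relative_risk(*x)) for pt, x in counts.items()),
    key=lambda kv: kv[1],
    reverse=True,
)
for pt, rr in ranked:
    print(f"{pt:15s} RR = {rr:.2f}")
# Hepatic injury (RR = 6.00) and Rash (RR = 3.00) rise to the top of the
# review list, while common background events with RR near 1 rank low.
```

The appeal of such a screen is exactly the one Dr Southworth made: it is transparent enough that reviewers will actually use it, and with data of this quality a more elaborate model offers little extra.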
One common theme of the day was that, by its nature, adverse event data is of low quality, making meta-analysis difficult. Challenges remain, particularly in clear reporting and consistency of definitions, to allow aggregation of results between studies. Meeting these challenges will give a clearer picture of the potential negative outcomes of some interventions.