They are the costs of finding out which are the best molecules, what is a good dose, what is a good formulation, what are the benefits and what are the side-effects. The costs of producing the drug are unimportant in comparison. It is for this reason, of course, that generic companies can so easily enter the market once the patent has expired.
Thus pharmaceutical companies are paid for information and it seems inescapable that it is far better that it should be provided directly to those who will use it rather than at one remove, that is to say to regulators only. It would thus be curmudgeonly and hypocritical of me to complain about the latest developments in making clinical trial data available.
Nevertheless, it seems to me that many commentators have misunderstood the issues and that we are in danger of descending into complete confusion as to what to do. As an example of this, consider that the British Medical Journal, having devoted much ink to a campaign for all trials to be published, is also asking its readers’ opinions as to whether all work originating from the pharmaceutical industry should be banned.
Amongst those misunderstanding the issues are many who work in the pharmaceutical industry. A common assumption is that any company that releases data will help competitors, who will, of course, mine the publicly released information for valuable insights. The point is, however, that releasing companies will also gain such information from their competitors, and this is not a zero-sum game. On average, the industry should be better off.
The problem is that it is to nobody’s advantage to be the first to reveal, and it is in this context that strong regulatory action is desirable. (Some years ago I chaired a Royal Statistical Society working party on first-in-man studies that suggested that the regulator needed to take a lead in data-sharing for healthy volunteer studies3.) No doubt the AllTrials campaign is contributing by providing a vigorous reminder to the regulators to make this a priority.
However, some of those associated with the campaign have also misunderstood an important point. Publication in the medical press must become irrelevant. Self-publication is necessary. The first reason is that submission to a journal is not enough to guarantee publication, since the editors may reject the paper. The second reason is that journals may have a bias against negative studies. Although a great deal of research on this topic claims that this is not true, this research is inadequate for reasons I have explained in detail elsewhere4.
In particular, researchers have naively (implicitly) assumed that authors decide whether to submit to a given journal based on the quality of the research but not on the anticipated probability of its acceptance. If the latter is the case, we would not necessarily see different acceptance rates for submitted papers, but rather a higher quality among the negative papers that were submitted5.
The third reason is that the review process, by requesting changes in statistical analysis, can harm the quality of a trial which (certainly, if it is a phase III pharmaceutical trial) will have had a detailed proposal for the statistical analysis registered prior to un-blinding the data. Whether or not others subsequently decide that alternative analyses are superior, it is important as a matter of historical record that the results of the pre-specified analysis are available. For an example of a dispute between the Food and Drug Administration (FDA) and the New England Journal of Medicine (NEJM) regarding exactly this point, concerning a trial in anti-fungal treatment, see the letter by Powers et al complaining about the action of the NEJM in publishing a version with which the FDA disagreed6.
A final point is that the regulators (although of course they make mistakes) simply do a better job than the journals7. Ben Goldacre states in Bad Pharma8, ‘Where people have compared the methods of independently-sponsored trials against industry-sponsored ones, industry sponsored trials often come out better’ (p71), but he dismisses this as irrelevant due to industry dominance. This is a strange argument. I would say that, whatever the reason, a difference in quality is worth considering, and my explanation is that the greater scrutiny applied by professional regulators, compared to the lesser scrutiny applied by amateur reviewers, has a knock-on effect. Pharmaceutical companies have to up their game to match the FDA.
Thus, the whole business of journal publishing simply adds confusion to the process of making trial results available. Consider that many journals place an embargo on discussing or revealing results before publication. Add to this the vanity and ambitions of scientists (from which I am not at all immune myself), not to mention the pharmaceutical company’s appetite for publicity for its drug, and you have a system in which papers will be trawled around many journals in decreasing order of impact factor until either one is found that is prepared to take the article or the authors give up.
My solution is rather different. I think that we should move to a system where, in addition to Q for quality, E for efficacy and S for safety aspects of a drug and, increasingly, V for value for money, we should have a P for publication requirement. That is to say, there should be a publication plan submitted with each regulatory dossier, and a licence to market should not be granted until this is fulfilled. By publication I do not mean publication in journals; I mean self-publication on the web or in some publicly searchable registry such as ClinicalTrials.gov.
Some associated with the AllTrials campaign have protested that self-publication is perfectly acceptable to them, but they don’t always act consistently to demonstrate that they really mean this. For example, Adam Jacobs recently pointed out in a blog post that a review in Nature headlined ‘Half of US clinical trials go unpublished’ cited a recent paper in PLOS Medicine in support of this claim, when the paper said no such thing. A comment from someone associated with the AllTrials campaign soon appeared on his blog claiming: ‘Your denialism is getting pretty desperate’.
The point was that this interesting paper by Riveros et al actually looked at a random sample of studies available on ClinicalTrials.gov to see how many had been published in journals. They found that, of 600 studied, just about one half had not been published. However, why should anyone care that they were not available in journals if the results of all 600 were available on ClinicalTrials.gov? As regards publication (in registries) of trials associated with registered medicines, a recent study by Rawal and Deane9 of medicines approved in 2009-2011 by the European Medicines Agency suggests that matters might be improving. (I won’t attempt to summarise these results here since, as with all time-to-event data, the issues are complex, but the report is available here.)
In fact, since trials are supposed to be published on ClinicalTrials.gov what would be really interesting would be a reverse study. How many trials published in journals are not available on ClinicalTrials.gov? The major finding of Riveros et al had nothing to do with the Nature headline, but was:
'Our results highlight the need to search ClinicalTrials.gov for both unpublished and published trials. Trial results, especially serious adverse events, are more completely reported at ClinicalTrials.gov than in the published article.'
This of course, exactly supports the point I am making. Forget the journals. Let’s look elsewhere.
Some issues remain. Many critics of the pharmaceutical industry assume that the only suspect motive is money. However, I think that few scientists are immune to David Hume’s ‘love of literary fame’. My declaration of interest lists anything I can think of as being possibly relevant to anything I do, and one item I include is that my career benefits (and has benefitted) from publishing. Academics, or for that matter journalists, are every bit as assiduous in marketing themselves as pharmaceutical companies are at marketing medicines. On the other hand, the regulatory framework provides a straitjacket for analysis, but once the data are out there anybody can analyse them, and ‘interesting’ findings will be more publishable than boring ones. In epidemiology, good news is no news.
We are moving from an era of private data and public analyses to one of public data and private analyses. Just as we have learned to be cautious about data that are missing, we may have to be cautious about missing analyses also.
I thank Adam Jacobs for helpful discussions.
Declaration of Interest
I am a consultant to the pharmaceutical industry and my career is furthered by publishing. I am a member of the data review panel recently established by Pfizer. The views expressed here are mine alone and should not be ascribed to any other party.
- 1. Senn SJ. Statistical quality in analysing clinical trials. Good Clinical Practice Journal 2000; 7: 22-26.
- 2. Senn SJ. Authorship of drug industry trials. Pharmaceutical Statistics 2002; 1: 5-7.
- 3. Working Party on Statistical Issues in First-in-Man Studies. Statistical issues in first-in-man studies. Journal of the Royal Statistical Society, Series A 2007; 170: 517-579.
- 4. Senn S. Misunderstanding publication bias: editors are not blameless after all. F1000Research 2012; 1.
- 5. Senn S. Authors are also reviewers: problems in assigning cause for missing negative studies. F1000Research 2013; 2.
- 6. Powers JH, Dixon CA, Goldberger MJ. Voriconazole versus liposomal amphotericin B in patients with neutropenia and persistent fever. N Engl J Med 2002; 346: 289-290.
- 7. Senn S. Bad karma. Medical Writing 2013; 22: 252-255.
- 8. Goldacre B. Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients. Fourth Estate: London, 2012.
- 9. Rawal B, Deane BR. Clinical trial transparency: an assessment of the disclosure of results of company-sponsored trials associated with new medicines approved recently in Europe. Current Medical Research & Opinion 2013: 1-11.
The views expressed in the Opinion section of StatsLife are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of The Royal Statistical Society.