A new study of e-cigarettes’ efficacy in smoking cessation has not only pitted some of vaping’s most outspoken scientific supporters against some of its fiercest academic critics, but also illustrates many of the pitfalls facing researchers in this field and those – including policy-makers – who must interpret their work.
The furore has erupted over a paper published in The Lancet Respiratory Medicine and co-authored by Stanton Glantz, director of the Center for Tobacco Control Research and Education at the University of California, San Francisco, along with a former colleague – Sara Kalkhoran, now of Harvard Medical School, who is named as first author but does not enjoy Glantz’s fame (or notoriety) in tobacco control and vaping circles.
Their research sought to compare the success rates in quitting combustible cigarettes of smokers who vape and smokers who don’t: in other words, to find out whether use of e-cigs is correlated with success in quitting, which might well suggest that vaping helps people stop smoking. To do this they performed a meta-analysis of 20 previously published papers. That is, they didn’t gather any new data on actual smokers or vapers, but instead attempted to combine the results of existing studies to see whether they converge on a likely answer. This is a common and well-accepted way of extracting truth from statistics in many fields, although – as we’ll see – it’s one fraught with challenges.
Their headline finding, promoted by Glantz himself online as well as by the university, is that vapers are 28% less likely to quit smoking than non-vapers – a conclusion which would suggest that vaping is not merely ineffective as an aid to smoking cessation, but actually counterproductive.
The result has, predictably, been uproar from e-cigarette supporters in the scientific and public health community, particularly in Britain. Among the gravest charges are those levelled by Peter Hajek, the psychologist who directs the Tobacco Dependence Research Unit at Queen Mary University of London, calling the Kalkhoran/Glantz paper “grossly misleading”, and by Carl V. Phillips, scientific director of the pro-vaping Consumer Advocates for Smoke-Free Alternatives Association (CASAA) in the United States, who wrote that “it is apparent that Glantz was misinterpreting the data willfully, rather than accidentally”.
Robert West, another British psychologist and the director of tobacco studies at a centre run by University College London, said “publication of this study represents a significant failure of the peer review system in this journal”. Linda Bauld, professor of health policy at the University of Stirling, suggested the “conclusions are tentative and often incorrect”. Ann McNeill, professor of tobacco addiction at the National Addiction Centre at King’s College London, said “this review is not scientific” and added that “the information included about two studies I co-authored is either inaccurate or misleading”.
But what, precisely, are the problems these eminent critics see in the Kalkhoran/Glantz paper? To answer that question, it’s necessary to go beneath the sensational 28% headline and look at what was studied, and how.
Meta-analysis is a seductive idea. If (say) you have 100 separate studies, each of 1,000 individuals, why not combine them to create – in effect – a single study of 100,000 people, the results of which should be far less susceptible to any distortions that may have crept into an individual investigation?
(Such distortions may arise, for instance, by inadvertently selecting participants with a greater or lesser propensity to quit smoking due to some factor not considered by the researchers – an example of “selection bias”.)
Obviously, the statistics of a meta-analysis are rather more sophisticated than simply averaging the totals, but that’s the general idea. And even from that simplistic outline, it’s immediately apparent where problems can arise.
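To make the “combining studies” idea concrete, here is a minimal sketch of one standard technique – fixed-effect, inverse-variance pooling of odds ratios – using invented numbers, not figures from the Kalkhoran/Glantz paper or the 20 studies it reviewed:

```python
import math

# Hypothetical per-study results: odds ratio (OR) for quitting among
# vapers vs non-vapers, with a 95% confidence interval. Illustrative
# values only -- not drawn from any real study.
studies = [
    {"or": 0.60, "ci": (0.40, 0.90)},
    {"or": 0.85, "ci": (0.55, 1.30)},
    {"or": 0.70, "ci": (0.45, 1.10)},
]

def pooled_odds_ratio(studies):
    """Fixed-effect, inverse-variance pooling on the log-odds scale."""
    num = den = 0.0
    for s in studies:
        log_or = math.log(s["or"])
        lo, hi = s["ci"]
        # Recover the standard error from the 95% CI:
        # (ln(hi) - ln(lo)) / (2 * 1.96)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        weight = 1.0 / se**2   # more precise studies count for more
        num += weight * log_or
        den += weight
    return math.exp(num / den)

print(round(pooled_odds_ratio(studies), 2))  # prints: 0.7
```

The key point is the weighting: each study contributes in proportion to its precision, which is how a pooled estimate can be more stable than any single study – provided the studies really are measuring the same thing.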
If its results are to be meaningful, the meta-analysis must somehow take account of variations in the design of the individual studies (they may define “smoking cessation” differently, for instance). If it ignores those variations and tries to shoehorn all the results into a model that some of them don’t fit, it introduces its own distortions.
Moreover, if the studies it’s based on are themselves flawed in any way, the meta-analysis – however painstakingly conducted – will inherit those same flaws.
That is a charge made by the Truth Initiative, a US anti-smoking nonprofit which generally takes a dim view of e-cigarettes, regarding an earlier Glantz meta-analysis which reached similar conclusions to the Kalkhoran/Glantz study.
In a submission last year to the U.S. Food and Drug Administration (FDA), responding to that federal agency’s call for comments on its proposed electronic cigarette regulation, the Truth Initiative noted that it had reviewed many studies of e-cigs’ role in cessation and concluded they were “marred by poor measurement of exposures and unmeasured confounders”. Yet, it said, “many of them have been included in a meta-analysis [Glantz’s] that claims to show that smokers who use e-cigarettes are less likely to quit smoking than those who do not. This meta-analysis simply lumps together the errors of inference from the correlations.”
It also added that “quantitatively synthesizing heterogeneous studies is scientifically inappropriate and the findings of such meta-analyses are therefore invalid”. Put bluntly, don’t mix apples with oranges and expect to get an apple pie.
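The apples-and-oranges objection can itself be quantified: meta-analysts commonly report Cochran’s Q and the derived I² statistic to gauge how much a set of studies disagree beyond what chance alone would produce. A small illustration, again with invented numbers rather than anything from the 20 reviewed papers:

```python
import math

# Hypothetical study estimates: log odds ratios and their standard
# errors. Illustrative values only -- not from any real study.
log_ors = [-0.51, 0.26, -0.36, 0.18]
ses = [0.21, 0.22, 0.23, 0.20]

def heterogeneity(log_ors, ses):
    """Cochran's Q and the I^2 statistic for a set of study estimates."""
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
    # Q: weighted squared deviations of each study from the pooled value
    q = sum(w * (y - pooled)**2 for w, y in zip(weights, log_ors))
    df = len(log_ors) - 1
    # I^2: share of total variation attributable to real heterogeneity
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

q, i2 = heterogeneity(log_ors, ses)
print(f"Q = {q:.1f}, I^2 = {i2:.0%}")  # prints: Q = 9.7, I^2 = 69%
```

When Q greatly exceeds its degrees of freedom (here 3) and I² is high, the studies are telling conflicting stories, and a single pooled number papers over that conflict – which is essentially the Truth Initiative’s complaint.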
Such doubts about meta-analyses are far from rare. Steven L. Bernstein, professor of health policy at Yale, echoed the Truth Initiative’s points when he wrote in The Lancet Respiratory Medicine – the same journal that published this year’s Kalkhoran/Glantz work – that the studies included in their meta-analysis were “mostly observational, often with no control group, with tobacco use status assessed in widely disparate ways”, though he added that “this is no fault of [Kalkhoran and Glantz]; abundant, published, methodologically rigorous studies simply do not exist yet”.
So a meta-analysis can only be as good as the research it aggregates, and drawing conclusions from it is only valid if the studies it’s based on are constructed in similar ways to one another – or, at least, if any differences are carefully compensated for. Of course, such drawbacks also apply to meta-analyses that are favourable to e-cigarettes, such as the well-known Cochrane Review from late 2014.
Other criticisms of the Kalkhoran/Glantz work go beyond the drawbacks of meta-analyses in general, and focus on the specific questions posed by the San Francisco researchers and the ways they attempted to answer them.
One frequently expressed concern is that Kalkhoran and Glantz were studying the wrong people, skewing their analysis by not accurately reflecting the true number of e-cig-assisted quitters.
As CASAA’s Phillips points out, the e-cigarette users in the two scholars’ number-crunching were all current smokers who had already tried e-cigarettes when the studies of their quit attempts started. Thus, the research by its nature excluded those who had taken up vaping and quickly abandoned smoking; if such people exist in large numbers, counting them would have made e-cigarettes appear a far more successful route to quitting.
Another question was raised by Yale’s Bernstein, who observed that not all vapers who smoke want to give up combustibles. Naturally, those who aren’t trying to quit won’t quit, and Bernstein noted that when these people were excluded from the data, it suggested “no effect of e-cigarettes, not that e-cigarette users were less likely to quit”.
Excluding some of those who did manage to quit – while including others who had no intention of quitting anyway – would certainly seem likely to affect the outcome of research purporting to measure successful quit attempts, although Kalkhoran and Glantz argue that their “conclusion was insensitive to a wide range of study design factors, including whether the analysis population consisted only of smokers interested in quitting smoking, or all smokers”.
But there is a further, slightly cloudy area which affects much science – not just meta-analyses, and not just these particular researchers’ work – and which, importantly, is frequently overlooked in media reporting, as well as by institutions’ public relations departments.