152. Garbage In, Garbage Out: Every Failure of Sameer Jauhar et al.'s New Paper on Antidepressant Withdrawal
The Incidence and Nature of Antidepressant Discontinuation Symptoms: A Systematic Review and Meta-Analysis, published in JAMA Psychiatry
A few days ago, I received the following text from a key player in the world of psychiatric drug withdrawal: “There’s a huge study coming out tomorrow that says antidepressant withdrawal does not exist. We had an emergency meeting this morning with a bunch of folks to come up with a strategy to push back.”
I was not part of this meeting—I’m rarely at the cool kids’ table when it comes to these things—but I did get a confidential copy of the paper and its accompanying press release. Having been through withdrawal and interacted with thousands of folks over the years who are in withdrawal, I know the paper is bullshit. The hundreds of thousands of people collaborating on tapering and withdrawal-dedicated Facebook groups and message boards know it’s bullshit too, but of course that’s not enough. We need scientists with zero personal experience to weigh in with their fancy numbers and data-crunching algorithms! That’ll prove it.
So, I read the study. It’s complete junk, but it’s going to get traction simply because it was published in JAMA Psychiatry and because of the cohort size—over 17,000 patients.
But because I’m assuming none of the study authors or the folks at JAMA Psychiatry have experienced antidepressant withdrawal or unpacked this topic with any sort of nuance or common sense—otherwise they would see the junk forest for the trees—the study was conducted and published, despite a litany of clear and present issues. Let’s go through them.
Issue #1: This is not new research. It’s a systematic review and meta-analysis of existing research, the same research that never accounted for withdrawal symptoms in the first place.
Mark my words, that 17,000 number (17,828 to be exact) is going to get thrown all around the media as “evidence” that this study is “more robust” compared to more recent analyses with smaller cohorts.
Systematic reviews and meta-analyses certainly have their place, and if any of the existing literature accounted for withdrawal symptoms, this might be a valid argument. But it doesn't. In another recent systematic review by Rennwald & Hengartner, who were looking at the established literature on post-acute withdrawal symptoms (PAWS), only 7 out of 1,286 screened antidepressant studies met the eligibility criteria, highlighting how little research actually exists on withdrawal.
Now, Rennwald & Hengartner were looking for long-term withdrawal rather than short-term/acute withdrawal, which is an important distinction. However, in the introduction of Jauhar’s paper, he flat out proves my point by saying:
"Separate analyses were conducted to assess symptoms 1 and 2 weeks following discontinuation. Discontinuation symptoms at 3 weeks were only reported in 1 study."
This is a paper that screened 6,292 records and chose 50 of those to include in the study, and only one looked beyond the typical two-week mark. It doesn’t matter if the cohort study has 17, 170, or 17,000 participants if effectively none of the existing literature considered withdrawal—especially beyond two weeks—in the first place.
These authors have enough curiosity about withdrawal to research the topic. I do not see a world in which they haven’t heard about withdrawal lasting for months to years, which is likely why they chose to investigate its prevalence in the first place. How they could then turn around and ignore the fact that these studies do not account for issues lasting beyond the two-week mark and still present their evidence as solid is beyond my comprehension.
Issue #2: RCT-Derived Data Are Inherently Unsuitable for Withdrawal Analysis
The 50 randomized controlled trials used were not designed to assess withdrawal. While we don't yet have the supplemental sheet that will tell us exactly which studies the authors analyzed, it can be assumed that most were industry trials intended to evaluate the efficacy of the drugs, not long-term adverse effects or withdrawal syndromes.
Industry studies are short-term, typically 4–12 weeks. We know that short-term use of antidepressants is less likely to cause withdrawal, and that not a single RCT exists on long-term use of antidepressants. Real-world users are on antidepressants for years, not weeks or months. With precisely zero original research on long-term use, it’s asinine to extrapolate long-term conclusions from short-term data—and yet, here we are.
Also, RCTs often exclude people with histories of prescription drug sensitivity, suicidality, and complex medical cases—precisely the same people who are most likely to experience withdrawal.
This is like looking at 50 studies on peanuts to determine if eating peanuts lowers cholesterol levels, digging around in the literature to figure out how many people had an allergic reaction to peanuts, and proudly stating that the number of people with an allergic reaction was significantly less than previously reported so therefore, peanut allergies don’t exist at the rate we thought. So keep giving everyone peanuts. Never mind that the research was never looking at allergies to begin with, that the folks with severe allergies would be excluded from the research, and that some folks develop allergies over time due to constant exposure that lasts longer than the short-term studies.
Again, asinine.
Issue #3: Conflicts of Interest and Author Industry Ties
This paper has 19 authors, 12 of whom have current or historical financial ties to pharmaceutical companies including Janssen and Eli Lilly. One of the authors, Peter Haddad, has a long track record of working with Eli Lilly ghostwriters to ensure Lilly’s psychiatric drugs are framed in a positive light, as revealed by a lawsuit involving the antipsychotic Zyprexa.
I don't think I need to say much more here; most people with a couple of brain cells understand why industry ties are such an issue.
But I will say that the recent systematic reviews that point to much higher rates of withdrawal—from Rennwald & Hengartner, Moncrieff, Horowitz, Fava—all had one thing in common: none of the authors had industry ties to the makers of the drugs they were analyzing.
Issue #4: Misuse of the DESS Scale as a Measurement Tool
The paper uses the DESS (Discontinuation Emergent Signs and Symptoms) to assess withdrawal symptoms—a self-rating scale designed to parse out new symptoms versus old symptoms, and whether the old symptoms are worse, unchanged, or improved.
Jauhar and his team went beyond the intended purpose of the DESS scale by claiming the threshold for "clinically significant" withdrawal syndrome would be four or more symptoms on the DESS scale. This is an arbitrary threshold, literally made up by Jauhar and his team. The DESS scale is continuous and "has no diagnostic purposes and cannot be used to assess persistent post-withdrawal symptoms," according to literature assessing DESS's effectiveness.
Nowhere else, except this paper, is the number of symptoms on the DESS scale correlated to the incidence or severity of withdrawal. Treating it as such is a questionable application and inflates precision that isn’t really there.
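To see why an arbitrary cutoff on a continuous scale matters, here is a minimal sketch in Python. The symptom counts are simulated from a hypothetical skewed distribution — this is not the paper's data, and the numbers are purely illustrative — but it shows how dramatically the reported "incidence" of withdrawal swings depending on where you draw the line:

```python
# Toy illustration (hypothetical, simulated data -- NOT from the Jauhar
# paper): how an arbitrary minimum-symptom cutoff on a continuous scale
# like the DESS changes the "incidence" a study reports.
import random

random.seed(0)

# Hypothetical cohort: DESS symptom counts per patient, drawn from a
# skewed distribution (most people report few symptoms, some report many).
cohort = [min(43, int(random.expovariate(0.25))) for _ in range(10_000)]

def incidence(threshold: int) -> float:
    """Fraction of the cohort counted as 'having withdrawal'
    under a given minimum-symptom cutoff."""
    return sum(1 for n in cohort if n >= threshold) / len(cohort)

for cutoff in (1, 2, 4, 6, 8):
    print(f"cutoff >= {cutoff} symptoms: incidence = {incidence(cutoff):.0%}")
```

With the same underlying cohort, moving the cutoff from one symptom to eight can shrink the headline incidence severalfold. That is the point: a threshold the scale's own literature never defined is doing the heavy lifting in the paper's conclusion.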
This isn’t science. This is strategy. Withdrawal was measured too early, too narrowly, and too gently to ever show up as a real concern.
This paper doesn’t tell us anything about antidepressant withdrawal. It tells us what happens when you ask the wrong questions, in the wrong way, to protect the wrong people.