Chelation Therapy: What To Do With Inconvenient Evidence

Harlan Krumholz, Contributor – http://www.forbes.com/sites/harlankrumholz/2013/03/27/chelation-therapy-what-to-do-with-inconvenient-evidence/

What do we do with inconvenient evidence? Imagine studying a seemingly absurd practice that is used to an alarming extent by those who believe in it despite the lack of evidence – and finding that the intervention improves outcomes. And imagine that the people conducting that trial are famous scientists with impeccable credentials who have extensive experience with this type of investigation. Imagine that the practice is so far out of the mainstream that the investigators cannot even posit how the treatment could reduce patient risk.

We live in a world of evidence-based medicine, where we are urged to base our medical recommendations and decisions on clinical studies. We base our guidelines on the medical literature and evaluate our practices by how well we adhere to the evidence. But what should we do with inconvenient evidence?

The National Institutes of Health sponsored a $31 million trial of chelation therapy, a therapy that involves the infusion of vitamins and a substance that binds certain minerals, such as calcium. Some practitioners embraced this therapy and have recommended it for patients with heart disease. Although I never learned about it in my training as a cardiologist, it is quite widespread, with reportedly more than 100,000 people using it in 2007 – an increase of 68% from 2002.

The trial, published in JAMA, enrolled patients who had had a heart attack, randomizing 839 to chelation and 869 to a placebo infusion. To the surprise of many (including me), after almost 5 years of follow-up, the chelation group had a lower risk of the composite of death, heart attack, stroke, hospitalization for angina, or a procedure to improve blood flow to the heart. The risk was 18% lower in the chelation group – and for about every 25 patients treated with chelation, there was one fewer adverse event. Also, there were no safety issues. This trial, like most others, has some limitations – but it is a positive trial.
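The arithmetic linking the "18% lower risk" to "one fewer event per 25 patients treated" can be sketched directly. The placebo event rate below is an illustrative assumption, back-calculated so the derived figures roughly match those quoted here; it is not a number taken from the paper.

```python
# Sketch of how relative risk reduction (RRR) and number needed to
# treat (NNT) relate. The placebo event rate is an assumption chosen
# for illustration, not the exact rate reported in the JAMA paper.

placebo_rate = 0.22   # assumed 5-year event rate in the placebo arm
rrr = 0.18            # 18% relative risk reduction, as quoted above

chelation_rate = placebo_rate * (1 - rrr)  # implied event rate with chelation
arr = placebo_rate - chelation_rate        # absolute risk reduction
nnt = 1 / arr                              # number needed to treat

print(f"ARR = {arr:.3f}, NNT = {nnt:.0f}")
```

Note that NNT depends on the absolute, not relative, reduction: the same 18% relative benefit applied to a lower-risk population would imply a much larger NNT.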

The authors, who are quite esteemed, seemed surprised. They noted that no one knows how this therapy works. They said that the results were not strong enough to support the routine use of chelation therapy. It is not clear what they mean by "routine" – they seem unable to make a strong recommendation, as if they have some uncertainty about how to act on what they found.

The irony is that if a drug manufacturer had gotten this result, they would have celebrated. We have billion-dollar drugs like niacin, fenofibrate, and ezetimibe that have less evidence than chelation therapy now has. None of those drugs has a contemporary outcomes study showing benefit – and two of them (niacin and fenofibrate) have recent negative trials.

So why are scientists not accepting the verdict of this study? Why the reluctance to incorporate this therapy into our armamentarium?

The answer is more than just a reluctance to accept results that we do not like (though medicine is not beyond that behavior – see the slowness with which medicine adopts new information into practice). I believe the answer here is that, when confronted with a truly surprising result that is hard to explain, we need to examine our assumptions – and the consequences of being wrong. The amount of evidence we require may vary with the treatment. For example, I am more likely to demand strong evidence for treatments that carry risks and incur substantial costs – except perhaps in dire circumstances where no alternative exists, and in those cases we need to be able to track the effect after approval and spread so that the intervention can be reassessed over time. And I may want stronger empirical evidence where I have no underlying expectation that a treatment would be beneficial, whether because of previous studies or the absence of previous studies.

If we have little faith in chelation therapy, then it is hard to turn 180 degrees with a positive result and suddenly completely believe in it and recommend its use. Any trial can give an anomalous result and we need to be careful about jumping to a new position with each new piece of evidence. However, we cannot on one hand promote evidence-based medicine and on the other hand ignore what we do not like.

I am glad that we are subjecting popular but out-of-the-mainstream practices to rigorous study. If I endorse that course, I cannot ignore the evidence because it goes against what I expected. But I need to interpret the findings through the totality of what is known and determine whether the therapy is really ready for prime time. In this case, I want to see more studies of this approach to be sure. However, this study has opened my mind to the possibility that there may be something more to this therapy than I originally thought. And given what I thought about it before, I can hardly believe I am writing that.

Comments

    • We have the mistaken notion that we know EXACTLY how drugs work (or, in this case, shouldn’t work). Because we don’t really understand all the details of human physiology, more than 90% of clinical trials end in failure. This leaves open the question of what non-obvious compounds remain that have “undiscovered” drug properties. Take a look at dimethyl fumarate for multiple sclerosis from Biogen
      http://www.biogenidec.com/press_release_details.aspx?ID=5981&ReqId=1799179
      A priori, who would have guessed that a metabolic intermediate could function akin to an interferon for an autoimmune disease?

  • Graham Walker 1 year ago

    As always, the answer can be found in the methods section.

    When you use a silly quadruple composite endpoint like “death AND MI AND stroke AND coronary revascularization AND hospitalization for angina,” you’re going to have some silly results. You’re mixing a bunch of fairly objective outcomes (the first three) with two hugely subjective ones. It’s the same as in most cardiology trials: the damn triple composite endpoint typically only shows a benefit because of the “urgent revascularization” outcome, and then the study gets billed as effective.

    Until researchers stop accepting that primary outcome, we’ll continue having studies just like this. The triple-composite outcome is not a patient-important outcome.
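    The commenter’s concern can be made concrete with invented numbers. In the sketch below, the event counts are purely illustrative (not from the trial), and events are simply summed per component rather than counted as first events per patient; the point is only that a composite can show an apparent benefit driven entirely by its subjective components.

    ```python
    # Illustrative sketch of the composite-endpoint concern: the objective
    # components (death, MI, stroke) are identical between arms, and only
    # the subjective components differ, yet the composite looks favorable.
    # Counts are invented for illustration; events are summed per component
    # for simplicity rather than counted as first events per patient.

    treatment = {"death": 50, "MI": 60, "stroke": 20, "revasc": 100, "angina_hosp": 40}
    placebo   = {"death": 50, "MI": 60, "stroke": 20, "revasc": 140, "angina_hosp": 60}

    objective = ["death", "MI", "stroke"]

    obj_t = sum(treatment[k] for k in objective)
    obj_p = sum(placebo[k] for k in objective)
    comp_t = sum(treatment.values())
    comp_p = sum(placebo.values())

    print(f"objective events:  {obj_t} vs {obj_p}")   # identical by construction
    print(f"composite events:  {comp_t} vs {comp_p}")  # gap comes from subjective components
    ```

    In this construction the entire composite difference comes from revascularization and angina hospitalization, which is exactly the pattern the comment warns about.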

    • Graham Walker 1 year ago

      *Quintuple composite outcome.

  • This is a vivid example of why a Bayesian approach is absolutely crucial to interpreting clinical trial data. A low prior probability puts this study outcome in appropriate context.

    Also reiterates the message of FDA statistician Bob O’Neill that “a p-value less than 0.05 has only about a 50% probability of being seen again in a second identical trial.” Astonished that this is not more widely known and appreciated.
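    The Bayesian point can be sketched with a standard back-of-the-envelope calculation. The prior, power, and alpha values below are illustrative assumptions, not numbers from the trial: they show how strongly the probability that a “significant” result reflects a real effect depends on the prior.

    ```python
    # Sketch of the Bayesian argument above: the chance that a statistically
    # significant finding is a true effect depends on the prior probability
    # that the hypothesis is true. All parameter values are illustrative.

    def prob_true_given_significant(prior, power=0.80, alpha=0.05):
        """P(effect is real | p < alpha), by Bayes' rule."""
        true_pos = power * prior          # real effect, and the trial detects it
        false_pos = alpha * (1 - prior)   # no effect, but the trial "detects" one
        return true_pos / (true_pos + false_pos)

    # A skeptical prior (as one might hold for chelation) vs. even odds:
    print(prob_true_given_significant(prior=0.05))  # low prior
    print(prob_true_given_significant(prior=0.50))  # even odds
    ```

    With a 5% prior and these assumed operating characteristics, a significant result is real less than half the time, which is the “appropriate context” the comment refers to.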

  • Steven Salzberg, Contributor 1 year ago

    Harlan, I’m disappointed that you seem to be taking this result at face value. There are many very cogent criticisms of this trial that have appeared in the blogosphere, recently and going back several years, pointing out serious problems with its design (and its ethics). The marginal statistical significance they report is easily explained by flaws in the design. E.g., see Orac’s discussion which includes many links to other articles discussing problems with the study: http://scienceblogs.com/insolence/2012/11/05/the-results-of-the-unethical-and-misbegotten-trial-to-asess-chelation-therapy/
    Some of the problems include a serious lack of blinding, which we all know will bias results. Others include many, many dropouts from the therapy. And are you aware that the treatment included not only chelation, but also heparin? The placebo group did not get heparin. That alone might explain the (marginally) positive result.
    I’m also disappointed that you call the lead investigators “famous scientists with impeccable credentials.” This is an example of an argument from authority – the fact that the PI (Gervasio Lamas) is at Mt. Sinai Medical Center in Miami doesn’t make him right (or wrong). In fact, Steven Nissen at the Cleveland Clinic criticized Lamas’ study because many of the physicians offered “quack therapies.” In any case, even if Lamas isn’t a quack – even if he has “impeccable credentials” (and I don’t agree that he does) – that doesn’t prove the study is valid. A close look at the study itself makes it pretty clear that the findings are not valid.

  • Author
    Harlan Krumholz, Contributor 1 year ago

    Thanks for the comment. Tony Lamas, Kerry Lee, Christine Goertz, Dan Mark – you may disagree – but it is a strong group of investigators. I know that Steve raised the issue of blinding, but I did not see evidence of that. He did criticize the CAM sites – but the effect was even stronger in the non-CAM sites. I don’t know about the heparin – but if it was also included in the cocktail then it is worth evaluating – certainly chronic heparin infusions are not considered effective secondary risk reduction therapy, and I am not sure why that would be the active agent. The study has limitations – all trials do – opinions may differ and this is certainly inconvenient evidence – but I just don’t think that I can dismiss it out of hand. The authors are presenting a result that is at odds with what they expected – that alone tells you that it is worth taking seriously.

  • Steven Salzberg, Contributor 1 year ago

    I am even more surprised that you say you didn’t know about the heparin. It is clearly spelled out in the paper, and even in the abstract. Here’s a quote from the abstract: “…a 500-mL chelation solution (3 g of disodium EDTA, 7 g of ascorbate, B vitamins, electrolytes, procaine, and heparin)”. So treatment includes heparin and other things – and placebo includes none of those elements.
    I would hope you would read the paper carefully before taking a public position on it. If you do, you will also see that the statistics just barely reach significance, and only for the composite endpoint – and that goes out the window if any biases affected the results, as I believe they did.

  • Author
    Harlan Krumholz, Contributor 1 year ago

    Steven, I really don’t think you understand my post. If you did, you would see that the heparin issue does not invalidate anything I said. Yes, I should have noted that the solution included a very low dose of heparin – but that is not really germane to the larger point I was making.


