The Case of Statins, Diabetes, and Bias in Observational Studies
In the Study of the Week, academic cardiologist Andrew Foy explains a study that his group published on the matter of statins and diabetes.
Sensible Medicine is pleased to have Dr. Andrew Foy back. His group at Penn State has published a clever study that, I think, clarifies the association of statin drugs and the incidence of diabetes.
The goals of the Study of the Week are to explain both specific and larger lessons in evidence translation. Foy’s short column achieves both. The specific lesson involves the question of whether statin drugs cause diabetes; the larger lesson concerns the differences between observational studies and randomized trials.
This post is open to all readers. Please consider supporting our work with a paid subscription. JMM
Statin drugs are among the most studied and most prescribed medications in the world, and high-level evidence supports their effectiveness in reducing cardiovascular events.
But the level of evidence related to statin-induced adverse events is less certain.
One particular adverse effect of statins, diabetes, has gained significant attention, and data from randomized controlled trials (RCTs) and observational studies diverge on this matter.
RCTs have found a small but statistically significant increase in diabetes for patients in the statin arm, which is outweighed by the beneficial effect on cardiovascular outcomes. Observational studies, on the other hand, have reported much larger associations with diabetes.
Which is correct?
As a reader of this Substack, and a likely proponent of evidence-based medicine, you might be tempted to think, “This is easy: the RCTs are correct,” and I would be inclined to agree.
However, some would argue that RCTs are not generally designed to systematically investigate side effects. I’m sympathetic to that claim as well.
This is not a trivial matter, especially for patients who take statins to prevent a first event. (We call this primary prevention.) In these patients the absolute treatment effect of statin therapy is small – meaning you have to treat many patients with statin drugs (around 50 to 100) to prevent 1 person from experiencing a cardiovascular event.
If the estimates from the observational studies are true, the risk of developing statin-induced diabetes would likely wipe out the cardiovascular benefit of statin use.
To highlight this point, a meta-analysis of RCTs, including 13 statin trials with over 90,000 participants, found that statin use increased the (relative) incidence of diabetes by 9%. According to these findings, 255 patients would need to be treated with statins to cause 1 additional case of diabetes.
In contrast, an observational study of healthy adults in the US compared diabetes rates in statin users versus nonusers. Despite statistical adjustments, these authors found that statin use increased the incidence of diabetes by a whopping 87%. Based on these findings, only about 10 patients would need to be treated with statins to cause 1 additional case of diabetes. That’s quite a difference compared to 255 patients based on the RCTs.
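The arithmetic behind these number-needed-to-harm (NNH) figures is simple: NNH is the reciprocal of the absolute risk increase, which for a relative effect equals the baseline incidence times (relative risk minus 1). A quick sketch in Python (the 4.5% baseline incidence below is a rough illustrative assumption, not a figure taken from either study):

```python
def nnh(baseline_incidence: float, relative_risk: float) -> float:
    """Number needed to harm: reciprocal of the absolute risk increase."""
    absolute_risk_increase = baseline_incidence * (relative_risk - 1.0)
    return 1.0 / absolute_risk_increase

# With an assumed study-period baseline diabetes incidence of ~4.5%:
print(round(nnh(0.045, 1.09)))  # a 9% relative increase: NNH in the mid-200s
print(round(nnh(0.045, 1.87)))  # an 87% relative increase: NNH in the mid-20s
```

Because the absolute risk increase scales with the baseline incidence, the observational estimate of roughly 10 implies a cohort with a higher background diabetes rate; the general point is that NNH shrinks quickly as either the baseline incidence or the relative risk grows.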
How can these results be so divergent?
One possibility is selection bias.
In a properly conducted RCT, the participants in the treatment and control arms are, on average, the same, so it can be assumed that the only thing driving differences in outcomes (in this case diabetes) is the treatment (in this case statins). In observational studies, the participants in the two groups may not be the same, despite efforts to make them so (e.g., propensity matching, logistic regression), and unrecognized factors other than statin use could be contributing to diabetes.
In other words, simply being selected by a treating clinician for treatment versus no treatment may introduce biases that statistical adjustment cannot overcome.
My colleagues and I wanted to assess the potential impact of selection bias on the issue of statin-induced diabetes in observational studies. The American Journal of the Medical Sciences published our study.
To do this, we used a national claims database from private payers in the US to create two groups of patients – one group was exposed to statins and the other was not. Exposure was based on prescription claims for statin drugs and patients could not have a history of diabetes prior to initiation of statin therapy.
Then, from the exposed group, we created sub-groups of ‘statin continuers’ and ‘discontinuers,’ based on whether patients did or did not fill a prescription claim for a statin drug more than 6 months after the initial claim.
In both comparisons (users vs nonusers, and continuers vs discontinuers), we compared the incidence of diabetes after controlling for common confounders.
Our hypothesis was straightforward: if selection bias is a major contributor to the incidence of diabetes in observational study designs, then the association between statin use and diabetes will be much stronger in the statin exposed vs unexposed group than in the statin continuer vs discontinuer group. This is because all patients in the statin continuer vs discontinuer group would have been selected—by a clinician—for statin use.
Here is what we found:
In statin users vs nonusers, statins were associated with a 120% increase in diabetes (hazard ratio [HR] 2.2), and this was highly statistically significant (P<0.001). About 18 patients would need to be treated with statins to cause 1 extra case of diabetes. These estimates align well with those reported in observational studies.
However, for statin continuers vs discontinuers (i.e., the comparison controlling for selection bias, which was otherwise done in the same manner as above) there was no statistically significant increase in diabetes (HR 1.03; 95% CI 0.98-1.1).
Furthermore, the absolute difference between groups was in the ballpark of the RCT estimates. Approximately 143 patients would need treatment with a statin to develop 1 extra case of diabetes.
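The mechanics of this result can be illustrated with a toy simulation (not part of the published analysis; every parameter below is invented for illustration). A single latent risk factor raises both the chance of being prescribed a statin and the chance of developing diabetes, while the drug's true relative effect is small:

```python
import random

random.seed(0)

TRUE_RR = 1.09   # small true drug effect, as in the RCT meta-analysis
N = 200_000

# counts[group] = [diabetes events, group size]
counts = {"user": [0, 0], "nonuser": [0, 0], "cont": [0, 0], "disc": [0, 0]}

for _ in range(N):
    risk = random.random()                             # latent metabolic risk, 0..1
    prescribed = random.random() < 0.2 + 0.6 * risk    # sicker patients get statins
    continues = prescribed and random.random() < 0.5   # continuation independent of risk
    exposed = prescribed and continues
    p_diabetes = (0.02 + 0.10 * risk) * (TRUE_RR if exposed else 1.0)
    diabetes = random.random() < p_diabetes

    groups = ["user" if prescribed else "nonuser"]
    if prescribed:
        groups.append("cont" if continues else "disc")
    for g in groups:
        counts[g][0] += diabetes
        counts[g][1] += 1

def rate(g):
    return counts[g][0] / counts[g][1]

naive_rr = rate("user") / rate("nonuser")   # confounded comparison
cd_rr = rate("cont") / rate("disc")         # selection held constant

print(f"users vs nonusers:           RR = {naive_rr:.2f}")  # well above the true 1.09
print(f"continuers vs discontinuers: RR = {cd_rr:.2f}")     # close to the true 1.09
```

Because discontinuation is modeled as independent of the latent risk factor, the continuer vs discontinuer comparison recovers something close to the true effect, while the user vs nonuser comparison is inflated by confounding: the same qualitative pattern as in our claims analysis.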
Despite the limitations of our analysis, which are described in the main paper and in a Twitter thread, we believe these results demonstrate how selection bias can influence the results of comparative observational studies.
Finally, our findings support the notion that the incidence of statin-induced diabetes is similar to that observed in randomized trials. And its very small effect size is outweighed by the drug’s reduction in cardiovascular events.