16 Comments

Thank you for taking the time to write this. Dana

Thank you so much for this lucid summary!

Good luck getting authors to admit their trials are inconclusive when they can get away with reporting a definite (if statistically incorrect) result.

A larger sample size can be the answer, but not always.

I 100% agree that increasing the sample size increases confidence (by narrowing the confidence interval), which can turn an inconclusive finding into a conclusive one, even under the old-school standard of p < 0.05. But this can be true to a fault: it's the problem of an over-powered study. With a huge sample size we can be very confident and will most likely find statistical significance, but we still need to interpret the importance and utility of that result. A statistically significant result can still have very limited practical implications. This happens when the effect size is near zero even though statistical significance is found.

This represents a different way to game the system. If you have enough money to buy enough trials, you'll likely be able to publish positive results even when those results really aren't going to matter. Granted, given the expense of medical trials this is a less likely scenario; I'm a social scientist working in education, where researchers often work with very large secondary data sets.

Either way, as everyone else is pointing out, the way conclusions are written in journal articles needs to be super clear. That should cover not just statistical significance, but practical significance.
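
To make that concrete, here's a rough Python sketch with made-up numbers (the 0.02-standard-deviation "effect" and the 200,000-per-arm sample are purely illustrative): with enough data, even a trivial effect sails past p < 0.05, so the p-value alone says nothing about whether the result matters.

```python
# Illustrative only: a near-zero effect becomes "statistically significant"
# once the sample is huge, while the effect size stays practically trivial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 200_000                                          # hypothetical per-arm sample size
control = rng.normal(loc=0.00, scale=1.0, size=n)
treated = rng.normal(loc=0.02, scale=1.0, size=n)    # ~0.02 SD "benefit"

t_stat, p_value = stats.ttest_ind(treated, control)
cohens_d = (treated.mean() - control.mean()) / np.sqrt(
    (treated.var(ddof=1) + control.var(ddof=1)) / 2
)

print(f"p-value   = {p_value:.2g}")   # comfortably below 0.05 at this n
print(f"Cohen's d = {cohens_d:.3f}")  # still ~0.02: significant, yet practically negligible
```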

My 2 cents, thanks.

Thank you for writing this post in a way that any layperson can understand. Much appreciated!

Concise and informative essay, as usual.

Now, please consider an evaluation of the ISCHEMIA trial, which has resulted in a pivot away from revascularization, to the detriment of long-term outcomes. Years one through four show an advantage for underwriting loss ratios, with providers ignoring an increase in suboptimal outcomes when the patient survives beyond the fourth year.

At least, that's how I read it. I'd very much like to be proven wrong.

The foundational assumption that medical research is all about "saving lives" seems to me to be what's responsible for driving our critical-thinking bus down a blind alley. Earnest criticism of the flaws in studies and the conclusions drawn from them will only get us so far. Given that we've become a society that long ago abandoned whole-person health care for the "modern/better" find-a-symptom-find-a-pharmaceutical business model, I'd say we need to refocus on a new foundational assumption: medicine and medical research are all about generating ROI.

From Rockefeller looking for new uses for his petrochemicals to tech bros looking for places to park the Fed's decade-plus of free money, there are few in power who care to look beyond the misleading titles and conclusions of marquee-level research efforts if that might derail the generation of profits in quarterly cycles. In fact, they like building AI models with creative statistical assumptions just fine, thank you. "Move fast and break things" suggests innovative omelets are on the investment menu, and yet I'd say far too often we're simply being sold a "new & improved" microwaved McMuffin sandwich, simply because the margins are better. (Just don't ask where they're sourcing the eggs, or they're likely to tell you a fairy tale about a new way to free-range caged chickens without the expense of actual outdoor access. But I digress.)

It's all about building market share and repeat customers for consumable products. Ask Bed Bath & Beyond what it's like for any company that doesn't have guaranteed contracts from national governments. I'm sure their C-suite would rather be in pharmaceuticals.

With due respect to all of the Big Cigars in Boston who are worthy of that designation, the NEJM has repeatedly been polluted with articles by authors seemingly mesmerized by what NHST (null hypothesis significance testing) can lead to when improperly deployed. There are many egregious examples that can easily be dredged up. There has been huge misunderstanding, for many years now, about what Sir Ronald Fisher hatched so long ago. I can only highly recommend Deb Mayo's *brilliant* 2018 text ("Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars"). Bring a sharpened number-two pencil and prepare to read slowly and with seriousness as you jot notes. Forgo the after-dinner brandy. Put down the bong.

The NEJM forcing authors to conclude that a treatment doesn't work when p > 0.05 is an out-and-out statistical error. BUT the study also used the lowest-power approach available for analyzing multiple endpoints (time to first endpoint). Had they used the timing and severity of all component events, and respected the word "recurrent" in recurrent hospitalization, the result might have been different. An ordinal longitudinal analysis would have respected all of the raw data.
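
To see why that first sentence describes a real error, here's a quick simulation sketch (assumed, purely illustrative numbers, not taken from the trial under discussion): a genuinely effective treatment studied in an underpowered trial comes out with p > 0.05 most of the time, which makes such trials inconclusive, not evidence of "no effect".

```python
# Illustrative only: with a true 25% relative risk reduction but a small
# trial, most simulated trials land at p > 0.05 -- "not significant"
# does not mean "doesn't work".
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_per_arm = 200                      # hypothetical small trial
p_control, p_treated = 0.20, 0.15    # assumed true event rates
n_sims = 2_000

nonsignificant = 0
for _ in range(n_sims):
    events_t = rng.binomial(n_per_arm, p_treated)
    events_c = rng.binomial(n_per_arm, p_control)
    table = [[events_t, n_per_arm - events_t],
             [events_c, n_per_arm - events_c]]
    _, p = stats.fisher_exact(table)
    if p > 0.05:
        nonsignificant += 1

print(f"Simulated trials with p > 0.05: {nonsignificant / n_sims:.0%}")
# Under these assumptions, well over half the trials come out "negative"
# even though the treatment truly reduces risk.
```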
