39 Comments

One more consideration came to mind. I would not, for example, believe that this approach works with antidepressants. The antidepressant studies all say there is a small and arguably clinically insignificant effect. The problem is that this is all due to bias, and antidepressants don't actually work. There's unblinding bias, publication bias, sponsorship bias, something called telephone-game bias, cold-turkey bias, and short-trial-duration bias, where trials stop before the median time of natural remission of untreated depression. Then there's the fact that the non-responder subgroup's depression scores actually do worse than placebo's. It goes on. It's basically an anti-science field, and yet the studies have a consensus. I would actually believe the studies that are outliers, not the ones that align. So the presumption of this kind of study design is that we actually have a tendency toward quality to begin with, which I am afraid may oftentimes be untrue.


Scanned over the BGC contract with the DoD. Thanks to Sasha, we're all getting a much clearer picture as to where this is all going to lead: back to the psychopaths at the DoD.


You should read Dr. Pierre Kory’s Substack, “Medical Musings,” regarding the collapse of reliable scientific studies‼️


Very cool notion, but it amounts to assigning some weighting factor to results. And given how many publications already seem captured by pHarma, what does that mean?

The NIH requiring those it funds to do clinical studies to report their results might very well bring out the studies that showed no effect from the studied thing, and that might go a long way. All results should see the light of day and not require off-the-record conversations among researchers.

I am reminded of the studies that went out of their way to ignore recommended doses and protocols to defame IVM and HCQ. And that's before counting the studies that were purposely fraudulent. Such fraudulent studies, when discovered, ought to get a full investigation revealing the why.

Jan 26, 2023 · Liked by Adam Cifu, MD

#1. This spirit will move us more in a direction of curious exploration of better practice - Yay!

#2. Where's iii?

#3. To ask questions that are more "forest for the trees" - like "how could the current mess be functional?" - I am pondering not the content of "The Science," or even the process of procuring and producing "The Science," but our relationship with "science"... any thoughts?

Jan 26, 2023 · Liked by Adam Cifu, MD

Adam, I think your article was great, and David AuBuchon’s comment made me burst out laughing, especially the last paragraph.

Regarding your modified funnel plot picture, I can only wonder if, in prior drafts of this post, you may have used pictures of common hospital devices that would symbolize flushing out the truth and collecting the “distractions.”

Jan 26, 2023 · Liked by Adam Cifu, MD

This approach would work equally well applied to university departments, specific labs, and primary researchers. We know that much of the research associated with economically profitable and politically sensitive topics is false. Why not simply and easily eliminate those actors? Is it simply because they are the large, favored, major research universities in the country?


Fantastic!! This just made my day🤣

Jan 26, 2023 · Liked by Adam Cifu, MD

Super article, thank you. Where would the "Whisk of Wisdom" fit in?


I always appreciate your views. One other type of analysis I’d LOVE to see is results from studies funded by pharma vs. the NIH vs. independent sources - forgive my ignorance on who else funds studies - but is there anything showing “financial compensation bias” that isn’t just, oh I don’t know, water-cooler talk?


I love it! And the comments.

Jan 26, 2023 · Liked by Adam Cifu, MD

First and foremost, it sounds like a comparison of the NY Times to the National Enquirer. I would choose the NYT because of the writers, the reputation, and information that fits my needs. I wouldn't pick up the NE because of its reputation, writers, and information. So the reputable doctors researching and testing resourceful, useful information that affects most physicians would be in the NYT, which is the go-to literature for me. I'm sure that all of your physicians already know what is truth and what is bunk.

Who is your audience? Medical students in their 3rd year, or the country physician serving 5 counties because he is the only practitioner? A new resident, or the PCP who has delivered 3 generations of babies? My presentation and my subject of study would vary widely between those sets of people. A bar or line chart, maybe including some bubble data, would be good for the country doc and the PCP, whereas I'd use an Excel advanced chart and graph in a PowerPoint presentation for the others. This is conjecture, and I am not pigeonholing anyone or being prejudicial, because I put myself in the first category of a simple bar graph and a good presenter. Because I'm older, I can assimilate and understand information when it is presented in a familiar manner. I prefer writing as opposed to point & click. I prefer counting IV drops per minute rather than running it through a machine that counts for me. Etc.

The study producing data is what is important. The presentation should not be one size fits all.

1 - Find the most crucial need for a study

2 - Use the most reputable and truth seeking physicians to perform the trials

3 - Publish the outcomes in a format and journal pertinent to whom this study affects.

Above and beyond that, anyone outside of your assumed interested group may pick it up and use it, and may or may not get it, but you don't have to reinvent the wheel.

Of course, not being a physician, I could have read your entire paper incorrectly, leaving you all with just a nondescript few paragraphs of nothingness.

Jan 26, 2023 · edited Jan 26, 2023 · Liked by Adam Cifu, MD

If I recall correctly, a similar proposal was made in Stuart Ritchie's "Science Fictions," which offered ideas to "right the ship" of the replication crisis.

On the argument you propose:

"I’d hypothesize that if we attached some measure of journal quality (probably the impact factor) to each point (study) on the original funnel plot we would find that the higher quality journals routinely publish studies that fill the pipette of truth while lower quality journals routinely publish articles whose results fill the colander of distraction"
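A toy simulation makes the stakes of that hypothesis concrete. To be clear, this is my own sketch, not anything from the post: the "selective journal" behavior, the sample sizes, and the assumed true effect of 0.30 are all invented for illustration. One tier of journals publishes every trial; the other publishes only statistically significant results, and its published effects drift away from the truth:

```python
import random
import statistics

random.seed(0)
TRUE_EFFECT = 0.30  # assumed true effect, chosen arbitrarily for the demo

def simulate_study(n):
    """One trial: the effect estimate's standard error shrinks with sample size."""
    se = 1.0 / (n ** 0.5)
    return random.gauss(TRUE_EFFECT, se), se

def accepts(est, se, selective):
    """A 'selective' journal publishes only significant results (|z| > 1.96)."""
    return abs(est / se) > 1.96 if selective else True

def funnel_points(n_studies, selective):
    """Collect published (estimate, standard error) pairs for one journal tier."""
    pts = []
    while len(pts) < n_studies:
        est, se = simulate_study(random.randint(20, 400))
        if accepts(est, se, selective):
            pts.append((est, se))
    return pts

honest = funnel_points(200, selective=False)  # the "pipette of truth" tier
biased = funnel_points(200, selective=True)   # the "colander of distraction" tier

mean_honest = statistics.mean(est for est, _ in honest)
mean_biased = statistics.mean(est for est, _ in biased)
print(f"non-selective tier: {mean_honest:.2f}  selective tier: {mean_biased:.2f}")
```

Plotted as a funnel, the selective tier would be visibly asymmetric - the small negative studies are missing - which is exactly the signature that tagging points by journal quality would try to surface.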

Can we trust "higher quality journals" when even the NEJM publishes obvious nonsense like "Lifting Universal Masking in Schools — Covid-19 Incidence among Students and Staff"?

https://www.nejm.org/doi/full/10.1056/NEJMoa2211029

Quick, obvious issues with the Boston study:

1) Figure 1 shows that students in the schools which would eventually lift their mask mandates had much higher case counts before the mandates were lifted, indicating that whatever caused these districts to have higher cases was happening before masks were removed. Yet the authors cut this off by starting the graph in figure 1 in February, though it is clear cases were much higher in January. A classic data-dredging technique.

2) The authors apparently didn't realize that 13 of the schools they counted as "keep mandate" had successfully received an exemption earlier (there was a provision that if you met a certain vaccination percentage, you could be exempt from the mask mandate).

You can cross-reference this list with table S1 to see the 13 schools they missed:

https://www.cbsnews.com/boston/news/massachusetts-schools-mask-mandate-lifted-list-dese/

Example of one of the schools lifting it: https://www.kingphilip.org/important-mask-update-2/

3) One of the authors, when questioned on the lack of accounting for testing differences (many schools had students taking twice-a-week antigen tests; other schools used the CDC guidance that you only need to test for exposure when not wearing masks), argued that you should just trust her, because she has a PhD.

(edit: forgot to add source: https://twitter.com/EpiEllie/status/1557497452781096960?s=20&t=20X-EaQtKJAw3a0mwTzSTg)

4) The authors organized a successful Change.org campaign to get masks back on kids in Boston earlier that year, yet made no mention of this conflict of interest in their disclosures.

https://twitter.com/EpiEllie/status/1429102872470433795

5) One of the authors had penned the op-ed in the Boston Globe, "It's too soon to lift the school mask mandate," and also didn't disclose this conflict of interest.

https://www.bostonglobe.com/2022/02/11/opinion/its-too-soon-lift-school-mask-mandate/

6) Just as an aside, almost all of the authors are on record being supportive of masking children prior to the study. Is it any surprise that they would be able to find high efficacy using one of the lowest tiers of evidence?

____________________

If the NEJM can publish nonsense like this, are there perhaps bigger problems to address before fixing meta research bias?

Jan 26, 2023 · Liked by Adam Cifu, MD

In my brain, the inspiration to turn the tables, break the pattern, and start reporting on studies that evaluate the media with numbers and no adjectives would be one of the most effective ways to raise this population's spirits. A dose of their own proverbial medicine.

Aside from the spite-laced last sentence: behavioral modification by means of changing patterned behavior has never had a better opportunity to correct a toxic cycle.

We need more scrappy young quarterbacks in the big game against integrity. If you're faced with a line of looming, defensive, cumbersome opponents, you have to scramble and change your pattern.


We had a similar idea to your IF ranking...

https://journals.lww.com/annalsofsurgery/Abstract/2016/12000/Underreporting_of_Secondary_Endpoints_in.18.aspx

We had a look at trials reporting wound infection as a secondary outcome, and essentially it's as accurate as guessing... unless it was the primary outcome of the trial.

So we need to either do better measurements of secondary outcomes or stop the effort of reporting them.

Jan 26, 2023 · edited Jan 26, 2023 · Liked by Adam Cifu, MD

- Or we might find that impact factor means squat.

- Don't forget this novel funnel-plot use could itself suffer publication bias. Which journals reject more trials with negative results - high-impact or low-impact ones? This study might also find something about that. Or its results might themselves be thrown off by that publication bias.

- Maybe also do this same type of study, but instead of doing 100 funnel plots, each representing 1 meta-analysis, do just 1 funnel plot that has 100 meta-analyses each on the same age-old question.

- Or maybe do it, but only examining efficacy of placebos. A kind of negative control.

- Also, journal impact factor may not be the best measure. One may want, in addition, to look at a standardized measure of the impact of the specific papers in question. The metric would have to adjust for the year of publication being more or less "poppin'". One might find outliers, like the most inflammatory or most inaccurate studies getting shared the most. There might be a Goldilocks zone. If you found that zone, you could do the original study on journal impact factor all over again, but restricting analysis to studies that were in that zone. Or do the same thing in the reverse order.

- Or maybe we'll learn more about how we suck at interpreting funnel plots:

https://pubmed.ncbi.nlm.nih.gov/10812319/

https://pubmed.ncbi.nlm.nih.gov/16085192/

- We need low-impact journals due to gatekeeping. How many doctors know some RCT says black seed oil cures 40% of kidney stones? Or that a whole-foods plant-based diet can remit diabetic neuropathy? Or that an RCT says melatonin cut covid mortality 90% in some hospital? Etc., etc.

- Lastly, if you find that low-impact journals have higher quality, make sure to submit your results to a high-impact journal so you can get rejected and publish low-impact. "Study in low-impact journal claims low-impact journals are better." Or conversely, "Study in high-impact journal says low-impact journals are better. Journal's impact factor soars."
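On the funnel-plot interpretation problem: Egger's regression test is the usual formal alternative to eyeballing. Here is a from-scratch sketch (the simulated effect sizes, sample sizes, and selection rule are all invented numbers, not from any real meta-analysis): regress each study's z-score on its precision, and an intercept far from zero suggests small-study asymmetry.

```python
import random

random.seed(1)

def egger_intercept(points):
    """Core of Egger's test: regress z = est/se on precision = 1/se by
    ordinary least squares and return the intercept. Values far from zero
    suggest funnel-plot asymmetry (e.g., from publication bias)."""
    xs = [1.0 / se for _, se in points]
    ys = [est / se for est, se in points]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return my - slope * mx

# A symmetric funnel: true effect 0.2, every study "published".
symmetric = []
for _ in range(300):
    se = 1.0 / random.randint(20, 400) ** 0.5
    symmetric.append((random.gauss(0.2, se), se))

# An asymmetric funnel: small studies (large se) survive only if they
# look impressive (z > 1) - a crude model of publication bias.
asymmetric = [(est, se) for est, se in symmetric if se < 0.1 or est / se > 1.0]

print(round(egger_intercept(symmetric), 2), round(egger_intercept(asymmetric), 2))
```

The point is that the asymmetry shows up as a shifted intercept rather than something a reader has to judge by eye.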

It's 4am...hope this is still coherent in the morning.
