March 25, 2019

Today on RBDR: A two-pronged call by distinguished publications (The American Statistician and Nature) for elimination of dependence on “statistical significance.” What will the impact of those articles be, and how much will it disrupt the research industry?


4 thoughts on “Overwhelming Calls to Eliminate ‘Statistical Significance’ / RBDR”

  1. And about time!
    I wrote about this topic a few years back, in a piece called “Raise Your Hand If The Truth Starts At .05”, at:

  2. Dennis Q. Murphy: “The most annoying question asked in a meeting – usually by someone who thinks they’re being enlightened – is ‘Is this result significant?’ The answer is always ‘It depends on what level you consider significant.’”

    Eric Marder – who I just learned moved on to a different realm in December – never used significance in the traditional way but rather quoted odds. If something exceeded 90% he’d say the odds were 10 to 1 against its being coincidental. Even at 80% it was four to one, which is certainly not “insignificant.”
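    Marder’s restatement is simple arithmetic: a confidence level c corresponds to odds of c/(1−c) against the result being coincidental. A minimal sketch of that conversion (the function name is mine, not Marder’s):

    ```python
    def confidence_to_odds(confidence):
        """Restate a confidence level (e.g. 0.90) as odds against coincidence.

        A 90% confidence level gives 0.90 / 0.10 = 9, i.e. roughly
        "10 to 1"; 80% gives 0.80 / 0.20 = 4, i.e. "four to one".
        """
        return confidence / (1.0 - confidence)

    print(confidence_to_odds(0.90))  # ~9, loosely quoted as "10 to 1"
    print(confidence_to_odds(0.80))  # ~4, "four to one"
    ```

    The appeal of this framing is that it replaces a binary significant/not-significant verdict with a graded statement of strength of evidence.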

  3. Liz Puccianti, Founder: “I would have to agree with Joel and others here. The writer of the article is proposing that we have more nuanced conversations and be comfortable with more ‘thinking’ – and that’s what the scientific community has been doing endlessly. But it’s when we face a business audience that has asked for more simplified interpretation and decision-making expediency that we have adapted accordingly.”

  4. Nicholas Tortorello: “This is one of the reasons that I decided to basically retire from the survey research business. The importance of the science behind the research was being diminished all the time. It was becoming the Wild West: there were those arguing against probability samples, others arguing for question wordings that would leave out ‘Don’t Know’ and ‘Not Sure’ response categories, others arguing for complex weighting schemes to make online and cell phone polling more accurate, etc. All in the name of lower costs, higher cooperation rates, easier subject studies, faster response tabulations, etc. Many of these changes were undermining the validity and science of survey research and making it impossible to charge what doing the research correctly costs. There was a long list of culprits, from clients to online and cell phone researchers to people who couldn’t tell a good poll from a bad one. Unfortunately, those of us who really cared and believed in the survey research technique were shouted down by data collectors and data scientists who had no real concern for rigorous psychological, political, and social science techniques. This resulted in increasingly bad polls, poor political and social analysis, and bad decision-making. So-called younger researchers see persons of my ilk as old-fashioned anachronisms: so be it!”
