Effect Size


Q: What would be a simple way to increase the likelihood that published scientific research is meaningful?
A: Effect Size

I believe many peer-reviewed scientific journals should require that researchers do more than report whether a correlation between variables has been detected and whether it is statistically significant (which simply means that the result is unlikely to have occurred by chance alone). Researchers should also have to provide an extremely important statistic called the effect size.

This could be a Pearson r correlation, Cohen's d, Hedges' g, Glass's delta, or an odds ratio, depending on the discipline and the kind of results being discussed.

The effect size (which can be qualitatively thought of as small, medium, or large) describes the overall magnitude of a treatment effect. A correlation may be strong enough to pass a significance test yet so small that it has little impact in real-world situations outside the lab.
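To make this concrete, here is a minimal sketch in Python of computing Cohen's d, one of the measures named above, using the pooled standard deviation of two groups. The function and the example numbers are my own illustration rather than data from any particular study; by Cohen's conventional benchmarks, a d of roughly 0.2 is small, 0.5 medium, and 0.8 large.

    import numpy as np

    def cohens_d(group_a, group_b):
        """Cohen's d: standardized mean difference between two groups,
        computed with the pooled sample standard deviation."""
        a = np.asarray(group_a, dtype=float)
        b = np.asarray(group_b, dtype=float)
        n_a, n_b = len(a), len(b)
        # Pooled standard deviation from the two sample variances (ddof=1)
        pooled_sd = np.sqrt(((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1))
                            / (n_a + n_b - 2))
        return (a.mean() - b.mean()) / pooled_sd

    # Illustration: with a large enough sample, a trivial shift in the mean can
    # still be "statistically significant", yet the effect size stays negligible.
    rng = np.random.default_rng(0)
    control = rng.normal(loc=100.0, scale=15.0, size=10_000)
    treated = rng.normal(loc=101.0, scale=15.0, size=10_000)
    print(f"Cohen's d = {cohens_d(treated, control):.2f}")  # ~0.07, a negligible effect

The point of the example is that the p-value and the effect size answer different questions: the first asks whether an effect is detectable at all, the second asks whether it is big enough to matter.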

Unfortunately, few papers currently include this statistic, especially ones where the correlation is weak, and even fewer journals require it. If every editor began demanding, as a criterion for publication, that authors calculate the effect size and then explain the statistical results for each experimental situation, the number of studies investigating nearly meaningless correlations would drop immediately, saving everyone (taxpayers, researchers, students, the media, the public at large) both time and money.