Reinhart studied physics as an undergraduate but did a master's in statistics after realizing the problems that misunderstandings of statistics were causing in physics and in science as a whole. He is now working on a PhD in statistics at Carnegie Mellon.
I don’t know if this would be the best book for someone with no background knowledge whatsoever. While the author does a good job of explaining many of the concepts, his target audience is people who’ve encountered bad statistics in advanced research, such as medical studies. This book is a good start for those who want insight into the mind of a statistician, even if their math skills aren’t quite there. Although the book isn’t numbers-heavy, I still got lost a few times and had to re-read some of the passages. That said, I really like the author’s writing style. All I want is to be able to write about and articulate difficult concepts as clearly, concisely, and even as humorously as he does.
One of the main takeaways of the book was:
“Scientists may be superhumanly caffeinated, but they’re still human, and the constant pressure to publish means that thorough documentation and replication are ignored” (Reinhart, 2015).
I remember learning about the difficulty psychologists have faced in replicating their results (and I’m glad to hear that, as a discipline, they are improving their efforts to make sure studies and results can be replicated), and this book explained a lot of the factors at play. For example, Reinhart discusses statistical power and how not all journals check whether researchers collected enough data to detect an effect in the first place. The book also goes in depth into the various issues that can make statistical significance a poor measure of whether a phenomenon is actually occurring. Furthermore, while I found the debate about publication bias in studies of publication bias, as well as “False-Positive Psychology” by Joseph P. Simmons (doi:10.1177/0956797611417632), hilarious, I too am now worried about the state of research.
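To make the statistical power idea concrete, here is a quick simulation of my own (it is not from the book, and every number in it, such as the true heads-rate of 0.6 and the sample sizes, is an illustrative assumption): it estimates how often a simple one-sided z-test detects a genuinely biased coin at two different sample sizes.

```python
# Illustrative sketch of statistical power (my own example, not Reinhart's):
# how often does a one-sided z-test at alpha = 0.05 detect a coin that is
# truly biased toward heads? All specific numbers here are assumptions.
import math
import random

def detects_bias(n, true_p=0.6, null_p=0.5):
    """Flip a coin with true heads-rate true_p n times, then test H0: p = null_p."""
    heads = sum(random.random() < true_p for _ in range(n))
    se = math.sqrt(null_p * (1 - null_p) / n)   # standard error under H0
    z = (heads / n - null_p) / se
    return z > 1.645  # one-sided critical value for alpha = 0.05

def estimated_power(n, trials=10_000):
    """Fraction of repeated experiments in which the real bias is detected."""
    random.seed(0)  # fixed seed so the estimate is reproducible
    return sum(detects_bias(n) for _ in range(trials)) / trials

# A small study frequently misses the real effect; a larger one
# detects it far more often.
print(estimated_power(25))
print(estimated_power(250))
```

The point of the sketch is the one Reinhart makes: an underpowered study can fail to find a real effect most of the time, so a non-significant result from a small sample says very little on its own.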
One suggestion from Statistics Done Wrong is to create more options for making scientific data accessible, so that people don’t try the same failed methods again and again and can learn from others’ experiences. In other words, we need ways to share the raw data and software code used, even for studies that were never published in a journal, before the format they are in becomes obsolete. That is often a major pain to do. Research Data Service is the best group to talk to if you want to learn more about campus efforts to solve this problem and about ways to store scientific data and make it more available.
Another problem mentioned in the book is the overall lack of statistical knowledge and education. Here at Illinois, the Scholarly Commons is just one of many resources available. For more specific technical questions, we recommend asking a statistician through the statistics department’s consulting service; however, this service costs money unless you work with the STAT 427 students during the spring semester. There are also free resources and workshops through ATLAS-CITL and online tutorials through Lynda.
Overall, Statistics Done Wrong is an interesting read and a good starting point for those who want a better understanding of what to look out for when using statistics in research, and of ways to improve how research is done as a whole.