Statistics
"There are three kinds of lies: lies, damned lies, and statistics."
- Mark Twain
I. Hate. Statistics.
Well, maybe I shouldn’t be so extreme. Statistics do powerful things with scientific data. They give us a structured way of analyzing all those precious numbers and figuring out what they mean. Statistics carry us from a messy matrix of observations to a single measure of significance. They provide a tangible, grounded means of assessing patterns in the world. They are necessary and valuable.
But they also suck. Learning how to use statistics properly is always a challenge, because it’s not just a matter of running the test. It’s figuring out which statistical test is the proper one for any given dataset - and there are a lot of tests out there! Sometimes I’m convinced I’ve conducted a statistical test correctly, only for someone to point out a test that fits my data even better than the one I used. Then it’s back to square one.
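Just to make the "which test?" headache concrete, here’s a tiny made-up illustration in Python using scipy - it has nothing to do with my actual analysis, and the data are fake. Even in the simplest two-sample comparison, a quick normality check can send the same data toward entirely different tests:

# A made-up illustration, not the analysis from my paper: the same two
# samples can call for different tests depending on their distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.lognormal(mean=0.0, sigma=0.5, size=30)  # skewed fake data
group_b = rng.lognormal(mean=0.3, sigma=0.5, size=30)

# Student's t-test assumes roughly normal data within each group;
# Shapiro-Wilk is one quick way to check that assumption.
_, p_a = stats.shapiro(group_a)
_, p_b = stats.shapiro(group_b)

if p_a > 0.05 and p_b > 0.05:
    stat, p = stats.ttest_ind(group_a, group_b)
    print("t-test: p =", round(p, 4))
else:
    # Non-parametric fallback when normality looks doubtful.
    stat, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
    print("Mann-Whitney U: p =", round(p, 4))

And that’s the easy case. Real experimental designs - nested factors, repeated measures, unbalanced groups - multiply the options fast.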
This week I had the experience of having to re-evaluate and redo my statistics. I had written a paper about the experiment I conducted last summer using Crepidula fornicata larvae. The paper was reviewed by the editor and three anonymous scientists, who provided comments and suggestions for improvement. Most of the paper came through the review process intact (well, more or less), but not my statistics.
As it turns out, my experimental design required a statistical test I was not familiar with. Sure, I had heard its name before, but it’s one of those tests you mostly skip over in a semester-long stats class (or, in my case, while reading an 800-page textbook over a summer in grad school) because the textbook says it’s rarely used. It only applies in a small number of very specific cases - but as it turns out, my experiment is one of those cases.
After a good couple of days of reading, thinking, and researching, I think I’ve used the test correctly. I do have to admit, my data analysis is much clearer now. I’ve revised the paper and sent it back to the journal. We’ll see what comes of it in the next round of review!