Let me sum up: Awesome infographic from @jenicarhee and crew detailing the murky underbelly of research...
Let me splain: I love me some science. Human ingenuity at its best has been a source of happytears for both me and my husband. But "science" at its worst can produce some very nasty tears.
I'm not planning on pursuing a career in academia, and concern about the crazy/shady subjectivity of research is one of several reasons that path doesn't appeal to me. This will probably break my advisor's heart once he figures it out (the "research vs. practice" bias is another interesting dynamic in psychology). I've been very lucky because my advisor is a stand-up guy...when his students have gotten results that don't map onto models he's published, he's been really cool about it. But I've heard of students whose results conflicted with their advisor's work getting run into the ground, and run out of their programs.
One of the most disheartening experiences I had was a few years ago. I have one (third author) publication to my name. It had just gotten accepted for publication. I went to a conference on a young researcher grant which included a mentoring session. One professor there had a similar publication with seemingly contradictory results. There were differences in our methodology and the populations being examined that I thought might account for this, and have some interesting implications, but when I tried to engage her in a conversation, her response was a brusque, defensive, "No, you guys must be wrong." That experience (and the time a well-meaning mentor made me take Sesame Street clip art out of a presentation) really soured me.
At my dissertation defense, the stats expert on my committee raised a lot of concerns that weren't specific criticisms of my thesis, but broader concerns about ways that psychologists routinely misuse statistics (violating assumptions of normality, etc.). I'm pretty stats-phobic, but I've learned enough about stats and the way my colleagues use stats to appreciate that there is a lot of gray area, and a lot of choice points that can drastically impact your results. Throw the tenure system and sketchy lab dynamics into the mix, and you've got a powder keg.
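For the skeptical: those "choice points" aren't an abstract worry, and you can see the damage in a toy simulation. The sketch below (my own illustration, not from any particular study; the numbers of outcomes, samples, and trials are arbitrary) simulates a world where there is no real effect at all, then compares an honest analysis of one pre-registered outcome against the common move of testing several outcome measures and reporting whichever one "works."

```python
# Toy simulation of researcher degrees of freedom: with zero true effect,
# cherry-picking the best of several outcome measures inflates the
# false-positive rate well past the nominal 5%.
import math
import random

random.seed(1)

def two_sample_p(n=50):
    """p-value of a two-sided z-test comparing two samples drawn from the
    SAME standard normal distribution (so the true group difference is 0)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = math.sqrt(2 / n)  # known variance of 1 in each group
    z = diff / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

trials = 2000
# One pre-registered outcome per "study":
honest = sum(two_sample_p() < 0.05 for _ in range(trials)) / trials
# Five outcome measures per "study", report the smallest p-value:
fishing = sum(min(two_sample_p() for _ in range(5)) < 0.05
              for _ in range(trials)) / trials

print(f"one pre-registered outcome: {honest:.1%} false positives")
print(f"best of five outcomes:      {fishing:.1%} false positives")
```

The honest rate hovers near the advertised 5%, while the best-of-five rate lands around the theoretical 1 - 0.95^5 ≈ 23%. And that's just one choice point; stack it with flexible stopping rules, outlier exclusions, and covariate choices, and the powder keg metaphor starts to feel generous.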
It's worth noting that the DSM, our diagnostic bible, is based on studies that are vulnerable to all these flaws. It's disheartening because so many people come to a psychology clinic hoping they'll find a magic diagnosis that will obliterate their distress. But our diagnostic system is deeply flawed and subjective. And I think psychology has done itself a disservice by trying to cling to a medical model that is all about symptom obliteration. Trying to get people less focused on what labels best apply to them or how they can quickly numb their symptoms and more focused on changes they want to make in their lives can be very difficult.
Raw data submission is a good call (but can also be systematically faked). And I'd like to believe that some would be willing to publish anonymously for the sake of science, but people do seem to value career advancement over truth/knowledge. Which is a huge problem.
I might jump back into research if a really compelling opportunity to study intervention efficacy arises. But this might be more dangerous for me than anything, since I strongly believe in the benefits of therapy, and experimenter bias is a well-documented catalyst for bad science. Of course, that's coming from studies conducted by people who believe in the dangers of experimenter bias. Confused yet? I always thought it would be cool to win the lottery and run my own research projects, Darwin style--no funders, no universities, just me and a healthy dose of science-love. Mmmmm. Empiricism.