Like FARMS before it, the Interpreter has developed an impenetrable groupthink about certain topics, buttressed by the citation cartel. I came across the following article about a similar development in another field. I think you'll find the comparison fascinating. I put a few highlights in bold. (Hint: substitute "Mesoamerican" for "left-wing.")
__________________________________________
At the back of a small room at Coogee Beach, Sydney, I sat watching as a psychologist I had never heard of paced the room gesticulating. His voice was loud. At over six feet tall, he was an imposing presence. It was Lee Jussim. He had come to the Sydney Symposium of Social Psychology to talk about left-wing bias in social psychology.
Left-wing bias, he said, was undermining his field. Graduate students were entering the field in order to change the world rather than discover truths.1 Because of this, he said, the field was riddled with flaky research and questionable theories.
Jussim’s talk began with one of the most egregious examples of bias in recent years. He drew the audience’s attention to the paper: “NASA faked the moon landing – therefore (climate) science is a hoax.” The study was led by Stephan Lewandowsky, and published in Psychological Science in 2013. The paper argued that those who believed that the moon landing was a hoax also believed that climate science was a fraud. The abstract stated:
We…show that endorsement of a cluster of conspiracy theories (e.g., that the CIA killed Martin Luther King or that NASA faked the moon landing) predicts rejection of climate science as well as the rejection of other scientific findings above and beyond commitment to laissez-faire free markets. This provides confirmation of previous suggestions that conspiracist ideation contributes to the rejection of science.
After describing the study and reading the abstract, Jussim paused. Something big was coming.
“But out of 1145 participants, only ten agreed that the moon landing was a hoax!” he said. “Of the study’s participants who thought that climate science was a hoax, 97.8% did not think that the moon landing was also a hoax.”
His fellow psychologists shifted in their seats. Jussim pointed out the lengths to which the authors had gone to disguise their actual data. Statistical techniques appeared to have been chosen that would hide the study’s true results. And it appeared that no peer reviewer or journal editor had scrutinized the study closely enough to identify the bold misrepresentations.
While the authors’ political motivations for publishing the paper were obvious, it was the lax attitude on the part of peer reviewers – Jussim suggested – that was at the heart of the problems within social psychology. The field had become a community in which political values and moral aims were shared, leading to an asymmetry in which studies that reinforced left-wing narratives had come to be disproportionately represented in the literature. And this was not, to quote Stephen Colbert, because “reality had a liberal bias”. It was because social psychology had a liberal bias.
Jussim explained that within the field, those on the left outnumbered those on the right by a ratio of about 10:1. This meant that even if left-leaning and right-leaning scientists were equal in their bias, there would be at least ten times more research biased towards validating left-wing narratives than conservative ones. Adding in the apparent double standard in the peer review process (where studies validating left-wing narratives seemed to be easier to publish), the bias within the field could vastly exceed 10:1. In other words, research was becoming an exercise in groupthink.
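To make that arithmetic concrete, here is a minimal back-of-envelope sketch. Only the 10:1 ratio comes from Jussim's argument; the bias rate and acceptance rates below are hypothetical numbers chosen purely for illustration.

```python
# Back-of-envelope sketch of the 10:1 argument (illustrative only).
# Everything except the 10:1 researcher ratio is a made-up assumption.

left_researchers, right_researchers = 10, 1     # the ~10:1 ratio Jussim cites
bias_rate = 0.2                                  # assume both camps produce biased work at the same rate
accept_left, accept_right = 0.5, 0.25            # hypothetical double standard in peer review

biased_left_published = left_researchers * bias_rate * accept_left
biased_right_published = right_researchers * bias_rate * accept_right

# With equal bias rates but asymmetric acceptance, the published imbalance
# exceeds the raw 10:1 ratio of researchers.
print(biased_left_published / biased_right_published)   # -> 20.0
```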
...
Very early in his career, Jussim faced a crisis of sorts. An early mentor, Jacquelynne Eccles, handed him some large datasets gathered from school children and teachers in educational settings. He tried testing the social psychology theories he had studied, but consistently found that his data contradicted them.
Instead of finding that the teachers’ expectations influenced the students’ performances, he found that the students’ performances influenced the teachers’ expectations. This data “misbehaved”. It did not show that stereotypes created, or even had much influence on, the real world. The data did not show that teachers’ expectations strongly limited students’ performances. It did not show that stereotypes became self-fulfilling prophecies. But instead of filing his results away into a desk drawer, Jussim kept investigating – for three more decades.
Some months after Jussim’s presentation at the 2015 Sydney Symposium, the results of the Reproducibility Project in psychology were announced. This project found that out of 100 psychological studies, only about 30%-50% could be replicated.
The reproducibility project follows in the wake of a crisis that has engulfed social psychology in recent years. A slew of classic studies has never been fully replicated. (Replication is a benchmark of the scientific method. If a study cannot be replicated, it suggests that the results were a fluke, and not an accurate representation of the real world.)
For example, Bargh, Chen and Burrows published one of the most famous experiments of the field in 1996.3 In it, students were divided into two groups: one group was primed with the stereotype of elderly people; the other received no priming (the control group). When the students left the experiment, those who had been primed with the stereotype of the elderly walked down a corridor significantly more slowly than the students assigned to the control group. While it has never been completely replicated, it has been cited over 3400 times. It also features in most social psychology textbooks.
Another classic study, by Darley & Gross, published in 1983, found that people applied a stereotype about social class when they saw a young girl taking a math test, but did not when they saw a young girl not taking a math test.5 Two attempts at exact replication have failed.6 And both replication attempts actually found the opposite pattern – that people apply stereotypes when they have no other information about a person, but switch them off when they do.6
In the field of psychology, what counts as a “replication” is controversial. Researchers have not yet reached a consensus on whether a replication means that an effect of the same size was found, that an effect was found within the same confidence intervals, or simply that an effect was found in the same direction. How one defines replication will likely affect whether one sees a “replication” as successful or not. So while some of social psychology’s classic studies have not been fully replicated, there have been partial replications, and a debate still rages around what exactly constitutes one. But here’s the kicker: even in the partial replications of some of these stereotype studies, the research has been found to be riddled with p-hacking.4 (P-hacking refers to the exploitation of researcher degrees of freedom until a desirable result is found.)
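To illustrate what p-hacking looks like in practice, here is a minimal simulation. It is not drawn from any of the studies above; the group sizes and the number of analyses are hypothetical, chosen only to show how taking many looks at pure noise and reporting the best one yields “significant” results far more often than the nominal 5%.

```python
# Minimal p-hacking simulation (illustrative; unrelated to the cited studies).
# There is no true effect, yet running many analyses and keeping the smallest
# p-value makes "significant" findings routine.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def hacked_experiment(n=30, analyses=20):
    """Take `analyses` independent looks at pure noise; report the smallest p-value."""
    p_values = []
    for _ in range(analyses):
        a = rng.normal(size=n)   # "treatment" group, no real effect
        b = rng.normal(size=n)   # "control" group
        p_values.append(stats.ttest_ind(a, b).pvalue)
    return min(p_values)

false_positives = sum(hacked_experiment() < 0.05 for _ in range(1000))
print(f"{false_positives / 1000:.0%} of null experiments look 'significant'")
# With 20 looks at noise, roughly 1 - 0.95**20 ≈ 64% clear the p < .05 bar.
```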