• Report finds 8 out of 21 social science studies, 38%, couldn't be replicated
https://www.indy100.com/article/scientific-studies-accuracy-questioned-human-behaviour-8514221 Researchers have found that of the journal papers they sampled, nearly two fifths of the results could not be replicated. The report, published in Nature Human Behaviour, looked at 21 social science studies published between 2010 and 2015 in the journals Science and Nature.

Researchers led by Brian Nosek of the University of Virginia tested the studies by conducting the same experiments, using methods approved by the original authors, to test their conclusions. They increased the sample sizes by approximately a factor of five in an effort to increase accuracy. Out of the 21 studies, eight could not be replicated - that's 38 per cent. In addition, in the 13 studies whose results they did manage to replicate, the effects measured were just half of what had been initially reported - though Nosek believes this is because of the increase in sample size. It is widely accepted that it is difficult to make big generalisations about social, scientific or medical topics - any topic - with a small sample size.

These results feed into a much wider scientific trend called the reproducibility crisis. The field of psychology research has taken the brunt of this scepticism: a 2015 report detailed that just 36 per cent of 97 studies could be replicated. Richard Klein, who has previously worked with Nosek, responded to the analysis and told Nature: "The emphasis on novel, surprising findings is great in theory. But in practice it creates publication incentives that don't match the incremental, careful way science usually works."

Some scientists are objecting to the way Nosek reproduced their findings, but one, Will Gervais, agreed that his 2011 study was "downright silly". He told Vox: "It was a really tiny sample size, and barely significant… I'd like to think it wouldn't get published today."
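The halved effect sizes Nosek found are roughly what you'd expect if journals mainly publish "significant" results from small samples: among small studies, mostly the ones that happened to overestimate the effect clear the significance bar. Here's a rough simulation sketch of that selection effect (all numbers are made up for illustration, not taken from the paper):

```python
import random
import statistics

random.seed(0)

def mean_published_effect(n, true_effect=0.2, sims=4000):
    """Run many simulated two-group studies (n subjects per group) and
    keep only the ones crossing a significance threshold, mimicking a
    journal that publishes only 'significant' findings."""
    published = []
    for _ in range(sims):
        treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = (2 / n) ** 0.5       # standard error of the difference (sd = 1)
        if diff > 1.96 * se:      # crude one-sided z-test
            published.append(diff)
    return statistics.mean(published)

small = mean_published_effect(n=25)    # a typical small original study
large = mean_published_effect(n=125)   # five-fold replication sample
print(f"published effect, n=25:  {small:.2f}")
print(f"published effect, n=125: {large:.2f}")
# both overshoot the true effect of 0.2, but the small-sample
# estimate is inflated far more
```

With the larger sample the threshold is easier to clear honestly, so the published average sits much closer to the true effect, which is consistent with replications at five times the sample size measuring about half the original effects.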
Nosek also set up a separate experiment, a 'prediction market', in which researchers bet on which results could be reproduced and which couldn't; their bets closely tracked the roughly 61 per cent of studies that were actually replicated. "If the original result was surprising, participants report having a sense that it is less likely to be true," Nosek said. Hence the aphorism that extraordinary claims require extraordinary evidence.
It is at least promising that social scientists and their curriculums spend a lot of time on methodology nowadays, but there are still pretty big obstacles to getting good research done.
"The field of psychology research has taken the brunt of this scepticism, and a 2015 report detailed that just 36 per cent of 97 studies could be replicated." That study has since been discredited, so it's pointless and actively counterproductive to even mention it as a source of evidence.
Social studies get too much shit for this; they're an easy and popular target, but the replication problem is plaguing many fields of science, including the typically prestigious ones people are likely to think of as rigidly factual, like medicine. https://en.wikipedia.org/wiki/Replication_crisis#In_medicine Anyone who has ever looked into this has probably seen some of the famous, controversial quotes by former EICs of respected journals, such as this one from the former editor of The Lancet: "The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness."
The main issue isn't bad science being done per se; it's more that these studies tend to be done as advertising campaigns or lobbying attempts with predetermined outcomes.
Why are we even paying paywall fees for papers anymore? If publishers can't even do proper peer review, what are we paying them for?
But can this study be replicated? 🤔
So the study discrediting many social science studies is, in itself, discredited? Love it.
Not the study, the article that mentions it; afaik the two reports have nothing to do with each other.
It's no surprise; studying human behaviour is hard to orchestrate in an ethical way and on a large scale, so setting proper controls in place is near impossible. I see too many shitty social science studies that rely on surveys and interviews, both of which are pretty unreliable methodologies.
Another problem IIRC is that there just isn't much incentive for scientists to replicate studies; it's a lot harder for them to get grants and funding to replicate something than it is to make their own "discovery" even if it's complete nonsense.
The 2015 study that claimed social science couldn't be replicated itself couldn't be replicated, and it was found that the data in the study did not adequately support its conclusions. I don't know if that's the case with this new study, though; clearly someone qualified needs to investigate.
One useful thing to keep in mind if you need access to a paper: email the primary author and ask for a copy. Almost no publication actually gives the researcher any of the paywall fee you pay. There's no incentive (short of the researcher fucking up and signing IP rights away to the publication) for an author not to send you the paper. If they refuse to send you a copy, it's probably a good time to question their capacity as a scientist. A fair few of them are more than happy to share knowledge; it's why they do what they do, after all.
They're an easy target because outside of the forensic and anthropological venues they're utter tripe. You can get a Doctorate spouting actual and literal nonsense should you be best buds with the board getting you the certification, and in the Pacific Northwest and California, for example, this is a thing that can be and has been done.