Is there a word for studies which confirm existing prejudices and are therefore trumpeted everywhere as providing scientific backing for those prejudices? If not, there should be.
The latest that has come to my attention is a study which is being reported as showing that women don’t want to be friends with women who have had a lot of sexual partners, while men don’t mind as much if their male friends are getting a lot of sex. Entitled “Birds of a Feather? Not When it Comes to Sexual Permissiveness”, the study claims to provide some support for a sexual double standard and suggests these findings might have an evolutionary basis. Male and female participants were provided with a profile of a fictional person, and asked questions pertaining to friendship with them. The only difference between the profiles was how many sexual partners the fictional person had had: two, or twenty.
As with most of these prejudice-confirming studies, though, there are a few problems with how the authors reached their conclusions. As with so many studies of this ilk, there is a huge issue with sampling, and the generalisability of this study. Participants were US college students, the overwhelming majority of whom were economically privileged: only 11% rated themselves as working class or lower middle class. But most importantly of all, gay and bisexual participants were excluded from the study to ensure that none of the results reflected any sort of sexual attraction (because, apparently, queer folk are unable to just be friends with heterosexuals). The findings, therefore, can only ever be generalised to heterosexuals and be applied to heterosexual culture. This is something which has not been reported in any media discussions of the study, probably due to a combination of the fact that journalists tend to regurgitate press releases rather than read studies, and a hefty dose of good old-fashioned heterosexism.
There is also a major problem with the stats used in this study. When conducting statistical tests, we calculate the probability of getting a result at least as extreme as the one observed if there were actually no effect and it was all down to chance. The conventionally-accepted threshold for a statistically significant finding is a 5% probability, and it’s important to report this probability as a p-value, with the percentage converted into a decimal. For example, p=.04 is statistically significant, meaning the result is unlikely to be due to chance. Meanwhile, p=.09 is not significant, as it’s more likely that the finding was just chance. In this study, several findings were reported as being “marginally significant”. The threshold for this was p<.08. “Marginal significance” is a phrase which pisses me the fuck off, as it means it actually isn’t significant by any convention in use, it’s just kind of close and the authors wanted something else to talk about.
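To make that concrete, here’s a toy Python sketch. The p-values below are made up for illustration (they’re not figures from the paper); the point is simply how the conventional p<.05 threshold sorts results, and why a “marginal” p=.07 doesn’t make the cut:

```python
ALPHA = 0.05  # the conventional significance threshold

# Made-up p-values for illustration -- not figures from the paper
reported = {
    "finding A": 0.04,
    "finding B": 0.09,
    "finding C ('marginal')": 0.07,
}

for name, p in reported.items():
    verdict = "significant" if p < ALPHA else "not significant"
    print(f"{name}: p = {p} -> {verdict}")
```

Only finding A clears the bar; the “marginal” p=.07 is, by the convention everyone agreed to, simply not significant.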
Then we run into another problem. When multiple tests are run, the chance of a false positive increases. At a significance threshold of p=.05, if a researcher were to run 100 statistical tests on pure noise, about five would come up as significant just by chance. So it’s important, when you’re doing a lot of statistical tests, to adjust for this. The authors of this paper didn’t. The good news is, there’s enough data there for me to undertake a quick and dirty* adjustment called a Bonferroni Correction. As I said above, the generally-accepted significance threshold is p<.05. A Bonferroni Correction takes this significance threshold and divides it by the number of tests run. Charitably discounting the 64 descriptive stats tests, I count 160 tests undertaken (though I may have missed a few). p<.05 divided by 160 gives us a significance threshold of p<.0003, and from the data tables, it looks like there isn’t much that’s significant by this measure. Some findings are reported as p<.001, although we cannot tell from the information available whether they reach the revised threshold.
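Here’s a quick Python sketch of both points at once. The test count (160) is my count from above; the “null” p-values are simulated as uniform random draws, which is what p-values look like when there’s no real effect anywhere:

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

ALPHA = 0.05
N_TESTS = 160  # my count of tests in the paper, descriptives excluded

# Under the null hypothesis, p-values are uniformly distributed on [0, 1],
# so a batch of tests on pure noise can be faked with uniform draws.
p_values = [random.random() for _ in range(N_TESTS)]

naive_hits = sum(p < ALPHA for p in p_values)

bonferroni_alpha = ALPHA / N_TESTS  # 0.05 / 160 = 0.0003125
corrected_hits = sum(p < bonferroni_alpha for p in p_values)

print(f"Uncorrected (p < {ALPHA}): {naive_hits} 'significant' findings from pure noise")
print(f"Bonferroni  (p < {bonferroni_alpha:.7f}): {corrected_hits} significant findings")
```

Run uncorrected, a pile of noise reliably coughs up a handful of “significant” findings; with the Bonferroni threshold of p<.0003125, essentially none survive. That’s the whole point of correcting.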
For those of you whose eyes glazed over during that dry statistical excursion, take the findings of this study with a hefty pinch of salt.
If you did read the stats paragraph, you’ll notice I’m feeling charitable today, so now it’s time to talk about something I found really interesting in this paper, and which I’m a little sad the authors didn’t examine further. Along with filling in the questionnaires which largely formed the basis of the analysis, participants were also invited to write down things they liked and disliked about the fictional person. Almost everyone, men and women alike, had something negative to say about the person’s sexual behaviour, even when the fictional person had only had two sexual partners. These negative comments included disapproval of extramarital sex and stigmatising phrases like “whore-like tendencies”.
We cannot draw the conclusions the authors and overexcited journalists have drawn: that women don’t want to be friends with slutty women. What we can see, though, is that among heterosexual US college students, there is still a pretty dodgy attitude towards sexuality, with people holding views which are generally quite negative even when a fictional person has had relatively few sexual partners. This is something we need to work on as a society: sex is a thing a lot of people do, and it’s much nicer to do it without judgment from peers. We need to support others in having safe sex and accept people, rather than thinking unpleasant things about them. We’ve all internalised a lot of shit, living as we do in a society with a decidedly wack view of sexuality. And that needs to change.
*Why do I find Bonferroni Corrections quick and dirty? Because I’m a Monte Carlo Simulation gal. Far more fun, you just get to leave a computer running while you go out for lunch and it feels like you’re working. Also, it’s more robust, or something, but mostly it’s the fact you get to pop out for lunch.
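For the curious, here’s a minimal sketch of the sort of Monte Carlo approach I mean: a permutation test, in Python. The ratings data is entirely invented for illustration; the idea is to pool the two groups, shuffle them many times, and see how often chance alone produces a difference in means as big as the one you actually observed:

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible


def permutation_test(group_a, group_b, n_resamples=10_000):
    """Monte Carlo permutation test for a difference in group means.

    Repeatedly shuffle the pooled data into two fake groups and count how
    often a difference at least as large as the observed one turns up by
    chance alone. That proportion is the p-value.
    """
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    hits = 0
    for _ in range(n_resamples):
        random.shuffle(pooled)
        fake_a = pooled[:len(group_a)]
        fake_b = pooled[len(group_a):]
        diff = abs(sum(fake_a) / len(fake_a) - sum(fake_b) / len(fake_b))
        if diff >= observed:
            hits += 1
    return hits / n_resamples


# Invented toy data: e.g. friendship-willingness ratings for the
# two-partner profile versus the twenty-partner profile
two_partners = [6, 7, 5, 6, 7, 6, 5, 7]
twenty_partners = [4, 5, 3, 4, 5, 4, 3, 4]

print(f"p = {permutation_test(two_partners, twenty_partners)}")
```

And yes, with real datasets and enough resamples, this is exactly the sort of thing you leave running while you go out for lunch.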