An analysis of published studies across a range of biological specialties shows that, when data are reported by sex, critical statistical analyses are often missing and results are likely to be misreported.
The journal eLife published the analysis, performed by neuroscientists at Emory University, which encompassed studies from nine different biological disciplines involving human or animal subjects.
“We found that when researchers report that males and females respond differently to a manipulation, such as a drug treatment, 70 percent of the time the researchers did not statistically compare those responses at all,” explains senior author Donna Maney, professor of neuroscience in Emory’s psychology department. “In other words, an alarming percentage of claims of sex differences are not supported by sufficient evidence.”
In articles lacking the appropriate statistical evidence, she adds, sex-specific effects were claimed in almost 90 percent of cases. In contrast, authors who did statistically test for sex-specific effects reported them only 63 percent of the time.
“Our results suggest that researchers are predisposed to find differences between the sexes, and that sex-specific effects are probably overstated in the literature,” explains Maney.
The problem is widespread, and it includes Maney’s own earlier work. “Once I realized how common this was, I went back and checked my own published articles, and there it was,” she says. “I myself have claimed a difference between the sexes without statistically comparing the males and females.”
Maney stresses that the problem should not be dismissed just because it is common. It is becoming increasingly serious, she says, due to growing pressure from funding agencies and journals to study both sexes, and the medical community’s interest in developing sex-specific treatments.
Maney is a behavioral neuroendocrinologist interested in how research on sex differences shapes public opinion and policy. High standards of evidence are needed, she says, to ensure that everyone has access to the care that is right for them.
Yesenia Garcia-Sifuentes, a PhD student in Emory’s Neuroscience Graduate Program, is co-author of the eLife analysis.
Better training and better oversight are needed to ensure scientific rigor in research on sex differences, the authors write: “We call on funding agencies, journal editors and our colleagues to raise the bar when it comes to testing for and reporting sex differences.”
Historically, biomedical research has often included only a single sex, usually males. In 1993, Congress enacted a policy to ensure that women were included in clinical studies funded by the National Institutes of Health whenever possible, and that studies were designed so that it was possible to analyze whether the variables under study affect women differently than other participants.
In 2016, the NIH announced a policy that also requires consideration of sex as a biological variable when possible in the basic biological studies it funds, whether that research involves animals or humans.
“If you’re trying to model anything that’s relevant to a general population, you have to include both sexes,” Maney explains. “There are many ways animals can vary, and sex is one of them. Omitting half the population makes a study less rigorous.”
As more studies take sex differences into account, Maney adds, it’s important to ensure that the methods behind their analyses are sound.
For the eLife analysis, Garcia-Sifuentes and Maney examined 147 studies published in 2019 to investigate what is commonly used as evidence for sex differences. The studies spanned nine different biological disciplines and included everything from field studies of giraffes to immune responses in humans.
The studies that were analyzed all included both males and females and separated the data by sex. Garcia-Sifuentes and Maney found that the sexes were compared, either statistically or by assertion, in 80 percent of the articles. Of those articles, 70 percent reported a sex difference, and about half presented it as a major finding.
However, many of the studies that reported a sex difference made a statistical error. For example, if researchers find a statistically significant effect of a treatment in one sex but not the other, they often claim a difference between the sexes even though the effect of the treatment was never statistically compared between males and females.
The problem with this approach is that the separate statistical tests performed within each sex cannot give simple “yes” or “no” answers as to whether the treatment had an effect.
“Comparing the results of two independent tests is like comparing a ‘maybe’ with a ‘don’t know’ or ‘too early to tell,’” explains Maney. “You are just guessing. To show real evidence that the response to treatment differs between females and males, you must demonstrate statistically that the effect of the treatment depends on sex. In other words, to claim a sex-specific effect, you must show that the effect in one sex is statistically different from the effect in the other.”
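The distinction Maney describes can be sketched in code. The snippet below uses a hypothetical simulated dataset (the variable names, sample sizes and effect sizes are illustrative, not taken from the study): the common error is to run a separate test within each sex and compare significance labels, while the correct approach is to test the sex-by-treatment interaction in a single model.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 30  # hypothetical: animals per sex per treatment group

# Simulate a treatment that raises the outcome by 1.0 in BOTH sexes,
# so any apparent sex difference in this data is pure noise.
rows = []
for sex in ("F", "M"):
    for treated in (0, 1):
        for y in rng.normal(loc=1.0 * treated, scale=1.5, size=n):
            rows.append({"sex": sex, "treated": treated, "y": y})
df = pd.DataFrame(rows)

# The common error: one t-test per sex, then comparing the verdicts
# ("significant in males, not in females") as if that tested a difference.
p_by_sex = {
    sex: ttest_ind(g.loc[g.treated == 1, "y"],
                   g.loc[g.treated == 0, "y"]).pvalue
    for sex, g in df.groupby("sex")
}

# The appropriate test for a sex-specific effect: fit one model and
# examine the sex-by-treatment interaction term directly.
model = smf.ols("y ~ C(sex) * treated", data=df).fit()
p_interaction = model.pvalues["C(sex)[T.M]:treated"]

print("per-sex p-values:", p_by_sex)
print("interaction p-value:", p_interaction)
```

The per-sex p-values can disagree by chance even when the true effect is identical in both sexes; only the interaction p-value addresses whether the effect differs by sex.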
On the other hand, the eLife analysis also found strategies that might mask sex differences, such as pooling data from males and females without first testing for a difference. Maney recommends reporting the size of the difference, that is, the extent to which the sexes overlap, before aggregating the data. She provides a free online tool that lets researchers visualize their data and assess the magnitude of the difference.
“At this point in history, the stakes are high,” says Maney. “Wrong results can affect health care decisions in dangerous ways. Particularly in cases where sex differences may be used to decide what treatment a person receives for a particular condition, we must proceed with caution. We must maintain a very high standard of scientific rigor.”
– This press release was originally posted on the Emory Health Sciences website