Last week, I posted about a recent article in The Guardian concerning a group of male authors who have decided to hide or obscure their gender in the belief that doing so helps them sell their work. In that post, I asserted that one reason this may be smart marketing is that most readers may be women, and that women may have a preference (explicit or implicit) for reading work written by women (although I acknowledged that I had no hard data to back this up).
It turns out that hard data does support the claim that most readers, and especially most readers of fiction, are women. In 2014, the Pew Research Center published the results of a survey on Reading in America. Conducted in 2013, the survey found that 82% of women reported having read at least one book in 2013, as opposed to only 69% of men. Women also read more books: roughly half again as many per year as men, whether you look at the mean or the median number of books read.
Larger differences between the sexes were reported in a 2007 NPR article entitled "Why Women Read More than Men," which also reported a staggeringly large difference in fiction reading: men account for only 20% of the market for fiction! The article is unclear on how this number was determined, but I'll take it at face value.
What about the claim that people prefer to read fiction written by someone of their own gender? A 2014 article published in The Guardian reviewed Goodreads data. The article was a little unclear about the methods, but it seems that the analysis was limited to the 20,000 most active male readers and the 20,000 most active female readers. In this highly selected sample, men and women turned out to have read the same number of books per year. The article then looked at books published in 2014 and how often they were read by members of this select sample of active readers. It found that 80% of the readers of female-authored books published in 2014 were women, while 50% of the readers of male-authored books were men. This is consistent with my assertion/conjecture. (Interestingly, but not inconsistently with my claim, both men and women gave higher ratings to books authored by women.)
All of the above is consistent with the idea that there are incentives for publishers to solicit and publish works by women, and hence incentives for male writers to portray themselves as women.
Importantly, the men referenced in the earlier Guardian article were established authors with agents and publishers, and it was with the approval of their agents and editors that they were hiding their gender. One might expect, based on the above data, that agents and editors would be seeking out women authors and giving them preference over male authors (or, at least, over those male authors who do not wish to disguise their gender). I've certainly heard anecdotally that there exists an explicit preference for women authors, and some people have reported an implicit bias against male authors because roughly 75% of editors are women (see, for example, this story about the lack of respect for male editors published in The Huffington Post: Why Men Don't Read: How Publishing is Alienating Half the Population).
Set against this, today I read a recent Jezebel post by Catherine Nichols entitled Homme de Plume: What I Learned Sending My Novel Out Under a Male Name, which describes the author's experience submitting query letters to agents under a male nom de plume. Nichols reports that out of 50 queries submitted under the male name she received 17 requests for the manuscript, as opposed to only 2 requests out of 50 queries submitted under her own name. This is striking and, to me, shocking: both because of the disparity in treatment and because 17 requests out of 50 submissions seems like a remarkably high hit rate (to me, someone who has never sent a query letter of my own). I presume that was one hell of an opening chapter or two.
Nichols throws out a couple of possible explanations that do not amount to bias on the part of agents. Perhaps the book was assumed to be, and written off as, "Women's Fiction" when perceived to be written by a woman? She notes that the responses were roughly the same regardless of whether the agent was male or female. Implicit bias appears the most likely answer, but I can't help wishing for more information. Did Nichols keep records of who asked for the manuscript? Would she share the data so that others could analyze it? Was there something about the title of the manuscript or its subject matter that made a difference? I'd like to know more so that we can begin to draw general conclusions from individual reports like this one.
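That said, even the topline numbers Nichols reported (17 requests out of 50 for the pseudonym versus 2 out of 50 for her own name) let us ask how likely such a disparity would be by chance. Here's a minimal sketch in Python using SciPy's Fisher's exact test. Note the baked-in assumption: each query is treated as an independent trial with a fixed per-name response probability, which is a simplification, since the queries were neither identical nor sent simultaneously.

```python
# A minimal sketch: Fisher's exact test on Nichols' reported topline numbers.
# Assumes each query is an independent trial, which may not hold in practice.
from scipy.stats import fisher_exact

# Rows: name used on the query; columns: [manuscript requested, not requested]
table = [
    [17, 33],  # male pseudonym: 17 requests out of 50 queries
    [2, 48],   # own (female) name: 2 requests out of 50 queries
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio: {odds_ratio:.1f}, p-value: {p_value:.2g}")
```

The odds ratio comes out around 12, with a p-value well below 0.001, so if the 100 queries were otherwise comparable, chance alone is an unlikely explanation for the disparity.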
So what does all this mean for authors and readers of science fiction and fantasy? And how does it inform the Puppy debate? I am not 100% sure. I have seen some evidence suggesting that male readers make up a larger proportion of the market for SFF than of the market for fiction more broadly, and perhaps are even the majority of that market. If so, maybe any bias in favor of women is weaker in SFF, or a bias in favor of men is stronger. It would be interesting to know more not only about Catherine Nichols's experience, but also about the experiences of women submitting to SFF outlets. Has anyone tried the Nichols experiment in SFF?
More importantly, can we generalize beyond these anecdotes? Getting human-subjects approval for this kind of thing is hard (which is why some academics have, rather controversially, flouted the human-subjects approval process), but it would be interesting to conduct a larger study in which several different versions of an author query were sent out to multiple agents (and editors) under randomly chosen male, female, and gender-neutral names. The study design could replicate the famous Bertrand-Mullainathan article on racial discrimination, "Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination" (for a non-technical explanation, see the write-up in the New York Times), and we could test for racial bias in the same way. A sketch of the randomization step is below.
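To make the design concrete, here is a minimal sketch of that randomization step, in the spirit of Bertrand-Mullainathan: each agent receives one query variant signed with a name drawn from one of three gender categories. Every name and query text below is a hypothetical placeholder, not a real protocol.

```python
# A minimal sketch of the audit-study randomization: each agent gets one
# (query variant, name category) cell, with cells cycled so the design stays
# balanced across agents. All names and query texts are hypothetical.
import itertools
import random

NAMES = {
    "male": ["James Carter", "Robert Ellison"],
    "female": ["Catherine Moore", "Laura Bennett"],
    "neutral": ["J. K. Carter", "Alex Morgan"],
}
QUERY_VARIANTS = ["query_text_A", "query_text_B", "query_text_C"]

def assign(agents, seed=42):
    """Cycle through all variant-by-category combinations in shuffled order,
    then pick a concrete name within the assigned category at random."""
    rng = random.Random(seed)
    cells = list(itertools.product(QUERY_VARIANTS, NAMES))
    rng.shuffle(cells)
    assignments = []
    for i, agent in enumerate(agents):
        query, category = cells[i % len(cells)]
        assignments.append({
            "agent": agent,
            "query": query,
            "category": category,
            "name": rng.choice(NAMES[category]),
        })
    return assignments

if __name__ == "__main__":
    for row in assign([f"Agent {n}" for n in range(1, 10)]):
        print(row)
```

Crossing query content with name category this way means that, across agents, every query variant appears under every category about equally often, so differences in response rates can be attributed to the name rather than the text; the results could then be compared with the same kind of test as above.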
To do this, we'd need an author or three to help generate the author queries. Is anyone interested?