One of the (unwitting and female) participants in the study is raising some methodological issues on a theatre-related listserv. Anonymized money quote:
... First, the samples were incredibly short. I remember being asked whether or not I thought the characters were likable, and thinking, "How the hell do I know? They each have, like, three lines." The samples were not proofed for spelling, punctuation, and grammatical errors-- not the deliberate kind, btw; after all these years, I can surely tell the difference-- and that made me, as it always does, grumpy. That, in combination with the shortness of the samples, made me conclude that these were in all likelihood very amateur scripts. The shortness is relevant because the samples weren't even close to long enough to get a feel for the arc of the play, the characters, or even to adequately assess the quality of the writing, errors aside. I had very little to go on. I know and like the writing of the women whose samples, as it turns out, I was reading, and, in retrospect, the samples, in my opinion, did not reflect the quality I'm used to seeing from them.
Additionally, we were asked to rate how likely we were to slot such a play and how much we thought our company and marketing people would be behind it, but we were never asked why we thought those things. I would have appreciated being able to-- at the very least-- choose from a list of things detailing why we were interested or uninterested. "Wouldn't fit in our theatre space" and/or "No roles for our resident actors" are honest, practical answers that complicate the idea of gender bias in assessing interest. This is different than "It sucked," which, of course, is very prone to biases of all sorts.
I remember also being asked how well we thought this or that script fit our company's mission, but never asked what that mission was or why we thought it would or wouldn't fit. Some companies, my own included, have very specific missions that eliminate certain scripts regardless of quality. This score must have been included in the aggregate, and I think the honest answers would complicate the notion of gender bias. Many of us surely rated some scripts very low in this category for reasons other than quality, just as some would surely rate them higher for reasons other than quality.
The writer also mentions that she recommended a female playwright, Sheila Callaghan, to the people running the study. She adds that she is generally supportive of the effort behind the study and is not dismissive of the idea of gender bias in theatre; she would simply like to see the study redone with better methodology.
Having looked over some of the study, I personally wish they had involved more theatre people in devising it. A professional theatre person would have told them that doollee at its best is roughly as reliable as the low end of Wikipedia, and someone with experience in the institutional dramaturgy world could have shown them how to design the gendered submissions in a more true-to-life way.
That said, I certainly don't think one e-mail to a listserv invalidates the study. In general terms the bias study still seems sound, and its findings square with the facts on the ground regarding the fate of female playwrights today.