A report claiming that close to 10 percent of children in public schools—more than 4.5 million—endure sexual abuse or misconduct by school employees has recently touched off a media-fueled panic.

However, “Educator Sexual Misconduct,” by Carol Shakeshaft of Hofstra University, is seriously flawed, both in its methodology and in the way researchers defined sexual abuse and misconduct.

Rather than critically evaluating the report, the media have instead been trumpeting its frightening figures of abuse. Parents deserve better; they deserve the facts.

In the report, Shakeshaft defines “sexual abuse” in an extremely broad manner to include “physical, verbal, or visual” behavior by an educator ranging from sexual intercourse to telling inappropriate jokes. One study cited in the report included sexual comments, gestures or looks in its definition of sexual abuse, and asked students whether other students had committed such acts toward them or toward one another.

In his preface to the report, Deputy Secretary of Education Eugene Hickock observes that the terms “sexual abuse” and “sexual misconduct” are used interchangeably, which he calls “potentially confusing.” Some data analysts use loose definitions in order to include differing but relevant studies in their analysis. This does not appear to be Shakeshaft’s intent.

Although the subtitle of Shakeshaft’s report is “a synthesis of existing literature,” the report is not a meta-analysis. A meta-analysis combines the data provided by multiple sources while meticulously acknowledging and adjusting for differences in how those sources collected or defined the data.

For example, they may have studied vastly divergent populations.
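To make the distinction concrete, here is a minimal sketch of what an actual meta-analysis does with divergent studies: it weights each estimate by the inverse of its sampling variance, so that a large, precise study counts for far more than a small one. All the numbers below are hypothetical, chosen only to illustrate the technique; they are not figures from the report.

```python
# Illustrative only: inverse-variance pooling of prevalence estimates.
# The studies and their figures are hypothetical.
def pooled_prevalence(studies):
    """studies: list of (prevalence, sample_size) tuples."""
    weights = []
    for p, n in studies:
        var = p * (1 - p) / n      # binomial sampling variance
        weights.append(1 / var)    # precise studies get large weights
    total_w = sum(weights)
    return sum(w * p for w, (p, _) in zip(weights, studies)) / total_w

# A hypothetical large study finding 5% and a small one finding 40%:
estimate = pooled_prevalence([(0.05, 2000), (0.40, 185)])
# The large study dominates, so the pooled figure stays near 5 percent
# rather than drifting toward the naive average of 22.5 percent.
```

The point of the sketch is that pooling is not averaging: a defensible composite figure requires weighing each study’s precision, which is exactly the work the report does not do.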

Shakeshaft states that, because so few empirical studies in this area exist, meta-analysis is not merited. Instead, she offers a review of “existing literature” which purportedly excludes “discussions ... not based on data.” Which literature does she review?

The use of sources in the report is no less confusing than its definitions. There are nearly 900 citations to news reports from Australia, Britain, Canada and all corners of the U.S., which date from 1989 to 2003. Some citations have little bearing on the report’s focus, e.g., accounts of abuse by priests. Presumably the citations are meant to indicate the prevalence of the claimed sexual abuse. If so, the attempt fails. Several hundred stories stretched over 15 years and three continents do not point to 4.5 million American children being abused today.

Wisely, Shakeshaft does not base her conclusions on evidence that comes so close to anecdotal accounts. Instead, Shakeshaft looks at five Canadian/U.K. and 14 U.S. studies, including two early works of her own. For quantitative results, her focus narrows to five studies that she summarizes; one study is listed twice to accommodate two of its questions.

The studies make widely disparate claims. For example, the estimated percentages of U.S. children who have experienced any educator sexual misconduct range from 3.7 to 50.3 percent. (The highest figure derives from a sample of just 185 volunteer students—what is called a self-selecting sample—and, specifically, from a question asking those students to estimate the rate of sexual harassment endured by other students.) There is no apparent attempt to reach a composite statistic.
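A back-of-envelope calculation shows why a sample of 185 cannot carry a national claim. The sample size and the 50.3 percent figure come from the report as described above; the margin-of-error arithmetic is standard, and it is the charitable case, since it assumes a random sample:

```python
# Standard margin of error for a proportion. n and p are the figures
# discussed above; everything else is textbook arithmetic.
import math

n, p = 185, 0.503
se = math.sqrt(p * (1 - p) / n)   # standard error of a proportion
margin = 1.96 * se                # half-width of a ~95% confidence interval
# margin comes out to roughly 0.07, i.e. about 7 percentage points either
# way -- and even that guarantee holds only for a random sample. A
# self-selecting sample of volunteers offers no such guarantee at all.
```

In other words, even before the self-selection problem, the study’s own size limits it to an estimate of “somewhere in the mid-40s to high-50s percent,” hardly a foundation for a national prevalence figure.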

One of the primary sources in the report is a study by the American Association of University Women. The AAUW study found the prevalence of abuse to be 9.6 percent—or, rounded off, 10 percent. Thus, the 10 percent figure cited in this latest report seems to be based exclusively on one study and not on “the existing literature” as the title of the report suggests.
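The arithmetic connecting the headline numbers is worth making explicit. The 9.6 percent prevalence and the 4.5 million figure are from the sources discussed above; the enrollment figure below is simply what those two numbers jointly imply:

```python
# Deriving the headline figures. prevalence and affected are from the
# report; implied_enrollment is computed, not cited.
prevalence = 0.096               # AAUW finding: 9.6 percent
affected = 4_500_000             # children cited in the report
implied_enrollment = affected / prevalence
# implied_enrollment works out to about 46.9 million public school
# students -- that is how "close to 10 percent" becomes "more than
# 4.5 million."
assert round(prevalence, 1) == 0.1   # 9.6% rounds to the headline 10%
```

So the entire chain, from one study’s 9.6 percent to the alarming 4.5 million, is a single multiplication against national enrollment, with every step resting on the one AAUW survey.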

The precise title of the AAUW study as used in the new report is “AAUW data (2000) and Shakeshaft secondary analysis (2003).” “Secondary analysis” presumably means that Shakeshaft re-interpreted AAUW’s original data—thus making the 10 percent figure derive partially from her own prior work. In short, she is quoting herself.

Yet, Shakeshaft concludes that “because of its carefully drawn sample and survey methodology,” the American Association of University Women report is the “most accurate data available.”

Among the questions asked of students in the AAUW study was, “during your whole school life, how often, if at all, has anyone (this includes students, teachers, other school employees, or anyone else) done the following things to you when you did not want them to? Made sexual comments, jokes, gestures or looks.” A list of 13 other behaviors follows.

The question seems to be the nexus at which sexual abuse in school is established. Thus, the 10 percent figure properly includes “sexual abuse” by fellow students and other non-school employees. That fact alone invalidates the AAUW study for Shakeshaft’s purposes. It also invalidates her conclusions.

To their credit, some voices in the media have noted that the AAUW data is the sole source of Shakeshaft’s conclusions. But few questions have been asked about the validity of the data itself or the appropriateness of Shakeshaft’s use of it.

The AAUW has been widely accused of using biased studies to further a gender feminist agenda. Controversy has specifically swirled around the AAUW’s 1992 study “How Schools Shortchange Girls,” which was pivotal in creating nationwide policies that give preference to girls in public school. This study was eviscerated by Christina Hoff Sommers’ 1994 book “Who Stole Feminism?”; academic articles such as Judith Kleinfeld’s “The Myth That Schools Shortchange Girls: Social Science in the Service of Deception” have brought the methodology, honesty and intent of that study into further question.

At this point, studies from the AAUW merit critical scrutiny. Before participating in what might be politically motivated panic-mongering, the media should actually read the report. Strangely enough, this seems to be a rare practice.