Herceptin, a therapeutic drug for breast cancer, was trumpeted across the news last week after a glowing report appeared in the New England Journal of Medicine.

Forbes called it a "wonder drug"; the London Times declared Herceptin "stunning"; CNN heralded the drug as "perhaps the most powerful cancer medicine in a decade," which "can halve the risk of relapse" in many cases.

I hope the reports are accurate.

But another body of research has gone comparatively unnoticed.

"Why Most Published Research Findings Are False" by John P. A. Ioannidis—an epidemiologist at the University of Ioannina (Greece)—presents convincing evidence that an alarmingly high number of scientific "findings" are eventually proven false.

Dr. Ioannidis first published his controversial claim in the Journal of the American Medical Association (JAMA, July 2005). The JAMA report studied "all original clinical research studies published in three major general clinical journals or high-impact-factor specialty journals in 1990-2003," each of which had been "cited more than 1000 times" in subsequent literature.

In short, Ioannidis focused on original research published by influential journals and widely accepted as accurate by other scientists. (These studies are the cream of the crop and can be expected to have the highest accuracy rate within medical research generally.)

Their data affect the health and, sometimes, the lives of patients.

Of 49 studies, 45 "claimed that the intervention [examined] was effective," which means their findings changed the day-to-day decisions of medical care.

Fourteen of those 45 studies (approx. 31 percent) were subsequently refuted. Twenty (44 percent) were replicated or validated. Eleven (24 percent) remain unchallenged and so are neither validated nor refuted.

One refuted study concerned the safety of hormone-replacement therapy for women...in case you wondered why that therapy was deemed safe one minute and risky the next. Fortunately for women, the media focused on the medical establishment’s about-face on hormone replacement. But the refutation of other studies hasn’t received similar publicity.

Nor has Ioannidis's claim that fully 50 percent of medical research is wrong received publicity—that a given study has roughly the same chance of being accurate as a coin flip.

Ioannidis's conclusion is speculative but, given that so many prestigious studies have been contradicted, it is not wildly improbable.

The 50 percent figure came from asking: Why are so many prestigious research findings proven wrong?

In response, Ioannidis designed a mathematical model; he expressed the general character of medical research in mathematical terms. The model allowed him to manipulate variables in order to determine how changing circumstances affected research, especially with regard to well-known sources of error.

The Economist reported on just one such variable: statistical significance. "To qualify as statistically significant a result has...to have odds longer than one in 20 of being the result of chance...In fields where thousands of possibilities have to be examined, such as the search for genes that contribute to a particular disease, many seemingly meaningful results are bound to be wrong just by chance."
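The Economist's point can be illustrated with a short simulation (my own sketch, not from the article): run many comparisons in which no real effect exists, and count how many nonetheless clear the one-in-20 "significance" bar purely by chance.

```python
import random

random.seed(42)

def simulate_null_tests(n_tests, n_samples=30):
    """Run n_tests two-group comparisons where the true effect is zero,
    and count how many cross the conventional 5 percent threshold."""
    false_positives = 0
    for _ in range(n_tests):
        # Both groups drawn from the SAME distribution: any "effect" is noise.
        a = [random.gauss(0, 1) for _ in range(n_samples)]
        b = [random.gauss(0, 1) for _ in range(n_samples)]
        mean_diff = sum(a) / n_samples - sum(b) / n_samples
        # Crude z-test on the difference of means (known variance of 1).
        se = (2 / n_samples) ** 0.5
        z = mean_diff / se
        if abs(z) > 1.96:  # two-sided 5 percent threshold
            false_positives += 1
    return false_positives

fp = simulate_null_tests(1000)
print(fp, "spurious 'findings' out of 1000 tests with no real effect")
```

With a thousand tests of non-existent effects, roughly fifty "statistically significant" results appear anyway—exactly the trap awaiting a search across thousands of candidate genes.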

Small samples, "weak effects," badly designed studies, researcher bias...the crunching of such variables through Ioannidis's model resulted in the 50 percent figure.
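The core of Ioannidis's model can be sketched in a few lines. In simplified form (omitting his bias term), the probability that a "significant" finding is actually true depends on the significance threshold, the study's statistical power, and the prior odds that relationships tested in the field are real. The parameter values below are illustrative assumptions, not figures from his paper.

```python
def positive_predictive_value(prior_odds, alpha=0.05, power=0.80):
    """Simplified form of Ioannidis's model, without the bias term:
    the probability that a statistically significant finding is true.

    prior_odds -- R, the ratio of true to false relationships tested
    alpha      -- significance threshold (chance of a false positive)
    power      -- 1 - beta (chance of detecting a real effect)
    """
    true_positives = power * prior_odds
    false_positives = alpha
    return true_positives / (true_positives + false_positives)

# A long-shot field (say, 1 real effect per 100 hypotheses tested):
print(round(positive_predictive_value(0.01), 2))  # -> 0.14
# A well-grounded confirmatory trial with even odds going in:
print(round(positive_predictive_value(1.0), 2))   # -> 0.94
```

The design point is that no single variable dooms a study; it is the combination of long-shot hypotheses, small samples, and a lenient threshold that drags the expected accuracy of a field down toward—or below—a coin flip.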

I question the value of speculatively crunching variables. But I am convinced by Ioannidis's empirical finding that 31 percent of prestigious studies were refuted, with another 24 percent unverified. And I applaud the caution he raises.

In fairness, and returning to Herceptin: not all news reports of the research were uncritical, and the primary responsibility rests with the medical establishment, not the press.

The UK newspaper The Guardian reported, "No one is completely sure how it works, and the tests on it are far from complete."

On FOXnews.com, WebMD reported, "Approved by the FDA in 1998, Herceptin isn’t a cure, and it’s not without drawbacks.... The small study—which was sponsored by the makers of the drugs used (including Herceptin)—doesn’t gauge long-term survival."

But such qualifiers hardly balance the circus-through-town news coverage of sensational research. It is not as though the calls for caution are new. The warning bell has been ringing for years.

In 1998, "The Great Health Hoax" by Robert Matthews appeared in the UK newspaper The Guardian.

It opened, "In 1992, trials in Scotland of a clot-busting drug called anistreplase suggested that it could double the chances of survival. A year later, another ‘miracle cure’ emerged: injections of magnesium." By 1995, the "amazing life-saving abilities of magnesium injections had simply vanished. Anistreplase fared little better."

Why does sloppy research succeed? That is, why is it accepted by the medical establishment and, then, heralded by the press? The Herceptin case—which may not be "sloppy" but which serves as an example of incentives—provides an explanation.

CNN reported, "American sales of Herceptin leaped by two-thirds, to $215 million, in the three months ending October 1, compared with the year’s first quarter....A year of Herceptin could cost $48,000 even at wholesale prices."

The Toronto Sun observed, "He’s known as Mr. Herceptin but his name is actually Dr. Dennis Slamon and his Los Angeles lab conducted the research that led to the development of the new breast cancer drug."

Big money and big reputations are on the line. Within the medical establishment, these factors push toward acceptance and against critical analysis. Within the media, the words "miracle drug" grab bigger ratings than "indication of progress is suggested."

As a woman, I hope the claims about Herceptin are true. But, unless a study has been thoroughly analyzed with a sharp critical eye, I have no reason to believe anything said about its accuracy. I must agree with Ioannidis when he cautions, "There is increasing concern that most current published research findings are false."

Given that patients make life-or-death decisions based on such research, this is a non-trivial matter.