If recent reports are to be believed, academic crimes are on the rise. In a world with shrinking enrollments and many underemployed professionals with PhDs, the temptations to cheat and lie to get published are intense. We are in an academic world in which an article in the Journal of Last Resort is often, somewhat sadly in my judgment, worth more than a slew of teaching awards and a devoted following of students.

A recent (July 18) article in Nature by Richard Van Noorden suggests that some observers believe “at least one-quarter of clinical trials might be problematic or even entirely made up.” Writing in the Guardian on Aug. 6, Ivan Oransky and Adam Marcus argue, “There’s far more scientific fraud than anyone wants to admit” and add that “the academic world still seems determined to look the other way.”

My own analysis of the “success rate” for grant applications to the National Institutes of Health or the National Science Foundation points to the same pressures: for every grant application accepted, typically two to four others are rejected. For some scientists, rejection can mean job loss or, at a minimum, a significant reduction in income.

This issue gained new prominence recently with the resignation of the president of Stanford University. Marc Tessier-Lavigne, himself a prominent research scientist, stepped down after an outside review of his work concluded that it did not meet standards of “scientific rigor and process” and that he had failed to correct the record when notified of the problems.

The problem extends far beyond the hard sciences. I think of my own decades as a researcher in economics and the difficulties of asserting that some relationship is a “truth” or “economic law” that others can replicate and ultimately teach to both students and the broader public. In the hard sciences, strict laboratory controls make it possible to replicate the work of other researchers rather precisely; in the social sciences, which operate outside a controlled laboratory environment, many things are constantly changing, making it difficult, if not impossible, to “prove” a relationship.

From my own research, I see how easy it is for a researcher eager to proclaim a novel idea worthy of publication, or to promote a congenial ideological position, to manipulate the results. Let me give a hypothetical but quite plausible example.

Suppose I believe that lowering state and local income taxes increases the rate of economic growth, measured by the change in personal income per capita. Suppose I gather several different data sets and econometrically test 25 models. Some of the models include seven or eight additional variables besides the income tax measure of special interest (e.g., spending on education, the number of heating degree days in a year, or the proportion of the population working in manufacturing). Some of the models use time series data (looking at data relationships over time); others use cross-sectional data (comparing different states within a single country, such as the United States, or even comparing different nations).
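To make the mechanics concrete, here is a minimal sketch in Python of this kind of specification search, using simulated data; the variable names are hypothetical, and for brevity it runs every combination of three controls rather than a full slate of 25 models:

```python
# A minimal sketch of the specification search described above, on
# simulated data. All variable names are hypothetical illustrations.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # hypothetical state-year observations

# Simulated regressors: the tax burden plus optional control variables.
tax = rng.normal(size=n)
controls = {name: rng.normal(size=n)
            for name in ["education_spending", "heating_degree_days",
                         "manufacturing_share"]}
# Assumed data-generating process: a modest negative tax effect plus noise.
growth = -0.15 * tax + rng.normal(scale=1.0, size=n)

results = []
# One regression per subset of controls -- every specification gets a try.
for k in range(len(controls) + 1):
    for subset in itertools.combinations(controls, k):
        X = sm.add_constant(
            np.column_stack([tax] + [controls[c] for c in subset]))
        fit = sm.OLS(growth, X).fit()
        results.append((subset, fit.params[1], fit.pvalues[1]))

# The estimated tax coefficient (and its p-value) shifts with each
# choice of controls -- raw material for selective reporting.
for subset, coef, p in results:
    print(f"controls={list(subset)!s:<55} tax coef={coef:+.3f}  p={p:.3f}")
```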

Suppose I get 24 sets of results showing the expected negative relationship between income tax burden and economic growth, but one showing a positive relationship, statistically significant at only a 90 percent level of confidence. Suppose 16 of the 24 expected negative relationships are believable with a 99 percent level of confidence, five with a 95 percent level, two with only a 90 percent level, and one with only a 75 percent level (meaning, roughly, a 25 percent chance that the observed negative relationship is a statistical fluke). What do I report to the reading public?

What I typically would do is report several (possibly all) of the results, summarizing the 25 regressions by saying that “the predominance of evidence suggests there is a negative relationship between income taxes and economic growth.” Another researcher, more ideologically hostile to that finding, might conclude that “the evidence is decidedly mixed on the tax-growth relationship.” And a strongly progressive, pro-tax researcher might even claim, on the basis of the single contrary result, that “higher taxes actually increase growth,” ignoring both the low level of confidence in that result and, more importantly, the 24 other tests contradicting it. That novel result might also have a higher probability of journal acceptance precisely because it contradicts most other studies, making it provocative. Moreover, it reaches a progressive policy conclusion that many academics would welcome. In other words, outside the laboratory sciences, the interpretation of results is highly manipulable.
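A bit of back-of-the-envelope arithmetic underscores why the lone contrarian result deserves little weight. If the 25 tests were independent (a simplifying assumption that makes this only a rough upper bound), the chance that at least one crosses a 90 percent confidence threshold by pure luck is high:

```python
# Rough check on the lone contrarian finding: if the true relationship
# were negative everywhere, how likely is at least one spurious
# "significant" result among 25 tests at 90 percent confidence?
# Assumes independent tests, so this is only a back-of-the-envelope bound.
p_false = 0.10          # chance a single test misleads at this threshold
n_tests = 25
p_at_least_one = 1 - (1 - p_false) ** n_tests
print(f"P(at least one spurious result) ~= {p_at_least_one:.2f}")  # ~0.93
```

In other words, one contrary finding at that confidence level is close to what pure noise would be expected to produce.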

As standards of morality and respect for the rule of law decline generally, so, apparently, do they decline in academia. It is very sad to say, but I would be very suspicious about buying a used car from many academics these days.