Neoclassical welfare economists maintain that consumers suffer when risky goods are supplied in an unregulated market. Consumers are said to possess imperfect information (Stiglitz 1988, pp. 78–79; Barr 1992, pp. 749–50) and limited ability to process complex information. Moreover, because information is presumed to be a public good, markets are ipso facto supposed to produce and disseminate a suboptimal amount of information (Stiglitz 1988, p. 79; Greer 1993, p. 416; Scherer 1993, pp. 98–99, 101). Under these conditions, neoclassical welfare economists maintain, consumers make choices that cause them to be worse off than they would be, say, if a regulator constrained their choices by banning very risky products from the market. The alleged market failure may stem from outright consumer ignorance, but it occurs even if consumers conduct what seems to them an optimal search for information. Given their inability to process complex information and the public-good problem with respect to information, inefficient risk bearing occurs (Greer 1993, pp. 413–14), as consumers bear more risk than they would choose to bear if they could process all information flawlessly and the public-good problem with respect to information creation and dissemination had been solved.

Some analysts have noted, however, that U.S. regulatory agencies such as the Food and Drug Administration, the Consumer Product Safety Commission, and the Department of Transportation, which enforce product bans, face incentives of the sort recognized in public choice theory that lead them to impose too much safety on consumers by denying some risky products access to the market (Weimer 1982; Grabowski and Vernon 1983; Gieringer 1985, 1986; Kazman 1990; Higgs 1993). To analyze and remedy this government failure, neoclassical analysts propose the application of social cost-benefit analysis (Peltzman 1974; Grabowski and Vernon 1983, pp. 11–13). As Austrian economists appreciate well, however, social cost-benefit analysis cannot solve this (or any other) problem, because, inter alia, it rests on unjustifiable implicit aggregation of different individuals’ utilities (Buchanan 1979, pp. 60–61, 151–52; Pasour 1988, pp. 114–16; Formaini 1990, pp. 39–65; Cordato 1992, pp. 57–60, 111). Other analysts have tried to sidestep this problem by conducting an appraisal in terms of lives lost and lives saved by various regulatory decisions (Gieringer 1985; Kazman 1990, pp. 47–50). I shall criticize both approaches. Neither gets at what economic analysis is supposed to be about: consumer welfare as evaluated by the consumers themselves and demonstrated by their actions.

Fundamental Ideas

Risk is an inescapable condition.[1] However much people may prefer to live in a world of complete certainty, they simply cannot do so. Just banishing risk, whether by regulation or otherwise, is not a feasible option. Whatever the institutional arrangements for distributing the gains and losses associated with risky actions, someone must bear the risks inherent in the choices made. Insurance can pool and spread risks. Government can tax or subsidize risk bearing. But at any time, given the knowledge and resources available to the members of society, any set of choices has associated with it certain irreducible risks. As Mises (1966, p. 105) put it, “The most that can be attained with regard to reality is probability.”

Given that no action has a completely certain outcome and that the degree of risk attached to various actions differs, every consumer choice represents a selection in two dimensions: (a) selection of good X (itself a package of attributes) instead of alternative goods and (b) selection of a certain degree of risk instead of the alternative degrees of risk associated with goods not chosen. If people care about the degree of risk assumed, which I suppose they generally do, then each choice they make represents a deliberate selection from alternative two-dimensional objects, each being a good-cum-risk package. “The opportunity cost of the selection . . . is not the utility of outcomes foregone but some foregone convolution of utility and probability” (Langlois 1982, p. 29 and Figure 3). People choose the most preferred package. Risk-averse consumers make tradeoffs, choosing something other than the good with the greatest expected benefit whenever a lower degree of risk associated with another good more than compensates them for the sacrifice of the greater expected benefit. Having different tastes for risk, people make such choices differently.[2] As Buchanan (1969, p. 50) has noted, “In the face of uncertainty, the evaluation of alternatives by the actual decision-taker may differ from the evaluations of any external observer.”
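One minimal way to put this two-dimensional choice in symbols (the notation is mine, not Langlois’s, and nothing in the argument depends on it) is to let each available good i carry a subjectively expected benefit u_i and a subjectively assessed likelihood p_i that the benefit will actually be realized. Person A’s prospective evaluation of each good-cum-risk package is then some personal functional of the two, and A simply chooses the package ranked highest:

\[
V_i^{A} = f_A\!\left(u_i,\, p_i\right), \qquad i^{*} = \arg\max_{i} V_i^{A},
\]

where f_A is Person A’s own (unobservable) way of convolving benefit and probability, which need not take the expected-value form p_i u_i, and the opportunity cost of choosing i* is the highest-valued package foregone, \max_{j \neq i^{*}} V_j^{A}.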

Every market, then, involves allocations of both goods as such and risk-bearing. Economists, especially those in the field of finance, are familiar with the principle of market efficiency that takes account of both dimensions. Just as market exchange of existing goods can improve the subjective well-being of consumers with different preferences, so the opportunity to trade in the risk dimension of goods can improve the subjective well-being of consumers otherwise stuck with some fixed distribution of risk bearing.[3] In both cases, one presumes that a restriction of the field of choice can make some or all traders worse off but cannot make anybody better off. Yet neoclassical welfare economists continue to argue that market failures of the sort mentioned above may invalidate this general presumption in favor of unimpeded consumer choice of risk bearing.

Can Free Choice in Risk-Bearing Make Consumers Worse Off?

Suppose that, left to my own discretion, considering everything I know about the prospective benefits and risks of consuming good X, I choose to consume it. Now suppose that you know something about X that I do not, say, that it causes death once in every 100,000 cases in which someone consumes a certain amount of it daily for a year.[4]

Can we say that preventing me from consuming the good improves my welfare?

We cannot. Two possible cases exist. In one case I would have chosen to consume X even had I known what you do about its risk, because I would have regarded the risk as worth taking in order to gain the expected benefits of consuming the good. In the other case I would have refrained from consuming X had I possessed your knowledge. But banning the product is quite different from giving me new information. By simply denying me the option to consume X, you have definitely made me worse off, because you have removed my most preferred object of choice from the set of alternatives open to me. The utility that consumers maximize by their choices is prospective and subjective utility, not ex post utility and not utility as gauged by someone else in possession of different information (Rothbard 1977; Buchanan 1969, pp. 42–44; 1979, p. 59).

Of course, consumers sometimes conclude afterward that they regret a particular choice. Their regret only validates the fact that their choice was indeed risky, that an undesired contingency could occur. Consumers know this when they choose, and they make their choices in the light of that knowledge.[5] To deny them access to a particular risky option does not differ essentially from denying them access to goods of a particular taste, color, location, or any other dimension of choice. The perceived degree of risk is a dimension of goods considered by consumers when they make a (forward-looking) choice. To ban a good because a third party believes it to be in some sense riskier than the consumer believes it to be or because a third party values risk-avoidance more than the consumer does is simply to impose the third party’s preferences on the actual consumer.

This remains the case even though the consumer would have chosen differently had he known what the third party knows. The neoclassical economist’s lament with regard to “imperfect information” rests on an irrelevant and misleading standard of reference (“perfect information”). In reality, everyone without exception is necessarily ignorant of many things known by others. If consumer choice were to be permitted only to consumers whose knowledge, whether of risk or any other dimension, equaled or exceeded that of all other persons, then persons in general would not be permitted to choose anything for themselves, and no genuine market order could exist.

An arrangement in which only the most knowledgeable may choose raises problems of its own. Who will identify the most knowledgeable person for each dimension of choice, determining that John knows most about colors, Mary about textures, Carlos about risks? How will disputes about who has the most knowledge be resolved? Does everybody agree as to how risk ought to be conceptualized and measured? What will be done if, even though Juanita is recognized as the most knowledgeable about the risk of getting a headache from using product X, some consumers seem to care a great deal about avoiding a headache whereas others seem to care hardly at all?[6]

Even if someone knows the degree of risk better than I, important questions remain. Why don’t I know? Is it because I am not concerned about this particular risk? Is it because I regard the expected cost of acquiring knowledge of the risk to be greater than the expected benefit of possessing such knowledge? Again, no one can possibly acquire more than a few sorts of knowledge. Should consumers who decide to direct their information search along other lines be forbidden to choose all goods that someone else knows to be riskier than the consumers in question do?

It is instructive to apply to the information question the general Misesian position as stated by Cordato (1992, pp. 19, 21). “Since there is no optimal outcome [in the market for information] apart from that which is generated by the actual interaction of market participants, there is no standard by which to argue that ‘too little’ [information] is being produced. . . . There is no way for the economist or policy maker to know the preferences of market participants [with respect to how informed they wish to be on various subjects] apart from what the individuals reveal them to be through action.”[7]

Of course, it is trivially true that if I had the superior knowledge now possessed by others, I might be able to improve my post-choice evaluation of my welfare. But this is only to say that if people knew more, they could act successfully more often. So what? If altruists were to disseminate free information, some people might take the time to absorb that information and be glad they did. But again, so what? Are we to allow individuals to reveal their own valuations of information by the efforts they make to inform themselves, or are we to wait for more altruists to spread free information before allowing individuals to make their own choices in the market? How many more altruists are necessary? Who will decide when consumers are finally well enough informed to make decisions about their own consumption, and on what grounds will the decision rest?

The neoclassical argument that, because of the public-good character of information, people will be suboptimally informed cannot justify a policy of banning a risky product. The argument is general. How can it justify banning a new medicine but not a pork chop? If it be countered that the medicine is harder to understand and therefore consumers expose themselves to greater danger by consuming it at their own discretion, the counterclaim itself may be questioned. Who really knows the dangers best? What justifies the assumption that one or a few federal bureaucrats actually know more about risks than consumers? Andy Rutten has written, “The real flaw in the traditional argument is that [neoclassical economists] invoke the information arguments so as to avoid the difficult (because impossible) work of showing that third parties really would make better decisions.”[8] If it be countered that some consumers are obviously dullards, then the question becomes: How can one justify a comprehensive ban rather than a ban applicable to the dullards alone? And if a discriminatory ban is to be enforced, who will classify each member of the population as either a dullard or not, and what will be the basis for making the discrimination?

Upon closer inspection, the neoclassical argument founded on the public-good character of information appears to depend on the implicit assumption that someone omnisciently looking down on a situation populated by imperfectly informed (i.e., real) actors can say what the “correct” degree of information is. Further, to justify government restrictions of the market, the neoclassical analyst must imagine that this heavenly onlooker counsels government employees, such as the drug reviewers at the FDA, as they make a decision about the date (the same for all persons, regardless of differences in their knowledge, health condition, or attitude toward risk bearing) when it will be “optimal” for everyone simultaneously to gain access to a new drug.

“Perfect information,” as it is commonly understood in neoclassical analysis, is not a condition that can exist in reality; nor is it an appropriate standard of reference in welfare economics.[9] We may choose only among feasible institutional arrangements for conducting our affairs. Comprehensively banning a beneficial but risky product from the market is the bluntest of policy instruments, the crudest sort of central planning. A free market in risky goods, on the other hand, permits the flexibility for individuals to adjust their choices to the differences in their conditions and preferences. Some consumers desire to become very well informed before taking the risk of using a new drug or device; others are willing to assume the risk sooner, either because they are more comfortable with risk bearing or because they stand to lose more, in their own subjective estimation, by waiting longer before using the product (Eraker and Sox 1981). In the free market each individual can adjust the mix of products consumed, the kind of risk borne, and, within limits, the degree of risk borne. Inasmuch as both the expected benefit of using a product and the burden of risk bearing are subjectively experienced and knowable only to the individual actor, and both vary from one person to another, it should be clear that no central planner can possibly improve on the outcome of a flexible market process by crushing it beneath the weight of a single comprehensive decision imposed on everybody from above.[10]

Finally, consider an alternative argument in support of banning a risky product. Suppose one could establish that, by banning product X from the market, life expectancy definitely would be increased. May we now conclude that the product should be banned? Of course not. A policy founded on such a decision rule implicitly enforces a one-dimensional utility function: only length of life has value. Clearly people do not have such limited preferences. Every day in various ways people choose to place their lives at risk in order to pursue other goals.[11]

If a risky new medicine may justifiably be banned, why shouldn’t the government also ban portable ladders, cigarettes, red meat, fast cars, firearms, private aircraft, and countless other goods that consumers value and purchase, all of which may reduce the user’s life expectancy? It might be countered that ordinary people can more accurately estimate the risks of using these goods than they can the risks of using a new medicine. But this need not be so.[12] Who really knows all the risks (or benefits) of eating beef steak or drinking cow’s milk? To give people the option to consume a risky good is not to insist that they consume it. People are free to consult expert sources of information and advice before making their choices, and the experts may know a great deal, far more than government regulators know, about the probabilities of adverse contingencies. Moreover, consumers may bear a much greater cost (which, because it is subjective, only they can know) by foregoing the use of the new medicine than by foregoing the use of a ladder.

In sum, banning a risky product, which often appeals to paternalists, is indefensible in relation to the maximization of consumers’ utility properly understood. Banning a product always represents the imposition on consumers of someone else’s preferences. (Every parent understands this proposition.) Banning a product cannot make anyone better off in terms of the properly construed objective analyzed in economic theory: maximization of the prospective and subjective utility of responsible adult consumers.

Applications to FDA Testing Requirements

Since 1962 the Food and Drug Administration has permitted the marketing of a new drug only after the manufacturer has conducted to the agency’s satisfaction an elaborate series of tests, including laboratory and animal experiments and three phases of clinical trials with human subjects, to establish that the drug is both safe when used as recommended and effective for its intended use (Grabowski and Vernon 1983, pp. 21–27; Weimer 1982, pp. 246–50; Gieringer 1986; Kazman 1990, pp. 37–40). As the regulations have become more extensive and the agency’s requirements and standards more demanding and unpredictable, the time and expense of the necessary testing have grown. Presently the average drug takes about a decade to complete the approval process (DiMasi, Bryant, and Lasagna 1991, p. 480). While the product awaits approval, consumers who might have benefited from it suffer unnecessarily and, in many cases, die prematurely.

To evaluate this regulatory system, I construct a simple model based on the ideas expressed in the preceding section of the paper. The model provides a means of assessing several different aspects of the FDA’s regulations. Each aspect can be seen as a restriction that cannot improve the well-being of any consumer but can, and no doubt does, diminish the well-being of some consumers.

The model shows the relations between two sources of marginal utility and the testing time t of a drug before it is permitted on the market. In general, the more demanding the FDA standards for establishing safety and efficacy, the longer the time required to satisfy the standards. Thus the duration of testing can serve as a measurable index of other dimensions of the required testing such as number of subjects, number of separate tests, number of variables monitored, total expense, and so forth (Weimer 1982, p. 256; Ward 1992, p. 49). Notice, however, that letting the duration of testing serve as a proxy for other dimensions of testing is only an expositional convenience. The basic logic of the model remains the same, even if one considers the problem piecemeal for each separate dimension of the testing.

Figure 1 is a diagram of the model. Note first that the diagram pertains to a given individual, Person A, at a given date. Person A may relocate the functions at any time in accordance with changes in personal valuations. The units in which each individual measures the marginal utilities are known to that individual only. Interpersonal utility comparisons cannot be made. Nor can the utilities of different individuals be aggregated to arrive at a “social benefit function.” There is no common unit for such aggregation; nor in reality is there an institutional arrangement by which a common unit might be revealed as it is, for example, by dollar prices in the neoclassical model of a perfectly competitive economy in general equilibrium with the dollar serving as a numeraire. By labeling the functions as marginal utility (MU) functions, I hope to forestall anyone’s confusing these functions with the social marginal cost and social marginal benefit functions used by neoclassical analysts to analyze issues of this sort. The Austrian analysis offered here, unlike the corresponding neoclassical analysis, rests squarely on methodological individualism and subjectivism.

When testing first begins, the individual gains a definite marginal utility, denoted MU(I), from acquiring the information yielded by the test about the drug’s efficacy, its toxicity, and other side effects. One is reassured to know, for example, that the test subjects did not drop dead after taking the drug on day one. As the duration of the testing increases, then eventually if not immediately the marginal utility of the information gained from the last day of testing declines: MU(I) is a decreasing function of t.[13] I assume nothing else about the shape of MU(I); the linearity of the function as drawn in Figure 1 is arbitrary. One may also think of MU(I) as depicting the marginal benefit of testing good X as evaluated by Person A on a given date.

On the other hand, the longer the duration of the premarket testing, the longer the consumer must forego the benefits of using good X, denoted MU(B). While the foregone marginal utility of using good X may be low at an early stage of the testing, MU(B) rises as t increases. The longer one waits to use X, the greater the likelihood that one’s condition will worsen to the point that X will no longer suffice to alleviate the problem. Hence, MU(B) is an increasing function of t. I assume nothing else about the shape of MU(B); the linearity of the functions drawn in Figure 1 is arbitrary. One may also think of MU(B) as depicting the marginal cost of testing good X as evaluated by Person A on a given date.[14]

In extreme cases people will soon die without access to a potentially life-saving drug. Such persons might be willing to use a new product immediately, notwithstanding the possible hazards associated with its use, which are initially quite uncertain because it has been tested only in the laboratory and with animals. A member of this desperate group would have an MU(B) function like that labeled MU(B)2 in Figure 1. At any positive test duration, the marginal utility of the benefits foregone because of another day’s testing exceeds the marginal utility of the information gained by another day’s testing. For these people, the optimal test duration is zero days.

For others, presumably the more typical cases, immediate use of X would be undesirable. Before the good has undergone any clinical testing at all, the marginal utility of the information gained from at least a few days of testing would be worth waiting for, because the marginal utility of benefits foregone would be relatively low for low values of t. As t increases, however, MU(B) increases and, as shown by the function labeled MU(B)1 in Figure 1, it eventually equals and then exceeds the value of MU(I), which falls as t increases. The test duration t* at which the two MU functions have equal values is the optimal one for Person A. This person will not voluntarily use X before it has undergone a test period of this duration. However, this person would object should the premarket test period be prolonged by regulators beyond t*, judging the foregone benefits associated with additional waiting to use X to be greater than the benefits of the additional information acquired.
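Restated in symbols (this is only a compact summary of the verbal conditions above, not an addition to them), the model requires that MU(I) eventually decline in t, that MU(B) rise in t, and that Person A’s optimal test duration be the point at which the two curves meet, or zero if MU(B) exceeds MU(I) from the outset:

\[
\frac{d\,MU_I(t)}{dt} < 0 \ \text{(at least eventually)}, \qquad \frac{d\,MU_B(t)}{dt} > 0,
\]
\[
t^{*}_A =
\begin{cases}
0, & \text{if } MU_B(t) \ge MU_I(t) \text{ for all } t \ge 0,\\[4pt]
\text{the } t \text{ at which } MU_I(t) = MU_B(t), & \text{otherwise.}
\end{cases}
\]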

Now, suppose that a regulatory agency effectively fixes the duration of premarket testing, as the FDA does.[15] Two cases are possible. One possibility, shown as duration t1 in Figure 1, is that the regulator sets t below the individual’s optimum, which is t*. In that case the individual refrains from using the product, after it becomes available in the market, until it has undergone further testing. For all such persons, the regulation is not a binding constraint. These persons desire more testing than the regulator requires. The regulator’s restriction brings them no benefit whatever.[16]

In the second case the regulator effectively fixes a test duration such as t2 in the figure, which exceeds the individual’s optimum. In this case individuals cannot consume the good as soon as they wish. Even though a consumer is willing to accept the risk of current use, the manufacturer is not permitted to sell the good. The well-being of the consumer is diminished. The consumer will gain some utility from further testing, but the utility sacrificed by additional waiting is greater than the utility gained from the information yielded by the additional testing.

We have then two possibilities. Either the regulator sets t equal to or less than an individual’s optimum, in which case the regulation neither helps nor hurts the consumer; or the regulator sets t higher than an individual’s optimum, in which case the regulation definitely reduces the well-being of the consumer. In short, marketing restrictions like those enforced by the FDA can make no one better off in the sense relevant in economic theory, but they can make some consumers worse off, and, as indicated by the many public complaints registered against the FDA, they clearly do.[17] Overall, restrictions of this kind, which ban a product from the market, can only hurt consumers.[18]
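A small numerical sketch may make the two regulatory cases concrete. The linear functions and parameter values below are purely hypothetical stand-ins for Person A’s subjective valuations, which the argument insists are unobservable; the sketch reproduces only the logic: a mandated duration at or below t* leaves the consumer unaffected, while a mandated duration above t* imposes a net loss on every additional day of forced waiting.

```python
# Illustrative sketch only. The linear MU functions and all numbers below are
# hypothetical stand-ins for Person A's subjective valuations, which in reality
# are unobservable; only the qualitative logic matters.

def mu_info(t):
    """MU(I): marginal utility of the information yielded by day t of testing (declining)."""
    return max(100.0 - 1.0 * t, 0.0)

def mu_benefit(t):
    """MU(B): marginal utility foregone by delaying use of X one more day (rising)."""
    return 10.0 + 2.0 * t

def optimal_duration(max_days=10_000):
    """Person A's t*: the first day on which MU(B) is at least as large as MU(I)."""
    for t in range(max_days + 1):
        if mu_benefit(t) >= mu_info(t):
            return t
    return max_days

t_star = optimal_duration()  # 30 days with these assumed parameters

def net_effect_of_mandate(t_required):
    """Net utility change from a mandated test duration, relative to A's own choice.
    Zero if the mandate is at or below t* (non-binding); negative if it exceeds t*."""
    if t_required <= t_star:
        return 0.0
    return sum(mu_info(t) - mu_benefit(t) for t in range(t_star, t_required))

print("Person A's optimal duration t*:", t_star)
print("Mandate of 10 days (below t*): net effect", net_effect_of_mandate(10))
print("Mandate of 60 days (above t*): net effect", net_effect_of_mandate(60))
```

With these assumed numbers, t* is 30 days; a 10-day mandate is non-binding (Person A waits until day 30 anyway), while a 60-day mandate forces a cumulative net loss on Person A over days 30 through 59.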

Using the model, one can evaluate various aspects of the FDA’s policies with regard to premarket testing requirements. Consider, for example, how an individual’s optimal testing time t* would change if it were discovered that a drug might be helpful in treating a second illness as well as the one for which it was originally intended.[19] In this case the MU(B) function shifts upward, as Person A is foregoing not only the marginal utility of using drug X to treat condition 1 but also the marginal utility of using X to treat condition 2. Because the MU(I) function remains fixed, the intersection of the MU(I) and the MU(B) functions must now occur at a lower value of t. This conclusion is intuitively obvious: the more conditions a drug can alleviate, ceteris paribus, the sooner a consumer will desire access to it.

The FDA, however, regulates drugs so as to preclude this result. Even if solid scientific studies or extensive clinical uses indicate that a previously approved product will prove useful in alleviating an additional condition, the product may not be legally marketed for that indication.[20] The seller is required to conduct a new, separate set of tests complete with years of clinical trials, and to present the FDA with a New Drug Application based exclusively on the additional therapeutic claim (Weimer 1982, p. 279; Gieringer 1985, pp. 188–90; 1986, p. 10; Ward 1992, pp. 47–49). Consumers’ welfare is diminished by the delay in the seller’s advertising and marketing for the new use of a product already on the market.[21]

Consider next the effect on Person A’s optimal U.S. testing time t* if information on drug X’s efficacy and side effects were to become available from tests or consumer experience in other countries. In this case the MU(I) function would shift downward, as the marginal utility of any particular increment of U.S. testing now would have lower value to Person A. With the downward shift of MU(I), given that the MU(B) function remains fixed, the two functions intersect farther to the left and hence the value of t* is lower than before. Again, this conclusion comports with intuition. Given that more information is already available for gauging the benefits and risks of using X, the consumer will be satisfied with a shorter period of premarket testing in the United States.

The FDA, however, usually does not alter its testing requirements in recognition of foreign testing or consumer experience. Even drugs that have been used abroad safely and beneficially, sometimes for decades, must undergo the same elaborate, expensive, and time-consuming testing as those never used or tested previously (Wardell 1979, p. 30; Grabowski and Vernon 1983, p. 69; Gieringer 1986, pp. 11, 14).[22] Hence arises the notorious “drug lag,” the delay between the introduction of drugs elsewhere and their marketing clearance by the FDA for sale in the United States (Temin 1980, pp. 141–51; Wardell and Lasagna, 1975; Anderson and Anderson 1987; Kazman 1990).[23] Whereas consumers want quicker access to drugs already tested and used abroad, the FDA as a rule does nothing to accommodate this desire, thereby thwarting consumer satisfaction in still another way.

Consider now the effect on Person A’s optimal testing time t* if the new drug X is chemically related to an existing drug. Because the mechanism of action of the new product probably will be the same as that of the existing product in at least some respects, the consumer, advised by doctors and pharmacists who understand such things, will get less valuable new information and hence less utility from any particular increment of testing of the new product. The MU(I) function will shift downward. Given that the MU(B) function has not changed, the intersection of the MU(B) and MU(I) functions occurs farther to the left, that is, the value of t* declines. Again intuition agrees. The consumer wants quicker access to the new product because the information yielded by additional testing is less valuable, given that the consumer expects certain “family resemblances” among products.

In such cases the FDA does not act in conformity with consumers’ desires. The agency requires every new product to undergo the same testing procedures even though manufacturers have already established the efficacy and side effects of products of the same chemical family.[24] Consumers gain access to the new product no sooner than they would if it were completely novel in chemical composition and mechanism of action. Again consumers’ satisfaction is thwarted.

Finally, consider how consumers would set t* for a more threatening condition (e.g., cancer), relative to a less threatening one (e.g., the common cold). In this situation the MU(B) function for the more threatening condition would lie above the MU(B) for the less threatening condition, as each day’s delay entails greater foregone benefit in the former case than in the latter. Higher MU(B) functions intersect the MU(I) function farther to the left, that is, at a lower value for t*. Ceteris paribus, the consumer desires quicker access to the drug when it can alleviate a more serious condition.
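All four of the comparative-static results just discussed reduce to the same mechanics, which can be seen by extending the earlier hypothetical sketch: shifting MU(B) upward (a second indication, a more threatening condition) or shifting MU(I) downward (foreign experience, a chemically related drug) moves the intersection leftward and lowers t*.

```python
# Continuation of the earlier hypothetical sketch; all numbers remain illustrative.
# Each regulatory scenario in the text is a shift of one curve, and t* falls in every case.

def optimal_duration(mu_info, mu_benefit, max_days=10_000):
    """First day on which the assumed MU(B) is at least as large as the assumed MU(I)."""
    for t in range(max_days + 1):
        if mu_benefit(t) >= mu_info(t):
            return t
    return max_days

base_info    = lambda t: max(100.0 - 1.0 * t, 0.0)  # baseline MU(I), declining in t
base_benefit = lambda t: 10.0 + 2.0 * t             # baseline MU(B), rising in t

# A second indication or a more threatening condition: MU(B) shifts upward.
higher_benefit = lambda t: 40.0 + 2.0 * t

# Foreign testing/experience or a chemically related drug: MU(I) shifts downward.
lower_info = lambda t: max(60.0 - 1.0 * t, 0.0)

print("baseline t*:           ", optimal_duration(base_info, base_benefit))    # 30
print("MU(B) shifted up, t*:  ", optimal_duration(base_info, higher_benefit))  # 20
print("MU(I) shifted down, t*:", optimal_duration(lower_info, base_benefit))   # 17
```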

The FDA does not accommodate this consumer preference. Whether the condition to be treated is life-threatening or simply unpleasant, the agency requires the same rigid, elaborate, and time-consuming testing. Once again, the regulators frustrate the desires of consumers by insisting that one size (testing procedure) fits all (drugs and patients), regardless of the urgency with which consumers desire access to certain drugs. In some cases this regulatory intransigence creates the absurd situation in which the FDA denies dying patients access to a new drug because the manufacturer has not yet established beyond a reasonable doubt that the drug will not harm the users.[25]

Conclusion

Banning a product can never improve the well-being of consumers properly understood, that is, understood as individual consumers’ prospective and subjective utility. This proposition remains valid even when risk is incorporated into the analysis. Risk of inefficacy or adverse side effects is simply another dimension of each good, like taste, size, or location, about which the consumer has preferences. Government restrictions have the same effect on consumer welfare regardless of the dimension of the good that is restricted; in this regard there is nothing special about risk. A simple model incorporating this approach to thinking about risky consumers’ goods allows us to establish that the FDA’s regulation of drugs (and likewise its regulation of medical devices), both in general and in several of its specific forms, has detrimental effects on consumers’ welfare. Nothing in economic theory, correctly understood, supports the imposition of product bans such as those enforced by the FDA through its testing requirements. The bans help no consumer; they definitely hurt some consumers.

References

Anderson, Kenneth, and Lois Anderson, eds. 1987. Orphan Drugs. Los Angeles: The Body Press.

Barr, Nicholas. 1992. “Economic Theory and the Welfare State: A Survey and Interpretation.” Journal of Economic Literature 30 (June): 741–803.

Block, Will. 1992. “The Power of Information.” In Stop the FDA: Save Your Health Freedom. John Morgenthaler and Steven Wm. Fowkes, eds. Menlo Park, Calif.: Health Freedom Publications. Pp. 101–4.

Buchanan, James M. 1969. Cost and Choice: An Inquiry in Economic Theory. Chicago: University of Chicago Press.

____. 1979. What Should Economists Do? Indianapolis: Liberty Press.

____. 1986. Liberty, Market and State. New York: New York University Press.

Cordato, Roy E. 1992. Welfare Economics and Externalities in an Open Ended Universe: A Modern Austrian Perspective. Boston: Kluwer Academic Publishers.

DiMasi, Joseph A., Natalie R. Bryant, and Louis Lasagna. 1991. “New Drug Development in the United States from 1963 to 1990.” Clinical Pharmacology & Therapeutics 50 (November): 471–86.

Eraker, Stephen A., and Harold C. Sox. 1981. “Assessment of Patients’ Preferences for Therapeutic Outcomes.” Medical Decision Making 1: 29–39.

Formaini, Robert. 1990. The Myth of Scientific Public Policy. New Brunswick, N.J.: Transaction Publishers.

Gieringer, Dale H. 1985. “The Safety and Efficacy of New Drug Approval.” Cato Journal 5 (Spring/Summer): 177–201.

____. 1986. “Compassion vs. Control: FDA Investigational-Drug Regulation.” Policy Analysis No. 72. Washington, D.C.: Cato Institute.

Gordon, David. 1993. “Toward a Deconstruction of Utility and Welfare Economics.” Review of Austrian Economics 6, no. 2: 99–112.

Grabowski, Henry G., and John M. Vernon. 1983. The Regulation of Pharmaceuticals: Balancing the Benefits and Risks. Washington, D.C.: American Enterprise Institute for Public Policy Research.

Greer, Douglas F. 1993. Business, Government, and Society. 3rd ed. New York: Macmillan.

Higgs, Robert. 1993. “Allocation of Risks Associated with Medical Goods: Government Regulation versus Market Processes.” Journal of Private Enterprise 9 (Summer): 59–69.

____. 1994. “Should the Government Kill People to Protect Their Health?” The Freeman 44 (January): 13–17.

Kazman, Sam. 1990. “Deadly Overcaution: FDA’s Drug Approval Process.” Journal of Regulation and Social Costs 1 (September): 35–54.

____. 1992. “Saying Yes to Drugs.” Policy Analysis. Washington, D.C.: National Chamber Foundation.

Kwitny, Jonathan. 1992. Acceptable Risks. New York: Poseidon Press.

Langlois, Richard N. 1982. “Subjective Probability and Subjective Economics.” Discussion Paper No. 82–9. New York: C. V. Starr Center for Applied Economics, New York University.

Lave, Lester B. 1987. “Health and Safety Risk Analyses: Information for Better Decisions.” Science 236 (April 17): 291–95.

Mises, Ludwig von. 1966. Human Action. 3rd rev. ed. Chicago: Henry Regnery.

Pasour, E. C., Jr. 1988. “Economic Efficiency and Public Policy.” In Man, Economy, and Liberty: Essays in Honor of Murray N. Rothbard. Walter Block and Llewellyn H. Rockwell, Jr., eds. Auburn, Ala.: Ludwig von Mises Institute. Pp. 110–24.

Pearson, Durk, and Sandy Shaw. 1993. Freedom of Informed Choice: FDA Versus Nutrient Supplements. Neptune, N.J.: Common Sense Press.

Peltzman, Sam. 1974. Regulation of Pharmaceutical Innovation: The 1962 Amendments. Washington, D.C.: American Enterprise Institute for Public Policy Research.

Rothbard, Murray N. [1956] 1977. “Toward a Reconstruction of Utility and Welfare Economics.” Occasional Papers Series no. 3. Richard M. Ebeling, ed. New York: Center for Libertarian Studies.

Scherer, F. M. 1993. “Pricing, Profits, and Technological Progress in the Pharmaceutical Industry.” Journal of Economic Perspectives 7 (Summer): 97–115.

Seidman, David. 1977. “The Politics of Policy Analysis.” Regulation (July/August): 22–37.

Siegel, Joanna E., and Marc J. Roberts. 1991. “Reforming FDA Policy: Lessons from the AIDS Experience.” Regulation (Fall): 71–77.

Stiglitz, Joseph E. 1988. Economics of the Public Sector. 2nd ed. New York: Norton.

Temin, Peter. 1980. Taking Your Medicine: Drug Regulation in the United States. Cambridge, Mass.: Harvard University Press.

Viscusi, W. Kip. 1991. “Risk Perceptions in Regulation, Tort Liability, and the Market.” Regulation 14 (Fall): 50–57.

Ward, Michael R. 1992. “Drug Approval Overregulation.” Regulation 15 (Fall): 47–53.

Wardell, William M. 1979. “More Regulation or Better Therapies?” Regulation (September/October): 25–33.

____, and Louis Lasagna. 1975. Regulation and Drug Development. Washington, D.C.: American Enterprise Institute for Public Policy Research.

Weimer, David Leo. 1982. “Safe-and Available-Drugs.” In Instead of Regulation: Alternatives to Federal Regulatory Agencies. Robert W. Poole, Jr., ed. Lexington, Mass.: Lexington Books. Pp. 239–83.

Wilson, Richard, and E. A. C. Crouch. 1987. “Risk Assessment and Comparisons: An Introduction.” Science 236 (April 17): 267–70.

Footnotes

  1. I do not make the Knightian distinction between risk and uncertainty. If consumers lack an acceptable estimate of probabilities from an external source, they must necessarily proceed in terms of subjectively formulated probabilities. To deny this proposition is to suppose that consumers appreciate that outcomes are contingent but act as if they know nothing at all about the likelihood of possible outcomes. Compare Langlois (1982, pp. 9, 24, 31, 38–39).
  2. Eraker and Sox (1981) document the wide variation in attitudes toward risk-bearing of persons considering alternative medical therapies.
  3. “Efficient risk-taking will generally lead consumers to buy some risky products and to forego some safety precautions” (Viscusi 1991, p. 52).
  4. The marginal annual risk of death for a person drinking one saccharin-sweetened soda a day has been estimated to be 1 in 100,000; for someone eating four tablespoons of peanut butter a day, 1 in 25,000. See Greer (1993, p. 443).
  5. The three preceding sentences provide, I submit, a more satisfactory understanding of the Rothbardian position than the criticism advanced by Cordato (1992, p. 43), who objects that “Rothbard’s conclusions [that free-market exchanges increase social utility] would only hold in an error-free world of perfect knowledge, where expectations necessarily coincide with results.” See also Gordon (1993, pp. 103–5). Whether the product in question “ultimately” proves more or less toxic or more or less effective than consumers initially supposed has no bearing on the present analysis. Choices must be made on each day prior to that “ultimate” day. Should the attributes of the good ever become known fully by everybody, the present analysis no longer applies.
  6. Lave (1987, pp. 291–92) observes: “There is no single optimal decision for all people. The key issues in medical decision-making are the extent and quality of information about the outcomes of alternative interventions, the incentives influencing the ill person and those treating him, and the preferences of those involved . . . [R]egulators usually make the most conservative (that is, worst case), plausible assumption in each situation.”
  7. See also Buchanan (1979, pp. 61, 86–87; 1986, pp. 73–74).
  8. Rutten to the author, September 1993. Says Block (1992, p. 103), “to concede a monopoly on truth to a government agency acting as absolute scientific czar is fraught with peril far exceeding that of so-called snake-oil information governments so fear.” Seidman (1977, p. 32) observes that “there are points in the [FDA’s drug or medical device approval] process where single individuals can block approvals.” What is the likelihood that each such individual will have more knowledge than anyone else?
  9. In the words of Cordato (1992, p. 116), “[Neoclassical] economists have . . . constructed a parallel universe that looks very little like the one with which we must cope, and assessed the efficiency problem that would exist in that universe.”
  10. See Higgs (1993) for further contrasts of central planning and free markets as institutions for allocating the risks associated with the use (or nonuse) of medical goods. Also, excellent discussions may be found in Gieringer (1985, 1986) and Weimer (1982, pp. 263–77).
  11. For estimates of a number of commonly borne risks, see Wilson and Crouch (1987, p. 236) and Greer (1993, p. 443). Reporting on studies of “the implicit values of life reflected in decisions involving a broad range of risky product and job choices,” Viscusi (1991, p. 51) notes that “the preferences with respect to risk follow patterns one would expect if these risks were the result of rational tradeoffs.”
  12. Gieringer (1985, p. 201) notes that “the overwhelming number of drug accidents are due to old, not new, drugs.”
  13. Wardell (1979, p. 33) notes that “current Phase III trials [the final stage of the clinical testing], although the most costly and time-consuming part of the clinical development process, add very little to what has already been learned about a drug’s efficacy and toxicity by the end of Phase II.” Conceivably, MU(I) might increase in the early stage of testing, but eventually it must decline, if only because the human lifespan is limited. In using the model, nothing is gained by considering an initially rising portion of the MU(I) function.
  14. I can imagine conditions such that MU(B) would not be a monotonic increasing function of t. For the model, all that matters is that if MU(I) ever intersects MU(B), it does so from above. Otherwise the model allows an absurdity: that a person favors early use of the product but, beyond a certain period of testing, prefers to wait for more testing.
  15. The agency does not set the test duration at the beginning of the process. Rather, it extends the period sequentially (and unpredictably) by advising the applicant from time to time that more information must be submitted or additional tests performed or simply by spending more time processing the initial application.
  16. Whether the manufacturer voluntarily performs the additional testing desired by Person A is a separate issue, which I presume depends on the seller’s estimate of whether, given the expected incremental streams of cost and revenue, the additional testing will increase the present value of the firm.
  17. Some of the complaints, which have appeared recently in the press, are quoted in Higgs (1994).
  18. In a paper that is for the most part excellent, Weimer (1982, pp. 253–55) comes close to reaching this conclusion, but his analytical framework, cast in terms of hypothetical numerically comparable costs and benefits for different groups, can be, as he recognizes, only a means of illustrating a point, not a compelling demonstration. Weimer’s analysis remains tied to the neoclassical concept of social efficiency: “So long as there were patients who would elect to take drug X after being informed of the benefits and risks associated with it, the regulatory decision not to allow marketing would be socially inefficient” (p. 255). In fact the decision is much worse than merely “socially inefficient”: it harms some consumers and helps nobody.
  19. One frequently sees news items like those from the Wall Street Journal whose headlines announced “Study Finds Bristol-Myers Heart Drug Slows Down Kidney Disease in Diabetics” (November 11, 1993) and “Breast Cancer Drug Now Gaining Favor May Also Reduce Risk of Heart Disease” (September 1, 1993).
  20. For example, by the 1980s, on the basis of extensive research reported in the medical literature, physicians accepted that patients with heart disease can reduce their risk of heart attack by taking a little aspirin each day, but the FDA forbade the sellers of aspirin to mention this benefit in their advertising to the public. See Pearson and Shaw (1993, pp. 12–15, 55–56, 81–84). Of course, no drug company will spend hundreds of millions of dollars to gain FDA approval to make a new claim when the product cannot be patented and many different companies can produce it.
  21. Physicians are legally free to use drugs for unapproved indications, but in practice they are reluctant to do so because of fears related to malpractice litigation. See Gieringer (1985, pp. 189–90; 1986, p. 17) and Nicholas Bachynsky, M.D., in the foreword of Anderson and Anderson (1987, pp. viii, xi-xii).
  22. In a few instances in recent years the FDA has taken into account foreign information, but these instances are quite exceptional.
  23. Anderson and Anderson (1987) catalogue 192 generic and 1,535 brand-name tested drugs available abroad but not approved for sale in the United States.
  24. The FDA has designated some products for consideration on a “fast track.” See Grabowski and Vernon (1983, p. 27). But this distinction represents the attempt of a few bureaucrats to “pick winners.” There is no reason to believe that they can do so more successfully than others can. See William Wardell (quoted in Kazman 1990, p. 45) for a case of egregious misclassification. In any event the agency’s discrimination usually reflects judgments of life-saving potential rather than the priorities of consumers, who might, for example, place a relatively high value on expediting the availability of a drug to prevent a disfiguring disease such as severe acne or a painful and debilitating disease such as arthritis, even though the disease is not fatal. Moreover, the FDA’s “fast-tracking” efforts, along with its attempt to speed the development of so-called “orphan drugs” and other exceptions, have not actually reduced the average time required for approval. See DiMasi, Bryant, and Lasagna (1991, p. 480), Weimer (1982, p. 249), Anderson and Anderson (1987, p. x), Siegel and Roberts (1991, pp. 71–73, 77), Ward (1992, p. 51), and Kazman (1992, p. 6).
  25. Drugs for the treatment of AIDS furnish the outstanding example, but by no means the only one. The AIDS story is told in dramatic fashion by Kwitny (1992).