Rational Expectations Are Endogenous to and Abide by “the” Model
Abstract and Keywords
This chapter examines the main assumptions of the rational expectations hypothesis (REH). REH is a pillar of the neo-Walrasian approach to general equilibrium, a mathematically demanding theory purporting to show how the interaction between rational agents engaged in constrained maximization of consumption, production, profits, etc., over time, generates a unique and stable intertemporal equilibrium. This chapter first provides a historical overview of REH before discussing John Muth's critique of exponential averages as a forecasting technique and his claim that exponential smoothing is an optimal filtering method, along with his other arguments against adaptive expectations. It then considers the application of REH to macroeconomics and proceeds by analyzing some of the criticisms against REH, including the mathematical or computational difficulties present in RE models and the compatibility of RE models with empirical data. Finally, it highlights REH's limited methodological relevance when it comes to modeling observed economic behavior.
Keywords: rational expectations hypothesis, general equilibrium, rational agents, John Muth, exponential averages, exponential smoothing, adaptive expectations, macroeconomics, empirical data, economic behavior
THE rational expectations hypothesis (REH) is a much more ambitious, comprehensive, and complex theory than any of the expectations theories that we have just reviewed. It is a pillar of the neo-Walrasian approach to general equilibrium, a mathematically demanding theory purporting to show how the interaction between rational agents engaged in constrained maximization of consumption, production, profits, etc., over time, generates a unique and stable intertemporal equilibrium. The explicit consideration of time in this general equilibrium approach calls either for future markets in all goods or, failing such markets, for an expectations theory of future prices, so as to account for the influence of these expected prices on current transactions.
Like many other revolutions, the REH started as a criticism and a rejection of the preexisting order, namely, adaptive expectations and economic models in which expectations are assumed to be exogenous, that is, independent of the forecasts made by the model of interest. John Muth is the economist who launched this revolution in 1960. But this revolution went much further than a mere rejection of adaptive expectations.
The first positive proposition made by the REH is that, for a model to be logical and therefore valid, expectations have to be endogenous; that is, they must coincide with the forecasts made by this model. Muth asserted that this identity between the model’s forecasts and agents’ (p.18) expectations defines rational expectations. This first positive proposition implies the second one, according to which all economic agents use the same model of the economy. Almost simultaneously, Fama formulated the efficient market hypothesis, which develops very much the same line of argument in the context of financial markets.
Last but not least, if all agents use the same correct model of the economy, it follows—as argued by Lucas—that they know how to forecast and how to adjust to the consequences of the decisions made by monetary and fiscal authorities. Therefore, concludes the REH, monetary policy and fiscal policy are ineffective or neutral, and policymakers should not even seek to steer real macroeconomic variables such as growth and employment. Markets can be trusted to do this in the most effective way.
In particular, policymakers should not fear triggering a contraction in private demand when they fight inflation by curbing money creation or when, to rebalance public finances, they cut public expenditures and raise taxes. In the first case, economic agents will recognize that money is neutral; in the second one, according to the Ricardian equivalence, they will reduce their current savings, as they figure out that they will pay less tax in the future. In this view of the world, there cannot be any endogenous cyclical fluctuations. Whatever fluctuations do exist should, according to real business cycle theory, be explained in terms of exogenous random shocks, that is, unexpected shocks. Let us elaborate on these different points.
2.2 Muth’s Critique of Exponential Averages
The central insight of the REH is that economic agents should form their expectations by making the most efficient use of all available information. Even if this information were limited to the past values of the data they need to forecast, the least they can do is to learn from their forecasting errors. There should be no exploitable pattern in their forecasting errors, which should be randomly distributed. If, for example, monthly inflation rates are in a constantly rising trend, an exponential average of past inflation rates is bound to be “behind the curve” of observed inflation. This systematic error is information that the public can exploit to improve its understanding of the dynamics of inflation.
(p.19) Despite this potential weakness, exponential averages had, in the 1950s, become a widely used and rather successful forecasting technique. This led Muth to search for the conditions under which an exponential average produces unbiased and minimum mean square error forecasts of a variable, assumed to be a linear combination of independent random shocks1 (see proof C.1, p. 277).
To put it differently, Muth demonstrated that exponential smoothing is an optimal filtering method if the variable of interest follows a random walk. By looking at exponential smoothing as a linear regression, one can shed further light on Muth’s insight. It is easy to demonstrate that the function2

f(a) = ∑i≥0 (1 − k)i(xn−i − a)2

is minimized for a = yn, the exponential average defined by relationship 1.1.
The function f (a) formulates a weighted linear regression of the time series xn on a constant a with the weighting coefficients declining exponentially with respect to the age i of each observation (since 0 < k < 1). This approach clearly shows that exponential smoothing is appropriate only when one can fit the time series xn by a horizontal line in the neighborhood of n. If the time series exhibits a trend, be it rising or declining, or fluctuations, exponential smoothing should not be used. In other words, exponential smoothing is not an appropriate tool to deal with the accelerating rates of change that characterize hyperinflation (or financial bubbles).3
Finally, one can also interpret an exponential average as a filter, which maintains the filtering error xn − yn in a constant ratio to the forecasting (p.20) error xn − yn−1, irrespective of the latter’s magnitude. From relationship 1.1, we get indeed

xn − yn = (1 − k)(xn − yn−1)
For k ≈ 0, the filtering error is close to the forecasting error. For k ≈ 1, the filtering error is small, since yn ≈ xn. This relationship illustrates the trade-off presented by exponential averages. If k ≈ 1, the filtering error is small, but the smoothing effect is negligible. This is what is needed in the presence of very volatile time series and regime changes. Conversely, if k ≈ 0, the smoothing effect is strong, but the filtering error is large. This is what is needed for well-behaved time series, without regime changes. For lack of flexibility, an exponential average is not fit to deal with a time series whose volatility varies over time. In any case, the filtering and the forecasting errors are of the same sign, be it positive or negative.
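The weakness discussed above is easy to reproduce numerically. The following sketch is purely illustrative (the series and the value of k are arbitrary assumptions, not taken from the text): applied to a constantly rising series, an exponential average produces forecasting errors that all share the same sign.

```python
# Illustrative sketch: an exponential average y_n = k*x_n + (1 - k)*y_{n-1}
# applied to a steadily rising series stays "behind the curve".
def exponential_average(xs, k, y0):
    """Return the successive exponential averages of xs with smoothing constant k."""
    ys, y = [], y0
    for x in xs:
        y = k * x + (1 - k) * y  # relationship 1.1
        ys.append(y)
    return ys

trend = [0.5 * n for n in range(1, 21)]  # a constantly rising "inflation" series
smoothed = exponential_average(trend, k=0.3, y0=trend[0])

# Forecasting errors x_n - y_{n-1}: all of the same (positive) sign, a
# systematic pattern that a rational agent could learn to exploit.
errors = [x - y_prev for x, y_prev in zip(trend[1:], smoothed[:-1])]
print(all(e > 0 for e in errors))  # prints True
```

The systematic positive error is precisely the exploitable pattern that, in Muth’s argument, a rational forecaster would not leave on the table.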
In retrospect, Muth’s discussion of exponential averages clearly opened the way to the REH because, for the first time, it asserted that the mathematical expectation of a variable x is the benchmark against which the forecasting performance of its exponential average y should be measured.
By showing that the use of exponential averages should be restricted to certain cases, Muth had struck his first blow against adaptive expectations.
Since he posited that the mathematical expectation of a variable is the benchmark against which the performance of a specific forecasting method, that is, exponential average, should be measured, it was natural and logical on the part of Muth to push his argument further. He did so by stating that the probability distribution indicated by “the relevant economic theory” is what rational economic agents take as their expectation of the variable of interest.
2.3 Model Consistent Expectations
In general, the variable of interest is not autonomous with respect to other variables; it does not have a life of its own, and available information is not limited to its past values. Time series are also available for other variables that may influence the variable of interest or that (p.21) may be influenced by it. For example, intuition tells us that there must be some interaction between inflation, on one side, and wages, commodities prices, taxes, interest rates, the money supply, exchange rates, and so on, on the other side. Building an economic model consists in quantifying these interactions. The REH argues that it is not logical to build a model in which expectations are assumed to be exogenous, that is, formed as if this very model did not exist or remained unknown to economic agents. For example, in a model where inflation results from money growth, it is not logical to assume that inflation expectations are a mere exponential average of past inflation rates. Two of Muth’s followers, Sargent and Wallace, express this idea very well:4
The usual method of modeling expectations involves supposing that they are formed by extrapolating past values of the variable being predicted, a scheme that usually, though not always, assumes that the people whose expectations count are ignorant of the economic forces governing the variable they are trying to predict.
But it was Muth who first struck this second blow to adaptive expectations. As a matter of fact, the idea of rational expectations was suggested to Muth by the two following observations he made about the expectations data collected through surveys:5
1. Averages of expectations in an industry are more accurate than naive models and as accurate as elaborate equation systems, although there are considerable cross-sectional differences of opinion.
2. Expectations generally underestimate the extent of changes that actually take place.
Muth explained these observations by suggesting that
expectations, since they are informed predictions of future events, are essentially the same as the predictions of the relevant economic theory. At the risk of confusing this purely descriptive hypothesis with a pronouncement as to what firms ought to do, we call such expectations “rational.” It is sometimes argued that the assumption of rationality in economics leads to theories inconsistent with, or inadequate (p.22) to explain, observed phenomena, especially changes over time. Our hypothesis is based on exactly the opposite point of view: that dynamic economic models do not assume enough rationality.
The hypothesis can be rephrased a little more precisely as follows: that expectations of firms (or, more generally, the “subjective” probability distribution of outcomes) tend to be distributed, for the same information set, about the prediction of the theory (or the “objective” probability distribution of outcomes).
The hypothesis asserts that … the way expectations are formed depends specifically on the structure of the relevant system describing the economy.
If the prediction of the theory were substantially better than the expectations of the firms, then there would be opportunities for the “insider” to profit from the knowledge.
Translated into mathematical terms, this show of modesty on the part of economists says that the expected value of a variable x at n consists of its probability distribution, conditional on the information In−1 provided by the “relevant economic theory” at time n − 1:

P(xn | In−1)
Strictly speaking, the expectation should be represented by the probability distribution predicted by the relevant economic theory. In practice, for the sake of computational convenience, most of the time it is represented by a single quantity, that is, the mathematical expectation of this probability distribution.
Being thus defined as the mathematical expectation of all potential outcomes, this rational expectation will naturally tend to “underestimate the extent of changes that actually take place.”
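A small simulation can illustrate why a mean-based forecast moves less than the variable it predicts. The sketch below is not drawn from Muth’s paper; the AR(1) process and its parameters are arbitrary assumptions chosen only to make the point that the conditional mean filters out the unpredictable shock.

```python
# Illustrative simulation (assumed AR(1) process, arbitrary parameters): the
# rational forecast is the conditional mean rho * x, which excludes the
# unpredictable shock and therefore moves less than the realized series.
import random

random.seed(0)
rho = 0.8
x = 0.0
expected_moves, actual_moves = [], []
for _ in range(10_000):
    forecast = rho * x                        # conditional expectation of the next value
    x_next = rho * x + random.gauss(0.0, 1.0)  # realized value includes the shock
    expected_moves.append(abs(forecast - x))   # change the forecast anticipates
    actual_moves.append(abs(x_next - x))       # change that actually takes place
    x = x_next

# On average, the anticipated change understates the realized change.
print(sum(expected_moves) < sum(actual_moves))  # prints True
```

Averaging over all potential outcomes removes exactly the component that makes realized changes large, which is why such forecasts tend to “underestimate the extent of changes that actually take place.”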
Having thus defined rational expectations, Muth continues with a series of very important statements that further part ways with Keynes’s “vision” of financial markets as a beauty contest where herding and destabilizing speculation are rife:
Allowing for cross-sectional differences in expectations is a simple matter, because their aggregate effect is negligible as long as the (p.23) deviation from the rational forecast for an individual firm is not strongly correlated with those of the others. Modifications are necessary only if the correlation of the errors is large and depends systematically on the other explanatory variables … Whether such biases in expectations are empirically important remains to be seen.
If price expectations are in fact rational, we can make some statements about the economic effects of commodity speculation … Speculation reduces the variance of prices by spreading the effect of a disturbance over several time periods.
Muth claimed that his hypothesis does a better job than alternative theories at explaining relevant data. What he did was to compare the empirical implications of the REH with those of the cobweb “theorem,” which purports to describe the formation of prices in agricultural markets, under the assumption that farmers do not learn from experience and form very naive—extrapolative or adaptive—expectations.
Nevertheless, Muth’s REH had very little influence on the development of economic theory during the 1960s. Muth’s attempt to turn the existing order—Keynesianism as it happens—upside down was probably too much for the high priests of the day to swallow. Furthermore, the agricultural markets in which the REH was tested were probably seen as too incidental to deserve a lot of attention.
2.4 REH and Macroeconomics
It was Lucas who brought the REH to the forefront of macroeconomic theory, with two seminal articles published, respectively, in 1972 and 1976. In the first one, Lucas used rational expectations to explain why there should be no trade-off in the long run between higher inflation and lower unemployment, contrary to what the Phillips curve suggested, at least in the short run.6 The insight of this paper is that policymakers cannot hope to fool all people all the time: workers should not be expected to persistently mistake a rise in nominal wages for a rise in real wages. Therefore, monetary policy is neutral. In the second paper, the famous Lucas critique, he used rational expectations again, but this time to explain why most econometric models fail tests for structural changes.7 According to the Lucas critique, this is so because (p.24) the coefficients of such models incorporate the expectations that people rationally form, not only about policy instruments, but also about exogenous variables:8
If expectations are rational and properly take into account the way policy instruments and other exogenous variables evolve, the coefficients … of the model will change whenever the processes governing those policy instruments and exogenous variables change.
In other words, these coefficients are not policy-invariant. They change whenever the rules of the game change.
As a transition to a critical discussion of the REH, let us quote Arrow’s own presentation of its main insights.
Although Arrow, as we shall soon see, has strong arguments against the REH, he gives a fair presentation of its structure:9
The new theoretical paradigm of rational expectations holds that each individual forms expectations of the future on the basis of a correct model of the economy, in fact, the same model that the econometrician is using … Since the world is uncertain,10 the expectations take the form of probability distributions, and each agent’s expectations are conditional on the information available to him or her … Each agent has to have a model of the entire economy to preserve rationality. The cost of knowledge … has disappeared; each agent is engaged in very extensive information gathering and data processing.
Rational expectations theory is a stochastic form of perfect foresight.
A simple model, borrowed from E. Malinvaud, will help to illustrate how a model with endogenous expectations is constructed.11 This model purports to describe the relationships among excess demand, prices, and output (see proof C.3, p. 280).
While the REH has reached a prominent position in contemporary economic thought, it has never succeeded in silencing the many economists who challenge it. The critiques of the REH fall into three categories. A first group of critiques points to the mathematical or computational difficulties present in RE models. A second group of critiques goes one step further by questioning the key assumption (p.25) underpinning these models. Last but not least, the compatibility of RE models with empirical data remains an open question.
2.5 Mathematical Difficulty #1: Modeling RE with Risk
Let us start with the first mathematical difficulty. While Muth originally defined rational expectations in terms of probability distributions, in most RE models, it is the mathematical expectation of a given variable that represents a rational expectation. In other words, a single number, a weighted average, is thus substituted for a set of numbers. The reason for this substitution is obvious: it is much easier to handle a single number than a set of numbers, even when they are distributed according to a certain statistical law.
As pointed out by Malinvaud, a purist might feel uneasy with such a substitution, for it involves neglecting risk in general and the tails of distributions in particular, while empirical research suggests that decision-makers deemed “rational” tend to minimize the cost of being wrong by avoiding strategies entailing low-probability, but potentially catastrophic, outcomes. We shall come back to this point in chapter 10, when we discuss the Allais paradox.
The alternative consisting of dealing with probability distributions requires computational abilities that dwarf even advanced statistical software, such as EViews. Here is what the Version 6 users’ manual said in 2007, repeating what the Version 5 manual had said a few years earlier:
If we assume that there is no uncertainty12 in the model, imposing model consistent expectations simply involves replacing any expectations that appear in the model with the future values predicted by the model … A deterministic simulation of the model can then be run using Eviews ability to solve models with equations which contain future values of the endogenous variables. When we add uncertainty13 to the model … the expectations of agents should be calculated based on the entire distribution of stochastic outcomes predicted by the model … At present, Eviews does not have built in functionality for automatically carrying out this procedure. Because of this, Eviews will not perform stochastic simulations if your model contains (p.26) equations involving future values of endogenous variables. We hope to add this functionality to future revisions of Eviews.
2.6 Mathematical Difficulty #2: Nonlinearity
Let us now turn to the second mathematical difficulty. In Malinvaud’s example and in much, if not most, of the literature on REH, the relationships considered are linear in their variables. This revealed preference for linear relationships is not an accident. Rather, as Lucas and Sargent put it, it is “a matter of convenience, not of principle.”14 The convenience in question is computational; it has to do with the “computer bill,” not with mathematical theory (see proof C.4, p. 283).
Lucas and Sargent argue that, theoretically, we know how to deal with nonlinear forms via expensive recursive methods: the problem is only one of the cost of the computing power required to do this exercise. Nevertheless, their statement that “it is an open question whether for explaining the central features of the business cycle there will be a big reward to fitting nonlinear models” seems to owe more to a desire for analytical tractability and economical computing than to genuine economic conjectures and research, as chapters 3 to 6 of this book will show.
Whether a rational expectations model is linear or not is neither a pure question of computer bill, nor a mere challenge to its ability to accurately represent the real world. It is first and foremost an issue of relevance and utility. Dynamic linear models tend indeed to generate multiple equilibria. That a problem may admit several solutions is not an unknown situation: many well-known equations have several solutions depending on the values of their parameters. The problem with dynamic linear rational expectations models is rather that they admit, as stated by Guesnerie, an “embarrassing number of solutions.”15 By this, Guesnerie actually means a potentially infinite number of solutions, including bubble equilibria or sunspot equilibria, which are the expression or the fruit of self-fulfilling expectations that may or may not have anything to do with reality. If, for example, economic agents come to believe that an exogenous factor, say sunspots, has an impact on stock market returns and if, furthermore, they have good reasons to believe that they widely share this view, then, this expectation will fulfill itself, not directly of course, but indirectly by affecting investors’ behavior.
(p.27) The next round of critiques of the REH challenges the assumptions that it makes about the modeling ability of economic agents. Many economists consider these assumptions to be either nonrealistic or illogical.
2.7 Model Discovery in Uncertainty and Risk
Even if one is willing to accept the existence of the relevant model, there remains an important question as regards the learning process whereby this model has been discovered and estimated. The point has been made that the REH would need to specify the learning process whereby empirical frequencies are discovered and modeled into objective probabilities. How long does it take to discover them? As we have seen earlier, Hayek would answer that it can only be a never-ending process. Others think that it should in any case take a lot of time. According to Modigliani,16 “Benjamin M. Friedman has called attention to the omission from REH of an explicit learning model, and has suggested that, as a result, it can only be interpreted as a description not of short-run but of long-run equilibrium.”17
From this point of view, in the REH as in many other fields of economics, time is in fact absent.
The weight of Benjamin M. Friedman’s remark is indirectly increased by Lucas himself, who adds a very important disclaimer to his advocacy of the REH by saying that it cannot be valid when people are faced with uncertainty as defined by Knight. Right at the beginning of this book, we emphasized that the REH has a limited ambition, since it cannot deal with Knightian uncertainty. Let us again quote Lucas to clarify the relationship among the REH, uncertainty, and risk.18 “In order that the latter assumption [REH] have an operational meaning, the analysis will be restricted to the situation in which the relevant distributions have settled down to stationary values and can thus be ‘known’ by traders.”
Leaving uncertainty aside and content with assuming risk, Lucas and Sargent answer Benjamin M. Friedman’s argument that one should assume economic agents to be Bayesian learners when they are confronted with stochastic outcomes, such as the daily, weekly, monthly returns on a given security or asset class. Aware as agents should be of the variability of the observed mean and variance of returns from one set of observations to the next, they should, for lack of sufficient data, (p.28) take with a pinch of salt whichever beliefs their experience may lead them to hold about the true, but unknown (or yet to be discovered) values of these two parameters. They should be open to revising their prior beliefs (not to say their expectations) in a systematic and rigorous way whenever fresh data become available. In other words, they should practice Bayesian inference (see table 2.1).
Bayesian learning uses Bayes’s theorem, which says19

P(H | E) = P(E | H) P(H) / P(E) (2.7)

where
• P (H), the prior probability, is the ex ante probability of H (i.e., before E is observed).
• P (H | E), the posterior probability, is the ex post probability of H (i.e., after E is observed).
• P (E | H), the likelihood, is the probability of observing E if H holds.
• P (E), the marginal likelihood or model evidence, is the same for all possible hypotheses (i.e., it is independent of any hypothesis).20
Nothing in relationship 2.7 prevents P (H) from being a subjective probability (a “guesstimate”), even though there is not always an obvious prior distribution to take. In the real world, the lack of data often leaves no choice but subjectivity.
Hence, the beauty of Bayes’s theorem is that it “enables us to pass from a particular experience to a general statement.”21 Bayes’s inference can, of course, be used recursively, each and every time new evidence becomes available.
Table 2.1 Bayesian Learning Process

Probabilities/degree of belief
Prior (H): P(H)
Particular evidence (E): likelihood P(E | H)
Posterior (H | E): P(H | E)
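As a minimal illustration of this recursive use of Bayes’s theorem, consider the following sketch (a hypothetical coin example, not taken from the text): each posterior becomes the prior for the next piece of evidence.

```python
# Hypothetical coin example: recursive application of Bayes's theorem
# P(H | E) = P(E | H) * P(H) / P(E), with yesterday's posterior serving
# as today's prior.
def bayes_update(prior, likelihoods, evidence):
    """prior: {hypothesis: P(H)}; likelihoods: {hypothesis: {evidence: P(E | H)}}."""
    p_e = sum(likelihoods[h][evidence] * prior[h] for h in prior)  # marginal likelihood P(E)
    return {h: likelihoods[h][evidence] * prior[h] / p_e for h in prior}

# Two rival hypotheses about a coin: fair, or biased towards heads.
belief = {"fair": 0.5, "biased": 0.5}  # prior degrees of belief
lik = {"fair": {"H": 0.5, "T": 0.5},
       "biased": {"H": 0.8, "T": 0.2}}

for flip in "HHTHHHHH":  # evidence arriving one flip at a time
    belief = bayes_update(belief, lik, flip)

print(belief["biased"] > belief["fair"])  # prints True
```

Each update is a single application of relationship 2.7; the recursion is what turns particular pieces of evidence into a progressively sharper general belief.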
For all its apparent conciseness, Bayesian inference is not that easy to implement.22 To compute the posterior probability, one needs to compute the marginal likelihood (or model evidence). This can be discouragingly complex if the hypothesis H pertains to many parameters. For in this case, the term P (E) takes the form of a complex multiple integral. Yet many multidimensional distributions cannot be expressed in analytic terms. Modern technology makes it possible to perform these computations, but the “computer bill” is not negligible.
On the other hand, for simpler models such as the normal distribution, which is used pervasively in finance despite the fact that empirical distributions have fat tails, Bayesian inference does not add much, if anything, to the simpler frequentist approach. The results of both approaches converge.
When H is the assumption that the empirical distribution of a stochastic variable can be modeled with a normal distribution, the purpose of Bayesian inference is indeed to revise the prior estimates of its unknown mean μ and standard deviation σ.23 With μn−1 being the prior estimate of μ, Nn−1 the number of observations used to make this prior estimate, and x̄n the mean return observed on a new sample of Nn observations, the posterior estimate of μ is

μn = (Nn−1μn−1 + Nnx̄n) / (Nn−1 + Nn) (2.10)
The similarity between relationships 2.10 and 1.1 is striking, when one substitutes y for μ, k for Nn/(Nn−1 + Nn), and x for x̄n. In the case of a random variable assumed to be normally distributed, all that Bayesian (p.30) inference does is to substitute the variable term Nn/(Nn−1 + Nn) for the constant k. In other words, the elasticity of expectations becomes time-varying, which is a step forward. However, as the total number of observations grows, this term converges towards 0, since Nn/(Nn−1 + Nn) tends to 0 as Nn−1 grows for a given sample size Nn.
If, in the case of a normally distributed variable, Bayesian inference seems to be an unduly long roundabout way to reintroduce a muted form of adaptive expectations by the backdoor, so be it! This is not to mention the fact that the assumption underpinning Bayesian learning is that there exists a true, stationary distribution, which only quietly awaits to be discovered by perceptive economic agents. Yet, in our world, uncertainty is arguably the rule, and risk the exception. Or, to borrow Knight’s words: “It is a world of change in which we live, and a world of uncertainty.”
Moreover, while it does take time to learn through Bayesian inference, that is, the time needed to collect new evidence, a close examination of relationship 2.10 shows that time is in fact absent from it. The sequential order of both the Nn−1 prior observations and the Nn new observations is simply irrelevant. Both samples can be reshuffled in any arbitrary order without having any impact on the value of the posterior mean μn. Everything happens as if all observations were given simultaneously to the observing agent. This crucial observation remains valid even in more sophisticated forms of Bayesian inference, such as recursive least squares learning, which theorists present as the ultimate form of rational learning.
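The count-weighted updating rule for the mean of a normally distributed variable, and the vanishing gain it produces, can be sketched in a few lines (the batches of observations are arbitrary illustrative data, not from the text):

```python
# Illustrative sketch: Bayesian updating of the mean of a normally distributed
# variable. The posterior mean is a count-weighted average of the prior mean
# and the mean of the new sample, so the gain N_n / (N_{n-1} + N_n) shrinks
# as observations accumulate.
def update_mean(mu_prev, n_prev, new_sample):
    n_new = len(new_sample)
    xbar = sum(new_sample) / n_new
    gain = n_new / (n_prev + n_new)         # the time-varying counterpart of k
    mu = mu_prev + gain * (xbar - mu_prev)  # same shape as y_n = y_{n-1} + k(x_n - y_{n-1})
    return mu, n_prev + n_new, gain

mu, n = 0.0, 0
gains = []
for batch in ([1.0, 3.0], [2.0, 2.0], [4.0, 0.0]):  # new evidence arriving in batches
    mu, n, gain = update_mean(mu, n, batch)
    gains.append(gain)

print(mu)                              # prints 2.0, the plain mean of all six observations
print(gains[0] > gains[1] > gains[2])  # prints True: the gain converges towards 0
```

Reshuffling the six observations across or within batches leaves the posterior mean unchanged, which illustrates the point just made: the sequential order of the data is irrelevant, so time is in fact absent from the updating process.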
2.8 Rational Learning, Recursive Ordinary Least Squares, and 1990s Adaptive Expectations
Thanks to its parsimony, the normal distribution is useful to highlight the basic mechanisms involved in Bayesian inference. That said, Bayesian inference encompasses much more than the mere updating of the normal distribution’s parameters. Under the umbrella of Bayesian inference, one finds more or less complex algorithms that process new information and update preexisting knowledge in a systematic way.
Under the rational learning assumption, economic agents use an ordinary linear regression model to form their expectations. In other words, they all use the same linear combination of m − 1 explanatory variables (m with the constant) to predict the variable of interest. The vector β consisting of the m coefficients of this linear combination is estimated through ordinary least squares (OLS). If needed, please see proof C.5, p. 285, for a reminder of the formulation and proof of this classic estimation problem, the well-known solution of which is

β = (XᵀX)−1Xᵀϒ (2.12)
Of course, the coefficients estimated at a given point n − 1 in time depend on the observations ϒn−1 and Xn−1 available up to this time. Ex ante, one can never rule out that new observations would yield different estimates of the regression coefficients. In fact, parameter drift is the rule rather than the exception. This is where rational learning and recursive OLS linear regression step in by assuming that economic agents update their estimates of the regression coefficients whenever a new data point becomes available.
This exercise leads to the following recursive relationship between successive estimates of the regression coefficients

βn = βn−1 + Kn(yn − xnβn−1) (2.13)

with

Kn = (Xn−1ᵀXn−1)−1xnᵀ / (1 + xn(Xn−1ᵀXn−1)−1xnᵀ) (2.15)
The term Kn is called the gain. Of course, to compute βn, according to relationship 2.12, one could directly compute (XnᵀXn)−1Xnᵀϒn. However, such a computation would require handling all the data in the sample (the n observations) at each new step, that is, increasingly large matrices. In contrast, only the last observations are needed to (p.32) compute 2.15 and 2.13, assuming that (Xn−1ᵀXn−1)−1 has been previously computed. Since Xn−1 and Xn−1ᵀ are, respectively, (n − 1) × m and m × (n − 1) matrices, (Xn−1ᵀXn−1)−1 is an m × m matrix; hence, its size is independent of the number of observations. As for xn and xnᵀ, they are, respectively, 1 × m and m × 1 vectors. It follows that xn(Xn−1ᵀXn−1)−1xnᵀ is a scalar and Kn an m × 1 vector.
Besides being more economical to compute, relationship 2.13 also highlights the role played by new information in the updating process of the prior knowledge represented by the vector βn−1. The term yn − xnβn−1, which is a 1 × 1 vector, represents indeed the forecasting error observed in n, since yn is the actual outcome and xnβn−1 the forecast made under the assumption that the coefficients estimated in n − 1 are still valid in n. In other words, the posterior knowledge is equal to the prior one plus a fraction of the forecasting error. One can demonstrate that recursive OLS linear regression is a restricted form of the Kalman filter.25 It is interesting to compare relationships 2.13 and 1.1 while bearing in mind the equivalence between notations displayed in table 2.2.
The two relationships have very similar structures, the main difference being that with recursive OLS linear regression, the gain is not constant, but time-varying. This being said, time remains in fact absent from this updating process. As long as yi and xi remain on the same line, the lines of the vector ϒn − 1 and of the matrix Xn − 1 (the observations available up to n − 1) can be reshuffled in any arbitrary vertical order without having any impact on βn − 1 (the coefficients estimated in n − 1). To put it differently, economic agents are implicitly assumed to process and to give the same weight to all available data, even those
Table 2.2 Equivalence Table

Recursive OLS linear regression (2.13) | Adaptive expectations (1.1)
prior estimate βn−1 | prior forecast
gain Kn (time-varying) | constant gain
forecasting error yn − xnβn−1 | forecasting error yn minus the prior forecast
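The recursion defined by 2.13 and 2.15 is easy to verify numerically. The following sketch (the simulated data and variable names are ours, not the book's) updates (X′X)−1 with the Sherman–Morrison identity, so that each step only touches the newest observation, and checks that the result coincides with batch OLS on the full sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y = X c + noise, with m = 2 coefficients.
n, m = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + 0.1 * rng.normal(size=n)

# Initialize with the batch OLS estimate on the first m observations.
P = np.linalg.inv(X[:m].T @ X[:m])        # (X'X)^-1, an m x m matrix
beta = P @ X[:m].T @ y[:m]

# Recursive updates: beta_n = beta_{n-1} + K_n (y_n - x_n beta_{n-1}).
for i in range(m, n):
    x = X[i:i + 1]                        # the new observation, a 1 x m row
    # Sherman-Morrison: update (X'X)^-1 using only the new row.
    P = P - (P @ x.T @ x @ P) / (1.0 + (x @ P @ x.T).item())
    K = P @ x.T                           # the gain, an m x 1 vector
    beta = beta + (K * (y[i] - (x @ beta).item())).ravel()

# The recursion reproduces batch OLS on all n observations.
beta_batch = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.allclose(beta, beta_batch))      # True
```

Because the final estimate equals the batch one, shuffling the order of the earlier observations before running the recursion would leave the estimate unchanged, which is precisely the absence of time discussed in the text.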
It is to address this criticism that scholars have suggested embedding forgetting26 in recursive OLS linear regression.27 They achieve this by introducing an exponential weighting in the function to be minimized (the loss function). Instead of minimizing the sum of the squared residuals, the problem becomes one of minimizing their exponentially weighted sum, which gives greater weight to the more recent residuals.
One can easily prove that this formulation leads to substituting exponential averages, variances, and covariances for arithmetic ones in the formulation of the coefficients (c1, c2, …, cm), the same arbitrarily chosen forgetting factor being applied to the dependent variable and all the independent variables (see proof C.6, p. 289). Once again, it looks somewhat as if adaptive expectations were stealthily reintroduced by the backdoor, a fact at least partially acknowledged by Sargent, who calls recursive OLS linear regression with forgetting “1990s adaptive expectations.”
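This substitution is easy to check numerically. In the sketch below (the data and the forgetting factor are ours, purely illustrative), exponentially weighted least squares on a single constant regressor yields an exponentially weighted average of the observations, and the same number is obtained by an adaptive-expectations-style recursion whose gain settles down toward the constant 1 − λ:

```python
import numpy as np

lam = 0.9                                  # forgetting factor (illustrative value)
y = np.array([2.0, 3.0, 2.5, 4.0, 3.5])    # observations y_1 ... y_n
n = len(y)

# Weighted least squares on a constant regressor: minimize
# sum_i lam**(n - i) * (y_i - c)**2 over c. The solution is the
# exponentially weighted average of the observations.
w = lam ** (n - 1 - np.arange(n))          # weights lam**(n - i), newest = 1
c_wls = (w * y).sum() / w.sum()

# The same number computed recursively: the forecast is updated by a
# fraction 1/W of the forecasting error, and the gain 1/W converges to
# the constant 1 - lam as observations accumulate.
W, s = 1.0, y[0]
for yi in y[1:]:
    W = lam * W + 1.0
    s = s + (yi - s) / W
print(np.isclose(s, c_wls))                # True
```

The recursion has exactly the structure of relationship 1.1, with an asymptotically constant gain, which is why recursive OLS with forgetting earns the label "1990s adaptive expectations."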
2.9 One Model?
The starting point of the REH is hardly controversial: everybody agrees that economic agents do–or at least should do–their best to exploit all available information. But its final destination and conclusions are not immune from many relevant critiques, even if–or maybe because–the Royal Swedish Academy of Sciences awarded the Nobel Prize in Economic Sciences in 1995 to Lucas “for having developed and applied the hypothesis of rational expectations, and thereby having transformed macroeconomic analysis and deepened our understanding of economic policy.”28
Interestingly, in its citation, the Royal Swedish Academy seemed to be willing to defuse a potential critique by watering down the REH (p.34) with the following comment: “The REH does not imply that all agents have the same information, or that all agents know the ‘true’ economic model.”
It seems hard to reconcile this comment with the various texts we have quoted above. In fact, nothing seems further from the truth. If John Kay is to be believed, here is how Sargent–another REH theorist and 2011 Nobel laureate–replied, when asked if differences among people’s models mattered: “The fact is you simply cannot talk about their differences within the typical rational expectations model. There is a communism of models. All agents within the model, the econometricians and God share the same model.”29
Notwithstanding Muth’s claim that the REH is not normative, but simply descriptive, Sargent’s words have a definite normative, not to say totalitarian, tone. What initially looked like a show of modesty on the part of economists turns out to be a show of false modesty. The assumption that all agents use the same model is the easiest to criticize, for it immediately raises the following question: Which model? As one would “expect,” the answer is, of course, the model constructed by the advocates of the REH according to “the relevant economic theory,” to quote Muth once again. Sadly, there is no such thing as the relevant economic theory. Instead, there is a persistent conflict between different theories. Although it was written almost seventy years ago, Rist’s brilliant chapter Le conflit des théories des crises is still an interesting read, if only because it is humble.30 Even then, exogenous explanations of cyclical fluctuations–not yet called real business cycle theories–conflicted with endogenous explanations, which in turn disagreed about the causes of business cycles.
The simple model that we borrowed from Malinvaud contains a number of more or less explicit assumptions that are easy to challenge. For example, is the money supply really exogenous? Is the concept of real money balances the appropriate one to measure the demand for money? Why ignore potential changes in the velocity of money? Why assume that the relevant relationships are linear in their variables? And so on.
Much closer to us, the European sovereign debt crisis provides an interesting example of conflicting models of debt dynamics: while the public debt-to-GDP ratio is at the center of politicians and policy makers’ analysis and communication, the cross-country structure of sovereign spreads indicates that market participants focus much more (p.35) on public debt–to–tax revenues ratios. This is not a minor difference: in the first model, governments are implicitly assumed to own national income (probably because they can tax it); in the second one, their financial position is assessed through the lens of corporate finance!
2.10 How Complex a Model?
The existence of a model that agents use to form expectations seems to be more of an academic ideal than an accurate description of the real world. Malinvaud notes that, simple–if not simplistic–as it may seem, the model that he gives as an example is nevertheless representative of the models used by RE theorists. Indeed, the model discussed by Sargent and Wallace in Rational Expectations and the Theory of Economic Policy is hardly more elaborate than Malinvaud’s example.31 By their own admission, even the most prominent economists struggle with the mathematics of the more complex models. The extent to which central banks are incorporating RE in their own models seems at best incomplete, if not tentative: it is often limited to assuming that economic agents’ inflation expectations are equal to the central banks’ inflation targets. Yes, of course! But who would expect a central bank to construct a model assuming that its policy is not credible?
In terms of operational models, the REH has promised much more than it has actually delivered. This notwithstanding, it has remained the ruling paradigm, at least until the outbreak of the subprime crisis in 2007, for this crisis has challenged the belief that markets are rational and cannot deviate much and for long from fundamental value. As Edmund Phelps put it,32 “The lesson the crisis teaches, though it is not yet grasped, is that there is no magic in the market: the expectations underlying asset prices cannot be ‘rational’ relative to some known and agreed model since there is no such model.”
2.11 Internal Contradiction
Ironically, the REH would be less vulnerable if it allowed for competition between different models. For, as highlighted by Arrow, if all agents use the same model, an insurmountable internal contradiction presents itself, particularly in the case of financial markets:33
(p.36) But if all individuals are alike, why do they not make the same choice? … In macroeconomic models involving durable assets, especially securities, the assumption of homogeneous agents implies that there will never be any trading, though there will be changes in prices.
This dilemma is intrinsic. If agents are all alike, there is really no room for trade.… But if agents are different in unspecifiable ways, then … very few, if any, inferences can be made … identical individuals do not trade. Models of the securities markets based on homogeneity of individuals would imply zero trade.
If not all economic agents are alike, various modes of expectations formation can and must coexist, thus creating a set of permanently heterogeneous expectations. Among these various modes of expectations formation, one is likely to be dominant: to shape the so-called consensus and to drive the behavior of a majority of agents, if not in terms of headcount, at least in terms of income or wealth. Whether this dominant mode of expectations formation is rational is not a theoretical question to be answered a priori on the basis of logical consistency. It is an empirical question that can only be addressed through the test of compatibility with data.
2.12 Rational Adaptive Expectations
We can nevertheless infer that rational expectations are unlikely to be the dominant form of expectations. With Lucas’s insistence that there is no room for rational expectations in an uncertain world, we have already encountered one strong reason for their scope to be limited. With Sargent and Wallace’s case that if inflation happens to feed back into money creation, it can be rational to form adaptive inflation expectations, we encounter a second reason to further limit the scope for rational expectations.34 Keen as they clearly are to, so to speak, take over adaptive expectations by proving that it can be rational to form such expectations in the presence of interdependent variables, Sargent and Wallace in fact give, albeit inadvertently, a very strong reason to extend the scope for adaptive expectations in a kind of reverse takeover. For interdependence between variables (such as money and inflation), on the one hand, and endogenous money creation, on the other, is (p.37) not–contrary to what they conjecture with a lot of restraint–an exceptional situation only encountered during periods of hyperinflation, but rather what economics is all about.35
2.12.1 Falsifiability and Compatibility with Data
Considering the pervasiveness of uncertainty and interdependence in economics, it seems “rational” to limit the scope for rational expectations. But this exempts neither the advocates nor the critics of the REH from the test of compatibility with data. Sadly, this is easier said than done.
For endogenous expectations raise serious methodological difficulties when it comes to testing the relevance of RE models: indeed, can the REH be falsified? Since rational expectations cannot be observed directly (for they have an impact on the forecasted variables and are supposed to be formed in exactly the same way as variables are determined in an economic model of the economy), “how do we ever discover,” asks Blaug, “whether the REH theory is true or not?”36
According to Blaug, the REH is a proposition that cannot be proved wrong; hence, it is not a scientific proposition.
This difficulty probably contributes to fostering the impression that, on rational expectations as on many other issues, economists can only agree to disagree over issues of empirical methodology. Acrimonious disagreements persist about the extent to which this theory is compatible with available data. Screpanti and Zamagni blame the new classical economics for its “ability to ignore constant attacks from empirical research.”37 The advocates of the REH claim, of course, that its degree of compatibility with data is high. Its critics, be they Keynesian or not, retort that they are not convinced, on the ground that RE theorists indulge in excessive calibration of parameters.
One of the features of RE models is indeed their lack of parsimony. They tend to contain many parameters, for example, to account for the lagged effects of a given variable on another one. This great number of parameters facilitates getting fits that are statistically good but neither easy to interpret nor economically meaningful. With many parameters to calibrate, goes the criticism, it is almost impossible not to fit the data. This calibration problem is exacerbated when Bayesian learning leads, as new data become available, to an almost continuous revision of the estimated values of the parameters.
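The criticism can be illustrated with a deliberately extreme sketch (sizes, seed, and names are ours): regressing a pure-noise series on twenty of its own lags, with only forty usable observations, produces a respectable-looking in-sample fit even though there is, by construction, nothing to explain:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pure noise: there is nothing to explain, yet many lagged regressors
# still produce a deceptively good in-sample fit.
T, n_lags = 60, 20
y = rng.normal(size=T)
# Build the lag matrix; dropping the first n_lags rows avoids wraparound.
X = np.column_stack([np.roll(y, k) for k in range(1, n_lags + 1)])[n_lags:]
target = y[n_lags:]                        # 40 observations, 20 parameters

beta = np.linalg.lstsq(X, target, rcond=None)[0]
fitted = X @ beta
r2 = 1 - ((target - fitted) ** 2).sum() / ((target - target.mean()) ** 2).sum()
print(round(r2, 2))    # a "good" in-sample fit despite zero true structure
```

Out of sample, of course, such a fit has no predictive power whatsoever: the apparent explanatory power is an artifact of fitting many parameters to few observations.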
(p.38) This being said, many tests of the policy neutrality hypothesis, one of the main implications of the REH, have been carried out by RE theorists, largely with negative results, according to Blaug, Malinvaud, and Woodford, to name but a few.38 However, Blaug qualifies this empirical refutation by stressing the fact that these tests may not be conclusive, for it is impossible to isolate the rational expectations hypothesis from joint assumptions.39 In the absence of absolutely unambiguous empirical conclusions, it is quite legitimate to consider that the problem of expectations remains unsolved and to look for alternative theories, as suggested by Woodford:40 “The macroeconomics of the future … will have to go beyond conventional late-twentieth-century methodology … by making the formation and revision of expectations an object of analysis in its own right.”
2.12.2 Policy Implications
That the issue of expectations remains unsettled is further evidenced by the fact that some of the REH’s policy implications, such as the alleged possibility of fighting inflation without causing a slowdown in the economy, have not been verified, at least in the United States and in the United Kingdom in the early 1980s. In the same vein, almost three years into the European sovereign debt crisis, one can doubt that economic agents are rational enough to let themselves be guided by the Ricardian equivalence, according to which since fiscal austerity now implies fewer taxes in the future, they should save less now, thus offsetting the deflationary impact of fiscal austerity. If anything, the exact opposite seems to be happening. The way episodes of hyperinflation came to a sudden end in Germany, Austria, Hungary, and Poland in 1923–24, following changes in fiscal and monetary policy, is often presented as proof that credible policy announcements can “break” allegedly rational inflation expectations and instantly create a new set of rational expectations. The same historical facts can nevertheless be explained within an adaptive expectations framework: one just needs to assume that toward the end of a hyperinflation episode, as we shall see in chapters 4 and 7 of this book, the elasticity of expectations with respect to observed inflation is close to 1, so that the slightest slowdown in actual inflation, induced by some effective and persistent change in fiscal and monetary policy, is powerful enough to turn the tide and to trigger a fall in inflation expectations.
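The arithmetic behind this adaptive-expectations reading of the end of hyperinflations can be sketched with a toy simulation (all numbers are ours, purely illustrative):

```python
# Illustrative numbers only: monthly inflation drops from 50 percent to
# 2 percent after a credible policy change; expectations are adaptive,
# updated by a fraction a (the elasticity) of the observed forecast error.
def adaptive_path(a, actual):
    expected = [actual[0]]                 # start expectations at actual
    for pi in actual[1:]:
        expected.append(expected[-1] + a * (pi - expected[-1]))
    return expected

actual = [0.50] * 6 + [0.02] * 6           # policy change after month 6
fast = adaptive_path(0.9, actual)          # elasticity close to 1
slow = adaptive_path(0.2, actual)          # low elasticity

print(round(fast[-1], 3), round(slow[-1], 3))   # 0.02 0.146
```

With the elasticity close to 1, a policy-induced drop in actual inflation pulls expected inflation down almost immediately; with a low elasticity, expectations linger far above actual inflation for many months. No switch to rational expectations is needed to account for the sudden stop.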
What, in the end, should we make of the REH? Its limited methodological relevance when it comes to modeling observed economic behavior is probably the most important point to bear in mind. Its relevance is limited:
• First, by the pervasive presence of uncertainty in our world.
• Second, by its reluctance to consider nonlinear relationships between economic variables.
• Third, by the fact that it is “rational” to form adaptive expectations when economic variables are interdependent, which is generally the case.
What about the policy relevance of the REH? It seems somewhat immoderate to claim, as Lucas did in his Nobel lecture, that41 “the main finding that emerged from the research of the 1970’s is that anticipated changes in money growth have very different effects from unanticipated changes.”
Even though generals are known to have a propensity to prepare for the latest war they have fought, the idea that any strategy becomes ineffective as soon as the enemy can easily anticipate it has been at the core of military thinking for centuries. Actually, even early economists, such as Cantillon in 1755, were conscious of the fact that the effectiveness of monetary policy–the increase or decrease in the denomination of coins as it happened–depended on whether such decisions were officially announced or implemented without warning. To reach, on the one hand, the conclusion that a deflationary policy ordering the lowering of the écu from 5 to 4 livres over a period of 20 months (i.e., a monthly depreciation of 1.11 percent), as a Royal Arrêt did in France in 1714, will prompt borrowers to make haste to pay back their debts and lenders to offer loans on generous terms, or to reach, on the other hand, the conclusion that such behaviors will fail to materialize if the diminution of coins is “made suddenly, without warning,” one does not need to assume that all agents are forming rational expectations in all circumstances.
Somehow, the REH has never managed to convince those economists with first-hand experience of financial markets. The (p.40) REH’s failure to convince practitioners is well illustrated, for example, by Roger Bootle’s following comments:42
The financial world has not yet adjusted to the current reality of low inflation, let alone to the possibility of falling prices. … This is because it does not expect the currently low rates of inflation to continue.
Indeed, the very word “expectation,” which is so common in markets as well as in academic and policy circles, fails to capture the state of mind of investors. They do not really “expect” anything. Rather, they operate with a normal presumption and then add something for protection against an outbreak from normality.
Why are financial markets so slow to recognize the collapse of inflation which is now so evident to many business people and consumers? The short answer is because in trying to look forwards, they always start by looking back.
After 15 years working in the financial markets, I have come to believe that there is something strange about the way financial markets form their expectations.
Market practitioners are trapped by the impossibility of knowing the future. Financial institutions frequently invest in financial instruments with a life of 20 to 30 years. Yet how can anyone begin to know what the world will be like in 20 years’ time … ?
Market practitioners deal with this problem in two related ways. First, they assume that although the instrument itself is long lived, they can dispose of it when they wish by selling it to someone else.
Secondly, they try to assess the future by looking at the past. What the market appears to do is to go back perhaps 15–25 years, to form a basic assumption about the norm, and then they modify it by giving special weight to very recent experience and current levels. Why 15–25 years? There is no logical reason, but the explanation may be because this corresponds to the working lives of the senior people and the living memory of just about everyone in the markets. This history forms the markets’ expectational hinterland.
When you put these two methods of dealing with uncertainty together, they begin to make more sense. If the owner of an asset expects to sell it at some point in the immediate future, it is not of key relevance how the market will perform over 20 years. What he or she needs is some assurance that it will trade well in the immediate (p.41) future, in relation to what was paid for it, even though at that point in the immediate future, the far future which is relevant to the asset over the whole of its life will be just as uncertain as it is now.
Each investor can do this by adopting conventional assumptions about valuation. … This is essentially the view propounded by the great economist John Maynard Keynes. … But this view of market behavior has the same penetrating relevance today as it did then.
“Expectations” which are formed in this way may be believed, but they do not have to be. They are used in the formation of market prices because they are useful. What makes them useful is that others use them. They can be thought of, if you like, rather as a form of currency.
But when expectations are formed in this “conventional” way, they are difficult to dislodge. The mere passage of time may eat away at them as the convention comes to seem more and more incongruous when set against current reality. But this can be a very long process.
It isn’t only the markets that are affected by conventional expectations. The monetary authorities are also drawn in, for they set interest rates at levels which seem appropriate given the level of inflation expectations in the market.
Be that as it may, as the REH established its rule over the academic world in the 1960s, attempts to describe expectations within an adaptive framework became a nonstarter for anyone with ambitions for a successful academic career. The REH managed to be influential enough to eclipse a major contribution made in 1965 by Maurice Allais. That Allais’s contribution has been overshadowed is clearly illustrated by the fact that the economists who should have shown greatest interest in Allais’s works–such as critics of the REH and advocates of “bounded rationality”–do not seem to be aware of them, as shown by Simon’s following comment, a comment made in 1997:43
To a certain extent, but not within the formal theory, macroeconomics … today does incorporate executives’ expectations about the future. But it has little to say about how such expectations are formed except when it claims, implausibly, that entrepreneurs carry around in their heads neoclassical models of the economic system, and thereby form their expectations “rationally.”
(1.) Muth, J. F. (1960), Optimal properties of exponentially-weighted forecasts, Journal of the American Statistical Association, vol. 55, no. 290, June, pp. 299–306.
(2.) Gouriéroux, C., and Monfort, A. (1995), Séries temporelles et modèles dynamiques, 2nd ed., Economica, Paris, pp. 106–110.
(p.351) (3.) In chapter 3, we shall see that both Cagan and Allais empirically encountered and recognized the shortcomings of exponential smoothing in the early 1950s. It is this empirical encounter that led Allais to the HRL formulation.
(4.) Sargent, T. J., and Wallace, N. (1973), Rational expectations and the dynamics of hyperinflation, International Economic Review, vol. 14, no. 2, June.
(5.) Muth, J. F. (1961), Rational expectations and the theory of price movements, Econometrica, vol. 29, no. 3, July.
(6.) Lucas, R. E. (1972), Expectations and the neutrality of money, Journal of Economic Theory, vol. 4, pp. 103–124.
(7.) Lucas, R. E. (1976), Econometric policy evaluation: A critique, Carnegie-Rochester Conference Series on Public Policy, North Holland, New York, vol. 1, pp. 19–46.
(8.) Sargent, T. J., and Wallace, N. (1976), Rational expectations and the theory of economic policy, Journal of Monetary Economics, July, pp. 199–214.
(9.) Arrow, K. J. (1986), Rationality of self and others in an economic system, Journal of Business, vol. 59, no. 4, part 2, October, pp. 385–399, reprinted in The New Palgrave: Utility and Probability, Eatwell J., Milgate M., Newman P. (eds.), Norton, New York.
(10.) Knight would have said risky instead of uncertain.
(11.) Malinvaud, E. (1991), Voies de la recherche macroéconomique, Points-Seuil, Paris, 1993, pp. 546–549.
(12.) Read risk.
(13.) Read risk.
(14.) Lucas, R., and Sargent, T. (1979), After Keynesian macroeconomics, Federal Reserve Bank of Minneapolis, Quarterly Review, vol. 3, no. 2, Spring.
(15.) Guesnerie, R. (2001), Assessing rational expectations, MIT Press, Cambridge, MA.
(16.) Modigliani, F. (1977), The monetarist controversy, or should we forsake stabilization policy? American Economic Review, vol. 67, no. 2, March, pp. 1–17.
(17.) Friedman, B. M. (1975), Rational expectations are really adaptive after all, unpublished paper, Harvard University, Cambridge, MA.
(18.) Lucas, R. (1975), An equilibrium model of the business cycle, Journal of Political Economy, vol. 83, no. 6, pp. 1113–1144.
(19.) Gowers, T. (ed.) (2008), The Princeton companion to mathematics, Princeton University Press, Princeton, NJ, p. 160.
(20.) In other words, for all i.
(21.) Lindley, D. V. (1990), Thomas Bayes in The New Palgrave: Utility and Probability, Eatwell J., Milgate M., Newman P. (eds.), Norton, New York.
(22.) Parent, E., and Bernier, J. (2007), Le raisonnement Bayésien: Modélisation et inférence, Springer-Verlag, Paris.
(p.352) (23.) Wonnacott, T. H., and Wonnacott, R. J. (1990), Introductory statistics, 5th ed., Wiley, New York.
(24.) Pastor, L., and Veronesi, P. (2009), Learning in financial markets, Annual Review of Financial Economics, vol. 1, pp. 361–381.
(25.) Sargent, T. J. (1999), The Conquest of American inflation, Princeton University Press, Princeton, NJ.
(26.) Another suggestion is to do rolling recursive OLS linear regression on an arbitrarily decided number of observations.
(27.) See, for example, Sargent, T. J. (1999), The Conquest of American inflation, Princeton University Press, Princeton, NJ.
(29.) Kay, J. (2011), The random shock that clinched a brave Nobel Prize, Financial Times, October 18.
(30.) Gide, C., and Rist, C. (1944), Histoire des doctrines économiques depuis les physiocrates jusqu’à nos jours, Livre 6, Chap. 2, Le conflit des théories des crises, 6th ed., Dalloz, Paris, 2000.
(31.) Sargent, T., and Wallace, N. (1976), Rational expectations and the theory of economic policy, Studies in Monetary Economics, Federal Reserve Bank of Minneapolis, October 1976; Journal of Monetary Economics, July.
(32.) Phelps, E. (2009), A fruitless clash of economic opposites, Financial Times, November 3.
(33.) Arrow, K. J. (1986), Rationality of self and others in an economic system, Journal of Business, vol. 59, no. 4, part 2, October, pp. 385–399, reprinted in The New Palgrave: Utility and Probability, Eatwell J., Milgate M., Newman P. (eds.), Norton, New York.
(34.) Sargent, T. J., and Wallace, N. (1973), Rational expectations and the dynamics of hyperinflation, International Economic Review, vol. 14, no. 2, June.
(35.) Woodford, M. (1999), Revolution and evolution in twentieth-century macroeconomics, Princeton University Press, Princeton, NJ, note 51, p. 24.
(36.) Blaug, M. (1996), Economic theory in retrospect, 5th ed., Cambridge University Press, Cambridge, p. 685.
(37.) Screpanti, E., and Zamagni, S. (2005), An outline of the history of economic thought, Oxford University Press, Oxford.
(38.) Woodford, M. (2008), Convergence in macroeconomics: Elements of the new synthesis, prepared for the annual meeting of the American Economics Association.
(39.) The so-called Duhem-Quine problem.
(40.) Woodford, M. (2011), What’s wrong with economic models? Institute for New Economic Thinking (INET), New York.
(42.) Bootle, R. (1997), The death of inflation, Nicholas Brealey, London. Quote used with kind permission of Roger Bootle.
(p.353) (43.) Simon, H. A. (1997), An empirically based microeconomics, First lecture (Rationality in Decision Making), Raffaele Mattioli Foundation, Cambridge University Press, Cambridge.