Behavioural Finance: Departures from Rational Expectations and Asset Pricing Anomalies.

" There is not a unique valid model of human behavior, but a whole range of models, whose applicability may depend on the availability and cost of information, the intelligence, education, and patience of human actors, and goodness knows what other factors. Once one introduces into the SEU maximization Eden the snake of boundedness, it becomes difficult to find a univocal meaning of rationality, hence a unique theory of how people will, or should, decide".Herbert Simon (2000)


Financial markets have been understood over time with models that rest on certain basic assumptions. The mainstream approach is to assume that the representative agent is rational and as such makes decisions that are optimal. Rationality implies that the representative agent has the ability to reason and, when confronted with new information, updates his beliefs correctly in accordance with Bayes' rule (i.e. the rule that the posterior probability of an event is determined by the prior probability of that event and the new information).

There are two basic principles upon which the ideology of rationalism rests. First, decisions made under rational choice behavior are consistent over time, which implies that they are made in a systematic and coherent way at all times. Second, agents make choices that are consistent with the concept of Subjective Expected Utility. This concept, promoted by Savage (1954), is a method of making decisions in the presence of risk; he postulates that a rational agent makes decisions that maximize expected utility, which implies that the rational agent always pursues self-interest.

This traditional approach to finance and the simplicity of its assumptions have brought rigour to the study of financial markets, as it seeks to explain price behavior based on investor rationality. While this approach has produced theories and applications for the study of finance, there has been growing interest in the realism of its assumptions, their predictive power and their testability. In reality, agents are perceived not to be fully rational, and given the actual level of the aggregate stock market, the cross section of average returns over time and the behavior of individual trading agents, the basic fundamentals of the markets cannot be fully explained under the rational, utility-maximizing agent framework.

Behavioural Finance represents a new approach to the study of financial markets that has emerged from the gaps, difficulties and empirical results that appear inconsistent with the rational assumptions of the traditional approach. Behavioural finance rejects the view that the representative agent's behavior is based on the maximization of expected utility, and seeks to explain market performance and financial phenomena when certain assumptions of rationality are relaxed. One of its key objectives is to investigate the effect and implications of the psychological traits of representative agents on market performance; it can also be seen as an alternative hypothesis to rationality, one that accommodates departures from rational expectations.

This literature review is structured as follows. The first part briefly discusses rational asset pricing models and the evidence-based anomalies in stock returns identified in the behavioural finance literature, which form the basis of this review. The second part discusses behavioural finance as an alternative to rational expectations theories. Finally, the last part contains concluding remarks.


The mainstream approach makes very few assumptions about the agent's psychological traits and is based on the expected utility model. Under this model, the agent is assumed to derive utility from consumption over time, and the marginal utility of consumption is diminishing, so that the agent's utility function is concave. Under rational expectations, at time t the agent is assumed to maximize

U_t = Σ_{s=t}^∞ δ^(s−t) U(c_s)                                  Equation 1

where U(c_s) is the utility of consumption of the representative agent at time s and δ is a time discount factor. If we assume that the agent seeks to maximize utility based on the objective probability of consumption, or based on Subjective Expected Utility, then Equation 1 becomes

max E_t [ Σ_{s=t}^∞ δ^(s−t) U(c_s) ]                                   Equation 2

The standard consumption Capital Asset Pricing Model of Lucas (1978) gives the pricing kernel as the intertemporal marginal rate of substitution of a rational, utility-maximizing representative agent. The maximization of Equation 2 gives the following equilibrium condition

E_t [ δ · U′(C_{t+1}) / U′(C_t) · R_t^i ] = 1                                  Equation 3

where U′ is the marginal utility of consumption and R_t^i is the simple one-period gross return on asset i.

Also, the gross return R_t^i is given as R_t^i = (P_{t+1}^i − P_t^i + D_t^i) / P_t^i

where P_t^i is the real price level of asset i at time t and D_t^i is any real dividend that might be paid at time t.
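To make Equation 3 concrete, the sketch below prices a riskless asset under CRRA utility. All parameter values (δ = 0.98, γ = 2, lognormal consumption growth with assumed mean and volatility) are illustrative assumptions, not values taken from the papers cited here.

```python
import numpy as np

# Illustrative, assumed parameters (not from the surveyed literature)
delta, gamma = 0.98, 2.0                         # time discount factor, CRRA coefficient
rng = np.random.default_rng(0)
g = np.exp(rng.normal(0.02, 0.02, 1_000_000))    # gross consumption growth C_{t+1}/C_t

# Pricing kernel m = delta * U'(C_{t+1}) / U'(C_t) = delta * g**(-gamma) under CRRA utility
m = delta * g ** (-gamma)

# Equation 3, E_t[m * R] = 1, pins down the gross risk-free return
Rf = 1.0 / m.mean()
print(round(Rf, 4))
```

The same condition E_t[m · R^i] = 1 prices any risky asset i once the covariance of its return with the pricing kernel is accounted for.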

The assumption of rationality implies that the representative agent has the ability to reason and is able to update his beliefs correctly in accordance with Bayes' rule when faced with new information; this suggests that the agent knows the true distribution of asset returns. This assumption forms the basis of the Efficient Market Hypothesis, which states that stock prices already reflect all available information and, since prices are determined rationally, only new information will cause prices to change. The statement of the Efficient Market Hypothesis holds that:

E(X_{t+1} | Ω_t) = X_t,   with X_{t+1} = X_t + ε_{t+1}                                   Equation 4

where X_t is a stochastic variable representing the random walk of prices, Ω_t represents the information available to market participants at time t, and ε_{t+1} is an innovation with E(ε_{t+1} | Ω_t) = 0.

Although the analysis above discusses intertemporal asset pricing, the fundamentals remain the same for any other asset pricing theory; Equations 3 and 4 embody the standard approach to asset pricing under the assumption of rationality and also define the Efficient Market Hypothesis.
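The martingale property in Equation 4 can be illustrated with a simulated random walk: the increments are unforecastable, so the best prediction of the next price is the current price. This is a minimal sketch with arbitrary simulation parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = rng.normal(0.0, 1.0, 100_000)   # i.i.d. innovations epsilon_t
X = np.cumsum(eps)                    # random walk: X_t = X_{t-1} + epsilon_t

# The increments are mean-zero and serially uncorrelated, so no information
# in Omega_t (here, the lagged increment) helps forecast X_{t+1} beyond X_t
increments = np.diff(X)
autocorr = np.corrcoef(increments[:-1], increments[1:])[0, 1]
print(round(increments.mean(), 4), round(autocorr, 4))
```

Both statistics are close to zero, consistent with prices changing only on the arrival of new information.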

Fama (1970), in postulating the Efficient Market Hypothesis, asserts that asset prices reflect their fundamental values, that no investment strategy should be expected to earn excess risk-adjusted average returns, and that any deviation from fundamental values will be corrected by arbitrage. In recent years, however, theoretical and empirical evidence has shown that assets are neither priced rationally nor reflective of all available information. These anomalies have proven consistent over time, in and out of sample, and persist even after adjusting for risk. The literature has documented several anomalies, some of the most pervasive being:

i. Equity Premium Puzzle

    a. Size/Market capitalization (Fama and French, 1992)

    b. Book to Market ratio (Fama and French, 1993)

    c. Post-earnings announcement drift (Chan et al., 1996)

ii. Return Predictability Puzzle

    a. Momentum (Jegadeesh and Titman, 1993)

    b. Long-term return reversal (DeBondt and Thaler, 1985, 1987)

iii. Volatility Puzzle and trading volume


There has been wide documentation of the poor empirical performance of the standard asset pricing models in explaining observed patterns of asset returns over time. Under the assumption of utility maximization, theoretical rational asset pricing models fail to explain the historical mean excess return of the market over the risk-free rate without implying implausibly high levels of risk aversion. The assumption that systematic risk matters in asset pricing has raised significant interest and remains an issue to be resolved. Mehra and Prescott (1985), who coined the term "equity premium puzzle", find that this historical return is greater than can be explained by standard financial economic models without assuming high levels of risk aversion.
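A back-of-envelope sketch of the puzzle: under CRRA utility with lognormal returns, the equity premium is approximately γ times the covariance of consumption growth with returns, so the observed premium pins down an implied γ. The magnitudes below are illustrative stand-ins for the historical moments, not estimates from any of the papers cited.

```python
# Illustrative (assumed) magnitudes, roughly the order observed in U.S. data
premium = 0.06            # mean excess return of equities over the risk-free rate
sigma_c = 0.036           # std. dev. of consumption growth
sigma_r = 0.167           # std. dev. of equity returns
corr = 0.40               # correlation between consumption growth and returns

# premium ≈ gamma * cov(consumption growth, return)  =>  solve for gamma
cov_cr = corr * sigma_c * sigma_r
gamma_implied = premium / cov_cr
print(round(gamma_implied, 1))   # far above the single-digit levels usually deemed plausible
```

The implied risk aversion of roughly 25 is the quantitative heart of the Mehra-Prescott puzzle.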

Fama and MacBeth (1973) suggest that, given market efficiency, there exists a positive and linear relation between risk and return in a two-factor model. However, further studies by Fama and French (1992), in creating the three-factor model as an extension of the CAPM, find that the relationship between beta and expected returns is insignificant and state that other asset characteristics have better explanatory power for asset returns. Tests of the consumption-based CAPM have also been inconclusive, implying that for the model to match the historical risk-free rate over time, the time discount factor must be greater than 1. Weil (1989) proposes that the equity premium puzzle cannot be solved by relaxing the assumptions of risk aversion and expected utility; he shows that relaxing the utility specification only introduces what is referred to as the "risk-free rate puzzle", which can be explained only if the time discount factor is greater than 1.

In the literature, several theories have been proposed to improve the performance of this standard approach to asset pricing so that it can account for the excess return observed on the market portfolio. One approach is to modify the representative agent's time- and state-separable utility specification. Epstein and Zin (1989, 1991) separate the intertemporal elasticity of substitution from risk aversion, suggest that the agent's utility function is recursive, and term this the "generalized expected utility function". Constantinides (1990) initiates the habit formation literature, in which utility is affected not only by present consumption but also by past consumption. Under this assumption utility is a decreasing function of past consumption and marginal utility is an increasing function of past consumption. The author finds that once the assumption of expected utility is relaxed to accommodate habit persistence, the equity premium puzzle can be resolved.

Another approach taken to improve the performance of asset pricing under rational expectations is to assume that agents/consumers are heterogeneous; Constantinides and Duffie (1996) argue that this heterogeneity, together with market incompleteness, can resolve the equity premium puzzle and the risk-free rate puzzle. Also, a possible approach, as given by Campbell and Mankiw (1990), is to argue that market frictions (i.e. transaction costs, limitations on short sales and borrowing) are essential to asset pricing. Assessing the improvements from habit persistence and from relaxing the assumptions of expected utility, Mehra (2003) finds that these models have had success in addressing the risk-free rate puzzle but limited success in addressing the equity premium without still implying implausible levels of risk aversion.

Fama and French (1992) identified other variables that seem relevant to asset pricing and developed a three-factor model to take these variables into account. They find that the market capitalization (size) and the book-to-market ratio of firms are strong predictors of expected returns. They provide empirical evidence that these two easily measured variables capture the variation in average stock returns, and argue that these characteristics compensate for distress risk. Ferson and Harvey (1997), analyzing global and integrated markets, find a relationship between future returns and other common stock variables such as size, book-to-market, cash flow and earnings. With a different opinion, Daniel and Titman (1997) find that once they control for firm characteristics such as size and book-to-market ratios, returns are not related to the beta loadings of the three-factor model; whilst these loadings may capture covariances, they do not predict future returns.

In sum, the equity premium puzzle remains unresolved, as the empirical evidence accumulated over time has not settled it.


One of the fundamental assumptions of the Efficient Market Hypothesis is that prices not only follow a random walk process, they also reflect all available information. This implies that stock prices are not predictable, as the expected price of an asset at time t+1 is given by:

P_t = P_{t−1} + u_t

E(P_{t+1} | P_t, P_{t−1}, …) = P_t

where u_t, t = 0, 1, 2, …, are independently and identically distributed random variables and P_t is the current price level. However, empirical evidence shows that stock prices are indeed predictable.


Jegadeesh and Titman (1993) provide evidence of momentum (positive autocorrelation) in stock returns at short horizons of three to twelve months. They document that strategies which involve buying past winners and selling past losers generate significant returns over a 3-12 month horizon. They also find that this return is not a function of the risk of the trading strategy, and that the profit is not related to the lead-lag effect of delayed stock reactions to information about a common factor. Rouwenhorst (1998), analyzing an internationally diversified portfolio which specializes in buying past winners and selling past losers, also finds evidence of momentum, documents that the effect is robust across size, and notes that the return cannot be attributed to risk, as controlling for risk only increases the abnormal returns. Brennan et al. (1998) study the relationship between returns, risk factors and non-risk factors (such as book-to-market ratio, size, dividend yield, etc.) to determine the explanatory power of these factors for returns. Using risk-adjusted returns, they find evidence of book-to-market, size and return momentum effects; however, after controlling for size and book-to-market, the effects of momentum and trading volume persist.

Fama (1998) acknowledges the momentum effect but is quick to suggest that it should be attributed not to market inefficiency but to data mining and bad model choice. Chan et al. (2004) find evidence of multi-month momentum but also document that the effect is greatly reduced once they control for earnings surprises. Although the momentum effect has been analyzed extensively in the literature, across countries and firm sizes, and has proven robust over time, there is little evidence as to what drives it and whether these drivers are correlated across countries.


DeBondt and Thaler (1985, 1987) found evidence of long-term reversal. They created portfolios of past losers and past winners and found that, over a 3-5 year horizon, stocks that performed poorly over the previous 3-5 years achieve greater returns than stocks that performed well over the same period. The economic cause of this effect is not clear; DeBondt and Thaler interpret their evidence as investor overreaction, a clear departure from the assumption of rationality.

Many authors have disputed the results of DeBondt and Thaler (1985); Chan (1988) finds evidence that the long-term reversals are due to variation in the equilibrium returns required by investors, which is not captured by the model of DeBondt and Thaler (1985). Chan et al. (2004) also suggest that this return predictability is a result of the time-varying discount rates of efficient markets, or a matter of mispricing. Fama (1998) emphasizes the issue of mispricing and suggests that whilst the bad model problem is less serious in short-term event studies, it becomes aggravated in studies of long-term abnormal returns, where it can yield spurious results.

Although Fama (1998) agrees that some long-term anomalies are difficult to classify, as there is no unified theory of the cause of these abnormal returns, he argues that the evidence does not amount to an alternative hypothesis to market efficiency and attributes it to chance. Little, however, is said as to how and why this chance arises.


Using data on the aggregate U.S. stock market, Barberis and Thaler (2001) show that the annual standard deviation of the excess log return on the S&P 500 is 18%, while that of the log price-dividend ratio is 0.27. The relationship between returns and the price-dividend ratio can be shown as:

R_{t+1} = (D_{t+1} + P_{t+1}) / P_t = [ (1 + P_{t+1}/D_{t+1}) / (P_t/D_t) ] · (D_{t+1}/D_t)

The model for the return of an asset under rational expectations assumes that price equals the present value of forecasted future cash flows, discounted by a real discount rate. Under rationality, any change in this fundamental value is attributed to the arrival of new information about future dividends and the discount rate, and dividend growth is expected to be stable, if not constant, over time. The volatility puzzle, highlighted by Shiller (1981), suggests however that given observed data it is difficult to explain the historically high volatility of returns with models that assume that discount rates are constant and market participants are rational.
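A minimal sketch of Shiller's point: if dividends are smooth and the discount rate is constant, a Gordon-growth present-value price inherits the low volatility of dividends, far short of the roughly 18% annual return volatility in the data. All parameter values below are illustrative assumptions.

```python
import numpy as np

# Rational benchmark: constant discount rate r, smooth dividend growth.
# With P_t = D_t * (1 + g) / (r - g), price moves one-for-one with dividends,
# so return volatility is tied down by dividend-growth volatility.
rng = np.random.default_rng(3)
r, g_bar, sigma_g = 0.07, 0.02, 0.01   # assumed discount rate, mean and vol of dividend growth

growth = np.exp(rng.normal(np.log(1 + g_bar) - sigma_g**2 / 2, sigma_g, 500))
D = 100 * np.cumprod(growth)           # smooth dividend path
P = D * (1 + g_bar) / (r - g_bar)      # Gordon growth price under constant r

returns = (P[1:] + D[1:]) / P[:-1] - 1
print(round(returns.std(), 4))         # about 1%, an order of magnitude below observed volatility
```

The simulated return volatility is close to the dividend-growth volatility, illustrating why constant-discount-rate rational models cannot match the data.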

Campbell and Hentschel (1992) develop a volatility feedback model based on the assumption that an increase in stock market volatility raises expected returns, and they find a relationship between returns and volatility, especially during periods of high volatility. The literature extensively documents a negative relationship between volatility and expected returns: most studies show a negative correlation between future volatility and current returns, and some show that the innovation to volatility is greater with negative news than with positive news.

Trading volume in financial markets has proven very high over the years, with evidence showing that the annualized monthly turnover of the New York Stock Exchange in the past few years was almost 100%. DeBondt and Thaler (1995) note the high volume of trading and suggest that it is an indictment of the standard approach to finance. The important question, however, is why agents trade such huge volumes. Is it a reflection of past returns or future gains? Can it be due to liquidity traders, or are there other exogenous reasons involved? Baker and Stein (2004) discuss the negative relationship between returns and past volume using investor sentiment and find that when sentiment is high subsequent returns are low, and when sentiment is low subsequent returns are high. They suggest that when investors are optimistic they generate volume, which leads to lower returns, and this optimism is reversed in subsequent periods.

In sum, there is mounting evidence that variables other than the risk-based characteristics of the firm and the representative agent provide a compelling case for asset pricing. Although these anomalies have been written off by rational finance proponents as a result of data mining, they have led to a great deal of research and theory attempting to explain these pervasive patterns of asset returns. These theories proceed along two distinct lines, limits to arbitrage and investor psychology, both of which are discussed extensively in the next section.


From the foregoing, it has been established in the literature that these anomalies do exist, and their severity highlights the lack of predictive success and testability of the rational approach to finance. This has led to the rapidly growing field of behavioural finance, which seeks to explain actual market performance based on human psychology. Rational finance hinges on the assumption that if there is any mispricing, competitive arbitrage will drive it to zero. Observed mispricing that persists has led to a fundamental assumption of behavioural finance, that of limited arbitrage, as this departure from rationality defends the theory of irrationally induced mispricing. The question remains: if the mispricing observed in financial markets is driven in part by irrational traders, why does arbitrage not remove it, and why do these irrational traders, who on average lose money, not leave the market because of their losses? The theory of limited arbitrage is a direct attempt to explain the mispricing that rational asset pricing models cannot.


Under the Efficient Market Hypothesis, the price of an asset is the discounted sum of expected future cash flows, where all investors process all available information correctly. The representative agent is assumed to trade with no market frictions (transaction costs, taxes, and information asymmetry) and, as such, all asset prices reflect their fundamental values. A further implication is that there is no "free lunch": agents update their beliefs according to Bayes' rule, and any mispricing is quickly corrected, as agents will not leave money on the table when the opportunity to profit arises; when prices fall below their fundamental values agents buy, and when they rise above, agents sell. The observed persistent mispricing from fundamentals, however, brings the rationality of investors into question. If this mispricing arises because some agents are not fully rational, why are rational agents not taking advantage of the deviation from fundamental values?

Without questioning the assumption of rationality or undermining the predictive power of the traditional asset pricing models, these models did not factor in market frictions, and trading frictions (transaction costs, bid-ask spreads, etc.) can have a significant impact on asset returns. Introducing the concept of noise trader risk, DeLong et al. (1990) create a model in which mispricing exists because risk-averse arbitrageurs are concerned not primarily with the riskless fundamental value of an asset, but with its price in subsequent periods, which follows the trades of noise traders. Shleifer and Vishny (1997), proposing a different approach to analyzing observed anomalies, argue that the observed mispricing may be generated by the demand of noise traders, who may be driven by sentiment or investment constraints. Their model considers the costs of arbitrage, especially the volatility of returns, and argues that mispricing will persist, particularly in highly volatile stocks, as arbitrageurs may avoid the risk of such volatile positions. These papers give evidence that mispricing continues to exist even when rational arbitrageurs endeavor to correct it.

Transaction costs such as brokerage fees, holding costs, bid-ask spreads and the price impact of trades can make it less attractive for arbitrageurs to exploit deviations from fundamentals. Abreu and Brunnermeier (2002) argue that arbitrageurs incur holding costs in order to exploit mispricing; their results show that even though arbitrageurs eventually trade against mispricings, the trades are delayed by holding costs and the risk inherent in the trades. There is a cost to finding mispricing and a further cost to exploiting it, and in some cases institutional rules restrict the trades that can be made; for example, many mutual funds and pension funds are not allowed to sell short. Short selling is essential to effective arbitrage, yet it involves borrowing costs and legal fees and introduces liquidity risk that arbitrageurs are unwilling to bear.

These constraints place limitations on arbitrage and prevent rational agents from responding to mispricing even when it is apparent and persistent. Before rejecting the hypothesis of market efficiency on the basis of such persistent mispricing, the "joint hypothesis problem" stated by Fama (1970) has to be taken into cognizance: is the mispricing a function of bad modeling? Any test of mispricing is a joint test of the pricing model and of the discount rates assumed in discounting the cash flows, which makes rejection of market efficiency tenable at best.


Asset pricing under the assumption of rationality, the persistent anomalies observed over time and the lack of predictive success of this standard approach to finance have led to research on imperfect rationality. Financial economists have criticized the framework of rationality and its detachment from reality, and have explored the effect of relaxing this assumption to see whether observed market patterns would be better explained. Herbert Simon, a pioneer of behavioural economics, suggests that agents have bounded rationality, in which the complexity of problems and limited calculating power prevent agents from making fully rational decisions (Simon, 1955). Based on the assumption of bounded rationality, Tversky and Kahneman (1974) offer a theory in which human decisions rest on simple heuristics.

It has been found and well documented in psychology that human beings use heuristics to make decisions and form beliefs and, as such, do not weigh information correctly in making these decisions. Behavioural economics has identified a huge range of heuristics that aim to explain the irrationality of agents in financial markets. This review identifies the major heuristics relevant to financial economics and also discusses prospect theory, an alternative to expected utility maximization.


Research in cognitive psychology has shown that the average individual is overconfident in his judgment about his abilities and the precision of the information he possesses. In the literature, overconfidence has been identified as manifesting in three ways:

i. Miscalibration: Lichtenstein et al. (1982) carry out an experimental survey and find that people assign too narrow a probability distribution to the occurrence of a random variable. Miscalibration is an overestimation of the accuracy of one's knowledge.

ii. Illusion of Control (Langer, 1975): Glaser and Weber (2007) identify this bias as having an unrealistically high opinion of one's ability to succeed, believing one has a clear forecast of future events, and believing one can control random variables.

iii. Better than Average effect: Taylor and Brown (1988) find in their survey that individuals see themselves as better than average. People rate themselves more highly than others, as shown in a well-known study by Svenson (1981) in which 82% of participants ranked themselves among the 30% safest drivers.

In financial economics, overconfidence is measured as an overestimation of the accuracy of private information.


Tversky and Kahneman (1974) define the representativeness heuristic as the tendency of people to see patterns in truly random events. This heuristic, also known as the law of small numbers, makes people assess the probability of an event or outcome by considering how closely the event resembles available evidence or data.

Although very useful, this heuristic has been shown to result in base-rate neglect and other cognitive biases, as it makes agents overweight the strength of a signal and underweight its statistical weight. Barberis et al. (1998) also highlight that the representativeness heuristic, though helpful, does not indicate which information is strong and salient and which information is low in weight.


Edwards (1968) finds that, given new information, belief change is orderly and in accordance with Bayes' theorem, but that the updating is smaller in magnitude than the theorem predicts. His experiment also finds that it takes two to five observations to bring about the change of opinion that, under Bayesian learning, a single observation would produce. This implies that people are slow to incorporate new information into prices, as new information that differs from prior information is harder to accept.
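Edwards' finding can be made concrete with the classic bookbag-and-poker-chips calculation. The urn probabilities used here (70% versus 30% red, equal priors) are conventional illustrative values, not necessarily those of his experiment.

```python
# Two urns: urn A is 70% red chips, urn B is 30% red; prior belief 50/50.
# Bayes' rule gives the posterior probability of urn A after k red draws in a row.
def posterior_A(k, p_a=0.7, p_b=0.3, prior=0.5):
    like_a = prior * p_a ** k
    like_b = (1 - prior) * p_b ** k
    return like_a / (like_a + like_b)

# A single red draw already moves the posterior from 0.50 to 0.70; Edwards'
# subjects revised far more slowly, needing several draws to move this much.
print(round(posterior_A(1), 2), round(posterior_A(3), 2))
```

The gap between the Bayesian posterior and the sluggish revisions observed experimentally is the conservatism bias described above.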


Babcock and Loewenstein (1997) describe the self-serving bias as the tendency of individuals to favour themselves even when they try to be unbiased and impartial. They show that human beings are prone to underweight information that contradicts their opinion and to overweight evidence in favour of their beliefs.

Langer and Roth (1975) document psychological evidence that people tend to credit themselves for past successes and blame others for their failures, a bias known as self-attribution. This bias is an extension of overconfidence and implies that an individual's confidence increases when public information tallies with his own private information, but when the two conflict, his confidence does not fall as much as it should.


Reported by Tversky and Kahneman (1973, 1974), the availability heuristic is the phenomenon whereby, when judging the probability of an event occurring, recent events have a more significant impact on the agent's decision making. The probability of an outcome under this heuristic depends on how easily the individual can imagine the outcome.

Other well documented behavioural biases that are often used in financial economics to analyse market returns, patterns and performance include anchoring, belief perseverance, optimism and pessimism. The debate on employing human psychology to explain financial markets, however, is whether people will learn their way out of these biases and make fewer errors, and whether, given strong enough incentives, people will abandon these biases for rationality. While it is agreed that with learning some of these biases will disappear, others maintain that there is little evidence that their effect will be erased totally. Menkhoff and Nikiforow (2009) carry out a study showing that the cognitive biases identified in behavioural finance are so entrenched in human behavior that their effect is difficult to overcome by learning. The study focuses on fund managers, who have strong incentives to learn efficient behavior, and finds that these behavioural patterns persist even with the knowledge that they exist. The study also shows an intriguing twist: whilst acceptance of behavioural finance affects the agent's view of the market, it hardly affects his view of his own performance.


An important aspect of asset pricing theories is the assumed utility specification of the rational agent. The traditional asset pricing models assume that the agent's preferences are based on Expected Utility maximization, also known as von Neumann-Morgenstern utility. Von Neumann and Morgenstern (1944) present the theory and show that the preferences of individuals over risky outcomes are determined by the payout function, and that individuals act so as to maximize their expected utility.

Given the failure of expected utility theory to explain the behavior of agents choosing amongst risky assets, many theories have been postulated on how to modify the utility function to improve the performance of asset pricing models; these include rank-dependent utility and cumulative prospect theory. Prospect theory, regarded as an innovation in financial applications, is not based on expected utility and can be seen as a distinct alternative to rational expectations. Kahneman and Tversky (1979) study expected utility theory and propose that it does not hold in reality. The assumption of expected utility theory is that the representative agent is risk averse and maximizes expected utility, computed as the weighted sum of all possible outcomes with weights equal to the probabilities that the outcomes occur. The theory specifies that utility is determined by a final payout but does not state how this final state is reached.

Kahneman and Tversky (1979) develop a psychological alternative to expected utility and argue that individuals deviate from rationality and that their choices consistently deviate from normative behavior. They find that people evaluate decisions under losses and gains differently, and that value is assigned to gains and losses rather than to final wealth. In prospect theory, the choice process has two phases. In the first, editing, phase, the agent uses decision heuristics to evaluate prospects, constructs a reference point against which to appraise the gains and losses of each prospect, and assigns the prospects probabilities. In the second phase, the agent evaluates each edited prospect and makes the decision that maximizes the prospective value function. The prospective value function is expressed with two components: a decision weight π(p), where p is the probability of an outcome, and a value function v(x), which reflects the subjective value of each outcome, measured as a deviation from the reference point.

They show that for a prospect (x, p; y, q), where x is an outcome obtained with probability p and y is an outcome obtained with probability q, and where p + q < 1, or x ≥ 0 ≥ y, or x ≤ 0 ≤ y, agents assign it the value

V(x, p; y, q) = π(p)v(x) + π(q)v(y),

and they show that the agent will choose the prospect with the highest value.

The value function is concave above the reference point (over gains) and convex below it (over losses), which implies that people are risk averse over gains and risk seeking over losses. The value function also has a kink at the origin, with a steeper slope over losses, implying greater sensitivity to losses than to gains; a characteristic known as loss aversion. In prospect theory, the value of each outcome is multiplied by the decision weight π(p), which measures not just the probability of an outcome but also the impact of that probability on the desirability of the prospect, and they propose that the weighting is nonlinear in probability (i.e. small probabilities are overweighted).


Tversky and Kahneman (1992) develop an advanced version of prospect theory, known as cumulative prospect theory, in which the decision weights are applied to the cumulative distributions of gains and losses separately, rather than to the individual probabilities. This innovation eliminates the violations of first-order stochastic dominance that can arise in the earlier version, and it combines loss aversion, risk seeking over losses and nonlinear probability weighting to explain the valuation process. The cumulative prospect value function becomes



v(x) = x^α             if x ≥ 0,

v(x) = -λ(-x)^α        if x < 0,

where λ is the coefficient of loss aversion, a measure of relative sensitivity to losses and gains, and α governs the curvature of the value function.
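The kinked value function and the inverse-S probability weighting described above can be sketched in Python. The parameter values (α ≈ 0.88, λ ≈ 2.25, γ ≈ 0.61) are Tversky and Kahneman's (1992) median estimates; the simple outcome-by-outcome weighting in `prospect_value` is an illustrative simplification of the full rank-dependent cumulative scheme, not the exact procedure of the paper.

```python
# Sketch of cumulative prospect theory components
# (median parameter estimates from Tversky and Kahneman, 1992).

ALPHA = 0.88   # curvature of the value function (diminishing sensitivity)
LAMBDA = 2.25  # coefficient of loss aversion (losses weigh ~2.25x gains)
GAMMA = 0.61   # curvature of the probability weighting function (gains)

def value(x):
    """Kinked value function: concave over gains, convex and steeper over losses."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

def weight(p):
    """Inverse-S weighting function: overweights small probabilities."""
    return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

def prospect_value(outcomes):
    """Value of a simple prospect [(x, p), ...] -- separable weighting,
    an illustrative simplification of the full cumulative scheme."""
    return sum(weight(p) * value(x) for x, p in outcomes)

# Loss aversion: a 100 loss hurts more than a 100 gain pleases.
assert abs(value(-100)) > value(100)
# Small probabilities are overweighted.
assert weight(0.01) > 0.01
# A fair coin flip over +100/-100 has negative prospective value,
# so a loss-averse agent rejects an actuarially fair gamble.
print(prospect_value([(100, 0.5), (-100, 0.5)]))
```

The asymmetry between `value(100)` and `value(-100)` is the loss-aversion kink that drives much of the empirical work discussed below.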

One of the major contributions of prospect theory is that of framing. Framing refers to the way a problem is described to an individual, and the theory suggests that decisions made under risk are influenced by how the problem is framed. In this context, choices are not independent of the problem description, and the process of evaluating and classifying problems into different categories is the basis of the concept of mental accounting, Thaler (1980). Mental accounting refers to the process by which agents classify, evaluate and categorize different economic outcomes. Prast (2004) proposes that the combination of mental accounting, multi-dimensional evaluation and loss aversion produces the framing effect: if problems are framed in terms of losses, people become risk seeking, and if framed in terms of gains, people become risk averse.

Although prospect theory rests on stronger psychological foundations than expected utility theory, the important question is whether it can better explain market behavior and asset pricing patterns. In the literature, the major criticism of the theory is that one of its central assumptions is reference dependence, yet it does not specify how the reference point is determined. Advocates of traditional finance have noted this reference dependence as an obstacle to building asset pricing models with the same degrees of freedom as standard pricing models. Stracca (2004) suggests, however, that this limitation should not be overemphasized, since traditional models are built on the limitations of mean-variance analysis (which implicitly takes initial wealth as the reference point), and that it should be feasible to build asset pricing models on the same reference point. The superiority of either theory remains tenuous, with advocates of the different approaches claiming dominance in predictive power. Camerer (1998) documents ten regularities in data that count as anomalies for expected utility theory but which can be explained by elements of prospect theory (loss aversion, nonlinear probability weighting and reflection effects), and states that, based on the empirical evidence in its favour, there is no reason why prospect theory should not be used alongside expected utility in research.

DEPARTURES FROM RATIONAL EXPECTATION - Application to Asset Pricing Puzzles

The aim of behavioural finance is to explain the observed anomalies with assumptions that are not based on rational expectation; these include using prospect theory to describe investor preferences and assuming that agents use heuristics to make decisions. The standard approach to testing the efficient market hypothesis is to take market efficiency as the null hypothesis, with evidence of anomalies supporting the alternative hypothesis of market inefficiency. This section discusses the attempts in the behavioural finance literature to explain the observed efficient market anomalies.


Mehra and Prescott (1985) show that it is impossible to explain the observed high equity premium and low risk free rate without assuming an implausibly high level of risk aversion. Benartzi and Thaler (1995) however show that the equity premium puzzle can be resolved if the behavior of investors is modeled according to the assumptions of prospect theory. They combine loss aversion (being more sensitive to losses than to gains) and mental accounting (the methods agents use to classify and evaluate financial outcomes) to model the behavior of investors, a combination they call myopic loss aversion. The model suggests that loss-averse agents are more willing to take risks if they do not evaluate their performance frequently, and that the attractiveness of a risky asset depends on the evaluation period (the length of time over which an agent aggregates returns). Using Tversky and Kahneman's (1992) estimate of loss aversion (which indicates that sensitivity to losses is roughly twice the sensitivity to gains), they find that the evaluation period consistent with the observed equity premium of about 6% is about one year. This result implies that the equity premium puzzle is resolved if the evaluation period adopted by investors is one year, Thaler (1999). Although this result gives a fascinating interpretation of the equity premium based on sensitivity to losses and the tendency to evaluate one's portfolio frequently, it makes no mention of how equilibrium prices are reached or of the trading costs incurred over the evaluation period.
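The mechanism behind myopic loss aversion can be seen in the classic two-play gamble: a loss-averse agent who evaluates each play separately rejects a favourable bet, yet accepts the same bet when outcomes are aggregated over a longer evaluation period. The numbers below (a win-200/lose-100 coin flip with loss aversion of 2.5) are an illustrative example in the spirit of Benartzi and Thaler, not their actual calibration.

```python
# Myopic loss aversion sketch: a favourable 50/50 gamble (win 200 / lose 100)
# evaluated with a piecewise-linear loss-averse utility.
# Illustrative numbers, not Benartzi and Thaler's calibration.

LAMBDA = 2.5  # loss aversion: losses weigh 2.5x gains

def u(x):
    """Piecewise-linear loss-averse utility over gains and losses."""
    return x if x >= 0 else LAMBDA * x

# One play, evaluated on its own (a short evaluation period):
one_play = 0.5 * u(200) + 0.5 * u(-100)   # 100 - 125 = -25, rejected

# Two plays, evaluated jointly (a longer evaluation period):
# aggregate outcomes are 400 (p=0.25), 100 (p=0.5), -200 (p=0.25)
two_plays = 0.25 * u(400) + 0.5 * u(100) + 0.25 * u(-200)  # 100 + 50 - 125 = 25, accepted

print(one_play, two_plays)
```

The same gamble flips from unattractive to attractive purely because the evaluation period lengthens, which is the sense in which a one-year evaluation horizon can rationalize a large equity premium.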

Turning to other aspects of investor psychology, Abel (2002) investigates the effect of pessimism and doubt in the agent's beliefs about the distribution of aggregate per capita consumption growth on the mean equity premium and the risk free rate. Assuming log normality of the subjective and objective distributions of aggregate consumption growth, he finds that the consumption CAPM with pessimism and doubt can match the observed equity premium and risk free rate with economically plausible levels of risk aversion and time discounting. The major drawback of this model, however, is that in some cases it needs high levels of pessimism and doubt to explain the observed risk free rate and equity premium. To correct this drawback, Semenov (2009) uses the same model specification but introduces the availability heuristic into the agent's beliefs to resolve the equity premium puzzle. Using the Generalized Method of Moments to test the ability of the consumption CAPM with these deviations, he carries out two simulations. Under the first, the agent's risk aversion is determined solely by the curvature of the utility function (linear in subjective probabilities). The second uses the cumulative prospect theory of Tversky and Kahneman (1992) and assumes that risk aversion is determined jointly by the curvature of the agent's utility function and the decision weights (i.e. nonlinear in subjective probabilities). Testing different mixtures of pessimism, doubt and the availability heuristic, he documents that only the consumption CAPM with doubt and the availability heuristic explains the equity premium and risk free rate puzzles. He states, however, that the implied levels of the agent's preference parameters are still too high, and that although doubt and the availability heuristic can better explain asset returns, they are not enough to adequately resolve the puzzle.


The most prominent attempts to explain the return predictability puzzle are by Barberis et al (1998), Hong and Stein (1999) and Daniel et al (1998, 2001). The two identified and persuasive patterns in asset returns are underreaction and overreaction. The underreaction evidence shows that over short horizons (1-12 months), stock prices underreact to news, with the impact of the news diffusing into subsequent periods after its release; this implies that prices exhibit positive autocorrelation. The overreaction evidence shows that over longer horizons (3-5 years), stock prices overreact to a consistent run of news in the same direction; this implies that stocks with a long record of good performance tend to become overpriced and subsequently earn low returns. The implication is that by trading on these trends an investor can make money, which is a direct violation of the efficient market hypothesis, Barberis et al (1998).
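The link between short-horizon positive autocorrelation and momentum profits can be illustrated with a minimal simulation. The AR(1) return process and its parameters below are hypothetical, chosen only to show that a naive momentum rule earns a positive average profit whenever returns are positively autocorrelated.

```python
# Sketch: when returns are positively autocorrelated (underreaction),
# a momentum rule -- long after a gain, short after a loss -- earns a
# positive average profit. Hypothetical AR(1) returns, fixed seed.
import random

random.seed(0)
PHI = 0.3  # positive short-horizon autocorrelation coefficient

returns = [0.0]
for _ in range(20000):
    returns.append(PHI * returns[-1] + random.gauss(0, 1))

# Each period, hold a position equal to the sign of last period's return.
profits = [(1 if r_prev > 0 else -1) * r_next
           for r_prev, r_next in zip(returns, returns[1:])]
avg_profit = sum(profits) / len(profits)
print(avg_profit)  # positive on average when PHI > 0
```

With PHI set to a negative value the same rule loses money on average, which is the long-horizon reversal side of the evidence.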

Barberis et al (1998) develop a model of investor psychology to account for overreaction and underreaction by combining the behavioural heuristics of representativeness and conservatism. The model involves one investor and one asset, and all earnings are paid out as dividends. Earnings follow a random walk, but the investor is unaware of this and believes that earnings move between two regimes: earnings are either mean-reverting or trending. The model also assumes that the agent believes earnings are more likely to stay in a regime than to switch and that, on observing earnings, the agent updates his beliefs by Bayesian learning. A positive earnings surprise followed by another positive surprise indicates a trending regime, while a positive earnings surprise followed by a negative one indicates a mean-reverting regime. Representativeness generates overreaction and conservatism generates underreaction. In simulations, they find that when agents expect patterns to continue, this generates overreaction and hence subsequent reversal, and when they expect patterns to revert, they react too little to news, which causes underreaction and creates momentum. Hong and Stein (1999), focusing not on behavioural biases but on the interaction between traders, create a model featuring two types of traders, momentum traders and newswatchers, neither of whom is fully rational. Newswatchers make forecasts based on private signals (and private information diffuses slowly), while momentum traders make forecasts based on past price changes. They suggest that the gradual diffusion of information amongst newswatchers causes underreaction, and that momentum traders, in trying to profit from this underreaction, eventually create an overreaction. The major significance of the paper is that price changes occur not only as a result of news but also as a result of previous momentum trading.
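The belief-updating step in the Barberis et al (1998) story can be sketched in a stripped-down form. The transition probabilities below are hypothetical, and the regime-switching layer of the actual model is omitted: the point of the sketch is only that, by Bayes' rule, a streak of same-sign earnings surprises pushes the posterior toward the trending model, which is how representativeness produces overreaction after long streaks.

```python
# Simplified sketch of BSV-style belief updating (hypothetical numbers;
# the regime-switching dynamics of the actual model are omitted).
# The investor weighs two models of earnings changes:
#   mean-reverting: a positive change is followed by another with prob 0.3
#   trending:       a positive change is followed by another with prob 0.8

P_CONTINUE = {"mean_reverting": 0.3, "trending": 0.8}

def posterior_trending(n_continuations, prior=0.5):
    """P(trending | n consecutive same-sign earnings changes), by Bayes' rule."""
    like_trend = P_CONTINUE["trending"] ** n_continuations
    like_revert = P_CONTINUE["mean_reverting"] ** n_continuations
    return (prior * like_trend
            / (prior * like_trend + (1 - prior) * like_revert))

# A streak of identical surprises makes the trending model look representative,
# so the investor increasingly extrapolates the streak.
for n in range(5):
    print(n, round(posterior_trending(n), 3))
```

After four consecutive continuations the posterior on the trending model is already above 0.95, even though earnings in the model actually follow a random walk.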

Daniel, Hirshleifer and Subrahmanyam (1998) develop a theory of underreaction and overreaction using two behavioural biases: investor overconfidence and biased self-attribution. In the model, an agent is overconfident if he overestimates the accuracy of his private information and underestimates public information, and they find that prices overreact to private signals and underreact to public signals. They also show that this overreaction is consistent with long-run negative autocorrelation. Self-attribution is the phenomenon whereby agents who receive news that corroborates their prior private information attribute it to their superior ability, which increases confidence, while news that does not conform is attributed to market error. They suggest that self-attribution causes prices to continue overreacting following private news, and they show that this continuing overreaction generates momentum. Their results further show that as public information pulls prices back to fair value, the momentum is reversed, which leads to long-term reversal.

Hong et al (2005) create a model where agents build and evaluate portfolios based on word of mouth, and they show that the sensitivity of an agent's portfolio to the portfolios of agents within the same city is greater than to those outside the city. They also use this to explain momentum, suggesting that physical distance may be a good explanation for the slow diffusion of information. It is a good contribution using the theory of word of mouth, though the authors are quick to note that the broader literature does not otherwise attempt to link word of mouth to asset pricing. Barberis et al (2005) use data on recent additions to the S&P 500 to study comovement in returns. The null hypothesis, reflecting the traditional rational view, is that comovement in returns is driven by comovement in fundamentals; the alternative hypothesis is that, due to investor sentiment and market frictions, returns comove independently of the fundamental values of assets. They find that the betas of stocks go up when they are added to the S&P 500 and argue that this is partly due to investors treating S&P stocks as being in a distinct category; they conclude that their result favours the view that investor sentiment and market frictions, and not just fundamentals, are determinants of comovement.

Chan et al (2004) test the pricing effects of representativeness and conservatism using out-of-sample data and find evidence in support of conservatism but not representativeness. They find evidence of momentum, which becomes significantly weaker when they control for earnings surprises, but do not find evidence of long-term reversal. Fama (1998), reacting to the work of Barberis et al (1998), suggests that post-event return reversals are roughly as frequent as post-event continuations, which he reads as a vindication of the efficient market hypothesis, and argues that behavioural models do not perform well once taken out of sample. He also argues that long-term event studies are sensitive to methodology and prone to bad-model problems.


To resolve the volatility puzzle, identified by the gap between the volatility of returns and the volatility of dividend growth, the assumption that the discount rate and dividend growth are constant over time has to be reexamined. The alternative is to suggest that this volatility is driven by investor irrationality. Since changes in the price-dividend ratio are driven by the discount rate (which is determined by expectations of the risk free rate and risk aversion) and by future dividend growth, changes in expectations of the future discount rate and future dividend growth may be sufficient to account for this variation.
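How sensitive the price-dividend ratio is to small changes in expectations can be seen with a simple Gordon growth calculation (the numbers are illustrative, not estimates from the literature):

```python
# Sketch: in the Gordon growth model P/D = (1 + g) / (r - g), small shifts in
# the expected discount rate r or expected dividend growth g move the
# price-dividend ratio a lot -- so volatile expectations can generate volatile
# prices even when realized dividends are smooth. Illustrative numbers.

def price_dividend(r, g):
    """Price-dividend ratio under a constant discount rate r and growth rate g."""
    return (1 + g) / (r - g)

base = price_dividend(0.06, 0.02)        # 1.02 / 0.04 = 25.5
optimistic = price_dividend(0.06, 0.03)  # 1.03 / 0.03 ~ 34.3

print(base, optimistic)
```

A one-percentage-point rise in expected growth moves the price-dividend ratio by roughly a third, which is why beliefs that swing with recent earnings can produce return volatility far in excess of dividend volatility.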

Barberis and Thaler (2001) highlight that changing the assumptions about risk aversion, as suggested by Campbell and Cochrane (1999), helps to close the gap between the volatility of returns and that of dividend growth. They make a more compelling case, however, using a behavioural approach. Invoking the representativeness heuristic, they suggest that when investors observe many periods of good earnings, the law of small numbers leads them to believe that earnings will continue to rise in the future. They also suggest that this heuristic makes investors project returns too far into the future when forming expectations, which in turn makes them overreact. An investor who is overconfident about the precision of his private information on future dividend and cash flow growth (which might not be an accurate reflection of future dividend growth) will trade on this information and push prices up, thereby contributing to the volatility of returns.

Using investor psychology and behavioural heuristics to address the question of the huge volumes traded by investors, Odean (1998b) develops a model that takes overconfidence into account. He suggests that overconfidence increases expected market trading volume and market depth and reduces the expected utility of overconfident agents. He models a market where investors are otherwise rational but are overconfident about the way they process information, believing that their private information carries more weight than that available to other market participants. The major finding is that overconfident market participants increase trading volume and cause volatility. He also finds evidence of momentum: overconfident traders cause the market to underreact to the information of rational traders, which leads to short-run positive autocorrelation in returns.

A slight deviation from overconfidence is the disposition effect of Shefrin and Statman (1985). Evaluating the disposition effect, a behavior of investors under risk, they document that investors exhibit a disposition to sell winners (gains) too early and to hold losers too long. They suggest that this effect is not only due to tax considerations but largely due to the effects of mental accounting, regret aversion and self-control. The disposition effect has been widely confirmed in the literature, and the empirical evidence of Odean (1998b) and Weber and Camerer (1998) shows that investors are quick to realize the gains on their portfolios but reluctant to realize the losses. In a more recent study, Statman et al (2004) model the disposition effect using investor overconfidence. They use a vector autoregression and impulse response functions to analyse whether investors become more overconfident about active trading after observing positive portfolio returns and less confident after negative returns. The results show that trading volume increases following months of high market-wide returns, consistent with overconfidence. They also find that trading activity is highly dependent on past returns, which in their model validates investor overconfidence and is evidence against the efficient market hypothesis, as it implies that returns can be predicted from volume traded.


The apparent dismissal of the risk-based, expected utility and rational expectation framework on the strength of the puzzles observed in real financial markets has created an opportunity for the budding field of behavioural finance. The beauty of the traditional approach is the simplicity of its assumptions. The fact that these assumptions are not reflected in real data is a cause for concern, but are these anomalies enough for us to reject the efficient market hypothesis in favour of the alternative hypothesis of market inefficiency? The important issue to raise, though, is whether behavioural finance is conditional on observed anomalies or whether models can be created directly from the assumptions of behavioural economics.

The field of investor psychology based asset pricing is still at a nascent stage, and one of the significant contributions of behavioural finance to asset pricing is the use of cognitive psychology (heuristics and biases) and prospect theory to model how investors view financial outcomes, as against the expected utility framework. The essence of behavioural finance is to amalgamate the economics of financial markets and human behaviour. Hirshleifer (2001), responding to the idea that the task of financial economics is to discover the real risk factors that drive asset prices and expected returns, argues instead for models "...that examine how expected returns are related to risk and investor misvaluation".

Perhaps the most daunting task that lies ahead for asset pricing models based on investor psychology and preferences is how to fuse the diverse human biases that have been observed into pricing models that are not only coherent and possessed of predictive power but are also able to improve on the performance of rational models in explaining financial markets. Fama (1998), attesting to this and taking a very critical stand, says that given the litany of cognitive biases observed by behavioural proponents, "it is safe to predict that we will soon see a menu of behavioural models that can be mixed and matched to explain specific anomalies". The important point, however, is that the study of behavioural finance has moved beyond the struggle for acceptance and has become pivotal to the explanation of asset pricing and financial markets as a whole.
