Papers are listed alphabetically by UCLA Anderson faculty. To search by keyword, use the Find feature on your browser.
We document that stocks with the strongest prior 12-month returns experience a significant average market-adjusted return of 1.58 percent during the five trading days before their earnings announcements and a significant average market-adjusted return of -1.86 percent in the five trading days afterward. These returns remain significant even after accounting for transaction costs. We empirically test two possible explanations for these anomalous returns. The first is that unexpectedly positive news hits the market over the few days prior to these firms' earnings announcements, and that unexpectedly negative news comes out just afterward. The second possibility is that stocks with sharp run-ups tend to attract individual investors' attention, and investment dollars, particularly before their earnings announcements. We do not find evidence for an information-based explanation; however, our analysis suggests the possibility that the trading decisions of individual investors are at least partly responsible for the return pattern we observe.
The Value-Relevance of Intangibles: The Case of Software Capitalization
David Aboody & Baruch Lev
We examine in this study the relevance to investors of information on the capitalization of software development costs, as promulgated in 1985 by the Financial Accounting Standards Board in its Statement No. 86 (SFAS 86). We find that software capitalization is value-relevant to investors: The annually capitalized development costs are positively and significantly associated with stock returns and the cumulative software asset reported on the balance sheet is associated with stock prices. Furthermore, software capitalization figures are associated with subsequent reported earnings, indicating another dimension of relevance to investors. We also find that investors undervalue firms that expense all their software development costs. Finally, we find no support for the frequent argument that the judgment and subjectivity involved in software capitalization adversely affect the quality of reported earnings. We also investigate why the industry petitioned the FASB, in March 1996, to abolish SFAS 86. We document a significant shift in the mid-1990s in the impact of software capitalization on reported earnings and return-on-equity of software companies. Whereas in the early period of SFAS 86 application (mid- to late-1980s) software capitalization enhanced reported earnings considerably more than its detraction by the amortization of the software asset (since that asset was still small), during the early 1990s the gap between capitalization and amortization narrowed, and in 1995, the amortization's negative impact on reported profitability roughly offset the positive impact of capitalization. This diminished impact of capitalization on reported performance may have been among the reasons underlying the petition to abolish SFAS 86. Finally, we find that analysts' earnings forecast errors are positively and significantly associated with the intensity of software capitalization.
Phenomenological assumptions -- assumptions about the fundamental qualities of the phenomenon being studied -- affect the dissemination of knowledge from sub-fields to the broader field of study. Micro-process research in organizational studies reveals implicit phenomenological assumptions that vary in the extent to which micro-processes are treated as parts of larger systems. We suggest that phenomenological assumptions of recursive interactions between the phenomenon and the environment will make the relevance of micro-process research findings to broader organizational questions easier to discern, and therefore more likely to disseminate to the larger field of organizational research. We empirically assess this assertion by analyzing studies of negotiation published in top peer-reviewed management, psychology, sociology, and industrial relations journals from 1990 to 2005. Our findings illuminate a continuum of open-systems to closed-systems phenomenological assumptions revealed in this micro-process research. Analysis of the citation rates of the articles in our data set by non-negotiation organizational research reveals that more open-systems assumptions increase the likelihood that a negotiation article will be cited in organizational studies, after controlling for other, previously identified effects on citation rates. Our findings suggest that sub-fields can increase the impact they have on the broader intellectual discourse of their field by situating their phenomena in rich contexts that illuminate the connections between their findings and questions of interest to the broader field.
We argue that Jehn's (1995; 1997) conflict trichotomy of task, relationship and process conflict missed a fourth fundamental type of group conflict, that which occurs over relative status positions. Using mixed methods with two samples of MBA student teams, we identify and determine the impact of status conflicts in task groups. We first qualitatively identify the characteristics of status conflicts when they occur independently or with a conflict that is ostensibly over tasks, relationships or processes. We next validate a three-item survey scale that distinctly measures status conflict. Finally, we determine that including status conflict along with task and relationship conflict in analyses of group performance and team member satisfaction both increases the explanatory power of the models and exerts a significant, negative main effect. Thus, some of the ambiguity in research on the effects of group conflict on performance may be resolved by recognizing status conflicts. In addition, emerging research on status contests will benefit from cohering around a single definition and measurement scale.
Co-locating knowledge workers from different disciplines may be a necessary but insufficient step to generating multidisciplinary knowledge. We explore the role of assumptions underlying knowledge creation within the field of organizational studies, and investigate how incompatible assumptions across subgroups may inhibit the generation of multidisciplinary knowledge. While organizational studies research commonly assumes dynamic open systems with recursive influence between environments and interactions, studies of micro-processes in organizations often assume implicitly that interactions among organizational members are closed systems. We suggest that this incompatibility between assumptions may inhibit knowledge sharing in organizational studies research. We empirically assess this assertion by analyzing studies of negotiation published in top peer-reviewed management, psychology, sociology, and industrial relations journals from 1990 to 2005. Our findings illuminate a continuum of open-systems to closed-systems assumptions underlying this micro-process research. Analysis of the rate of citation of the articles in our data set by non-negotiation organizational studies research reveals that open systems assumptions increase the likelihood that a negotiation article will be cited in organizational studies, after controlling for other known effects on citation rate, such as outlet, discipline, length, number of citations and methodology. Our findings suggest that multidisciplinary fields can enhance their knowledge sharing by attending to the compatibility of assumptions held by sub-groups within the field.
Reaping What Is Sown: The Impact Of Perceived Managerial Controls On Subordinates' Fairness Evaluations
Chris P. Long, Corinne Bendersky & Calvin Morrill
We argue that different types of managerial controls - mechanisms to direct subordinates' tasks - increase the salience of particular aspects of fairness among subordinates. In a series of survey and scenario studies, we find that subordinates who perceive market controls engage in distributive fairness monitoring, subordinates who recognize bureaucratic controls engage in procedural fairness monitoring, and subordinates who discern clan controls engage in interpersonal fairness monitoring. We also find that managers who promote the type of fairness their subordinates monitor most closely promote higher levels of subordinate job satisfaction than those who do not.
In this paper we present new empirical evidence on the agency-based asset pricing model of Brennan (1993). We find strong evidence that in the recent period stocks whose returns covary more with the idiosyncratic component of the S&P500 return have significantly lower returns, holding constant either the market beta or the loadings on the Fama-French factors. The effect is confined mainly to large capitalization stocks, which is consistent with previous evidence that these stocks are favored by institutional investment managers. The lack of evidence for an agency effect in earlier years is also consistent with the much smaller importance of institutional investors in the earlier period and with the late development of risk-adjusted approaches to measuring portfolio management performance.
The optimal portfolio strategy is developed for an investor who has detected an asset pricing anomaly but is not certain that the anomaly is genuine rather than merely apparent. The analysis takes account of the fact that the parameters of both the underlying asset pricing model and the anomalous returns are estimated rather than known. The value that an investor would place on the ability to invest to exploit the apparent anomaly is also derived and illustrative calculations are presented for the Fama-French SMB and HML portfolios, whose returns are anomalous relative to the CAPM.
We test two hypotheses about the determinants of closed-end fund premia and discounts using a comprehensive sample of non-taxable and taxable funds for the period 1988 to 2002. We test whether fund premia reflect agency costs, and the potential tax liability associated with unrealized capital gains by examining changes in fund premia around the declaration day of large dividend and capital gain distributions. We provide further evidence on the effect of the tax liability from unrealized capital gains by examining changes in the premium around the ex-day of capital gain distributions. Our results lend support to both agency cost and the capital gains tax explanations for fund premia and discounts. We also find that the market prices of municipal bond funds (which pay tax-free dividends) are more sensitive to capital gains tax liabilities than are the prices of taxable funds, which is consistent with the existence of tax clienteles among closed-end fund investors.
This paper develops a simple framework for analyzing the asset allocation problem of a long-horizon investor when there is inflation and only nominal assets are available for trade. The investor's optimal investment strategy is given in simple closed form using the equivalent martingale method. The investor's hedge demands depend on both the investment horizon and the maturities of the bonds in which he invests. The optimal strategy can be decomposed into three components: first, a portfolio that mimics a hypothetical indexed bond with maturity equal to the investment horizon; second, the mean-variance tangency portfolio; third, an additional investment in the hypothetical indexed bond to hedge against changes in the investment opportunity set. When short positions are precluded, the investor's optimal strategy consists of investments in cash, equity and a single nominal bond. When the model is calibrated to recent data on US interest rates and inflation, only high frequency movements in real interest rates are detected so that the optimal allocation between stock and bond is found to be relatively insensitive to the horizon. A longer calibration period reveals low frequency variation in real interest rates that induces more pronounced horizon effects. Reasons for the differences in the two calibration exercises are suggested.
Estimation and Test of a Simple Model of Intertemporal Capital Asset Pricing
Michael J. Brennan, Ashley W. Wang & Yihong Xia
A simple valuation model that allows for time variation in investment opportunities is developed and estimated. The model assumes that the investment opportunity set is completely described by two state variables, the real interest rate and the maximum Sharpe ratio, which follow correlated Ornstein-Uhlenbeck processes. The model parameters and time series of the state variables are estimated using data on US Treasury bond yields and inflation for the period January 1952 to December 2000. The estimated state variables are shown to be related to the equity premium and to the level of stock prices as measured by the dividend yield. Innovations in the estimated state variables are shown to be related to the returns on the Fama-French arbitrage portfolios, HML and SMB, providing a possible explanation for the risk premia on these portfolios. When tracking portfolios for the state variable innovations are constructed using returns on 6 size and book-to-market equity sorted portfolios, the tracking portfolios explain the risk premia on HML and SMB, and these state variable tracking portfolios perform about as well as HML and SMB in explaining the cross-section of returns on the 25 size and book-to-market equity sorted value-weighted portfolios. An additional test of the ICAPM using returns on 30 industrial portfolios does not reject the model while the CAPM and the Fama-French three-factor model are rejected using the same data.
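The correlated Ornstein-Uhlenbeck dynamics assumed for the two state variables can be sketched with a simple Euler discretization. This is an illustrative simulation only; the parameter values below (mean-reversion speeds, long-run means, volatilities, correlation) are hypothetical placeholders, not the paper's estimates.

```python
import numpy as np

def simulate_correlated_ou(n_steps, dt, theta, mu, sigma, rho, x0, seed=0):
    """Simulate two correlated Ornstein-Uhlenbeck state variables
    (e.g., the real interest rate and the maximum Sharpe ratio) via
    Euler discretization of dX_i = theta_i (mu_i - X_i) dt + sigma_i dW_i,
    with corr(dW_1, dW_2) = rho."""
    rng = np.random.default_rng(seed)
    x = np.empty((n_steps + 1, 2))
    x[0] = x0
    # Cholesky factor produces correlated Brownian increments
    chol = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    for t in range(n_steps):
        dw = chol @ rng.standard_normal(2) * np.sqrt(dt)
        x[t + 1] = x[t] + theta * (mu - x[t]) * dt + sigma * dw
    return x

# 50 years of monthly observations (hypothetical parameters)
paths = simulate_correlated_ou(
    n_steps=600, dt=1 / 12,
    theta=np.array([0.2, 0.5]),    # mean-reversion speeds
    mu=np.array([0.02, 0.4]),      # long-run real rate and Sharpe ratio
    sigma=np.array([0.01, 0.15]),  # diffusion volatilities
    rho=-0.3, x0=np.array([0.02, 0.4]))
print(paths.shape)
```

Because each process mean-reverts, simulated paths fluctuate around the long-run means rather than drifting, which is what lets the state variables capture persistent but stationary variation in investment opportunities.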
The relation between the volatilities of pricing kernels associated with different currencies and the volatility of the exchange rate between the currencies is derived under the assumption of integrated capital markets, and the volatilities of the pricing kernels are related to the foreign exchange risk premium. Time series of pricing kernel volatilities are estimated from panel data on bond yields for five major currencies using a parsimonious term structure model that allows for time varying pricing kernel volatilities. The resulting estimates are used to test hypotheses about the relation between the volatilities of the pricing kernels in different currencies and the volatility of the exchange rate. As predicted, time variation in foreign exchange risk premia is found to be related to time variation in both the volatility of the pricing kernels and the volatility of exchange rates: the estimated pricing kernel volatilities can account for the forward premium puzzle in an 'average' sense across exchange rates.
Intertemporal Capital Asset Pricing and the Fama-French Three-Factor Model
Michael J. Brennan, Ashley W. Wang, and Yihong Xia
Characterizing the instantaneous investment opportunity set by the real interest rate and the maximum Sharpe ratio, a simple model of time varying investment opportunities is posited in which these two variables follow correlated Ornstein-Uhlenbeck processes, and the implications for stock and bond valuation are developed. The model suggests that the prices of certain portfolios that are related to the Fama-French HML and SMB hedge portfolio returns will carry information about investment opportunities. This provides a justification for the risk premia that have been found to be associated with these hedge portfolio returns. Evidence that the FF portfolios are in fact associated with variation in the investment opportunity set is found from an analysis of stock returns. Further evidence of time variation in the real investment opportunity set is found by analyzing bond yields, and the time variation in investment opportunities that is identified from bond yields is shown to be associated both with the time-variation in investment opportunities that is identified from stock returns and with the returns on the Fama-French hedge portfolios. Finally, it is shown that the estimated parameters imply substantial variation in stock prices that is not associated with cash flow expectations.
We estimate the parameters of pricing kernels that depend on both aggregate wealth and state variables that describe the investment opportunity set, using FTSE 100 and S&P 500 index option returns as the returns to be priced. The coefficients of the state variables are highly significant and remarkably consistent across specifications of the pricing kernel, and across the two markets. The results provide further evidence that, consistent with Merton's (1973) Intertemporal Capital Asset Pricing Model, state variables in addition to market risk are priced.
We analyze the risk characteristics and the valuation of assets in an economy in which the investment opportunity set is described by the real interest rate and the maximum Sharpe ratio. It is shown that, holding constant the beta of the underlying cash flow, the beta of a security is a function of the maturity of the cash flow. For parameter values estimated from U.S. data, the security beta is always increasing with the maturity of the underlying cash flow, while discount rates for risky cash flows can be increasing, decreasing or non-monotone functions of the maturity of the cash flow. The variation in discount rates and present value factors that is due to variation in the real interest rate and the Sharpe ratio is shown to be large for long maturity cash flows, and the component of the volatility that is due to variation in the Sharpe ratio is more important than that due to variation in the real interest rate.
We show that, when stock prices are subject to stochastic mispricing errors, expected rates of return may depend not only on the fundamental risk that is captured by a standard asset pricing model, but also on the type and degree of asset mispricing, even when the mispricing is zero on average. Empirically, the mispricing-induced return premium, either estimated using a Kalman filter or proxied by the volatility and variance ratio of residual returns, is shown to be significantly associated with realized risk-adjusted returns.
In this paper we analyze the gains to an investment banker who is able to market debt securities at yields that reflect the credit ratings of bond ratings agencies when the ratings depend on either the probabilities of default or the expected default losses of the securities issued. We consider the gains both from choosing the collateral against which the debt securities are written, and from dividing the debt into tranches with different priority. We derive general results and characterize the gains for numerical examples that are based on the CAPM and the Merton (1974) debt pricing model.
Market Areas of Car Dealerships
Paulo Albuquerque & Bart J. Bronnenberg
Using transactional data, we estimate a structural model of demand and supply of sport utility vehicles (SUVs). Consumers in our model choose among SUVs at specific dealerships. By expanding the product space to include the location of the point of sale, we can study the size and shape of market areas - defined as the geographic area where demand for an alternative is highest - for each car model, dealership, and manufacturer. By combining demand estimates with profit optimizing behavior of manufacturers, our model is able to provide profit projections and shifts in market areas from managerial decisions such as the relocation of a dealership or the removal of a car model from the market. We empirically apply our model to the case of SUVs in San Diego using a dataset that contains transaction information about locations of dealers and consumers, manufacturer prices, and retail prices. We find high disutility for travel to retailers, which geographically limits preferences to nearby alternatives. We show that most dealers have their own private 'backyard' of demand that is shared with a small set of other alternatives. As predicted by the literature on spatial competition, the size and shape of the market area is strongly dependent on competitors' locations. In the majority of cases the highest spatial density of demand is not at the location of the dealer but at locations that are furthest from direct substitutes. We find that discounting prices by 10% leads to a market area expansion of 5 miles.
This paper empirically investigates the determinants of supermarkets' assortment composition and assortment size decisions. We define measures of assortment similarity and use a unique cross-sectional store-level data set to analyze how assortment composition and assortment size are related to underlying factors that describe local store clientele, local competitive structure, and the retail organization's corporate structure. We then examine whether cross-store variation in assortment is most strongly related to factors describing demand or supply. From an analysis covering two important product categories, cereals and colas, we find that corporate- and ownership-structure of retail stores are the most important drivers of differences in assortment composition. We also find that supermarkets tailor their assortment by considering relevant local demographic dimensions, but this effect is an order of magnitude smaller than that of corporate ownership structure. We conclude that marginal assortment decisions are mostly cost driven, and are made at the corporate level. We discuss our findings in the context of manufacturers designing distribution policies.
A primary goal of research in marketing is to evaluate and recommend optimal policies for marketing actions, or "instruments" in the terminology of Franses (2005). In this respect, marketing is a very policy-oriented field and it is ironic that so much published research skirts the issue of policy evaluation. Franses' paper draws much needed attention to the question of what sort of model is usable for policy simulation and evaluation. Our perspective on what constitutes a valid model for policy evaluation differs from Franses' view but we believe our view complements his in many important respects. We also strongly believe that marketing has much to contribute to the literature on structural modeling. We will outline some of what we believe are the advantages for marketing scholars of using structural modeling for policy evaluations and what are some of the challenges which are presented by marketing problems.
Clickstream data are defined as the electronic record of Internet usage collected by Web servers or third-party services. The authors discuss the nature of clickstream data, noting key strengths and limitations of these data for research in marketing. The paper reviews major developments from the analysis of these data, covering advances in understanding (1) browsing and site usage behavior on the Internet, (2) the Internet's role and efficacy as a new medium for advertising and persuasion, and (3) shopping behavior on the Internet (i.e., electronic commerce). The authors outline opportunities for new research and highlight several emerging areas likely to grow in future importance. Inherent limitations of clickstream data for understanding and predicting the behavior of Internet users or researching marketing phenomena are also discussed.
In Internet paid search advertising, many marketers pay for search engines to serve text ads in response to keyword searches that are generic (e.g., "Hotels") or branded (e.g., "Hilton Hotels"). While stand-alone metrics usually show that generic keywords have higher apparent costs to the advertiser than branded keywords, generic search may create a spillover effect on subsequent branded search. Building on the Nerlove-Arrow advertising framework, the authors propose a dynamic linear model to capture the potential spillover from generic to branded paid search. In the model, generic search ads expose users to information about the advertiser's brand, increasing its awareness level. This, in turn, affects future search activity for keywords which include the brand name. Using a Bayesian estimation approach, the authors apply the model to data from a paid search campaign for a major lodging chain. The results show that spillover is asymmetric. Generic search activity positively affects branded search activity via increased awareness but branded search does not affect generic search. Implications for improving metrics for paid search advertising are discussed.
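The Nerlove-Arrow logic in the abstract - generic search ads build a brand awareness stock that decays over time and feeds branded search - can be sketched in a few lines. This is a stylized illustration, not the authors' dynamic linear model or Bayesian estimation; the function name, decay rate, and response coefficient are hypothetical.

```python
import numpy as np

def awareness_path(generic_ads, decay=0.1, beta=0.5, a0=0.0):
    """Discrete-time Nerlove-Arrow-style awareness stock:
    A_t = (1 - decay) * A_{t-1} + beta * generic_ads_t.
    Branded search activity would then be modeled as increasing in A_t,
    giving the one-way spillover from generic to branded search."""
    a = a0
    stock = []
    for g in generic_ads:
        a = (1 - decay) * a + beta * g  # carryover plus current exposure
        stock.append(a)
    return np.array(stock)

# Two periods of generic-ad exposure, then none (hypothetical units)
ads = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
path = awareness_path(ads)
print(path)
```

Note how awareness keeps rising while generic ads run and then decays gradually after they stop - the carryover that makes generic keywords look more valuable than their stand-alone metrics suggest.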
Psychologists study regret primarily by measuring subjects' attitudes in laboratory experiments. This does not shed light on how expected regret affects economic actions in market settings. To address this, we use proprietary data from a blackjack table in Las Vegas to analyze how expected regret affects people's decisions during gambles. Even among a group of people who choose to participate in a risk-taking activity, we find strong evidence of an economically significant omission bias: players incur substantial losses by playing too conservatively. This behavior is prevalent even among large stakes gamblers, and becomes more severe following previous aggressive play, suggesting a rebound effect.
We study a firm's investment in organization capital by analyzing a dynamic model of language development and intrafirm communication. We show that firms with richer internal languages (i.e., more organization capital) have lower employee turnover, higher diversity in skill, and greater wage dispersion. The model predicts that senior managers will more frequently be promoted from within in firms with a rich language. Our results also suggest that firms with lower asset betas and higher geographic concentration will invest more in organization capital by retaining their employees more often. Our model has implications for the management of human capital, executive compensation and mergers.
Given the importance of sound advice in retail financial markets and the fact that financial institutions outsource their advice services, how should consumer protection law be set to maximize social welfare? We address this question by posing a theoretical model of retail markets in which a firm and a broker face a bilateral hidden action problem when they service clients in the market. All participants in the market are rational, and prices are set based on consistent beliefs about equilibrium actions of the firm and the broker. We characterize the optimal law, and derive how the legal system splits the blame between parties to the transaction. We also analyze how complexity in assessing clients and conflicts of interest affect the law. Since these markets are large, the implications of the analysis have great welfare import.
We develop a dynamic model to study the interaction between obfuscation and investor sophistication in retail financial markets. Taking into account different learning mechanisms within the investor population, we characterize the optimal timing of obfuscation for a profit-maximizing monopolist. We show that educational initiatives that are directed to facilitate learning by investors may induce producers to increase wasteful obfuscation, further disorienting investors and decreasing overall welfare. Obfuscation decreases with competition among firms, but increases with higher investor participation in the market.
We examine the relation between leverage and future stock returns while simultaneously considering the dynamic nature of firms' leverage. Using Graham's (2000) kink measure as a proxy for excess leverage, we find supportive evidence that firms' leverage can be characterized by a partial adjustment model. Excess leverage predicts not only future changes in leverage, but also other fundamentals such as investment and profitability. The market does not seem to fully understand the information contained in excess leverage about future fundamentals (especially investments), and under-levered firms earn superior risk-adjusted returns through unexpected growth. The anomalous finding by Penman, Richardson and Tuna (2007), that the relation between leverage and future returns is negative, is subsumed by the negative relation between excess leverage and future returns.
While many researchers employ regression-based tests of the asymmetric timeliness of earnings, recent studies have shown that such measures may be invalid. We develop an econometric model to show that asymmetric timeliness tests typically provide a valid indication of whether or not earnings are conservative; however, the inclusion of future rents in equity values causes these tests to mismeasure the degree of conservatism. This mismeasurement potentially biases cross-sectional comparisons of the degree of conservatism. We show analytically and empirically that a test based on ratios of regression coefficients controls for this bias and allows for valid cross-sectional comparisons of conservatism.
It has long been understood that moral hazard arises in debt financing. Equity holders can expropriate creditors by inducing them to lend at a lower rate of interest than is commensurate with the risk of the loan, either by misleading them about the true nature of the firm's risk (asymmetric information) or by changing the risk of the firm after the loan is made, but before it is paid off (asset substitution). This paper considers the implications of moral hazard when a firm makes a sequence of investments and creditors cannot fully specify, and/or investors cannot fully commit to, the nature (risk) of future investments. Sequential real estate acquisitions or development projects are examples of the situation we are considering. We show that when lenders are aware of the moral hazard problem and act rationally to price this into their debt contracts, the pricing of this risk can bias investment decisions toward riskier investments. This occurs, not by constraining investment decisions, but by creating incentives to follow an anticipated (and ex ante priced) path. In this environment, cross-guarantees may result in higher rather than lower interest rates. We conclude that in the presence of this moral hazard, non-recourse lending or bankruptcy-remote entities are first best from the perspective of controlling moral hazard risk while giving investors the opportunity to invest in future opportunities.
In the United States, but not in Canada, nominal interest on residential housing mortgages is a deductible expense for the personal income tax. This suggests that changes in nominal interest rates could conceivably have differing impacts on real estate values in the two countries. The inflation component of nominal interest should have a negative impact on Canadian real estate, but its effect should be strictly less negative in the US and could even be positive. Using real estate investment trusts along with expected inflation imputed from inflation-indexed bonds in both countries, we find empirical support for a material and significant difference. In Canada, increases in nominal interest rates driven by inflation have a negative impact. The US impact is minimal and ambiguous in sign.
Relative purchasing power parity (PPP) holds for pure price inflations, which affect prices of all goods and services by the same proportion, while leaving relative prices unchanged. Pure price inflations also affect nominal returns of all traded financial assets by exactly the same amount. Recognizing that relative PPP may not hold for the official inflation data constructed from commodity price indices because of relative price changes and other frictions that cause prices to be sticky, we provide a novel method for extracting a proxy for realized pure price inflation from stock returns. We find strong support for relative PPP in the short run using the extracted inflation measures.
(i) why we discount the future, (ii) get weaker with age, and (iii) display risk-aversion
A number of evolutionary theories have been proposed to explain the phenomenon of aging or senescence -- why we get weak as we get older (see the review article by Gavrilov and Gavrilova, 2002). Economists have also begun to explore the biological basis of preferences, such as discounting of future consumption (Rogers, 1994) and risk-aversion, that are usually taken as primitive (see a comprehensive article by Robson, 2002). In this paper, I formulate a simple and parsimonious evolutionary model showing that, because most species face a possibility of dying from external factors -- called extrinsic mortality in the biology literature -- it can simultaneously explain (i) why we discount the future, (ii) get weaker with age, and (iii) display risk-aversion.
Extreme Measures of Agricultural Financial Risk
John Cotter, Kevin Dowd & Wyn Morgan
Risk is an inherent feature of agricultural production and marketing and accurate measurement of it helps inform more efficient use of resources. This paper examines three tail quantile-based risk measures applied to the estimation of extreme agricultural financial risk for corn and soybean production in the US: Value at Risk (VaR), Expected Shortfall (ES) and Spectral Risk Measures (SRMs). We use Extreme Value Theory (EVT) to model the tail returns and present results for these three different risk measures using agricultural futures market data. We compare the estimated risk measures in terms of their size and precision, and find that they are all considerably higher than normal estimates; they are also quite uncertain, and become more uncertain as the risks involved become more extreme.
Does Load Lead to Decision Bias or are We Biased Against Load?
Aimee Drolet, Mary Frances Luce & Itamar Simonson
We examine moderators of the impact of cognitive load on choice strategies and susceptibility to decision bias. In four studies, we investigate the conditions under which load increases the compromise effect. Overall, our research shows that the ultimate influence of load on bias is contingent on motivational factors that determine how choice processes would have progressed under conditions of no load. Our findings indicate that there is no de facto impact of load on bias. Instead, the biasing effects of reduced resources are confined to consumers who have sufficient motivation to resolve choice problems and avoid biased choice outcomes (i.e., who are motivated to choose based on their preferences rather than on an option's compromise position). The implications of our research for the reliance on load to study consumer choice and for the two-system view of consumer decision making are discussed.
Psychological accounts have generally emphasized the driving role of external factors, such as contextual cues, in habit performance. The present research investigated the influence of a person-specific variable which reflects a more internal driver of habits. Three studies showed a strong negative relationship between people's tendency to generate relatively uncommon word responses in free-association tasks and their tendency to repeat behavior across situations. These results implicate free associations as having an important role in habits.
In three longitudinal experiments, conducted both in the field and lab, we investigate the recollection of mixed emotions. Results demonstrate that mixed emotions are generally underreported at the time of recall, an effect which appears to grow over time and does not occur to the same degree with unipolar emotions. Importantly, the decline in memory of mixed emotions is: (a) not explained by differential importance levels across the distinct types of emotion experiences, and (b) distinct from the pattern found for the memory of negative emotions. These results imply that recall difficulty is diagnostic of the complexity of mixed emotions rather than of any association with negative affect. Finally, we show that one reason for these effects is the felt conflict which arises when experiencing mixed (vs. unipolar) emotions. Implications for consumer memory and behavior are discussed.
In this paper I analyze the role of openness and globalization in Latin America's economic development. The paper is divided into two distinct parts: I first (Sections II through IV) provide an analysis of 60 years of the region's economic history, from the launching of the Alliance for Progress by the Kennedy Administration in 1961 to the formulation and implementation of the market-oriented reforms of the Washington Consensus in the 1990s and 2000s. I conclude that Latin America's history has been characterized by low growth, high inflation, and recurrent external crises. In Section V I deal formally with the costs of crises, and I estimate a number of variance component models of the dynamics of growth. I find that external crises have been more costly in Latin America than in the rest of the world. I also find that the cost of external crises has been inversely related to the degree of openness.
I use a large cross-country data set and panel probit analysis to investigate the way in which the interaction between trade and financial openness affects the probability of external crises. This analysis is related to the debate on the appropriate sequencing of reforms. I also investigate the role played by current account and fiscal imbalances, contagion, international reserves holdings, and the exchange rate regime as possible determinants of external crises. The results indicate that relaxing capital controls increases the likelihood of a country experiencing a sudden stop. Moreover, the results suggest that "financial liberalization first" strategies increase the degree of vulnerability to external crises. This is particularly the case if this strategy is pursued with pegged exchange rates and if it results in large current account imbalances.
Social network theory suggests that individuals' preferences and decisions are affected by the actions of others. Such decision externalities arise from constraints on our ability to process or obtain costly information. This paper provides evidence that managers are influenced by their social peers when making corporate finance policy decisions. I create a matrix of social ties using data on current employment, past employment, education, and other activities for key executives and directors of the board of US companies. I find that the more social connections two companies share with each other, the more similar their level of investment and its change over time. Furthermore, companies positioned more centrally in the social network invest in a less idiosyncratic way. Finally, more socially connected firms exhibit better economic performance. To address endogeneity concerns, I show that two companies behave less similarly after an individual who connects them dies.
CDO Market Implosion and the Pricing of Subprime Mortgage-Backed Securities
Yongheng Deng, Stuart A. Gabriel, Anthony B. Sanders
The global market for collateralized debt obligations (CDOs) witnessed explosive growth over the 1997-2006 period, as the stock of global issuance expanded from $300 billion to almost $2 trillion. CDO issuance importantly supported the market for subprime mortgage-backed debt, via the re-packaging of those assets into derivative CDO securities. The surge in issuance of subprime mortgage-backed CDOs coincided with a marked tightening in subprime MBS-Treasury spreads, suggesting some measurable effect of this market-completing vehicle on the supply/demand balance and pricing of mortgage-backed securities. In 2007, in the wake of the implosion of the CDO market, spreads on subprime mortgage-backed securities widened considerably. This research evaluates the effects of the emergence of the CDO market on the pricing of subprime residential mortgage-backed securities. After controlling for mortgage option values and other well-established determinants of credit spreads, the analysis indicates that the emergence of the subprime-backed CDO market was associated with a significant tightening of subprime MBS/Treasury yield spreads. Results of VAR and other robustness tests corroborate these findings. The findings suggest the importance of innovations in derivative securities markets to the pricing and related affordability of subprime mortgages. Results similarly indicate that the unexpected closure of the CDO market exerted upward pressure on MBS spreads and, in so doing, contributed to changes in the pricing, underwriting, and related demise of subprime mortgages.
The dramatic government takeover of Fannie Mae and Freddie Mac in September, 2008 was motivated in part by a desire to ensure adequate liquidity in the mortgage market. This study examines a closely related issue: the extent to which GSE activity crowds out mortgage purchases by private secondary market intermediaries. Evidence of substantial crowd out suggests that government support for the GSEs may be less warranted, while absence of crowd out implies that the GSEs enhance liquidity.
Using 1994-2007 HMDA data for conventional, conforming sized, home purchase loans, three distinct periods with regard to GSE crowd out are apparent. From 1994-2003, the share of loans sold to the secondary market increased from 60 percent to nearly 100 percent, private sector and GSE market shares of loan purchases were similar, and IV estimates indicate relatively little GSE crowd out. From 2004 to 2006, private loan purchases boomed and dominated those of the GSEs, while IV estimates indicate close to 100 percent crowd out. With the crash in housing and mortgage markets in 2007, private sector intermediaries pulled back, the GSEs regained market share, and importantly, evidence of GSE crowd out disappeared. These patterns suggest that the degree of GSE crowd out varies with market conditions, and that the federal takeover of Fannie Mae and Freddie Mac likely has served to enhance liquidity to the mortgage market during the current mortgage market crisis.
This study seeks to distinguish among competing theories of urbanization in an explanation of recent, massive rural-to-urban migration in China. Specifically, the research evaluates whether Chinese urbanization following the 1990s liberalization of mobility and residential location restrictions was driven by migrant learning opportunities as in the skill-transition urbanization technology (Lucas, 2004), or instead was associated with a traditional dual-skill urbanization technology in which opportunities for migrant skill upgrading were largely absent. The analysis is facilitated by the application of an unusually rich data set to estimate skill-based selection of Chinese migrants in the context of a utility-maximizing directional migration model. Research findings suggest substantial differentials across skill-based strata in migratory response to regional disparities in returns to education. Model simulation further indicates that those disparities derive largely from regional variations in human capital rather than from positive skill complementarities in production (Giannetti, 2003; Berry and Glaeser, 2005) and accordingly serve to encourage regional convergence in human capital concentration. Further, results fail to support the hypothesis that benefits of human capital externalities in learning accrue to low-skilled migrants. According to Lucas (2004), such benefits operate as a key mechanism for economic transition from a dual-skill economy to a modern urbanized economy. The lack of such human capital externalities is consistent with the pervasiveness of institutional barriers in China which sustained urban segregation in occupation and social interactions to the disadvantage of low-skilled migrants (Wang and Zuo, 1999). Our estimates do show strong social interaction benefits for the educated population strata, which contributed to regional divergence in human capital concentration.
Optimal Pricing Strategy with Price Dispersion: New Evidence from the Tokyo Housing Market
Diehang Zheng, Yongheng Deng, Stuart A. Gabriel & Kiyohiko G. Nishimura
In our multistage search model, the seller's reservation price is affected by the offer price distribution, while the optimal asking price is chosen so as to maximize the return from search. We show that a greater dispersion in offer prices leads to a higher reservation price and a higher optimal asking price, which in turn results in a higher expected transaction price. Under the assumption that offer prices are normally distributed, a higher dispersion of offer prices also reduces time on the market for overpriced properties.
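The abstract's key comparative static (greater offer-price dispersion implies a higher reservation price) can be illustrated with a stripped-down stationary search fixed point: the seller keeps searching until the expected gain from one more offer equals the search cost. The cost, offer distributions, and bisection solver below are illustrative assumptions, not the paper's full multistage model.

```python
import numpy as np

def reservation_price(offers, cost, lo=0.0, hi=200.0, iters=60):
    """Solve E[(X - r)+] = cost by bisection on simulated offers.
    At the fixed point r*, the expected gain from one more search draw
    equals the per-period search cost, so the seller accepts any offer
    above r*. (Stationary search sketch, not the paper's model.)"""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        gain = np.maximum(offers - mid, 0.0).mean()  # expected gain at mid
        if gain > cost:
            lo = mid  # gain still exceeds cost: reservation price is higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(1)
mu, cost = 100.0, 1.0
r_low = reservation_price(rng.normal(mu, 5.0, 200_000), cost)    # low dispersion
r_high = reservation_price(rng.normal(mu, 15.0, 200_000), cost)  # high dispersion
# Higher offer-price dispersion raises the seller's reservation price,
# and hence the expected transaction price.
```

With normally distributed offers the option value of waiting grows with the offer standard deviation, so `r_high` exceeds `r_low`, mirroring the abstract's result.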
The GSEs, CRA, and Homeownership in Targeted Underserved Neighborhoods
Stuart A. Gabriel & Stuart S. Rosenthal
This study evaluates the effects of the Community Reinvestment Act of 1977 and the Government-Sponsored Enterprise Act of 1992 on mortgage and housing outcomes. Together, the CRA and GSE Acts form the centerpiece of U.S. government efforts to direct mortgage capital and enhance homeownership among targeted low-income and minority populations. Conceptual arguments suggest that GSE purchase requirements likely increase originations for mortgages below the conforming size limit, but reduce activity in the market for larger nonconforming sized mortgages. Anticipated effects of CRA on conforming and nonconforming market segments are ambiguous, a priori, but likely opposite from patterns anticipated for GSE purchase requirements.
Secondary markets for credit are widely believed to improve efficiency and increase access to credit. In part, this is because of their greater ability to manage risk. However, the degree to which secondary markets expand access to credit is virtually unknown. Using the mortgage market as an example, we begin to fill that gap. Our conceptual model suggests that secondary credit markets have potentially ambiguous effects on interest rates, but unambiguous positive effects on the number of loans issued. We focus our empirical analysis on the latter using 1992-2004 HMDA files for conventional, conforming, home purchase loans in conjunction with Census tract data.
Value Creation through Securitization: Evidence from the CMBS Market
Xudong An, Yongheng Deng, and Stuart A. Gabriel
Despite recent volatility and constraints in secondary market funding, analysts have ascribed substantial value creation to the securitization of commercial mortgages. Such value creation likely emanates from enhancements to originator liquidity, tranching of claims on cashflows, gains from specialization in origination, servicing, and holding of mortgages, regulatory arbitrage, and the like. Indeed, such value creation would be consistent with past accelerated growth in the mortgage- and asset-backed securities markets and the substantial profits earned by secondary market intermediaries.
This paper examines a volatility estimation bias that may be commonly exhibited by all option pricing models on all underlying sources of risk. Black and Scholes (1972) were the first to illustrate the bias, showing that their model underpriced options on relatively low-variance stocks and overpriced options on relatively high-variance stocks. The bias is always observed in the cross section of individual stocks. We argue that this bias might have nothing to do with Black-Scholes or any particular option pricing model but instead might be attributable to sampling error. Thus, the bias should be observed with any option pricing model on any underlying, not just equity but also fixed income securities, foreign exchange, and commodities. To test this idea, we use James-Stein shrinkage estimators as detailed in Efron and Morris (1976) and Ledoit and Wolf (2004a). While both shrinkage estimators utilize the covariance matrix, Ledoit-Wolf (or LW hereafter) is unique because it does not require matrix inversion. We show that the variance bias can be eliminated using these improved estimators.
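A minimal numerical sketch of the sampling-error mechanism described above, with an illustrative fixed shrinkage intensity rather than the data-driven LW intensity: stocks that rank highest (lowest) by estimated variance tend to have their variance overestimated (underestimated) purely through sampling error, and shrinking the cross section of sample variances toward their mean reduces the estimation error.

```python
import numpy as np

# Simulate a cross section of stocks with known true return variances.
rng = np.random.default_rng(0)
n_stocks, n_obs = 200, 60
true_var = rng.uniform(0.5, 2.0, n_stocks)
returns = rng.normal(0.0, np.sqrt(true_var), (n_obs, n_stocks))
sample_var = returns.var(axis=0, ddof=1)

# Selection on sampling error: the stocks ranked highest (lowest) by
# sample variance are over- (under-) estimated on average.
order = np.argsort(sample_var)
bias_hi = (sample_var[order[-20:]] - true_var[order[-20:]]).mean()
bias_lo = (sample_var[order[:20]] - true_var[order[:20]]).mean()

# James-Stein-style shrinkage toward the cross-sectional mean reduces the
# mean squared error (lam is an illustrative intensity; LW estimate it).
lam = 0.3
shrunk_var = lam * sample_var.mean() + (1.0 - lam) * sample_var
mse_raw = ((sample_var - true_var) ** 2).mean()
mse_shrunk = ((shrunk_var - true_var) ** 2).mean()
```

Here `bias_hi` comes out positive and `bias_lo` negative even though each individual sample variance is unbiased, which is exactly the cross-sectional pattern the abstract attributes to sampling error rather than to any particular pricing model.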
The primary purpose of this paper is to introduce a new methodology for measuring the market value of aggregate market debt and analyzing the resultant risk effects of stochastic market leverage on asset prices in the economy. To the best of our knowledge, this is the first paper to attempt to directly isolate and analyze the effects of the market value of aggregate market debt on equity index option prices. We present the first measures of the market value of aggregate corporate debt. The methodology used to derive the market value of aggregate debt is unique because it uses no-arbitrage arguments from option theory that rely only on contemporaneous (not historical) market prices of the very liquid index level and index options. We demonstrate that the effects of the market value of aggregate debt operate not through default risk but instead by altering the sensitivity and risk of the market equity. The inclusion of the market value of aggregate debt results in significant statistical and economic improvements in the pricing of S&P 500 index put options relative to more complex models that omit leverage.
Despite the increasing importance of remittances in total international capital flows, the relationship between remittances and growth has not been adequately studied. This paper studies one of the links between remittances and growth, in particular how local financial sector development influences a country's capacity to take advantage of remittances. Using a newly-constructed dataset for remittances covering about 100 developing countries, we find that remittances boost growth in countries with less developed financial systems by providing an alternative way to finance investment and helping overcome liquidity constraints. The study also explores some common myths about remittances and suggests that they are predominantly profit-driven and mostly pro-cyclical.
The structure of family relationships influences economic behavior and attitudes. We define our measure of family ties using individual responses from the World Values Survey regarding the role of the family and the love and respect that children need to have for their parents, for over 70 countries. We show that strong family ties imply more reliance on the family as an economic unit that provides goods and services, and less reliance on the market and the government for social insurance. With strong family ties, home production is higher, while labor force participation of women and young people, as well as geographical mobility, is lower. Families are larger (higher fertility and larger family size) where family ties are strong, which is consistent with the idea of the family as an important economic unit. We present evidence from cross-country regressions. To assess causality, we look at the behavior of second-generation immigrants in the US and employ a variable based on the grammatical rule of pronoun drop as an instrument for family ties. Overall, our results indicate a significant influence of the strength of family ties on economic outcomes.
Are Mutual Fund Fees Competitive? What IQ-Related Behavior Tells Us
Mark Grinblatt, Seppo Ikaheimo & Matti Keloharju
This study analyzes the fees of mutual funds and the choices of mutual fund investors. Using a comprehensive dataset on males in two Finnish provinces, we find that the fees of funds selected by high IQ investors are not significantly lower than the fees of funds selected by low IQ investors. This conclusion controls for a variety of fund and individual attributes that explain mutual fund fees and mutual fund choices. This suggests that fees are set competitively in the fund industry.
We use an iterative relocation algorithm to identify factors in common stock returns. The benefit of the approach is that factors are portfolios of assets with non-negative weights. As a result, they are readily interpretable in terms of the characteristics of the underlying securities. The positive portfolio factors have comparatively high explanatory power in sample and out of sample. We find evidence of a size factor and factors identified with certain industries. Factors extracted from the mutual fund universe perform marginally better than factors from the universe of equities.
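As a sketch of what an iterative relocation with non-negative portfolio weights can look like, the following runs a k-means-style procedure on synthetic two-factor return data; the paper's exact algorithm, distance criterion, and weighting scheme are not specified here, so these are illustrative choices.

```python
import numpy as np

def relocation_factors(returns, k=3, iters=50, seed=0):
    """k-means-style iterative relocation: assets are repeatedly reassigned
    to the nearest factor portfolio, and each factor is the equal-weighted
    portfolio of its cluster, so every portfolio weight is non-negative.
    (Sketch of the general idea, not the paper's exact algorithm.)"""
    rng = np.random.default_rng(seed)
    T, N = returns.shape
    labels = rng.integers(0, k, N)

    def centers_of(lab):
        return np.stack([returns[:, lab == j].mean(axis=1) if np.any(lab == j)
                         else np.zeros(T) for j in range(k)], axis=1)  # T x k

    for _ in range(iters):
        centers = centers_of(labels)
        # Squared distance of each asset's return series to each factor.
        dist = ((returns[:, :, None] - centers[:, None, :]) ** 2).sum(axis=0)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels

    centers = centers_of(labels)           # factor return series
    weights = np.zeros((N, k))
    for j in range(k):
        members = labels == j
        if members.any():
            weights[members, j] = 1.0 / members.sum()  # non-negative weights
    return centers, weights

# Synthetic universe: 20 assets driven by two distinct common factors.
rng = np.random.default_rng(2)
T = 120
f1, f2 = rng.normal(0, 1, T), rng.normal(0, 1, T)
R = np.empty((T, 20))
R[:, :10] = f1[:, None] + 0.1 * rng.normal(0, 1, (T, 10))
R[:, 10:] = f2[:, None] + 0.1 * rng.normal(0, 1, (T, 10))
centers, W = relocation_factors(R, k=2)
```

Because each factor is a long-only portfolio of its cluster's members, the weight matrix `W` is non-negative by construction, which is what makes the extracted factors interpretable in terms of the underlying securities' characteristics.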
Marketing decision makers are increasingly aware of the importance of shareholder value maximization, which calls for an evaluation of the long-run effects of their actions on product-market response as well as investor response. However, the marketing literature to date has focused on the sales or profit response of marketing actions such as advertising spending and new-product development, and the goals of marketing have traditionally been formulated from a customer perspective. There have been no studies of the long-term investor response to marketing actions, in particular the stock returns of publicly traded firms.
Our research investigates one important aspect of this impact, the long-run relationship between advertising spending and market capitalization. We hypothesize that advertising can have a direct effect on valuation, i.e., an effect over and above its indirect effect via sales revenue and profit response. Our empirical test is based on several years of data for two industries. We use multivariate time-series methods that disentangle the long-run effects and short-run effects, as well as the direct and the indirect effects of advertising on firm valuation. The empirical results provide support for our hypothesis that advertising spending has a positive and long-run impact on firms' market capitalization. Thus, even if product-market response to the advertising is demonstrably weak, investors are willing to pay a premium for aggressive advertisers. We quantify the magnitude of this investor response effect for own firms as well as competition and discuss its implications for future research.
The marketing profession is being challenged to assess and communicate the value created by its actions on shareholder value. These demands create a need to translate marketing resource allocations and their performance consequences into financial and firm value effects. The objective of this paper is to integrate the existing knowledge on the impact of marketing on firm value. We first frame the important research questions on marketing and firm value and review the important investor response metrics and relevant analytical models, as they relate to marketing. We next summarize the empirical findings to date on how marketing creates shareholder value, including the impact of brand equity, customer equity, customer satisfaction, R&D, product quality and specific marketing-mix actions. In addition we review emerging findings on biases in investor response to marketing actions. We conclude by formulating an agenda for future research challenges in this emerging area.
The customer equity paradigm is readily implemented in relationship businesses where the distinction between a prospect and an existing customer is unambiguous. That enables firms in such industries to be customer and long-term focused in the allocation of their marketing resources. This is not the case in frequently purchased product categories, where customers may switch back and forth between competing brands, and even consume multiple brands in the same time period. However, by adopting an always-a-share customer definition and using a probabilistic classification of active and inactive customers, we demonstrate that measures of customer equity may still be obtained in such categories, using readily available scanner panel data. We illustrate our approach for the leading national and private-label brands in two CPG categories and show that the brands' sources of customer equity and the impact of their marketing activities are very different. As a result, the brands' customer equity levels may be evolving in different directions that are not readily apparent. We discuss the managerial implications of our findings and offer several areas for future research.
Product Innovations, Advertising and Stock Returns
Shuba Srinivasan, Koen H. Pauwels, Jorge Silva-Risso & Dominique Hanssens
Under increased scrutiny from top management and shareholders, marketing managers feel the need to measure and communicate the impact of their actions on shareholder returns. In particular, how do customer value creation (through product innovation) and customer value communication (through marketing investments) affect stock returns? This paper examines conceptually and empirically how product innovations and marketing investments for such product innovations lift stock returns by improving the outlook on future cash flows. We address these questions with a large-scale econometric analysis of product innovation and associated marketing mix in the automobile industry. First, we find that adding such marketing actions to the established finance benchmark model greatly improves the explained variance in stock returns. In particular, investors react favorably to companies that launch pioneering innovations, with higher perceived quality, backed by substantial advertising support, in large and growing categories. Finally, we quantify and compare the stock return benefits of several managerial control variables.
Our results highlight the stock market benefits of pioneering innovations. Compared to minor updates, pioneering innovations have a seven times higher impact on stock returns, and their advertising support is nine times more effective as well. Perceived quality of the new-car introduction improves the firm's stock returns, while customer liking does not have a statistically significant effect. Promotional incentives have a negative effect on stock returns, suggesting that price promotions may be interpreted as a signal of demand weakness. Managers may combine these return estimates with internal data on project costs to help decide the appropriate mix of product innovation and marketing investment.
The Impact of Positive vs. Negative Online Buzz on Retail Prices
Hyun S. Shin, Dominique Hanssens & Bharath Gajula
Online buzz or electronic word-of-mouth (e-WOM) has become more influential on customer decision-making due to increasing product complexity and product availability over the internet. Moreover, e-WOM spreads rapidly among customers and can be accessed anytime and anywhere, which further increases its significance. These e-WOM conversations describe products in a positive, negative or neutral way, but we do not know if and how such customer perceptions influence important business outcomes such as retail prices. This paper examines the effect of e-WOM on the prices of digital music players. Using a cutting-edge web crawling technique, we obtain the relevant buzz information collected from diverse online documents on a daily basis for two months. In particular, we capture online buzz sentiment, which allows us to investigate the different implications of positive, neutral, and negative online conversations. Econometric time-series modeling reveals that positive online buzz is a leading indicator of price increases, and vice versa. Furthermore, the effect of online buzz sentiment on prices is moderated by purchase involvement: negative online buzz leads to price cuts for high-ticket items, whereas positive online buzz enables price increases for low-priced items. These findings establish the influence of online buzz sentiment on e-retailers' pricing power, and suggest that managers should frequently monitor the sentiment of online buzz around their products and respond appropriately by adjusting their prices promptly.
This paper examines properties of analysts' cash flow forecasts and compares these properties with those exhibited by analysts' earnings forecasts. Our results indicate that analysts' cash flow forecasts are of considerably lower quality than their earnings forecasts. They are less accurate and improve at a slower rate during the forecast period. Further, analysts' cash flow forecasts appear to be, in essence, a naive extension of their earnings forecasts and provide no incremental information on expected changes in firms' working capital. Consistent with their low quality, and in contrast to their earnings forecasts, analysts' forecasts of cash flows are of limited information content and are only weakly associated with stock price movements. Finally, a measure of expected accruals based on the difference between analysts' earnings and cash flow forecasts has very low power in detecting earnings management.
The Rewards to Meeting or Beating Earnings Expectations
Eli Bartov, Dan Givoly & Carla Hayn
The paper studies the manner by which earnings expectations are met, measures the rewards to meeting or beating earnings expectations (MBE) formed just prior to the release of quarterly earnings, and tests alternative explanations for this reward. The evidence supports the claims that the MBE phenomenon has become more widespread in recent years and that the pattern by which MBE is obtained is consistent with both earnings management and expectation management. More importantly, the evidence shows that after controlling for the overall earnings performance in the quarter, firms that manage to meet or beat their earnings expectations enjoy an average quarterly return that is higher by almost 3 percent than their peers that fail to do so. While investors appear to discount MBE cases that are likely to result from expectation or earnings management, the premium in these cases is still significant. Finally, the results are consistent with an economic explanation for the premium placed on earnings surprises, namely that MBE are informative of the firm's future performance.
What Do Analysts Really Predict? Inferences from Earnings Restatements and Managed Earnings
Dan Givoly, Carla Hayn & Timothy Yoder
This paper examines whether analysts' earnings forecasts incorporate or exclude the managed earnings component. The results, based on a sample of 285 restatements and a much larger sample of cases where earnings are likely to have been managed upward, are consistent with analysts predicting the earnings number that will eventually be reported by the firm. Further, the managed earnings component appears to influence analysts' subsequent earnings forecasts, leading to upward forecast revisions and upgraded stock recommendations. The findings are further consistent with management signaling favorable future performance through earnings management.
In this study, we examine the relation between implied cost of capital and expected returns under an assumption that expected returns are stochastic, a property supported by theory and empirical evidence. We demonstrate that implied cost of capital differs from expected return, on average, by a function encompassing volatilities of, as well as correlation between, expected returns and cash flows, growth in cash flows, and leverage. These results provide alternative explanations for findings from empirical studies employing implied cost of capital on the magnitude of the market risk premium; relations between cost of capital, growth, leverage, and idiosyncratic risks; predictability of future returns; and characteristics of the firm's information environment.
Monthly US data on payroll employment, civilian employment, industrial production and the unemployment rate are used to define a simple recession dating algorithm that nearly perfectly reproduces the NBER official peak and trough dates. The only substantial point of disagreement is with respect to the NBER November 1973 peak. The algorithm prefers September 1974. In addition, this algorithm indicates that the data through June 2008 do not yet exceed the recession thresholds, and will do so only if things get much worse.
An established firm can enter a new market through acquisition or internal development. Predictions that the choice of entry mode depends on "relatedness" between the new market and the firm's existing businesses have repeatedly failed to gain empirical support. We resolve ambiguity in prior work by developing dynamic measures of relatedness, and by making a distinction between entries inside versus outside a firm's primary business domain. Using a fine-grained data set on the telecommunications sector, we find that inside a firm's primary business domain, acquisitions are used to fill persistent gaps near the firm's existing businesses, whereas outside that domain, acquisitions are used to extend the enterprise in new directions.
Facing uncertainty about whether to adopt a new technology, firms rely on both external and internal sources of information. Firms may learn vicariously about the desirability of adoption; a large body of research has demonstrated a tendency for firms to imitate rival adopters. Organizations with multiple units may also learn from their own experience once an initial unit of the firm has adopted. We use data on the establishment of websites by consumer magazines during the early Internet era to test the hypothesis that multi-unit firms substitute experiential learning for vicarious learning once an initial unit of the firm has adopted. Consistent with this hypothesis, we find that the influence of rivals drops sharply following the initial adoption within the firm.
Using an extension of the Olley-Pakes estimation technique, we estimate rates of learning by doing in over 250 SIC4 industries in the US manufacturing sector. We then examine the link between learning and producer concentration using Sutton's "bounds" approach. We find that the lower bound of concentration is higher in high learning industries, which suggests that learning by doing has characteristics of an endogenous sunk cost.
Do You Look to the Future or Focus on Today? The Impact of Life Experience on Intertemporal Decisions
Jennifer Aaker & Wendy Liu
In this research, we investigate the impact of significant life experiences on intertemporal decisions among young adults. A series of experiments focus specifically on the impact of experiencing the death of a close other by cancer. We show that such an experience, which bears information about time, is associated with making decisions that favor the long-term future over short-term interests (Studies 1 and 2). Underlying this effect appears to be increased salience and concreteness regarding one's future life course, shifting focus away from the present toward the long run (Studies 3 and 4). Finally, we explore the shift caused by a cancer death of a public figure and examine its stability over time (Study 5). Implications for research on intertemporal decision making and the impact of life events on perceptions and preferences are discussed.
This research examines the phenomenon of interruptions and suspensions in decision making. It is proposed that information processing may change from a bottom-up data-driven to a top-down goal-directed mode after an interruption, thereby affecting preferences. In particular, in decisions involving desirability and feasibility conflicts, because desirability is a superordinate goal to feasibility, four studies found that when a decision is interrupted and later resumed, people become more likely to favor a highly desirable but less feasible option, such as a high-risk, high-reward option or a high-quality, high-price option. A reduced focus on feasibility is found to underlie this effect. Forthcoming in Journal of Consumer Research 2009.
This research examines how a focus on time versus money can lead to two distinct mindsets that impact consumers' willingness to donate to charitable causes. The results of three experiments, conducted both in the lab and in the field, reveal that asking individuals to think about "how much time they would like to donate" (versus "how much money they would like to donate") to a charity increases the amount that they ultimately donate to the charity. Fueling this effect are differential mindsets activated by time versus money. Implications for the research on time, money and emotional well-being are discussed. Forthcoming in Journal of Consumer Research 2008.
Variety, Vice, and Virtue: How Assortment Size Influences Option Choice
Jonah Berger, Aner Sela & Wendy Liu
Recent research has demonstrated that variety can have an important influence on choice likelihood. However, given that consumers do make a selection, we suggest that variety may also influence the type of item they select. Specifically, because choosing from a larger number of options is associated with more choice difficulty, it promotes greater reliance on reasons or justifications for choice. Consequently, we suggest that choosing from a larger set of options will lead consumers to select options that are easier to justify. Five studies support this hypothesis, demonstrating that variety influences the choice between hedonic and utilitarian options.
We study asset pricing and trading behavior in an exchange economy populated by two agents with different risk aversion. We show that the credit market plays a central role in the risk sharing between the two agents. It allows the less-risk-averse agent to borrow in order to take on levered positions in the stock and thus bear more risk. Optimal risk sharing results in the more-risk-averse agent effectively selling covered "call" options to the less-risk-averse agent. As the state of the economy changes, the equilibrium amount of credit in the market also fluctuates, which in turn influences expected stock returns, stock return volatility, the term structure of interest rates, and trading activity in the stock market. We further explore the immediate empirical implication that variation in the size of the credit market is related to variation in expected stock returns. Using various measures of changes in the size of the credit market, we find that they have significant power in forecasting one-year excess returns of the stock market. Our results suggest that the credit sector is of fundamental importance to the behavior of asset prices.
Financial crises are often accompanied by a flight-from-leverage as levered investors realize that they are overextended in a distressed market and try to reduce their funding risk. We study an equilibrium model in which assets can become severely distressed as investors learn that an adverse event has occurred, but do not initially know the full extent of the damage. When distress occurs, flight-from-leverage results in a major contraction in the size of the riskless debt market. Thus, the supply of riskless assets shrinks precisely when agents have the greatest incentives to increase their bond holdings. This tension has important implications for risk sharing and asset pricing. We show that a flight-from-leverage differs from a flight-to-quality or a flight-to-liquidity. Flights-from-leverage may play a central role in understanding the behavior of severely distressed markets and may help explain why financial crises are often accompanied by significant liquidity shocks.
We study the marginal tax rate incorporated into short-term tax-exempt municipal rates using a unique new data set from the municipal swap market. By applying an affine term-structure framework, we are able to identify both the marginal tax rate and the credit/liquidity spread in one-week tax-exempt rates. Furthermore, we obtain maximum likelihood estimates of the risk premia associated with these variables. The average marginal tax rate during the sample period is 41.6 percent. We find that the marginal tax rate is significantly positively related to returns in the stock and bond markets. The risk premium associated with the marginal tax rate is negative, consistent with the strong contracyclical nature of after-tax fixed-income cash flows, which increase in bad states of the economy as personal income and the effective marginal tax rates applied to those cash flows decline.
Systemic Credit Risk: What is the Market Telling Us?
Vineer Bhansali, Robert Gingrich & Francis A. Longstaff
The ongoing subprime crisis raises many concerns about the possibility of much broader credit shocks in the economy. We use a simple linear version of the Longstaff and Rajan (2007) model to extract the information about macroeconomic credit risk embedded in the prices of tranches on the most-liquid credit indices. Three types of credit risk appear to be priced by the market: idiosyncratic risks at the level of individual firms, sectorwide risk at the level of correlated firms within the same industry group, and economywide or systemic risk. We apply the model to the recent behavior of tranches in the U.S. and European credit derivatives markets and show that the current credit crisis has more than twice the systemic risk of the May 2005 auto-downgrade credit crisis.
We conduct an empirical investigation into the pricing of subprime asset-backed CDOs and the resulting contagion effects on other markets. Using data for the ABX indexes of subprime CDO prices, we find strong evidence of contagion effects. In particular, we find that contagion effects spread first from lower-rated ABX indexes to higher-rated ABX indexes, and then from the subprime markets to the Treasury bond and stock markets. ABX index returns forecast stock and Treasury bond returns as much as three weeks ahead during the crisis. Furthermore, ABX index shocks are significantly related to contractions in the size of the short-term credit markets and increases in the trading activity of financial stocks over the next several weeks. These results provide support for the hypothesis that financial contagion was spread through liquidity and risk-premium channels.
A Multiplier Approach to Understanding the Macro Implications of Household Finance
Yi-Li Chien, Harold Cole & Hanno Lustig
Our paper examines the impact of heterogeneous trading technologies for households on asset prices and the distribution of wealth. We distinguish between passive traders who hold fixed portfolios of stocks and bonds, and active traders who adjust their portfolios to changes in the investment opportunity set. The fraction of total wealth held by active traders is critical for asset prices, because only these traders respond to variation in state prices and hence absorb the residual aggregate risk created by non-participants. We calibrate this heterogeneity to match the equity premium and the risk-free rate. The calibrated model reproduces the skewness and kurtosis of the wealth distribution in the data. To solve the model, we develop a new method that relies on an optimal consumption sharing rule and an aggregation result for state prices. This result allows us to solve for equilibrium prices and allocations without having to search for market-clearing prices in each asset market separately.
Currency excess returns are highly predictable, more than stock returns, and about as much as bond returns. In addition, these predicted excess returns are strongly counter-cyclical. The average excess returns on low interest rate currencies are 4.8 percent per annum smaller than those on high interest rate currencies after accounting for transaction costs. We show that a single return-based factor, the return on the highest minus the return on the lowest interest rate currency portfolios, explains the cross-sectional variation in average currency excess returns from low to high interest rate currencies. This evidence suggests currency risk premia are large and time-varying. In a simple affine pricing model, we show that the high-minus-low currency return measures the component of the stochastic discount factor innovations that is common across countries. To match the carry trade returns in the data, low interest rate currencies need to load more on this common innovation when the market price of global risk is high.
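The portfolio construction this abstract describes, sorting currencies by their interest rate and taking the return on the highest-rate portfolio minus the lowest, can be sketched on synthetic data. The function name and the toy numbers below are illustrative only, not the authors' data or code:

```python
import numpy as np

def carry_hml(interest_rates, excess_returns, n_portfolios=3):
    """Sort currencies into portfolios by interest rate and return the
    high-minus-low portfolio excess return for one period (a schematic
    version of the sort described in the abstract)."""
    order = np.argsort(interest_rates)            # currency indices, low to high rates
    groups = np.array_split(order, n_portfolios)  # equal-sized rate-sorted portfolios
    port_returns = [excess_returns[g].mean() for g in groups]
    return port_returns[-1] - port_returns[0]     # high-rate minus low-rate return

# Toy cross-section of six currencies for a single period
rates = np.array([0.01, 0.08, 0.03, 0.06, 0.02, 0.07])
xret = np.array([-0.02, 0.05, 0.00, 0.03, -0.01, 0.04])
hml = carry_hml(rates, xret)
```

In the paper the sort is repeated each period and the factor is the resulting time series of high-minus-low returns; the single-period sketch above only illustrates the cross-sectional sort.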
Asset pricing is a branch of financial economics that is rich in puzzles and anomalies, that is, stylized empirical facts not easily explained by the canonical asset pricing models. These range from the equity premium puzzle and the risk-free rate puzzle to the fact that stock returns are highly predictable. This entry discusses different consumption-based asset pricing models that have been developed to resolve these puzzles, and it evaluates their empirical performance.
The essential element in modern asset pricing theory is a positive random variable called "the stochastic discount factor" (SDF). This object allows one to price any payoff stream. Its existence is implied by the absence of arbitrage opportunities. Consumption-based asset pricing models link the SDF to the marginal utility growth of investors -- and in turn to observable economic variables -- and in doing so, they provide empirical content to asset pricing theory. This entry discusses this class of models.
IT, Corporate Payouts, and the Growing Inequality in Managerial Compensation
Hanno Lustig, Chad Syverson, & Stijn Van Nieuwerburgh
Three of the most fundamental changes in US corporations since the early 1970s have been (1) the increase in the importance of organizational capital in production, (2) the increase in managerial income inequality, and (3) the increase in payouts to the owners. There is a unified explanation for these changes: The arrival and gradual adoption of information technology since the 1970s has stimulated the accumulation of organizational capital in existing firms. Since owners are better diversified than managers, the optimal division of rents from this organizational capital has the owners bear most of the cash-flow risk. In our model, the IT revolution benefits the owners and the managers in large successful firms, but not the managers in small firms. The resulting increase in managerial compensation inequality and the increase in payouts to owners compare favorably to those we establish in the data.
We introduce limited liability in a model with a continuum of ex ante identical agents who face aggregate and idiosyncratic income risk. These agents can trade a complete menu of contingent claims, but they cannot commit and shares in a Lucas tree serve as collateral to back up their state-contingent promises. The limited liability option gives rise to a second risk factor, in addition to aggregate consumption growth risk. This liquidity risk is created by binding solvency constraints, and it is measured by the growth rate of one moment of the wealth distribution. The economy is said to experience a negative liquidity shock when this growth rate is high and a large fraction of agents faces severely binding solvency constraints. The adjustment to the Breeden-Lucas stochastic discount factor induces substantial time variation in equity risk premia that is consistent with the data at business cycle frequencies.
To measure the wealth-consumption ratio, we estimate an exponentially affine model of the stochastic discount factor on bond yields and stock returns. We use that discount factor to compute the no-arbitrage price of a claim to aggregate US consumption. Our estimates indicate that total wealth is much safer than stock market wealth. The consumption risk premium is only 2.2 percent, substantially below the equity risk premium of 6.9 percent. As a result, our estimate of the wealth-consumption ratio is much higher than the price-dividend ratio on stocks throughout the post-war period. The high wealth-consumption ratio implies that the average US household has a lot of wealth, most of it human wealth. A variance decomposition of the wealth-consumption ratio shows less return predictability overall, but most of the return predictability is for future interest rates, not excess returns. We conclude that the properties of the total wealth portfolio are more similar to those of a long-maturity bond portfolio than those of a stock portfolio. The differences that we find between the risk-return characteristics of equity and total wealth suggest that equity is a special asset class.
This paper studies a public firm's investment decision and whether to raise the needed equity capital in the public market (SEO) or through a private channel (PIPE, Private Investment in Public Equity). Issuing the security privately allows the firm to enjoy greater financial flexibility, since funds can be raised faster owing to lighter legal requirements and marketing efforts than in a public offering (i.e., the shares do not need to be registered before they are sold). This greater financial flexibility also alleviates information asymmetries. However, because privately placed shares are initially illiquid, they carry a cost. The trade-off is therefore between liquidity and the value of financial flexibility. The model throws light on which firm characteristics determine the use of one market or the other, and on the optimal timing of investment decisions to make better use of financial flexibility. We then solve for the optimal private debt contract and show that, within private placements, the pecking order theory need not hold. The model explains empirical regularities, for instance, why SEOs have negative abnormal returns around their announcement whereas abnormal returns for PIPEs are positive.
The potential advantage of extreme value theory in modeling management phenomena is the central theme of this paper. The statistics of extremes have played only a very limited role in management studies despite the disproportionate emphasis on unusual events in the world of managers. An overview of this theory and related statistical models is presented, and illustrative empirical examples provided.
Practicing managers live in a world of 'extremes' but management research is based on Gaussian statistics that rule out those extremes. On occasion, deviation-amplifying mutual causal processes among interdependent data points cause extreme events characterized by power laws. They seem ubiquitous; we list 80 kinds of them -- half each among natural and social phenomena. We draw a 'line in the sand' between Gaussian (based on independent data points, finite variance and emphasizing averages) and Paretian statistics (based on interdependence, positive feedback, infinite variance, and emphasizing extremes). Quantitative journal publication depends almost entirely on Gaussian statistics. We draw on complexity and earthquake sciences to propose redirecting Management Studies. Conclusion: No statistical findings should be accepted into Management Studies if they gain significance via some assumption-device by which extreme events and infinite variance are ignored. The cost is inaccurate science and irrelevance to practitioners.
One of the most fundamental questions in macroeconomics is: how are wages determined? The assumptions that underlie this question help to determine how economic evidence is interpreted and then how monetary and fiscal policy are set. Yet, these underlying assumptions are rarely explicitly or self-consciously acknowledged or examined. We examine the evolution of one of these key underlying assumptions, the "wage-push" view of wage setting. Concepts of wage-push inflation and wage-price spirals arose after unions became important economic actors in the 1930s and during World War II and were visibly aggressive in demanding wage increases. These concepts, although never rigorously defined, became the basis of the Kennedy-Johnson guideposts program. Although the guideposts eventually fell apart, notions of constraining wage-push directly persisted in the 1970s under Nixon's mandatory controls and Carter's voluntary guidelines. Thereafter, unions experienced a rapid decline in membership. Although the concept of wage-push by aggressive unions became increasingly implausible in the US, a residue of wage-push remained in the rhetoric of macroeconomics and may still influence macro policymakers.
This paper examines a mechanism through which workers acquire and maintain competence: task experience. I analyze whether cardiac surgeons who have performed more procedures in the recent past experience an improvement in performance. I use an instrumental variables method that considers exogenous shocks to the procedure volume of CABG surgeons in Florida caused by the exit of other surgeons from the same hospital. I find evidence indicating a strong learning-by-doing effect: performing an additional procedure reduces the probability of patient mortality by 0.14%. This benefit is lower for high volume surgeons, and is partly specific to firm and task settings.
Does the Market Punish Aggressive Experts? Evidence from Caesarean Sections
Subbu Ramanarayanan & David Dranove
There are many markets in which a seller simultaneously diagnoses a customer's needs and recommends a product or service to meet them. Customers have limited information on which to judge the merits of the recommendation and may, as a result, agree to excessively costly or unnecessary services. This asymmetry of information poses a theoretical conundrum. What prevents sellers from always exaggerating the value of their products? A simple but compelling explanation is that consumers may choose not to purchase the product if the seller routinely exaggerates its value. In this paper, we examine whether the market does, in fact, punish overzealous sellers. We focus on the market for deliveries, a procedure in which a practicing obstetrician/gynecologist is faced with the choice of prescribing one of two possible modes: vaginal birth vs. a more highly reimbursed alternative, a cesarean section. We find that maternity patients prefer not to visit physicians with aggressive styles (i.e. physicians who overprescribe cesarean sections), ceteris paribus. The effect is most pronounced for high income patients and HMO patients, two segments of the market that might be very attractive to some obstetricians.
Contrary to popular opinion, average Tobin's q is a better indicator of future growth opportunities than marginal Tobin's q. We derive a curious relation between average and marginal q: the more profitable a new investment opportunity, the smaller will be the increase in average q when the opportunity is undertaken. Average q is inversely related to the cost of equity capital, so it represents an inverse measure of risk. The closely-related book/market ratio is also a measure of risk in the cross-section.
Negotiation Under the Threat of an Auction: Friendly Deals, Ex-Ante Competition and Bidder Returns
Nihat Aktas, Eric De Bodt & Richard Roll
Observable (ex-post) competition in the merger and acquisition (M&A) market seems to be very low. In this paper, we focus on the role of ex-ante competition and show that, when this is taken into account, the M&A market is more competitive than it seems at first sight. We first provide a theoretical analysis where we model takeovers as a two-stage process. The initial stage corresponds to a one-to-one negotiation with the target. If the negotiation fails, there is a second stage in which either a takeover battle among rivals occurs, or the target firm organizes a competitive auction. One of the main empirical predictions is that the higher the anticipated competition in the second stage, the higher the bid offered in the first stage. We then provide an empirical test of this prediction using a dataset of friendly deals for which, by construction, no ex-post competition is observable. We use the deal frequency in a given industry as a proxy for ex-ante competition, and we show that this variable is negatively related to the share of the value creation kept by the acquirer. This result remains significant even after taking into account evidence of decreasing investment opportunities. The main conclusion that we can draw from our analysis is that the M&A market is fairly competitive, and that anticipated competition allows target shareholders to receive a reasonable premium even in friendly deals.
Acquirer cumulative abnormal returns (CARs) have been investigated intensively for more than three decades. CARs measure shareholders' wealth changes conditional upon a deal announcement. The unconditional acquirer shareholders' expected profit is, however, the product of the acquirer's CAR and the probability of making the acquisition. This probability element has been mostly overlooked in the M&A literature. Since the probability cannot be observed directly, we use the time between successive acquisitions as a proxy and provide a systematic empirical analysis of its determinants.
Testing the CAPM boils down to testing the mean-variance efficiency of the market portfolio. Numerous studies have examined the mean-variance efficiency of various market proxies by employing the sample return parameters, and have concluded that these proxies are inefficient. Employing different shrinkage corrections does not help in this regard. These findings cast grave doubt about one of the cornerstone models of modern finance. In this study we take a reverse engineering approach to the problem: given a market proxy, we find the minimal variation of the sample parameters required to ensure that the proxy is mean-variance efficient. Surprisingly, we find that slight variations of the sample parameters, well within the estimation error bounds, suffice to make the proxy efficient. Thus, conventional market proxies are shown to be perfectly consistent with the CAPM.
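The reverse-engineering logic can be illustrated with a deliberately simplified sketch: holding the sample covariance matrix fixed (the paper varies both means and covariances), the smallest adjustment to the sample mean vector that makes a given proxy mean-variance efficient is the least-squares projection of the excess means onto the direction of the proxy's covariance vector. All names and numbers below are illustrative, not the authors' procedure:

```python
import numpy as np

def minimal_mean_adjustment(mu, sigma, w, rf=0.0):
    """Smallest change to sample means `mu` (covariance `sigma` held fixed)
    that makes portfolio `w` mean-variance efficient. Efficiency requires
    mu_adj - rf = theta * sigma @ w for some scalar theta > 0."""
    s = sigma @ w                                 # covariance of each asset with the proxy
    excess = mu - rf
    theta = max(s @ excess / (s @ s), 1e-12)      # least-squares slope, kept positive
    mu_adj = rf + theta * s                       # adjusted means satisfy the condition exactly
    return mu_adj, np.linalg.norm(mu_adj - mu)    # distance measures the required variation

# Synthetic example: three assets, equal-weight proxy
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
sigma = A @ A.T + 3 * np.eye(3)                   # a positive-definite covariance matrix
mu = np.array([0.05, 0.07, 0.06])
w = np.ones(3) / 3
mu_adj, dist = minimal_mean_adjustment(mu, sigma, w, rf=0.02)
```

The size of `dist` relative to estimation error is the question the paper poses; this one-sided version (means only) conveys the idea but not the full joint optimization.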
How do a leader and a follower use marketing and R&D differently for resource accumulation, and why? To address the question, we investigate asymmetric incentive structures with respect to marketing and R&D for a leader vs. a follower. In particular, we analyze the resource accumulation paths of Intel and AMD by utilizing 28-year quarterly time series data (1972-1999). In so doing, we adopt a Structural Vector-Autoregressive (SVAR) modeling approach, dealing with two methodological challenges, i.e., context dependence and dynamic interdependence. Overall, we find that the leader (Intel) benefits more from its investment in marketing than the follower (AMD), while the follower gains more from its investment in R&D than the leader, consistent with the theoretical prediction. We also find that when the firm's market value unexpectedly increases, the leader increases its marketing investment while the follower chooses to invest in R&D. Unlike the conventional First-Mover (Dis)Advantage research which focuses on the timing of entry, our study suggests that a firm, once having entered, may be better off by choosing different resource accumulation paths for marketing and R&D, based on its current market position. Finally, our results imply that there is no single best strategy; context determines what strategy will work best.
We study the drift in returns of portfolios formed on the basis of the stock price reaction around earnings announcements. The Earnings Announcement Return (EAR) captures the market reaction to unexpected information contained in the company's earnings release. Besides the actual earnings news, this includes unexpected information about sales, margins, investment, and other less tangible information communicated around the earnings announcement. A strategy that buys and sells companies sorted on EAR produces an average abnormal return of 7.55% per year, 1.3% more than a strategy based on the traditional measure of earnings surprise, SUE. The post-earnings-announcement drift for the EAR strategy is stronger than that for SUE. More importantly, unlike SUE, the EAR strategy returns do not show a reversal after 3 quarters. The EAR and SUE strategies appear to be independent of each other. A strategy that exploits both pieces of information generates abnormal returns of about 12.5% on an annual basis.
We propose forecasting separately the three components of stock market returns: dividend yield, earnings growth, and price-earnings ratio growth. We obtain out-of-sample R-squared coefficients (relative to the historical mean) of nearly 1.6% with monthly data and 16.7% with yearly data using the most common predictors suggested in the literature. This compares with typically negative R-squares obtained in a similar experiment by Goyal and Welch (2008). An investor who timed the market with our approach would have had a certainty equivalent gain of as much as 2.3% per year and a Sharpe ratio 77% higher relative to the historical mean. We conclude that there is substantial predictability in equity returns and that it would have been possible to time the market in real time.
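The decomposition underlying this approach rests on an exact accounting identity: the one-period log return splits into log earnings growth, log growth of the price-earnings multiple, and a dividend-yield term, and each part can then be forecast separately before being recombined. A minimal sketch with made-up numbers (not the authors' data or estimation):

```python
import math

def log_return_parts(p0, p1, e0, e1, d1):
    """Exact decomposition of the one-period log total return into
    earnings growth, P/E growth, and a dividend-yield component."""
    g_e = math.log(e1 / e0)                   # log earnings growth
    g_pe = math.log((p1 / e1) / (p0 / e0))    # log growth of the P/E multiple
    dy = math.log(1 + d1 / p1)                # log dividend-yield component
    return g_e, g_pe, dy

# Toy figures: price 100 -> 108, earnings 5 -> 5.5, dividend 2 paid at the end
p0, p1, e0, e1, d1 = 100.0, 108.0, 5.0, 5.5, 2.0
g_e, g_pe, dy = log_return_parts(p0, p1, e0, e1, d1)
total = math.log((p1 + d1) / p0)              # the three parts sum exactly to this
```

Because the identity is exact, any predictability found in the components carries over to the total return by construction; the paper's contribution is in how the components are forecast.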
We study the effect of options trading volume on the value of the underlying firm after controlling for other variables that may affect firm value. The volume of options trading might have an effect on firm value because it helps to complete the market (allocational efficiency) and because the options market impounds information faster than the stock market (informational efficiency). We find that firms with more options trading have higher values. This result holds for all sample firms and for the subset of firms with positive options volume.
We develop a model for pricing expropriation risk in natural resource projects, in particular an oil field. The government is viewed as holding an American-style option to expropriate the oil field, but facing the following three possible expropriation costs: A state-run company may produce oil less cost-efficiently than a private firm, the government may have to pay compensation to the firm, and an expropriation may trigger lower investor confidence, negatively affecting the overall economy. The dynamics of key variables - the spot price, futures prices and volatility - are described by a model proposed and estimated in Trolle and Schwartz (2007). For reasonable parameter values and under market conditions not too different from what has been seen in recent years, the value of the expropriation option can be substantial.
Commodity derivatives are becoming an increasingly important part of the global derivatives market. Here we develop a tractable stochastic volatility model for pricing commodity derivatives. The model features unspanned stochastic volatility, quasi-analytical prices of options on futures contracts, and dynamics of the futures curve in terms of a low-dimensional affine state vector. We estimate the model on NYMEX crude oil derivatives using an extensive panel data set of 45,517 futures prices and 233,104 option prices, spanning 4082 business days. We find strong evidence for two, predominantly unspanned, volatility factors.
This paper investigates variance risk premia in energy commodities, particularly crude oil and natural gas, using a robust model-independent approach. Over a period of 11 years, we find that the average variance risk premia are negative for both energy commodities. Energy variance risk premia in dollar terms are time-varying, while energy variance risk premia in return terms, particularly in the case of natural gas, are more constant over time. Finally, the return profile of a natural gas variance swap resembles that of a call option, while the return profile of a crude oil variance swap, if anything, resembles the return profile of a put option. The annualized Sharpe ratios from shorting energy variance are sizable. Although not nearly as high as the annualized Sharpe ratio of shorting S&P 500 index variance, they are comparable to those of shorting interest rate volatility or variance on individual stocks.
This paper examines the impact of a reform designed to curtail the strategic manipulation of the liver transplant waiting list. Prior to March 1, 2002, livers were allocated by a standards based regime in which strategic misrepresentation of severity of patient illness could enhance a center's chances of performing a transplant. After March 1, 2002, a rules based allocation regime was introduced that eliminated subjective factors in the allocation of livers. Using this policy change to identify strategic manipulation of the waiting list, I show an association between highly competitive transplant markets and an increased willingness to misrepresent patient need to obtain livers.
We examine whether editorial slant influences electoral outcomes in the context of one of the most powerful media conglomerates in US history. In the early 1900s, the Hearst newspaper empire was politically charged and considered influential. We test if the Hearst newspapers affected elections. Using a difference-in-differences and matching methodology, we find that the introduction of a Hearst newspaper into a county did not change electoral outcomes compared to similar counties - in contrast to other studies of media effects. We consider explanations for the results, and offer historical perspective on an issue that remains both salient and ambiguous.
Can Liquidity Shifts Explain the Lockup Expiration Effect in Stock Returns?
Chandrasekhar Krishnamurti, Avanidhar Subrahmanyam & Tiong Yang Thong
Several studies on the expiration of IPO lockups document a strong negative reaction even though the unlock event is devoid of any informational content. The empirical finding has remained a conundrum. In this paper, we find that changes in liquidity can account for the observed stock price reaction around lockup expiration. Specifically, firms which show improvement in liquidity subsequent to the unlock day experience positive abnormal returns in the post-expiration period, and vice versa. Another interesting conclusion that emerges from our research is that liquidity changes can predict future abnormal returns. Our results remain robust to the use of alternate procedures to characterize unexpected changes in liquidity.
Common Liquidity Shocks and Market Collapse: Lessons from the Market for Perps
Chitru S. Fernando, Richard J. Herring & Avanidhar Subrahmanyam
We show how a high degree of commonality in investor liquidity shocks can diminish incentives for intermediaries to keep markets open and lead to market collapse, even without information asymmetry or news affecting fundamentals. We motivate our model using the perpetual floating rate note market where two years of explosive growth -- in which issues by high quality borrowers were placed with institutional investors and traded in a liquid secondary market -- were followed by a precipitous collapse when market intermediaries withdrew due to large order imbalances. We shed new light on the trade-off between ownership concentration and market liquidity.
When agents first become active investors in financial markets, they are relatively inexperienced. We focus on the incentives of economic agents to educate these individuals. A feature of the financial market arena is that the agents best positioned to educate the inexperienced themselves stand to earn trading profits at the expense of inexperienced agents. Owing to this phenomenon, we show that the equilibrium amount of financial education may not fully correct the biases of the inexperienced agents. Thus, biased agents may not be fully educated by those with the best financial knowledge. This result complements hindrances to learning due to the self-attribution bias. With monopolistic delivery of financial education, the equilibrium proportion of educated agents tends to decrease with the profit potential of the information possessed by sophisticated agents, suggesting a policy need to reduce the informational advantage of agents with privileged access to information. On the other hand, in a competitive setting, increasing the variance of information tends to increase the rents from trading and thus can decrease the equilibrium proportion of uneducated agents.
This paper examines the specific mechanism by which the incorporation of information into prices leads to cross-autocorrelations in stock returns. We develop a model where trading on private information occurs first in the large stocks and is transmitted to small stocks with a lag. Such trading reduces large stock liquidity so that, in equilibrium, greater large stock illiquidity portends stronger cross-autocorrelations. Empirically, we find that the lead-lag relation between large and small stocks increases with lagged spreads of large stocks. Further, order flows in large stocks significantly predict returns of small stocks when large stock spreads are high, at both the market and industry levels. In addition, the role of order flow and liquidity in predicting small stock returns is stronger prior to macro announcements (when information-based trading is more likely) and weaker following such events (when information asymmetries are lower). Overall, the results support the predictions of our model.
Previous studies of Treasury market illiquidity span short time-periods and focus on particular maturities. In contrast, we study the joint time-series of illiquidity for different maturities over an extended time sample. We also compare time series determinants of on-the-run and off-the-run illiquidity. Illiquidity increases and the difference between spreads of long- and short-term bonds significantly widens during recessions, suggesting a "flight to liquidity" phenomenon wherein investors shift into the more liquid short-term bonds during economic contractions. We also document that macroeconomic variables such as inflation and federal fund rates forecast off-the-run illiquidity significantly but have only modest forecasting ability for on-the-run illiquidity. Bond returns across all maturities are forecastable by off-the-run short-term illiquidity but not by illiquidity of other maturities or by on-the-run bond illiquidity. Thus, short-term off-the-run liquidity, by reflecting macro shocks first, is the primary source of the liquidity premium in the Treasury bond market.
External Networking and Internal Firm Governance
Cesare Fracassi & Geoffrey Tate
External network ties between CEOs and directors in major U.S. corporations may limit the effectiveness of internal corporate governance. Using comprehensive biographical data on the managers and directors of S&P 1500 companies, we identify connections between directors and their firms' CEOs through external directorships, past employment, education, and other activities (e.g., golf clubs or charity organizations). Consistent with an expectation of weaker monitoring, we find that firms with powerful CEOs are disproportionately likely to add directors with ties to the CEO to the board. Once on the board, such directors are more likely to buy company stock in the open market at the same time as the CEO, even though there is no evidence that they (or the CEO) have better information than other outside directors. Their companies are also less likely to make company-prompted earnings restatements. Turning to real investment choices, we find that acquisitions are more frequent among firms with more connections between their directors and CEO, particularly when those directors serve on the executive committee. We also find that merger bids by such firms destroy value for shareholders: on average, bidding firms lose $354 million in the three days surrounding merger bids, $282 million more than bidders with fewer connections between the board and the CEO. Moreover, firms with more network ties between directors and the CEO have lower aggregate market valuations than other firms. Both valuation effects are most pronounced in firms with weak shareholder rights. Finally, we find little evidence that recent governance reforms have reduced the frequency of social ties between directors and their firms' CEOs, suggesting that a broader notion of independence, which accounts for social ties between directors and management, might increase the effectiveness of future governance reforms.
Compensation, status, and press coverage of managers in the U.S. follow a highly skewed distribution: a small number of 'superstars' enjoy the bulk of the rewards. We evaluate the impact of CEOs achieving superstar status on the performance of their firms, using prestigious business awards to measure shocks to CEO status. We find that award-winning CEOs subsequently underperform, both relative to their prior performance and relative to a matched sample of non-winning CEOs. At the same time, they extract more compensation following the award, both in absolute amounts and relative to other top executives in their firms. They also spend more time on public and private activities outside their companies, such as assuming board seats or writing books. The incidence of earnings management increases after winning awards. The effects are strongest in firms with weak governance, even though the frequency of obtaining superstar status is independent of corporate governance. Our results suggest that the ex-post consequences of media-induced superstar status for shareholders are negative.
Competition and Price Discrimination in the Market for Mailing Lists
Ron Borzekowski, Raphael Thomadsen & Charles Taragin
This paper examines whether mailing list sellers, when faced with additional competitors, are more likely to try to segment consumers by offering consumers additional choices at different prices (second-degree price discrimination) and/or offering different prices to readily identifiable groups of consumers (third-degree price discrimination). We utilize a dataset that includes information about all consumer response lists derived from mail order buyers (i.e. lists derived from catalogs) available for rental in 1997 and 2002. Our results indicate that increased competition is generally associated with an increased propensity to price discriminate along each of the dimensions we investigate. These results hold for both second-degree and third-degree price discrimination. Further, list owners offer menus with more choices in more competitive markets. These results, taken together with results from other empirical studies, suggest that the connection between competition and increased price discrimination is a result that applies broadly.
Companies and managers are apt to forget information, yet game theory assumes that all players have perfect recall. This paper expands the literature by examining how introducing forgetfulness into a multi-player game-theoretic framework can help or hinder cooperative behavior. We distinguish between forgetting histories and forgetting strategies, and explain how classic game theory models and equilibrium concepts should be adapted to accommodate imperfect recall. We find that forgetfulness impacts the ability of firms to cooperate in countervailing directions. On the one hand, forgetfulness can diminish the ability to punish deviators, making cooperation more difficult. On the other hand, forgetfulness can make meting out severe punishments credible, and if the players forget their strategies then forgetfulness can also decrease the ability for players to effectively deviate, facilitating cooperation. When players forget their strategies, their reduced ability to deviate may be so severe that the equilibrium payoff may be below the minimax.
Marketing & Institutions: An International Perspective
Bjorn N. Jorgensen & Raphael Thomadsen
This paper presents descriptive empirical evidence of a link between country-level institutions and both marketing sophistication and customer orientation. We find that stronger institutions are generally associated with greater marketing sophistication and greater customer orientation, although increased corruption is also associated with higher levels of both variables after controlling for a country's other institutions. Our results suggest that country-level institutional factors should be considered in marketing research analysis.
Brokerage Houses and their Stock Recommendations: Does Superior Performance Persist?
Brad M. Barber, Reuven Lehavy & Brett Trueman
This paper tests for the existence of performance persistence in brokerage house stock recommendations. For the period 1987-1996 we show that purchasing the current-year buy recommendations of the brokerage houses with the best prior performance earned an annualized geometric mean raw return of 18.6 percent, while purchasing the recommended stocks of the houses with the worst prior performance earned only 14.3 percent. After controlling for market risk, size, book-to-market, and price momentum effects, though, we find no significant difference, in general, between the abnormal returns of the best and worst brokerage houses. A series of supplementary tests confirm this result. The findings for brokerage house sell recommendations are even weaker. Overall, our tests provide no reliable evidence of performance persistence for brokerage house stock recommendations.
Buys, Holds, and Sells: The Distribution of Investment Banks' Stock Ratings and the Implications for the Profitability of Analysts' Recommendations
Brad M. Barber, Reuven Lehavy, Maureen F. McNichols & Brett Trueman
This paper analyzes the distribution of stock ratings at investment banks and brokerage firms and examines their relation to the profitability of analysts' recommendations. Consistent with prior work, we find that the percentage of buy recommendations increased substantially from 1996-2000. Notably, though, the largest brokers, who have received the most scrutiny from regulators and the media, generally have a smaller percentage of buy recommendations than our sample as a whole. Starting in mid-2000 the percentage of buys has decreased steadily. Our analysis strongly suggests that this is due, at least in part, to the implementation of NASD Rule 2711, which requires brokers' ratings distributions to be made public. We also find that a broker's stock ratings distribution can predict the profitability of its recommendations. Upgrades to buy issued by brokers with the smallest percentage of buy recommendations significantly outperformed those of brokers with the greatest percentage of buys, by an average of 50 basis points per month. Conversely, downgrades to hold or sell coming from brokers issuing the most buy recommendations significantly outperformed those of brokers issuing the fewest, by an average of 46 basis points per month.
Can Investors Profit from the Prophets? Consensus Analyst Recommendations and Stock Returns
Brad M. Barber, Reuven Lehavy, Maureen F. McNichols & Brett Trueman
In this paper we document that an investment strategy based on the consensus (average) analyst recommendations of security analysts earns positive returns. For the period 1986-1996, a portfolio of stocks most highly recommended by analysts earned an annualized geometric mean return of 18.8 percent, while a portfolio of stocks least favorably recommended earned only 5.78 percent. (In comparison, an investment in a value-weighted market index earned an annualized geometric mean return of 14.5 percent.) Alternatively stated, purchasing stocks most highly recommended yielded a return of 102 basis points per month. The magnitude of this return is surprisingly large, and is far greater than the size effect (negative 16 basis points) and book-to-market effect (17 basis points) for the same period. Even after controlling for these two effects, as well as for price momentum, we show that the strategy of purchasing stocks most highly recommended and selling short those least favorably recommended yielded a return of 75 basis points per month. These results are robust to partitions by time period and overall market direction, and are most pronounced for small and medium-sized firms. The abnormal returns also persist when we allow a lapse of up to 15 days before acting on the investment recommendations. There is no extant theory of asset pricing that explains these results.
Comparing the Stock Recommendation Performance of Investment Banks and Independent Research Firms
Brad M. Barber, Reuven Lehavy & Brett Trueman
This study compares the profitability of security recommendations issued by investment banks and independent research firms. During the 1996 through mid-2003 time period, the average daily abnormal return to independent research firm buy recommendations exceeds that of the investment banks by 3.1 basis points, or almost 8 percentage points annualized. In contrast, investment bank hold and sell recommendations outperform those of independent research firms by 1.8 basis points daily, or 4.5 percentage points annualized. Investment bank buy recommendation underperformance is concentrated in the subperiod subsequent to the NASDAQ market peak (March 10, 2000), where it averages 6.9 basis points per day, or slightly more than 17 percent annualized. More strikingly, during this period those investment bank buy recommendations outstanding subsequent to equity offerings underperform those of independent research firms by 8.7 basis points (almost 22 percent annualized). Taken as a whole, these results suggest that at least part of the underperformance of investment bank buy recommendations is due to a reluctance to downgrade stocks whose prospects dimmed during the early 2000's bear market, as claimed in the SEC's Global Research Analyst Settlement. Additional analyses find that the underperformance of investment bank buy recommendations extends not only to the ten investment banks sanctioned in the research settlement but to the non-sanctioned investment banks as well.
Prophets and Losses: Reassessing the Returns to Analysts' Stock Recommendations
Brad M. Barber, Reuven Lehavy, Maureen F. McNichols & Brett Trueman
After a string of years in which security analysts' top stock picks significantly outperformed their pans, the year 2000 was a disaster. During that year the stocks least favorably recommended by analysts earned an annualized market-adjusted return of 48.66 percent while the stocks most highly recommended fell 31.20 percent, a return difference of almost 80 percentage points. This pattern prevailed during most months of 2000, regardless of whether the market was rising or falling, and was observed for both tech and non-tech stocks. While we cannot conclude that the 2000 results are necessarily driven by an increased emphasis on investment banking by analysts, our findings should add to the debate over the usefulness of analysts' stock recommendations to investors. They should also serve to alert researchers to the possibility that excluding the year 2000 from their sample period could have a significant impact on any conclusions they draw concerning analysts' stock recommendations.
Ratings Changes, Ratings Levels, and the Predictive Value of Analysts' Recommendations
Brad M. Barber, Reuven Lehavy & Brett Trueman
This paper provides evidence that the documented abnormal returns to analysts' security recommendations stem from both the ratings levels assigned as well as the changes in those ratings. Conditional on the sign and magnitude of a ratings change, we find buy and strong buy recommendations to be associated with greater returns than are holds, sells, and strong sells. Conditional on the ratings level, upgrades earn the highest returns and downgrades the lowest. We also find that both ratings levels and changes predict future unexpected earnings as well as the associated market reaction. Our results imply that (a) it is possible to enhance investment returns by conditioning on both recommendation levels and changes, (b) the predictive power of analysts' recommendations reflects analysts' ability to generate valuable private information about future earnings, not just to shift investor demand, and (c) there exists a degree of inconsistency between analysts' ratings and the formal ratings definitions issued by securities firms.
This paper presents a novel stylized fact and analyzes its contribution to the skill bias of technical change: The share of skilled labor embedded in intermediate inputs correlates strongly with the skill share employed in final production. This finding points towards an intersectoral technology-skill complementarity (ITSC). Empirical evidence suggests that the channel through which this complementarity works is product innovation driven by skilled workers. Together with input-output linkages, the observed complementarity delivers a multiplier that reinforces skill demand along the production chain. The effect is large, accounting for more than one third of the observed skill upgrading in U.S. manufacturing over the period 1967-92. The paper presents a simple multi-sector model with intermediate linkages that integrates the observed ITSC into the standard framework of skill-biased technical change. Therein, the relative productivity of skilled workers rises with the skill intensity of intermediates. A calibration exercise confirms the quantitative importance of the ITSC.
The Three Horsemen of Growth: Plague, War and Urbanization in Early Modern Europe
Nico Voigtländer & Hans-Joachim Voth
How did Europe overtake China? We construct a simple Malthusian model with two sectors, and use it to explain how European per capita incomes and urbanization rates could surge ahead of Chinese ones. That living standards could exceed subsistence levels at all in a Malthusian setting should be surprising. Rising fertility and falling mortality ought to have reversed any gains. We show that productivity growth in Europe can only explain a small fraction of rising living standards. Population dynamics -- changes of the birth and death schedules -- were far more important drivers of the long-run Malthusian equilibrium. The Black Death raised wages substantially, creating important knock-on effects. Because of Engel's Law, demand for urban products increased, raising urban wages and attracting migrants from rural areas. European cities were unhealthy, especially compared to Far Eastern ones. Urbanization pushed up aggregate death rates. This effect was reinforced by more frequent wars (fed by city wealth) and disease spread by trade. Thus, higher wages themselves reduced population pressure. Without technological change, our model can account for the sharp rise in European urbanization as well as permanently higher per capita incomes. We complement our calibration exercise with a detailed analysis of intra-European growth in the early modern period. Using a panel of European states in the period 1300-1700, we show that war frequency can explain a good share of the divergent fortunes within Europe.
Why England? Demographic Factors, Structural Change and Physical Capital Accumulation During the Industrial Revolution
Nico Voigtländer & Hans-Joachim Voth
Why did England industrialize first? And why was Europe ahead of the rest of the world? Unified growth theory in the tradition of Galor-Weil (2000) and Galor-Moav (2002) captures the key features of the transition from stagnation to growth over time. Yet we know remarkably little about why industrialization occurred so much earlier in some parts of the world than in others. To answer this question, we present a probabilistic two-sector model where the initial escape from Malthusian constraints depends on the demographic regime, capital deepening and the use of more differentiated capital equipment. Weather-induced shocks to agricultural productivity cause changes in prices and quantities, and affect wages. In a standard model with capital externalities, these fluctuations interact with the demographic regime and affect the speed of growth. Our model is calibrated to match the main characteristics of the English economy in 1700 and the observed transition until 1850. We capture one of the key features of the British Industrial Revolution emphasized by economic historians -- slow growth of output and productivity. Fertility limitation is responsible for higher per capita incomes, and these in turn increase industrialization probabilities. The paper also explores the availability of nutrition for poorer segments of society. We examine the influence of redistributive institutions such as the Old Poor Law, and find they were not decisive in fostering industrialization. Simulations using parameter values for other countries show that Britain's early escape was only partly due to chance. France could have moved out of agriculture and into manufacturing faster than Britain, but the probability was less than 30 percent. Contrary to recent claims in the literature, 18th century China had only a minimal chance to escape from Malthusian constraints.
We find that genetic distance, a measure associated with the time elapsed since two populations' last common ancestors, has a statistically and economically significant effect on income differences across countries, even when controlling for measures of geographical distance, climatic differences, transportation costs, and measures of historical, religious and linguistic distance. We provide an economic interpretation of these findings in terms of barriers to the diffusion of development from the world technological frontier, implying that income differences should be a function of relative genetic distance from the frontier. The empirical evidence strongly supports this barriers interpretation.
Reverse pricing is a market mechanism under which a consumer's bid for a product or service leads to a sale if the bid exceeds a hidden acceptance threshold that the seller has set in advance. The specification of such a mechanism by the seller involves two key decisions: (1) the revenue model decision -- that is, how to set his margin above cost (and thus the bid-acceptance threshold) and/or how to set a fee for the right to bid -- and (2) whether to facilitate or hinder consumer learning about the bid-acceptance threshold. We analyze these interrelated decisions for a profit-maximizing firm selling to consumers with heterogeneous product valuations, derive the optimal revenue model, and characterize how the seller strategy should account for consumers learning about the bid-acceptance threshold. The optimal reverse-pricing mechanism is to charge a bidding fee upfront and then accept all bids above cost, rather than to charge a positive margin above cost (as is common practice). When consumers learn about the bid-acceptance threshold, the market becomes more efficient, bidding fees remain superior to margins in profitability, and both consumers and the seller can benefit. Specifically, we show that everyone benefits from consumers learning about the threshold when there are enough consumers who are not interested in buying the product on the outside posted-price market.
When capacity-constrained bidders have information about a good sold in a future auction, they need to take the information into account in forming today's bids. The capacity constraint makes even otherwise unrelated objects substitutes and creates an equilibrium link between future competition and current bidding strategy. This paper proves the existence and uniqueness of a symmetric pure-strategy equilibrium under mild conditions on the population distribution of valuations, characterizes general properties of the equilibrium bidding strategy, and provides a simple technique for numerically approximating the bidding strategy for arbitrary valuation distributions. The key property of the equilibrium is that almost all bidders submit positive bids in the first stage, thereby ensuring trade with probability one. Even bidders who strongly prefer the second object submit a positive bid in the first auction, because losing the first auction is informative about the remaining competitors who also lost, and losing with a low bid indicates that these competitors are quite strong. Because of the guaranteed trade, the sequential auction with information about future goods is a very efficient trading mechanism, achieving more than 98 percent of the potential gains from trade across a wide variety of settings.
Marketers often analyze multinomial choice from a set of branded products to learn about demand. Given a set of brands to study, we analyze three reasons why choices from strict subsets of the brands can contain more statistical information about demand than choices from all the brands in the study: First, making choices from smaller subsets is easier, so it is possible to use more choice-tasks when the choice data comes from a choice-based conjoint survey. Second, choices from subsets of brands better identify and more accurately estimate the covariance structure of unobserved utility shocks associated with brands. Third, subsets automatically balance the brand-shares when some of the brands are less popular than others. We demonstrate these three "benefits of subsets" using a mixture of analytical results and numerical simulations, and provide implications for the design of choice-based conjoint analyses. We find that the optimal subset-size depends on the model, the number of brands in the study, and the designer's resource constraint. Besides showing that subsets can be beneficial, we also provide a simulation methodology that helps designers pick the best subset-size for their setting.
Tests of the Sealed-Bid Abstraction in Online Auctions
Robert Zeithammer & Christopher Adams
This paper presents five empirical tests of the popular modeling abstraction that bidders in ascending online auctions bid "as if" they were in a sealed-bid auction. The tests rely on observations of the magnitudes and timings of the top two proxy bids, with the different tests stemming from different regularity assumptions about the underlying joint distribution of signals and timings. We apply the tests to data from three eBay markets -- MP3 players, DVDs, and used cars -- and we reject the sealed-bid abstraction in all three datasets. This consistent rejection casts doubt on several existing theories of online-auction behavior. Moreover, we reject the sealed-bid abstraction even in carefully selected subsets of the data that one might consider more likely to conform to sealed bidding. Given these findings, demand estimation using eBay data is more difficult than previously thought. In particular, our results suggest that empirical strategies based on multiple order statistics of the bidding distribution will not work.