REVISION: Deep Learning in Characteristics-Sorted Factor Models
To study the characteristics-sorted factor model in asset pricing, we develop a bottom-up approach with state-of-the-art deep learning optimization. With an economic objective to minimize pricing errors, we train a non-reduced-form neural network that uses firm characteristics [inputs] and generates factors [intermediate features] to fit security returns [outputs]. Sorting securities on firm characteristics provides a nonlinear activation that creates long-short portfolio weights, as a hidden layer, from lagged characteristics to realized returns. Our model offers an alternative perspective on dimension reduction: it reduces firm characteristics [inputs] rather than factors [intermediate features], and allows for both nonlinearity and interactions among inputs. Our empirical findings are twofold. First, we find robust statistical and economic evidence in evaluating various portfolios and individual stock returns. Second, we show highly significant results in factor investing, improvement in dissecting ...
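As a rough illustration of the sorting-as-activation idea (my own toy sketch, not the paper's actual architecture; all names and parameter choices are hypothetical), a softmax-style layer can map one cross-section of characteristic scores into dollar-neutral long-short portfolio weights:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def long_short_weights(char_scores):
    """Map one cross-section of characteristic scores to long-short
    weights: overweight high-score names, short low-score names."""
    w_long = softmax(char_scores)    # tilts toward high scores
    w_short = softmax(-char_scores)  # tilts toward low scores
    return w_long - w_short         # weights sum to zero (dollar neutral)

rng = np.random.default_rng(0)
scores = rng.normal(size=100)       # hypothetical scores for 100 stocks
w = long_short_weights(scores)
print(abs(w.sum()))                 # dollar-neutral: weights sum to ~0
```

In a trained network the scores would themselves be outputs of hidden layers applied to lagged characteristics, so the sorting step is differentiable and can be fit end-to-end against pricing errors.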
REVISION: Smart Money, Dumb Money, and Learning Type from Price
We present a simple model of smart money and dumb money. Dumb money tries to learn from market prices whether or not it is dumb. Dumb money's ability to learn depends on its openness to the idea that it may be the dumb money and on its ability to assess the total amount of dumb money in the market. Neither requirement may be met easily in the real world.
REVISION: Deep Learning for Finance: Deep Portfolios
We explore the use of deep learning hierarchical models for problems in financial prediction and classification. Financial prediction problems – such as those presented in designing and pricing securities, constructing portfolios, and risk management – often involve large data sets with complex data interactions that currently are difficult or impossible to specify in a full economic model. Applying deep learning methods to these problems can produce more useful results than standard methods in finance. In particular, deep learning can detect and exploit interactions in the data that are, at least currently, invisible to any existing financial economic theory.
REVISION: Why Indexing Works
We develop a simple stock selection model to explain why active equity managers tend to underperform a benchmark index. We motivate our model with the empirical observation that the best performing stocks in a broad market index often perform much better than the other stocks in the index. Randomly selecting a subset of securities from the index may dramatically increase the chance of underperforming the index. The relative likelihood of underperformance by investors choosing active management is likely much more important than the loss to those same investors from the higher fees for active management relative to passive index investing. Thus, active management may be even more challenging than previously believed, and the stakes for finding the best active managers may be larger than previously assumed.
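The subset-selection argument can be made concrete with a small Monte Carlo sketch (my own toy simulation, not the authors' model): when the cross-section of returns is right-skewed, a few big winners pull the index mean above its median, so the mean of a small random subset of stocks falls below the index mean more than half the time.

```python
import numpy as np

rng = np.random.default_rng(42)
n_stocks, n_sims, k = 500, 10_000, 10

# Right-skewed cross-section: a few stocks earn outsized returns
returns = rng.lognormal(mean=0.0, sigma=0.6, size=n_stocks) - 1.0
index_ret = returns.mean()          # equal-weighted index return

# Return of many randomly selected k-stock portfolios
sub_rets = np.array([
    returns[rng.choice(n_stocks, k, replace=False)].mean()
    for _ in range(n_sims)
])
frac_under = (sub_rets < index_ret).mean()
print(f"share of {k}-stock portfolios underperforming the index: {frac_under:.2f}")
```

The point of the sketch is that underperformance is more likely than a coin flip even though every subset has the same expected return as the index.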
REVISION: Sparse Regularization in Marketing and Economics
Sparse alpha-norm regularization has many data-rich applications in Marketing and Economics. Alpha-norm, in contrast to lasso and ridge regularization, jumps to a sparse solution. This feature is attractive for ultra-high-dimensional problems that occur in demand estimation and forecasting. The alpha-norm objective is nonconvex and requires coordinate descent and proximal operators to find the sparse solution. We study a typical marketing demand forecasting problem, grocery store sales for salty snacks, that has many dummy variables as controls. The key predictors of demand include price, equivalized volume, promotion, flavor, scent, and brand effects. Compared with many commonly used machine learning methods, alpha-norm regularization achieves its goal of providing accurate out-of-sample estimates for the promotion lift effects. Finally, we conclude with directions for future research.
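As a sketch of the nonconvex mechanics described above (using an l0-style penalty as the limiting case of the alpha-norm; this is illustrative, not the paper's estimator), coordinate descent with a hard-thresholding proximal step jumps coefficients exactly to zero rather than shrinking them smoothly:

```python
import numpy as np

def l0_coordinate_descent(X, y, lam, n_iter=50):
    """Cyclic coordinate descent for 0.5*||y - X b||^2 + lam * ||b||_0.
    The coordinate update is the proximal operator of the l0 penalty:
    hard thresholding, which jumps discontinuously to an exact zero."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]  # partial residual
            z = X[:, j] @ r / col_ss[j]           # univariate LS coefficient
            # keep the coefficient only if it cuts the loss by more than lam
            beta[j] = z if 0.5 * col_ss[j] * z ** 2 > lam else 0.0
    return beta

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[0], beta_true[9] = 3.0, -2.0
y = X @ beta_true + rng.normal(scale=0.5, size=n)
beta_hat = l0_coordinate_descent(X, y, lam=5.0)
print(np.nonzero(beta_hat)[0])   # recovers the sparse support
```

Unlike the soft-thresholding update of the lasso, this step never produces small nonzero coefficients: a predictor is either kept at its least-squares value or dropped entirely.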
REVISION: Sequential Learning, Predictability, and Optimal Portfolio Returns
This paper finds statistically and economically significant out-of-sample portfolio benefits for an investor who uses models of return predictability when forming optimal portfolios. The key is that investors must incorporate an ensemble of important features into their optimal portfolio problem, including time-varying volatility, and time-varying expected returns driven by improved predictors such as measures of yield that include share repurchase and issuance in addition to cash payouts ...
REVISION: Particle Learning and Smoothing
Particle learning (PL) provides state filtering, sequential parameter learning and smoothing in a general class of state space models. Our approach extends existing particle methods by incorporating the estimation of static parameters via a fully-adapted filter that utilizes conditional sufficient statistics for parameters and/or states as particles. State smoothing in the presence of parameter uncertainty is also solved as a by-product of PL. In a number of examples, we show that PL ...
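A minimal sketch of the PL recipe — resample with fully adapted predictive weights, propagate states from their conditional posterior, then update conditional sufficient statistics for a static parameter — in a toy local-level model with unknown observation variance. The model and all parameter values are my own illustrative choices, not an example from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a local-level model: x_t = x_{t-1} + w_t,  y_t = x_t + v_t
T, tau2, sigma2_true = 300, 0.1, 0.5
x = np.cumsum(rng.normal(0, np.sqrt(tau2), T))
y = x + rng.normal(0, np.sqrt(sigma2_true), T)

# Particle learning: sigma2 has conjugate IG(a, b) sufficient statistics
N = 2000
xs = np.zeros(N)                          # state particles
a, b = np.full(N, 2.0), np.full(N, 1.0)   # IG stats carried per particle
sig2 = 1.0 / rng.gamma(a, 1.0 / b)        # sigma2 draw per particle

for t in range(T):
    # 1. resample with fully adapted weights p(y_t | x_{t-1}, sigma2)
    w = np.exp(-0.5 * (y[t] - xs) ** 2 / (tau2 + sig2)) / np.sqrt(tau2 + sig2)
    idx = rng.choice(N, N, p=w / w.sum())
    xs, a, b, sig2 = xs[idx], a[idx], b[idx], sig2[idx]
    # 2. propagate x_t from its conditional posterior given y_t
    var = 1.0 / (1.0 / tau2 + 1.0 / sig2)
    mean = var * (xs / tau2 + y[t] / sig2)
    xs = mean + np.sqrt(var) * rng.normal(size=N)
    # 3. update sufficient statistics and replenish the sigma2 draws
    a += 0.5
    b += 0.5 * (y[t] - xs) ** 2
    sig2 = 1.0 / rng.gamma(a, 1.0 / b)

print(f"posterior mean of sigma2: {sig2.mean():.2f} (true {sigma2_true})")
```

The key design point is that particles carry low-dimensional sufficient statistics rather than full parameter histories, so parameter draws can be refreshed at every step.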
REVISION: Tracking Flu Epidemics Using Google Flu Trends and Particle Learning
In the second half of 2009 the world experienced intense influenza activity. The new 2009 H1N1 virus, commonly known as swine flu, found its way in only five months from Mexico to a majority of the countries on the planet. Fears of a large second-wave pandemic and its potential impact on health and economic outcomes underlined the importance of accurate and fast disease-surveillance mechanisms capable of suggesting timely public health interventions.
In this paper we ...
REVISION: Corporate Credit Spreads under Parameter Uncertainty
This paper assesses the impact of parameter uncertainty on corporate bond credit spreads. Using data for 5,300 firm-years between 1994 and 2008, we find that investors’ uncertainty about model parameters explains up to 40% of the credit spread that is typically attributed to liquidity, taxes and jump risk, without significantly raising bankruptcy probabilities. Spreads on firms with large intangible assets and volatile earnings growth are the most affected by parameter uncertainty. Uncertainty ...
New: Sequential Inference for Nonlinear Models using Slice Variables
This paper develops particle-based methods for sequential inference in nonlinear models. Sequential inference is notoriously difficult in nonlinear state space models. To overcome this, we use auxiliary state variables to slice out nonlinearities where appropriate. This induces fixed-dimensional conditional sufficient statistics and, given these, we adapt existing particle learning algorithms to update posterior beliefs about states and parameters. We provide three illustrations. First, a ...
New: Quantile Filtering and Learning
Quantile and least-absolute deviations (LAD) methods are popular robust statistical methods but have not generally been applied to state filtering and sequential parameter learning. This paper introduces robust state space models whose error structure coincides with the quantile estimation criterion, with LAD as a special case. We develop an efficient particle-based method for sequential state and parameter inference. Existing approaches focus solely on the problem of state filtering, conditional on ...
New: Optimal Filtering of Jump Diffusions: Extracting Latent States from Asset Prices
This paper provides an optimal filtering methodology in discretely observed continuous-time jump-diffusion models. Although the filtering problem has received little attention, it is useful for estimating latent states, forecasting volatility and returns, computing model diagnostics such as likelihood ratios, and parameter estimation. Our approach combines time-discretization schemes with Monte Carlo methods. It is quite general, applying in nonlinear and multivariate jump-diffusion models and ...
New: Particle Filtering and Parameter Learning
In this paper, we provide an exact particle filtering and parameter learning algorithm. Our approach exactly samples from a particle approximation to the joint posterior distribution of both parameters and latent states, thus avoiding the use of, and the degeneracies inherent in, sequential importance sampling. Exact particle filtering algorithms for pure state filtering are also provided. We illustrate the efficiency of our approach by sequentially learning parameters and filtering states in ...
MCMC Methods for Continuous-Time Financial Econometrics
This chapter develops Markov Chain Monte Carlo (MCMC) methods for Bayesian inference in continuous-time asset pricing models. The Bayesian solution to the inference problem is the distribution of parameters and latent variables conditional on observed data, and MCMC methods provide a tool for exploring these high-dimensional, complex distributions. We first provide a description of the foundations and mechanics of MCMC algorithms. This includes a discussion of the Clifford-Hammersley theorem, ...
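For readers new to MCMC, a minimal Gibbs sampler (a generic textbook example of my own, not drawn from the chapter) shows the basic mechanic the chapter builds on: exploring a joint posterior by alternating draws from full conditional distributions.

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(2.0, 1.5, size=200)   # synthetic data
n = len(y)

# Gibbs sampler for (mu, sigma2) under a flat prior on mu and
# p(sigma2) proportional to 1/sigma2: alternate full conditional draws
mu, sig2 = 0.0, 1.0
draws = []
for it in range(5000):
    # mu | sigma2, y  ~  Normal(ybar, sigma2/n)
    mu = rng.normal(y.mean(), np.sqrt(sig2 / n))
    # sigma2 | mu, y  ~  Inverse-Gamma(n/2, sum((y-mu)^2)/2)
    sig2 = 1.0 / rng.gamma(n / 2, 2.0 / ((y - mu) ** 2).sum())
    if it >= 1000:                   # discard burn-in
        draws.append((mu, sig2))
draws = np.array(draws)
print(draws.mean(axis=0))            # posterior means of (mu, sigma2)
```

In the continuous-time settings of the chapter, the same alternation runs over parameters and high-dimensional latent paths (volatilities, jump times, jump sizes) rather than two scalars.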
The Impact of Jumps in Volatility and Returns
This paper examines continuous-time stochastic volatility models incorporating jumps in returns and volatility. We develop a likelihood-based estimation strategy and provide estimates of parameters, spot volatility, jump times, and jump sizes using S&P 500 and Nasdaq 100 index returns. Estimates of jump times, jump sizes, and volatility are particularly useful for identifying the effects of these factors during periods of market stress, such as those in 1987, 1997, and 1998. Using formal and ...
Nonlinear Filtering of Stochastic Differential Equations with Jumps
In this paper, we develop an approach for filtering state variables in the setting of continuous-time jump-diffusion models. Our method computes the filtering distribution of latent state variables conditional only on discretely observed data in a manner consistent with the underlying continuous-time process. The algorithm is a combination of particle filtering methods and the "filling-in-the-missing-data" estimators which have recently become popular. We provide simulation evidence to ...
Sequential Optimal Portfolio Performance: Market and Volatility Timing
This paper studies the economic benefits of return predictability by analyzing the impact of market and volatility timing on the performance of optimal portfolio rules. Using a model with time-varying expected returns and volatility, we form optimal portfolios sequentially and generate out-of-sample portfolio returns. We are careful to account for estimation risk and parameter learning. Using S&P 500 index data from 1980-2000, we find that a strategy based solely on volatility timing uniformly ...
Bayesian Analysis of a Stochastic Volatility Model with Leverage Effect and Fat Tails
The basic univariate stochastic volatility model specifies that conditional volatility follows a log-normal autoregressive model with innovations assumed to be independent of the innovations in the conditional mean equation. Since the introduction of practical methods for inference in the basic volatility model (JPR (1994)), it has been observed that the basic model is too restrictive for many financial series. We extend the basic SVOL to allow for a so-called "leverage effect" via ...
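The extended SVOL dynamics can be sketched in a short simulation (parameter values here are hypothetical, chosen only for illustration): return and log-variance innovations are correlated, so negative returns tend to precede rising volatility, which is the leverage effect.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 2000
mu, phi, sigma_v, rho = -1.0, 0.97, 0.15, -0.6  # hypothetical parameters

h = np.zeros(T)          # log-variance, AR(1) around mu
y = np.zeros(T)          # returns
h[0] = mu
for t in range(T - 1):
    # correlated shocks: rho < 0 couples negative returns to rising volatility
    eps, eta = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]])
    y[t] = np.exp(h[t] / 2) * eps
    h[t + 1] = mu + phi * (h[t] - mu) + sigma_v * eta
y[-1] = np.exp(h[-1] / 2) * rng.normal()

# sample correlation between today's return and tomorrow's change in log-variance
leverage_corr = np.corrcoef(y[:-1], h[1:] - h[:-1])[0, 1]
print(f"corr(return_t, change in log-variance): {leverage_corr:.2f}")
```

With rho = -0.6 the sample correlation is clearly negative, reproducing the asymmetry that the basic SVOL (rho = 0) cannot capture.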
The Impact of Jumps in Volatility and Returns
This paper examines a class of continuous-time models that incorporate jumps in returns and volatility, in addition to diffusive stochastic volatility. We develop a likelihood-based estimation strategy and provide estimates of model parameters, spot volatility, jump times and jump sizes using both S&P 500 and Nasdaq 100 index returns. Estimates of jump times, jump sizes and volatility are particularly useful for disentangling the dynamic effects of these factors during periods of market ...