Robert Gramacy studies Bayesian modeling methodology, statistical computing, Monte Carlo inference, nonparametric regression, sequential design, and optimization under uncertainty. His application areas of interest include spatial data, sequential computer experiments, ecology, epidemiology, finance, and public policy.
Gramacy has taught at both the undergraduate and graduate levels. Prior to joining Booth in 2010, he was a lecturer in the Statistical Laboratory at the University of Cambridge and a fellow of Jesus College. He was also a visitor in the Department of Statistics and Applied Probability at UC Santa Barbara.
An important aspect of Gramacy’s research is implementation. Trained as an engineer, Gramacy believes that releasing high quality open source software for new statistical methodologies is just as important as putting them in print. His software packages for R include the widely used tgp package for nonparametric regression. This emphasis on software development also defines Gramacy’s teaching style. His lectures regularly include live demonstrations, and he asks students to demonstrate understanding by producing their own code.
Gramacy earned four degrees from the University of California, Santa Cruz. In 2001 he was awarded a B.A. (Honors) in Mathematics and a B.Sc. (Highest Honors) in Computer Science. In 2003 he earned an M.Sc. in Computer Science, and in 2005 a Ph.D. in Applied Mathematics & Statistics. Gramacy was honored with the Savage Award in 2006 for his Ph.D. thesis, “Bayesian treed Gaussian process models.”
His hobbies include cycling, traveling, and ice hockey.
With M.A. Taddy and N.G. Polson, “Dynamic trees for learning and design,” Journal of the American Statistical Association (2011).
With E. Pantaleo, “Shrinkage regression for multivariate inference with missing data, and an application to portfolio balancing,” Bayesian Analysis (2010).
With R.J. Samworth and R. King, “Importance tempering,” Statistics and Computing (2010).
With D. Merl, L.R. Johnson, and M.S. Mangel, “A statistical framework for the adaptive management of epidemiological interventions,” PLoS ONE (2009).
With H.K.H. Lee, “Bayesian treed Gaussian process models with an application to computer modeling,” Journal of the American Statistical Association (2008).
REVISION: Timing Foreign Exchange Markets
We take a novel approach to short-horizon exchange rate forecasting by using priced, predictable, and traded foreign exchange market factors as fundamentals. Conditional linear and Bayesian treed Gaussian process (BTGP) models with perfect foresight of these carry and dollar factors substantially outperform the random walk with respect to both squared error and profitability-related loss functions for most currencies. These results are driven primarily by information in the dollar factor. However, for factor timing strategies that exploit factor predictability directly, Sharpe ratios for the carry factor timing strategy are over three times those earned by timing the dollar factor. Simple trading strategies that combine ex ante factor forecasts with rolling BTGP factor models for individual currencies outperform the random walk on directional accuracy, market timing, profitability, and Sharpe ratio measures for the median currency in our worldwide sample.
REVISION: Regression-Based Earnings Forecasts
We provide a comprehensive examination of regression-based earnings forecasts. Specifically, we evaluate forecasts of scaled and unscaled net income along a number of relevant dimensions including variable selection, estimation methods, estimation windows, and Winsorization. Overall, we find that forecasts generated using ordinary least squares and lagged net income are broadly more accurate for both earnings constructs. Moreover, at a one year horizon, the random walk model performs as well as modern sophisticated methods that use larger predictor sets. This finding echoes an old result that, given recent applications of forecasts in the literature, may have been forgotten.
NEW: Market-Based Credit Ratings
We present a methodology for rating the creditworthiness of public companies in the U.S. from the prices of traded assets. Our approach uses asset pricing data to impute a term structure of risk neutral survival functions or default probabilities. Firms are then clustered into ratings categories based on their survival functions using a functional clustering algorithm. This allows all public firms whose assets are traded to be directly rated by market participants. For firms whose assets are not