Sanjog Misra
Charles H. Kellstadt Distinguished Service Professor of Marketing and Applied AI
Sanjog Misra is the Charles H. Kellstadt Distinguished Service Professor of Marketing and Applied AI at the University of Chicago Booth School of Business. His research focuses on the use of machine learning, deep learning, and structural econometric methods to study consumer and firm decisions. In particular, his research involves building data-driven models aimed at understanding how consumers make choices and investigating firm decisions pertaining to pricing, targeting, and salesforce management. More broadly, Professor Misra is interested in the development of scalable algorithms, calibrated on large-scale data, and the implementation of such algorithms in real-world decision environments.
Professor Misra's research has been published in Econometrica, the Journal of Marketing Research, the Journal of Political Economy, Marketing Science, Quantitative Marketing and Economics, and the Journal of Law and Economics, among others. He has served as co-editor of Quantitative Marketing and Economics and as area editor at Management Science, the Journal of Business and Economic Statistics, Marketing Science, Quantitative Marketing and Economics, the International Journal of Research in Marketing, and the Journal of Marketing Research.
Misra actively partners with firms in his research and has advised a number of companies, including Transunion, Oath, Verizon, Eli Lilly, Adventis, Mercer Consulting, Sprint, MGM, Bausch & Lomb, Xerox Corporation, and Ziprecruiter, helping them design efficient, analytics-based management systems that result in better decisions. He currently serves as an advisor to startups in the marketing technology, measurement, and AI space. At Booth, Professor Misra teaches courses on Algorithmic Marketing, bringing his practical and research expertise in the domain into the classroom. He is hopeful that these classes will prepare students for the next evolution of marketing, which he believes is already underway.
Prior to joining Booth, Misra was Professor of Marketing at UCLA Anderson School of Management and Professor at the Simon School of Business at the University of Rochester. In addition he has been visiting faculty at the Johnson School of Management at Cornell University and the Graduate School of Business at Stanford University.
The Dynamics of Retail Oligopoly
Date Posted:Tue, 16 Apr 2024 14:56:32 -0500
This paper empirically examines competition between supermarkets, treating it as a dynamic discrete game between heterogeneous firms. We focus on the overall impact of Wal-Mart's entry on incumbent supermarket firms, quantifying the effects on prices, producer surplus, consumer welfare, and the overall competitive structure. Employing a thirteen-year panel dataset of store-level observations that covers every supermarket firm operating in the United States across a large sample of geographic markets, spanning the rapid proliferation of Wal-Mart Supercenters, we propose and estimate a dynamic structural model of chain-level competition. In this model, incumbent firms decide each period whether to add stores, close stores, or exit the market entirely, and potential entrants choose whether or not to enter. Product market competition is captured via a discrete-choice demand system, incorporating detailed information on prices and characteristics of chains, as well as unobserved heterogeneity in chain-level quality. Our estimation approach combines two-step estimation techniques with a novel random-forest-based value function approximation that can accommodate the high-dimensional structure of the state space.
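The value-function step can be illustrated with a stylized fitted-value-iteration sketch in which a random forest stands in for a grid over the state space, which is what lets the approximation scale to high-dimensional states. Everything below (the state dimensions, the profit function, the transition rule) is a hypothetical stand-in, not the paper's estimated model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical high-dimensional market state (own store count, rival
# counts, demographics, ...); all quantities here are illustrative.
n_markets, state_dim, beta = 400, 12, 0.95
states = rng.normal(size=(n_markets, state_dim))

def period_profit(s):
    # Stand-in per-period payoff, not the paper's profit function.
    return s[:, 0] - 0.05 * (s[:, 1:] ** 2).sum(axis=1)

# Fixed one-step-ahead states under an illustrative transition rule.
next_states = 0.9 * states + rng.normal(scale=0.1, size=states.shape)

# Fitted value iteration: fit a forest to the current value guesses,
# then update V(s) = profit(s) + beta * V_hat(s').
V = period_profit(states)
for _ in range(20):
    forest = RandomForestRegressor(n_estimators=30, random_state=0)
    forest.fit(states, V)
    V_new = period_profit(states) + beta * forest.predict(next_states)
    if np.max(np.abs(V_new - V)) < 1e-4:
        break
    V = V_new
```

The forest is refit each iteration, so no parametric form or discretization of the state space is ever imposed on the value function.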
Causal Alignment: Augmenting Language Models with A/B Tests
Date Posted:Mon, 15 Apr 2024 20:06:12 -0500
We develop a general framework for improving human decisions in unstructured action spaces using language models and A/B tests. Given results from past A/B tests, we fine-tune a language model to convert worse-performing decisions into better-performing decisions. When deployed, the language model generates improvements to decisions proposed by a human. This design makes it unlikely for AI assistance to harm performance, which mitigates risks and eases adoption, and is applicable to generic business or organizational objectives. We confirm in a field experiment that our framework performs as intended, with AI-assisted decisions outperforming a human expert baseline. In 36 email marketing campaigns covering 283 million total impressions, subject lines created with assistance from our tuned model attain click-through rates 33% higher than those of an unassisted human. The precise measurement of treatment effects in the email marketing setting ensures that observed performance improvements are attributable to improvements in decision quality, thus validating the effectiveness of our framework. Additionally, assistance from a general-purpose language model with 30x the number of parameters fails to improve outcomes, i.e., fine-tuning is necessary, and small language models are sufficient. Overall, our results illustrate how significant gains can be achieved by integrating AI and human expertise in decision-making processes.
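The training data implied by this framework can be sketched in a few lines: each past A/B test yields a (worse decision, better decision) pair, and the model is fine-tuned to map the losing arm to the winning arm. The field names and subject lines below are hypothetical:

```python
# Hypothetical A/B test logs: two subject lines per test, each with an
# observed click-through rate (illustrative values only).
ab_tests = [
    {"arm_a": "Save today", "ctr_a": 0.021,
     "arm_b": "Your offer expires tonight", "ctr_b": 0.034},
    {"arm_a": "Newsletter #12", "ctr_a": 0.015,
     "arm_b": "3 tips you asked for", "ctr_b": 0.028},
]

def make_pairs(tests):
    """Turn each test into a (worse -> better) fine-tuning example:
    the prompt carries the losing arm, the target is the winning arm."""
    pairs = []
    for t in tests:
        worse, better = ((t["arm_a"], t["arm_b"])
                         if t["ctr_a"] < t["ctr_b"]
                         else (t["arm_b"], t["arm_a"]))
        pairs.append({"prompt": f"Improve this subject line: {worse}",
                      "completion": better})
    return pairs

pairs = make_pairs(ab_tests)
```

At deployment, a human-proposed decision is fed in as the prompt and the tuned model proposes an improved version, so the human baseline is the floor rather than something the model can fall below.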
Simulated Maximum Likelihood Estimation of the Sequential Search Model
Date Posted:Tue, 28 Mar 2023 15:46:36 -0500
The authors propose a new approach to simulating the likelihood of the sequential search model. By allowing search costs to be heterogeneous across consumers and products, the authors directly compute the joint probability of the search and purchase decisions when consumers are searching for the idiosyncratic preference shocks in their utility functions. Under the assumptions of Weitzman's sequential search algorithm, the proposed procedure recursively makes random draws for each quantity that requires numerical integration in order to compute the joint probabilities of consumers' search and purchase decisions. In an extensive simulation study, the proposed method is compared with existing likelihood simulators that have recently been used to estimate the sequential search model. In addition, the proposed method recovers consumers' relative preferences even if the utility function and/or the search cost distribution is mis-specified. The proposed method is then applied to online search data from Expedia for field-data validation. The more precise estimation of the model parameters and the improved prediction accuracy of the proposed approach stem from attributing researcher uncertainty about the search order to the consumer-product-level distribution of search costs and the randomness in the choice decision to the distribution of match values across consumers and products. From a substantive perspective, the authors find that search costs and "position" effects affect products ...
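For intuition, the Weitzman search rule that underlies the model can be sketched directly: each unsearched option carries a reservation value z solving cost = E[(U - z)+], options are searched in descending order of z, and search stops once the best realized utility exceeds the next reservation value. This is a stand-alone illustration of the search model, not the authors' likelihood simulator:

```python
import math
import random

def _pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def _cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def reservation_value(mean, sd, cost):
    """Solve cost = E[(U - z)^+] with U ~ N(mean, sd) for z by bisection;
    z is the Weitzman reservation utility of an unsearched option."""
    lo, hi = mean - 10 * sd, mean + 10 * sd
    for _ in range(100):
        z = 0.5 * (lo + hi)
        t = (z - mean) / sd
        # E[(U - z)^+] for a normal U, in closed form.
        expected_gain = sd * _pdf(t) + (mean - z) * (1.0 - _cdf(t))
        lo, hi = (z, hi) if expected_gain > cost else (lo, z)
    return 0.5 * (lo + hi)

def simulate_search(means, sds, costs, rng):
    """One simulated consumer: search in descending reservation-value
    order; stop when the best utility so far beats the next z."""
    z = [reservation_value(m, s, c) for m, s, c in zip(means, sds, costs)]
    order = sorted(range(len(means)), key=lambda j: -z[j])
    utils, best = {}, -math.inf
    for j in order:
        if best >= z[j]:                        # optimal stopping rule
            break
        utils[j] = rng.gauss(means[j], sds[j])  # search reveals match value
        best = max(best, utils[j])
    choice = max(utils, key=utils.get)          # buy best searched option
    return choice, list(utils)
```

Making each (mean, sd, cost) consumer-product specific, as the paper does, is what lets every search order and purchase occur with positive probability, so the joint likelihood can be simulated by recursive draws rather than frequency counts.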
REVISION: The Identity Fragmentation Bias
Date Posted:Mon, 23 May 2022 03:20:54 -0500
Consumers interact with firms across multiple devices, browsers, and machines; these interactions are often recorded with different identifiers for the same consumer. The failure to correctly match different identities leads to a fragmented view of exposures and behaviors. This paper studies the identity fragmentation bias, referring to the estimation bias that results from using fragmented data. Using a formal framework, we decompose the contributing factors of the estimation bias caused by data fragmentation and discuss the direction of bias. Contrary to conventional wisdom, this bias cannot be signed or bounded under standard assumptions. Instead, upward biases and sign reversals can occur even in experimental settings. We then compare several corrective measures, and discuss their respective advantages and caveats.
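The mechanism can be made concrete with a stylized simulation (illustrative parameters, not the paper's framework): each consumer's exposures are split across two identities while the outcome is recorded under only one of them, and a naive identity-level regression then recovers a biased exposure effect. In this particular configuration the naive slope is attenuated; other configurations can move the bias in either direction:

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_beta = 20_000, 1.0

# Person-level truth: exposure x, outcome y = true_beta * x + noise.
x = rng.normal(1.0, 1.0, size=n)
y = true_beta * x + rng.normal(0.0, 0.5, size=n)

# Fragmentation: each person's exposure splits randomly across two
# identities; the outcome is recorded under the first identity only.
share = rng.uniform(0.0, 1.0, size=n)
x_id = np.concatenate([share * x, (1 - share) * x])
y_id = np.concatenate([y, np.zeros(n)])

# Naive identity-level OLS slope, treating each identity as a person.
naive_beta = np.cov(x_id, y_id)[0, 1] / np.var(x_id, ddof=1)
```

With these parameters the naive slope lands well below the true effect of 1.0, showing how unmatched identities distort the estimated exposure effect even though the person-level model is correctly specified.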
REVISION: Personalized Pricing and Consumer Welfare
Date Posted:Thu, 24 Jun 2021 06:03:45 -0500
We study the welfare implications of personalized pricing, an extreme form of third-degree price discrimination implemented with machine learning for a large, digital firm. Using data from a unique randomized controlled pricing field experiment we train a demand model and conduct inference about the effects of personalized pricing on firm and consumer surplus. In a second experiment, we validate our predictions in the field. The initial experiment reveals unexercised market power that allows the firm to raise its price optimally, generating a 55% increase in profits. Personalized pricing improves the firm's expected posterior profits by an additional 19%, relative to the optimized uniform price, and by 86%, relative to the firm's unoptimized status quo price. Turning to welfare effects on the demand side, total consumer surplus declines 23% under personalized pricing relative to uniform pricing, and 47% relative to the firm's unoptimized status quo price. However, over 60% of ...
REVISION: The Identity Fragmentation Bias
Date Posted:Tue, 02 Feb 2021 05:16:26 -0600
Consumers interact with firms across multiple devices, browsers, and machines; these interactions are often recorded with different identifiers for the same consumer. The failure to correctly match different identities leads to a fragmented view of exposures and behaviors. This paper studies the identity fragmentation bias, referring to the estimation bias that results from using fragmented data. Using a formal framework, we decompose the contributing factors of the estimation bias caused by data fragmentation and discuss the direction of bias. Contrary to conventional wisdom, this bias cannot be signed or bounded under standard assumptions. Instead, upward biases and sign reversals can occur even in experimental settings. We then compare several corrective measures, and discuss their respective advantages and caveats.
REVISION: Valuing Brand Collaboration: Evidence From a Natural Experiment
Date Posted:Mon, 17 Aug 2020 03:53:07 -0500
We study complementarities between brands in the context of collaborations across museums. Over the course of our sample, one major museum with a highly recognized brand closed temporarily and sequentially collaborated with two established local museums. With individual panel data on museum memberships around these events, we measure how collaborations affect demand using an empirical framework of complementarities that is newly applied to the branding context. We observe two counteracting demand patterns. First, customers with no history of buying membership from either museum enter the market, suggesting brand complementarities. Second, a sub-group of customers who previously purchased from either or both of the museums display decreased demand, consistent with brand dilution. Any structural approach that models the demand for collaboration with existing preferences for separate brands fails to create accurate demand predictions. The magnitude of the offsetting forces varies ...
REVISION: Personalized Pricing and Customer Welfare
Date Posted:Fri, 21 Feb 2020 11:57:13 -0600
We study the welfare implications of personalized pricing, an extreme form of third-degree price discrimination implemented with machine learning for a large, digital firm. We conduct a randomized controlled pricing field experiment to train a demand model and to conduct inferences about the effects of personalized pricing on firm and customer surplus. In a second experiment, we validate our predictions out of sample. Personalized pricing improves the firm's expected posterior profits by 19%, relative to optimized uniform pricing, and by 86%, relative to the firm's status quo pricing. On the demand side, customer surplus declines slightly under personalized pricing relative to a uniform pricing structure. However, over 60% of customers benefit from personalized prices that are lower than the optimal uniform price. Based on simulations with our demand estimates, we find several cases where customer surplus increases when the firm is allowed to condition on more customer ...
REVISION: Can Open Innovation Survive? Imitation and Return on Originality in Crowdsourcing Creative Work
Date Posted:Tue, 14 Jan 2020 10:04:52 -0600
Open innovation platforms that enable organizations to crowdsource ideation to parties external to the firm are proliferating. In many cases, the platforms use open contests that allow the free exchange of ideas with the goal of improving the ideation process. In open contests, participants (“solvers”) observe the ideas of others as well as the feedback received from the contest sponsor (“seeker”). The open nature of such contests generates incentives for later solvers to imitate successful early designs at the cost of the original solvers. This creates the possibility of the platform unraveling when original solvers strategically withdraw from the platform, expecting their ideas will be copied without recompense. To investigate agent behavior in such a setting, we analyze publicly accessible micro-data on more than 6,000 design contests, submissions and participants from crowdsourced open ideation platforms and augment this analysis with field and online experiments. ...
The Identity Fragmentation Bias
Date Posted:Fri, 10 Jan 2020 16:16:19 -0600
Consumers interact with firms across multiple devices, browsers, and machines; these interactions are often recorded with different identifiers for the same consumer. The failure to correctly match different identities leads to a fragmented view of exposures and behaviors. This paper studies the identity fragmentation bias, referring to the estimation bias that results from using fragmented data. Using a formal framework, we decompose the contributing factors of the estimation bias caused by data fragmentation and discuss the direction of bias. Contrary to conventional wisdom, this bias cannot be signed or bounded under standard assumptions. Instead, upward biases and sign reversals can occur even in experimental settings. We then compare several corrective measures, and discuss their respective advantages and caveats.
REVISION: The Identity Fragmentation Bias
Date Posted:Fri, 10 Jan 2020 06:16:42 -0600
Consumers interact with firms across multiple devices, browsers, and machines; these interactions are often recorded with different identifiers for the same individual. The failure to correctly match different identities leads to a fragmented view of exposures and behaviors. This paper studies the identity fragmentation bias, referring to the estimation bias that results from using fragmented data. Using a formal framework, we decompose the contributing factors of the estimation bias caused by data fragmentation and discuss the direction of bias. Contrary to conventional wisdom, this bias cannot be signed or bounded under standard assumptions. Instead, upward biases and sign reversals can occur even in experimental settings. We then propose and compare several corrective measures, and demonstrate their performance using an empirical application.
Selling and Sales Management
Date Posted:Wed, 19 Jun 2019 18:47:02 -0500
About 10% of the US labor force is employed in selling-related occupations, and expenditures on selling activities total close to 5% of US GDP. Without question, selling occupies a prominent role in our economy. This chapter offers a discussion of the construct of selling, its role in economic models, and the various aspects of firm decisions that relate to it.
New: Selling and Sales Management
Date Posted:Wed, 19 Jun 2019 09:47:08 -0500
About 10% of the US labor force is employed in selling-related occupations, and expenditures on selling activities total close to 5% of US GDP. Without question, selling occupies a prominent role in our economy. This chapter offers a discussion of the construct of selling, its role in economic models, and the various aspects of firm decisions that relate to it.
REVISION: A Copycat Penalty: Micro Evidence From an Online Crowdsourcing Platform
Date Posted:Wed, 29 May 2019 11:41:39 -0500
Crowdsourced innovation platforms that enable organizations to outsource ideation to parties external to the firm are proliferating. In many cases, the platforms use open contests that allow the free exchange of ideas with the goal of improving the ideation process. In open contests, participants (“solvers”) observe the ideas of others as well as the feedback received from the contest sponsor (“seeker”). The open nature of such contests generates incentives for free-riding and copying by opportunistic solvers. This creates the possibility of the platform unraveling when good solvers strategically withdraw from the platform, expecting their ideas will be copied. To investigate agent behavior in such a setting, we collect micro-data on design contests, submissions and participants from the inception of an online crowdsourced open ideation platform. These data include the original image files submitted to the contests, which enables us to compare how similar one image is to ...
REVISION: Estimation of Sequential Search Model
Date Posted:Thu, 09 May 2019 11:09:16 -0500
We propose a new likelihood-based estimation method for the sequential search model. By allowing search costs to be heterogeneous across consumers and products, we can directly compute the joint probability of the search sequence and the purchase decision when consumers are searching for the idiosyncratic preference shocks in their utility functions. Under this procedure, one recursively makes random draws for each dimension that requires numerical integration to simulate the probabilities associated with the purchase decision and the search sequence under the sequential search algorithm. We then present details from an extensive simulation study that compares the proposed approach with existing estimation methods recently used for sequential search model estimation, viz., the kernel-smoothed frequency simulator (KSFS) and the crude frequency simulator (CFS). In the empirical application, we apply the proposed method to the Expedia dataset from Kaggle, which has previously been ...
REVISION: Exact MCMC for Choices from Menus -- Measuring Substitution and Complementarity among Menu Items
Date Posted:Wed, 08 May 2019 11:53:50 -0500
Choice environments in practice are often more complicated than the well-studied case of choice between perfect substitutes. Consumers choosing from menus or configuring products face choice sets that consist of substitutes, complements and independent items, and the utility-maximizing choice corresponds to a particular item combination out of a potentially huge number of possible combinations. This reality is mirrored in menu-based choice experiments. The inferential challenge posed by data from such choices is in the calibration of utility functions that accommodate a mix of substitutes, complements, and independent items. We develop a model that not only accounts for heterogeneity in preferences but also in what consumers perceive to be substitutes and complements and show how to perform Bayesian inference for this model based on the exact likelihood, despite its practically intractable normalizing constant. We characterize the model from first principles and show how it ...
REVISION: Valuing Brand Collaboration: Evidence From a Natural Experiment
Date Posted:Sat, 16 Mar 2019 17:47:14 -0500
We study how brand impacts consumer demand in the context of museum memberships in a U.S. metropolitan city. Over the course of our sample, one major museum with a highly recognized brand closed. During the closure, it sequentially co-branded with two established local museums. The closure and collaboration events, combined with individual panel data on museum memberships, allow us to measure how these changes in brand affect demand. Collaboration with the closed museum lifts demand for the partner museum; however, this aggregate increase masks two counteracting forces. First, customers with no history of buying membership from either museum enter the market, consistent with the prominent brand providing a signal of vertical quality. Second, a sub-group of customers who previously purchased from either or both of the museums display decreased demand. This is consistent with a model of brand providing information about horizontal match value, with decreasing demand from the ...
Valuing Brand Collaboration: Evidence From a Natural Experiment
Date Posted:Thu, 07 Mar 2019 14:32:14 -0600
We study complementarities between brands in the context of collaborations across museums. Over the course of our sample, one major museum with a highly recognized brand closed temporarily and sequentially collaborated with two established local museums. With individual panel data on museum memberships around these events, we measure how collaborations affect demand using an empirical framework of complementarities that is newly applied to the branding context. We observe two counteracting demand patterns. First, customers with no history of buying membership from either museum enter the market, suggesting brand complementarities. Second, a sub-group of customers who previously purchased from either or both of the museums display decreased demand, consistent with brand dilution. Any structural approach that models the demand for collaboration with existing preferences for separate brands fails to create accurate demand predictions. The magnitude of the offsetting forces varies between collaboration events, which makes demand prediction even more challenging. These results call for a theory of brand that goes beyond a fixed utility primitive and have implications for counterfactuals that involve combining or altering brands.
REVISION: Estimation of Sequential Search Model
Date Posted:Thu, 07 Mar 2019 05:50:36 -0600
We propose a new likelihood-based estimation method for the sequential search model. By allowing search costs to be heterogeneous across consumers and products, we can directly compute the joint probability of the search sequence and the purchase decision when consumers are searching for the idiosyncratic preference shocks in their utility functions. Under this procedure, one recursively makes random draws for each dimension that requires numerical integration to simulate the probabilities associated with the purchase decision and the search sequence under the sequential search algorithm. We then present details from an extensive simulation study that compares the proposed approach with existing estimation methods recently used for sequential search model estimation, viz., the kernel-smoothed frequency simulator (KSFS) and the crude frequency simulator (CFS). In the empirical application, we apply the proposed method to the Expedia dataset from Kaggle, which has previously been ...
REVISION: Valuing Brand Collaboration: Evidence From a Natural Experiment
Date Posted:Thu, 07 Mar 2019 04:33:55 -0600
We study how brand impacts consumer demand in the context of museum memberships in a U.S. metropolitan city. Over the course of our sample, one major museum with a highly recognized brand closed. During the closure, it sequentially co-branded with two established local museums. The closure and collaboration events combined with individual panel data on museum memberships allow us to measure how these changes in brand affect demand. Collaboration with the closed museum lifts demand for the partner museum; however, this aggregate increase masks two counteracting forces. First, customers with no history of buying membership from either museum enter the market, consistent with the prominent brand providing a signal of vertical quality. Second, a sub-group of customers who previously purchased from either or both of the museums display decreased demand. This is consistent with a model of brand providing information about horizontal match value, with decreasing demand from the broadening of ...
REVISION: Estimation of Sequential Search Model
Date Posted:Wed, 27 Feb 2019 09:48:44 -0600
We propose a new likelihood-based estimation method for the sequential search model. By allowing search costs to be heterogeneous across consumers and products, we can directly compute the joint probability of the search sequence and the purchase decision when consumers are searching for the idiosyncratic preference shocks in their utility functions. Under this procedure, one recursively makes random draws for each dimension that requires numerical integration to simulate the probabilities associated with the purchase decision and the search sequence under the sequential search algorithm. We then present details from an extensive simulation study that compares the proposed approach with existing estimation methods recently used for sequential search model estimation, viz., the kernel-smoothed frequency simulator (KSFS) and the crude frequency simulator (CFS). In the empirical application, we apply the proposed method to the Expedia dataset from Kaggle, which has previously been ...
REVISION: Estimation of Sequential Search Models
Date Posted:Thu, 30 Aug 2018 08:12:37 -0500
In this paper, we propose a new likelihood-based estimation method for the sequential search model. We demonstrate that, by allowing search costs to be heterogeneous across consumers and products, we can directly compute the joint probability of the search sequence and the purchase decision when consumers are searching for the idiosyncratic preference shocks in their utility functions. Under this procedure, one recursively makes random draws for each dimension that requires numerical integration to simulate the probabilities associated with the purchase decision and the search sequence under the sequential search algorithm. Next, we present two extensions of the estimation method to allow the estimation of the sequential search model 1) when researchers have access only to market share data instead of individual purchase data (data on individual search sequences are still available) and 2) when consumers search to discover a subset of product characteristics (e.g. price) besides their ...
REVISION: Estimation of Sequential Search Models
Date Posted:Fri, 17 Aug 2018 10:38:04 -0500
In this paper, we propose a new likelihood-based estimation method for the sequential search model. We demonstrate that, by allowing search costs to be heterogeneous across consumers and products, we can directly compute the joint probability of the search sequence and the purchase decision when consumers are searching for the idiosyncratic preference shocks in their utility functions. Under this procedure, one recursively makes random draws for each dimension that requires numerical integration to simulate the probabilities associated with the purchase decision and the search sequence under the sequential search algorithm. Next, we present two extensions of the estimation method to allow the estimation of the sequential search model 1) when researchers have access only to market share data instead of individual purchase data (data on individual search sequences are still available) and 2) when consumers search to discover a subset of product characteristics (e.g. price) besides their ...
REVISION: Estimation of Sequential Search Models
Date Posted:Fri, 03 Aug 2018 09:19:23 -0500
In this paper, we propose a new likelihood-based estimation method for the sequential search model. We demonstrate that, by allowing search costs to be heterogeneous across consumers and products, we can directly compute the joint probability of the search sequence and the purchase decision when consumers are searching for the idiosyncratic preference shocks in their utility functions. Under this procedure, one recursively makes random draws for each dimension that requires numerical integration to simulate the probabilities associated with the purchase decision and the search sequence under the sequential search algorithm. Next, we present two extensions of the estimation method to allow the estimation of the sequential search model 1) when researchers have access only to market share data instead of individual purchase data (data on individual search sequences are still available) and 2) when consumers search to discover a subset of product characteristics (e.g. price) besides their ...
REVISION: Exact MCMC for Choices from Menus -- Measuring Substitution and Complementarity among Menu Items
Date Posted:Wed, 25 Jul 2018 06:19:17 -0500
Choice environments in practice are often more complicated than the well-studied case of choice between perfect substitutes. Consumers choosing from menus or configuring products face choice sets that consist of substitutes, complements and independent items, and the utility-maximizing choice corresponds to a particular item combination out of a potentially huge number of possible combinations. This reality is mirrored in menu-based choice experiments. The inferential challenge posed by data from such choices is in the calibration of utility functions that accommodate a mix of substitutes, complements, and independent items. We develop a model that not only accounts for heterogeneity in preferences but also in what consumers perceive to be substitutes and complements and show how to perform Bayesian inference for this model based on the exact likelihood, despite its practically intractable normalizing constant. We characterize the model from first principles and show how it ...
Estimation of Sequential Search Model
Date Posted:Sat, 21 Jul 2018 16:10:19 -0500
We propose a new likelihood-based estimation method for the sequential search model. By allowing search costs to be heterogeneous across consumers and products, we can directly compute the joint probability of the search sequence and the purchase decision when consumers are searching for the idiosyncratic preference shocks in their utility functions. Under this procedure, one recursively makes random draws for each dimension that requires numerical integration to simulate the probabilities associated with the purchase decision and the search sequence under the sequential search algorithm. We then present details from an extensive simulation study that compares the proposed approach with existing estimation methods recently used for sequential search model estimation, viz., the kernel-smoothed frequency simulator (KSFS) and the crude frequency simulator (CFS). In the empirical application, we apply the proposed method to the Expedia dataset from Kaggle, which has previously been analyzed using the KSFS estimator and the assumption of homogeneous search costs. We demonstrate that the proposed method has a better predictive performance associated with differences in the estimated effects of various drivers of clicks and purchases, and highlight the importance of the heterogeneous search costs assumption even when KSFS is used to estimate the sequential search model. Lastly, from a managerial perspective, we show that sorting products by their expected utilities can enhance consum
REVISION: Estimation of Sequential Search Models
Date Posted:Sat, 21 Jul 2018 07:10:20 -0500
In this paper, we propose a new likelihood-based estimation method for the sequential search model. We demonstrate that, by allowing search costs to be heterogeneous across consumers and products, we can directly compute the joint probability of the search sequence and the purchase decision when consumers are searching for the idiosyncratic preference shocks in their utility functions. Under this procedure, one recursively makes random draws for each dimension that requires numerical integration to simulate the probabilities associated with the purchase decision and the search sequence under the sequential search algorithm. Next, we present two extensions of the estimation method to allow the estimation of the sequential search model 1) when researchers have access only to market share data instead of individual purchase data (data on individual search sequences are still available) and 2) when consumers search to discover a subset of product characteristics (e.g. price) besides their ...
Targeted Undersmoothing
Date Posted:Wed, 09 May 2018 09:14:04 -0500
This paper proposes a post-model selection inference procedure, called targeted undersmoothing, designed to construct uniformly valid confidence sets for functionals of sparse high-dimensional models, including dense functionals that may depend on many or all elements of the high-dimensional parameter vector. The confidence sets are based on an initially selected model and two additional models which enlarge the initial model. By varying the enlargements of the initial model, one can also conduct sensitivity analysis of the strength of empirical conclusions to model selection mistakes in the initial model. We apply the procedure in two empirical examples: estimating heterogeneous treatment effects in a job training program and estimating profitability from an estimated mailing strategy in a marketing campaign. We also illustrate the procedure's performance through simulation experiments.
New: Targeted Undersmoothing
Date Posted:Wed, 09 May 2018 00:14:05 -0500
This paper proposes a post-model selection inference procedure, called targeted undersmoothing, designed to construct uniformly valid confidence sets for functionals of sparse high-dimensional models, including dense functionals that may depend on many or all elements of the high-dimensional parameter vector. The confidence sets are based on an initially selected model and two additional models which enlarge the initial model. By varying the enlargements of the initial model, one can also conduct sensitivity analysis of the strength of empirical conclusions to model selection mistakes in the initial model. We apply the procedure in two empirical examples: estimating heterogeneous treatment effects in a job training program and estimating profitability from an estimated mailing strategy in a marketing campaign. We also illustrate the procedure’s performance through simulation experiments.
Can Open Innovation Survive? Imitation and Return on Originality in Crowdsourcing Creative Work
Date Posted:Sat, 03 Mar 2018 19:04:30 -0600
Open innovation platforms that enable organizations to crowdsource ideation to parties external to the firm are proliferating. In many cases, the platforms use open contests that allow the free exchange of ideas with the goal of improving the ideation process. In open contests, participants (“solvers”) observe the ideas of others as well as the feedback received from the contest sponsor (“seeker”). The open nature of such contests generates incentives for later-entering solvers to imitate successful early designs at the cost of the original solvers. As such, this creates the possibility of the platform unraveling when original solvers strategically withdraw from the platform, expecting their ideas will be copied without recompense. To investigate agent behavior in such a setting, we analyze publicly accessible micro-data on more than 6,000 design contests, submissions and participants from crowdsourced open ideation platforms and augment this analysis with field and online experiments. These data include the original image files submitted to the contests, which enable us to compare how similar one image is to another using a customized ensemble image comparison algorithm. We find that better-rated designs are likely to be imitated by later-entering solvers, thereby generating significant risk to early entrants that their ideas will be appropriated by later entrants without recompense. As a countervailing force, we document that seekers tend to reward original designs, and avoid picking ...
REVISION: A Copycat Penalty: Micro Evidence From an Online Crowdsourcing Platform
Date Posted:Sat, 03 Mar 2018 09:04:30 -0600
Crowdsourced innovation platforms that enable organizations to outsource ideation to parties external to the firm are proliferating. In many cases, the platforms use open contests that allow the free exchange of ideas with the goal of improving the ideation process. In open contests, participants (“solvers”) observe the ideas of others as well as the feedback received from the contest sponsor (“seeker”). The open nature of such contests generates incentives for free-riding and copying by opportunistic solvers. As such, this creates the possibility of the platform unraveling when good solvers strategically withdraw from the platform, expecting their ideas will be copied. To investigate agent behavior in such a setting, we collect micro-data on design contests, submissions and participants from the inception of crowdSPRING, an online crowdsourced open ideation platform. These data include the original image files submitted to the contests, which enables us to compare how similar one ...
Heterogeneous Treatment Effects and Optimal Targeting Policy Evaluation
Date Posted:Tue, 06 Feb 2018 13:19:43 -0600
We present a general framework to target customers using optimal targeting policies, and we document the profit differences from alternative estimates of the optimal targeting policies. Two foundations of the framework are conditional average treatment effects (CATEs) and off-policy evaluation using data with randomized targeting. This policy evaluation approach allows us to evaluate an arbitrary number of different targeting policies using only one randomized data set and thus provides large cost advantages over conducting a corresponding number of field experiments. We use different CATE estimation methods to construct and compare alternative targeting policies. Our particular focus is on the distinction between indirect and direct methods. The indirect methods predict the CATEs using a conditional expectation function estimated on outcome levels, whereas the direct methods specifically predict the treatment effects of targeting. We introduce a new direct estimation method called treatment effect projection (TEP). The TEP is a non-parametric CATE estimator that we regularize using a transformed outcome loss which, in expectation, is identical to a loss that we could construct if the individual treatment effects were observed. The empirical application is to a catalog mailing with a high-dimensional set of customer features. We document the profits of the estimated policies using data from two campaigns conducted one year apart, which allows us to assess the transportability ...
New: Heterogeneous Treatment Effects and Optimal Targeting Policy Evaluation
Date Posted:Tue, 06 Feb 2018 03:19:45 -0600
We discuss how to construct optimal targeting policies and document the difference in profits from alternative targeting policies by using estimation approaches that are based on recent advances in causal inference and machine learning. We introduce an approach to evaluate the profit of any targeting policy using a single randomized sample. This approach is qualitatively equivalent to conducting a field test, but reduces the cost of multiple field tests because all comparisons can be conducted in only one sample. The approach allows us to compare many alternative optimal targeting policies that are constructed based on different estimates of the conditional average treatment effect, i.e., the incremental effect of targeting. We draw a conceptual distinction between methods that predict the conditional average treatment effect indirectly via the conditional expectation function trained on the outcome level, and methods that directly predict the conditional average treatment ...
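The transformed-outcome idea behind direct CATE estimation can be sketched in a few lines. With a randomized treatment W and known propensity e, the variable Y(W − e)/(e(1 − e)) has conditional expectation equal to the treatment effect, so any regression of it on customer features is a direct estimator. The toy below uses simulated data and illustrative names; it is not the paper's TEP estimator, which adds regularization on top of this loss.

```python
import numpy as np

def transformed_outcome(y, w, e):
    """Y* = Y (W - e) / (e (1 - e)).

    Under randomized treatment with propensity e, E[Y* | X] equals the
    conditional average treatment effect (CATE), so regressing Y* on X
    targets the treatment effect directly rather than the outcome level."""
    return y * (w - e) / (e * (1.0 - e))

# Toy check on simulated data: the true CATE is tau(x) = 1 + x.
rng = np.random.default_rng(0)
n, e = 200_000, 0.5
x = rng.uniform(-1.0, 1.0, n)
w = rng.binomial(1, e, n)
y = 2.0 * x + w * (1.0 + x) + rng.normal(0.0, 0.5, n)  # baseline + effect + noise
ystar = transformed_outcome(y, w, e)
slope, intercept = np.polyfit(x, ystar, 1)             # recovers tau(x) = 1 + x
```

The noise in Y* is large (it scales with the outcome level), which is exactly why a regularized direct estimator is attractive relative to the raw transformed-outcome regression shown here.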
Personalized Pricing and Consumer Welfare
Date Posted:Mon, 11 Sep 2017 10:51:45 -0500
We study the welfare implications of personalized pricing, an extreme form of third-degree price discrimination implemented with machine learning for a large, digital firm. Using data from a unique randomized controlled pricing field experiment we train a demand model and conduct inference about the effects of personalized pricing on firm and consumer surplus. In a second experiment, we validate our predictions in the field. The initial experiment reveals unexercised market power that allows the firm to raise its price optimally, generating a 55% increase in profits. Personalized pricing improves the firm's expected posterior profits by an additional 19%, relative to the optimized uniform price, and by 86%, relative to the firm's unoptimized status quo price. Turning to welfare effects on the demand side, total consumer surplus declines 23% under personalized pricing relative to uniform pricing, and 47% relative to the firm's unoptimized status quo price. However, over 60% of consumers benefit from lower prices under personalization and total welfare can increase under standard inequity-averse welfare functions. Simulations with our demand estimates reveal a non-monotonic relationship between the granularity of the segmentation data and the total consumer surplus under personalization. These findings indicate a need for caution in the current public policy debate regarding data privacy and personalized pricing insofar as some data restrictions may not per se improve consumer ...
REVISION: Scalable Price Targeting
Date Posted:Sun, 27 Aug 2017 13:44:36 -0500
We study the welfare implications of scalable price targeting, an extreme form of third-degree price discrimination implemented with machine learning for a large, digital firm. Targeted prices are computed by solving the firm's Bayesian Decision-Theoretic pricing problem based on a database with a high-dimensional vector of customer features that are observed prior to the price quote. To identify the causal effect of price on demand, we first run a large, randomized price experiment and use these data to train our demand model. We use l1 regularization (lasso) to select the set of customer features that moderate the heterogeneous treatment effect of price on demand. We use a weighted likelihood Bayesian bootstrap to quantify the firm's approximate statistical uncertainty in demand and profitability. We then conduct a second experiment that implements our proposed price targeting scheme out of sample. Theoretically, both firm and customer surplus could rise with scalable ...
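The weighted likelihood Bayesian bootstrap mentioned above can be sketched for a simple logit demand model: each replicate reweights the log-likelihood with independent Exp(1) weights and re-maximizes, and the spread of the replicates approximates posterior uncertainty in the coefficients. This is a hedged sketch under assumed Bernoulli purchase data; the function and variable names are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def wlb_logit(X, y, n_boot=200, seed=0):
    """Weighted likelihood Bayesian bootstrap for a logit demand model.

    Each replicate draws i.i.d. Exp(1) observation weights, maximizes the
    weighted log-likelihood, and stores the optimizer; the collection of
    replicates approximates posterior uncertainty in the coefficients."""
    rng = np.random.default_rng(seed)
    n, k = X.shape

    def negll(beta, w):
        z = X @ beta
        # weighted Bernoulli log-likelihood: y*z - log(1 + exp(z))
        return -np.sum(w * (y * z - np.logaddexp(0.0, z)))

    draws = np.empty((n_boot, k))
    for b in range(n_boot):
        w = rng.exponential(1.0, size=n)
        draws[b] = minimize(negll, np.zeros(k), args=(w,), method="BFGS").x
    return draws
```

The mean of the replicates sits near the maximum likelihood estimate, while their standard deviation serves as the approximate posterior uncertainty used when optimizing targeted prices.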
REVISION: Distributed Markov Chain Monte Carlo for Bayesian Hierarchical Models
Date Posted:Thu, 27 Jul 2017 04:10:28 -0500
This article proposes a distributed Markov chain Monte Carlo (MCMC) algorithm for estimating Bayesian hierarchical models when the number of cross-sectional units is very large and the objects of interest are the unit-level parameters. The two-stage algorithm is asymptotically exact, retains the flexibility of a standard MCMC algorithm, and is easy to implement. The algorithm constructs an estimator of the posterior predictive distribution of the unit-level parameters in the first stage, and uses the estimator as the prior distribution in the second stage for the unit-level draws. Both stages are embarrassingly parallel. The algorithm is demonstrated with simulated data from a hierarchical logit model and is shown to be faster and more efficient (in effective sample size generated per unit of computing) than a single machine algorithm by at least an order of magnitude. For a relatively small number of observations per cross-sectional unit, the algorithm is both faster and has better ...
REVISION: Big Data and Marketing Analytics in Gaming: Combining Empirical Models and Field Experimentation
Date Posted:Tue, 27 Jun 2017 22:56:10 -0500
This paper describes efforts to develop, implement, and evaluate a marketing analytics framework at a real-world company. The framework uses individual-level transaction data to fit empirical models of consumer response to marketing efforts, and uses these estimates to optimize segmentation and targeting. The models feature themes emphasized in the academic marketing science literature, including incorporation of consumer heterogeneity and state-dependence into choice, and controls for the endogeneity of the firm's historical targeting rule in estimation. To control for the endogeneity, we present an approach that involves conducting estimation separately across fixed partitions of the score variable that targeting is based on, which may be useful in other behavioral targeting settings. The models are customized to facilitate casino operations and are implemented at the MGM Resorts International's group of companies. The framework is evaluated using a randomized trial implemented at ...
REVISION: Scalable Price Targeting
Date Posted:Tue, 27 Jun 2017 05:57:45 -0500
We study the welfare implications of scalable price targeting, an extreme form of third-degree price discrimination, for a large, digital firm. Targeted prices are computed by solving the firm’s Bayesian Decision-Theoretic pricing problem based on a database with a high-dimensional vector of customer features that are observed prior to the price quote. To identify the causal effect of price on demand, we first run a large, randomized price experiment. These data are used to train our demand model. We use lasso regularization to select the set of customer features that moderate the heterogeneous treatment effect of price on demand. We use a weighted likelihood Bayesian bootstrap to quantify the firm’s approximate statistical uncertainty in demand and profitability. Theoretically, both firm and customer surplus could rise with scalable price targeting. We test the welfare implications out of sample with a second randomized price experiment with new customers. Optimized uniform pricing ...
Personalized Pricing and Consumer Welfare
Date Posted:Mon, 26 Jun 2017 16:40:51 -0500
We study the welfare implications of personalized pricing, an extreme form of third-degree price discrimination implemented with machine learning for a large, digital firm. Using data from a unique randomized controlled pricing field experiment we train a demand model and conduct inference about the effects of personalized pricing on firm and consumer surplus. In a second experiment, we validate our predictions in the field. The initial experiment reveals unexercised market power that allows the firm to raise its price optimally, generating a 55% increase in profits. Personalized pricing improves the firm's expected posterior profits by an additional 19%, relative to the optimized uniform price, and by 86%, relative to the firm's unoptimized status quo price. Turning to welfare effects on the demand side, total consumer surplus declines 23% under personalized pricing relative to uniform pricing, and 47% relative to the firm's unoptimized status quo price. However, over 60% of consumers benefit from lower prices under personalization and total welfare can increase under standard inequity-averse welfare functions. Simulations with our demand estimates reveal a non-monotonic relationship between the granularity of the segmentation data and the total consumer surplus under personalization. These findings indicate a need for caution in the current public policy debate regarding data privacy and personalized pricing insofar as some data restrictions may not per se improve consumer ...
REVISION: Scalable Price Targeting
Date Posted:Mon, 26 Jun 2017 07:40:51 -0500
We study the welfare implications of scalable price targeting, an extreme form of third-degree price discrimination, for a large, digital firm. Targeted prices are computed by solving the firm’s Bayesian Decision-Theoretic pricing problem based on a database with a high-dimensional vector of customer features that are observed prior to the price quote. To identify the causal effect of price on demand, we first run a large, randomized price experiment. These data are used to train our demand model. We use lasso regularization to select the set of customer features that moderate the heterogeneous treatment effect of price on demand. We use a weighted likelihood Bayesian bootstrap to quantify the firm’s approximate statistical uncertainty in demand and profitability. Theoretically, both firm and customer surplus could rise with scalable price targeting. We test the welfare implications out of sample with a second randomized price experiment with new customers. Optimized uniform pricing ...
Distributed Markov Chain Monte Carlo for Bayesian Hierarchical Models
Date Posted:Tue, 09 May 2017 13:00:24 -0500
This article proposes a distributed Markov chain Monte Carlo (MCMC) algorithm for estimating Bayesian hierarchical models when the number of cross-sectional units is very large and the objects of interest are the unit-level parameters. The two-stage algorithm is asymptotically exact, retains the flexibility of a standard MCMC algorithm, and is easy to implement. The algorithm constructs an estimator of the posterior predictive distribution of the unit-level parameters in the first stage, and uses the estimator as the prior distribution in the second stage for the unit-level draws. Both stages are embarrassingly parallel. The algorithm is demonstrated with simulated data from a hierarchical logit model and is shown to be faster and more efficient (in effective sample size generated per unit of computing) than a single machine algorithm by at least an order of magnitude. For a relatively small number of observations per cross-sectional unit, the algorithm is both faster and has better mixing properties than the standard hybrid Gibbs sampler. We illustrate our approach with data on 1,100,000 donors to a charitable organization, and simulations with up to 100 million units.
REVISION: Distributed Markov Chain Monte Carlo for Bayesian Hierarchical Models
Date Posted:Tue, 09 May 2017 04:00:25 -0500
This article proposes a distributed Markov chain Monte Carlo (MCMC) algorithm for estimating Bayesian hierarchical models when the number of cross-sectional units is very large and the objects of interest are the unit-level parameters. The two-stage algorithm is asymptotically exact, retains the flexibility of a standard MCMC algorithm, and is easy to implement. The algorithm constructs an estimator of the posterior predictive distribution of the unit-level parameters in the first stage, and uses the estimator as the prior distribution in the second stage for the unit-level draws. Both stages are embarrassingly parallel. The algorithm is demonstrated with simulated data from a hierarchical logit model and is shown to be faster and more efficient (in effective sample size generated per unit of computing) than a single machine algorithm by at least an order of magnitude. For a relatively small number of observations per cross-sectional unit, the algorithm is both faster and has better ...
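The two-stage structure of the algorithm can be illustrated with a toy conjugate example. This is a deliberately simplified sketch for a normal-normal hierarchical model, not the article's asymptotically exact MCMC algorithm: stage 1 here uses plug-in moment estimates as a stand-in for the first-stage estimator of the posterior predictive distribution, and stage 2 then draws each unit's parameter independently, which is what makes the stage embarrassingly parallel. All names are illustrative.

```python
import numpy as np

def two_stage_unit_draws(y, n_draws=1000, seed=0):
    """Toy two-stage sampler for y[i, j] ~ N(theta_i, sigma^2),
    theta_i ~ N(mu, tau^2).

    Stage 1 builds an estimate of the population (prior) distribution of
    the unit-level parameters; stage 2 draws each unit's theta_i from its
    conjugate posterior given that prior -- one independent task per unit."""
    rng = np.random.default_rng(seed)
    n_units, n_obs = y.shape
    ybar = y.mean(axis=1)
    sigma2 = y.var(axis=1, ddof=1).mean()            # within-unit variance
    # Stage 1: moment estimates of the prior over unit-level parameters.
    mu_hat = ybar.mean()
    tau2_hat = max(ybar.var(ddof=1) - sigma2 / n_obs, 1e-8)
    # Stage 2: conjugate unit-level draws, parallel across units.
    post_var = 1.0 / (1.0 / tau2_hat + n_obs / sigma2)
    post_mean = post_var * (mu_hat / tau2_hat + n_obs * ybar / sigma2)
    return post_mean[:, None] + np.sqrt(post_var) * rng.standard_normal((n_units, n_draws))
```

Because the second stage touches each cross-sectional unit in isolation, the units can be sharded across machines with no communication, which is the property the article exploits at the scale of millions of units.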
Exact MCMC for Choices from Menus -- Measuring Substitution and Complementarity among Menu Items
Date Posted:Mon, 24 Apr 2017 22:07:17 -0500
Choice environments in practice are often more complicated than the well-studied case of choice between perfect substitutes. Consumers choosing from menus or configuring products face choice sets that consist of substitutes, complements and independent items, and the utility maximizing choice corresponds to a particular item combination out of a potentially huge number of possible combinations. This reality is mirrored in menu-based choice experiments. The inferential challenge posed by data from such choices is in the calibration of utility functions that accommodate a mix of substitutes, complements, and independent items. We develop a model that not only accounts for heterogeneity in preferences but also in what consumers perceive to be substitutes and complements and show how to perform Bayesian inference for this model based on the exact likelihood, despite its practically intractable normalizing constant. We characterize the model from first principles and show how it structurally improves on the multivariate probit model (Liechty et al., 2001) and on models that include cross-price effects in the utility function (Orme, 2010). We find empirical support for our model in a menu-based discrete choice experiment investigating demand for game consoles and accessories. Finally, we illustrate substantial implications from modeling substitution and complementarity for optimal pricing.
REVISION: Measuring Substitution and Complementarity Among Offers in Menu Based Choice Experiments
Date Posted:Mon, 24 Apr 2017 13:07:18 -0500
Choice experiments designed to extend beyond the classic application of choice among perfect substitutes have become popular in marketing research. In these experiments, often referred to as menu based choice, respondents face choice sets that may comprise substitutes, complements, and offers that provide utility independently, or any mixture of these three types. The inferential challenge posed by data from such experiments is in the calibration of utility functions that accommodate a mix of substitutes, complements, and independent offers. Moreover, while a prior understanding of the product categories under study may, for example, suggest that two offers in a set are essentially perfect substitutes, this may not be true for all respondents. To address these challenges, we combine Besag's (Besag 1972, Besag 1974) autologistic choice model with a flexible hierarchical prior structure. We explain from first principles how the autologistic choice model improves on the multivariate ...
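The combinatorial structure of menu choice can be made concrete with a toy utility over item combinations, where pairwise interaction terms encode substitution (negative) or complementarity (positive). The sketch below enumerates all 2^J combinations under a simple logit assumption, which is feasible only for small menus; the papers' contribution is precisely handling the normalizing constant when this enumeration is intractable. Names and the specific utility form are illustrative, not the autologistic model itself.

```python
import numpy as np
from itertools import product

def menu_choice_probs(v, gamma):
    """Choice probabilities over all item combinations from a J-item menu.

    Utility of a combination S is  U(S) = sum_{j in S} v[j]
                                        + sum_{j<k in S} gamma[j, k],
    so gamma[j, k] < 0 makes items j and k substitutes and gamma[j, k] > 0
    makes them complements.  The choice set has 2^J combinations, which is
    why exact enumeration breaks down for realistic menus."""
    J = len(v)
    combos = list(product([0, 1], repeat=J))       # all subsets as 0/1 tuples
    u = np.array([
        sum(v[j] * s[j] for j in range(J))
        + sum(gamma[j, k] * s[j] * s[k] for j in range(J) for k in range(j + 1, J))
        for s in combos
    ])
    p = np.exp(u - u.max())                        # stable softmax over combos
    return combos, p / p.sum()
```

With two items of equal stand-alone utility, a strongly negative interaction drives the probability of choosing both toward zero, while a strongly positive one concentrates choice on the bundle.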
REVISION: Big Data and Marketing Analytics in Gaming: Combining Empirical Models and Field Experimentation
Date Posted:Thu, 15 Dec 2016 10:01:33 -0600
This paper describes efforts to develop, implement, and evaluate a marketing analytics framework at a real-world company. The framework uses individual-level transaction data to fit empirical models of consumer response to marketing efforts, and uses these estimates to optimize segmentation and targeting. The models feature themes emphasized in the academic marketing science literature, including incorporation of consumer heterogeneity and state-dependence into choice, and controls for the endogeneity of the firm's historical targeting rule in estimation. To control for the endogeneity, we present an approach that involves conducting estimation separately across fixed partitions of the score variable that targeting is based on, which may be useful in other behavioral targeting settings. The models are customized to facilitate casino operations and are implemented at the MGM Resorts International's group of companies. The framework is evaluated using a randomized trial implemented at ...
Big Data and Marketing Analytics in Gaming: Combining Empirical Models and Field Experimentation
Date Posted:Thu, 27 Feb 2014 08:41:44 -0600
This paper describes efforts to develop, implement, and evaluate a marketing analytics framework at a real-world company. The framework uses individual-level transaction data to fit empirical models of consumer response to marketing efforts, and uses these estimates to optimize segmentation and targeting. The models feature themes emphasized in the academic marketing science literature, including incorporation of consumer heterogeneity and state-dependence into choice, and controls for the endogeneity of the firm's historical targeting rule in estimation. To control for the endogeneity, we present an approach that involves conducting estimation separately across fixed partitions of the score variable that targeting is based on, which may be useful in other behavioral targeting settings. The models are customized to facilitate casino operations and are implemented at the MGM Resorts International's group of companies. The framework is evaluated using a randomized trial implemented at MGM involving about 1.5M consumers. Using the new model produces about $1M to $5M in incremental profits per campaign, translating to about 20 cents of incremental profit per dollar spent relative to the status quo. At current levels of marketing spending, this implies between $10M and $15M in incremental annual profit for the firm. The case study underscores the value of using empirically-relevant marketing analytics solutions for improving outcomes for firms in real-world settings.
REVISION: Big Data and Marketing Analytics in Gaming: Combining Empirical Models and Field Experimentation
Date Posted:Wed, 26 Feb 2014 22:41:45 -0600
This paper reports on the development and implementation of a large-scale, marketing analytics framework for improving the segmentation, targeting and optimization of a consumer-facing firm’s marketing activities. The framework leverages detailed transaction data of the type increasingly becoming available in such industries. The models are customized to facilitate casino operations and implemented at the MGM Resorts International’s group of companies. The core of the framework consists of empirical models of consumer casino visitation and play behavior and its relationship to targeted marketing effort. Important aspects of the models include incorporation of rich dimensions of heterogeneity in consumer response, accommodation of state-dependence in consumer behavior, as well as controls for the endogeneity of targeted marketing in inference, all issues that are salient in modern empirical marketing research. The paper discusses details of the models as well as practical issues ...
Homogenous Contracts for Heterogeneous Agents: Aligning Salesforce Composition and Compensation
Date Posted:Sun, 23 Feb 2014 13:57:48 -0600
Observed contracts in the real world are often very simple, partly reflecting the constraints faced by contracting firms in making the contracts more complex. We focus on one such rigidity, the constraints faced by firms in fine-tuning contracts to the full distribution of heterogeneity of its employees. We explore the implication of these restrictions for the provision of incentives within the firm. Our application is to salesforce compensation, in which a firm maintains a salesforce to market its products. Consistent with ubiquitous real-world business practice, we assume the firm is restricted to fully or partially set uniform commissions across its agent pool. We show this implies an interaction between the composition of agent types in the contract and the compensation policy used to motivate them, leading to a “contractual externality” in the firm and generating gains to sorting. This paper explains how this contractual externality arises, discusses a practical approach to endogenize agents and incentives at a firm in its presence, and presents an empirical application to salesforce compensation contracts at a US Fortune 500 company that explores these considerations and assesses the gains from a salesforce architecture that sorts agents into divisions to balance firm-wide incentives. Empirically, we find the restriction to homogenous plans significantly reduces the payoffs of the firm relative to a fully heterogeneous plan when it is unable to optimize the composition ...
New: Homogenous Contracts for Heterogeneous Agents: Aligning Salesforce Composition and Compensation
Date Posted:Sun, 23 Feb 2014 03:57:48 -0600
Observed contracts in the real-world are often very simple, partly reflecting the constraints faced by contracting firms in making the contracts more complex. We focus on one such rigidity, the constraints faced by firms in fine-tuning contracts to the full distribution of heterogeneity of its employees. We explore the implication of these restrictions for the provision of incentives within the firm. Our application is to salesforce compensation, in which a firm maintains a salesforce to market its products. Consistent with ubiquitous real-world business practice, we assume the firm is restricted to fully or partially set uniform commissions across its agent pool. We show this implies an interaction between the composition of agent types in the contract and the compensation policy used to motivate them, leading to a “contractual externality” in the firm and generating gains to sorting. This paper explains how this contractual externality arises, discusses a practical approach to ...
Disentangling Preferences and Learning in Brand Choice Models
Date Posted:Wed, 24 Oct 2012 16:46:07 -0500
In recent years there has been a growing stream of literature in marketing and economics that models consumers as Bayesian learners. Such learning behavior is often embedded within a discrete choice framework that is then calibrated on scanner panel data. At the same time, it is now accepted wisdom that disentangling preference heterogeneity and state dependence is critical in any attempt to understand either construct. We posit that this confounding between state dependence and heterogeneity often carries through to Bayesian learning models. That is, the failure to adequately account for preference heterogeneity may result in over- or underestimation of the learning process because this heterogeneity is also reflected in the initial conditions. Using a unique data set that contains stated preferences (survey) and actual purchase data (scanner panel) for the same group of consumers, we attempt to untangle the effects of preference heterogeneity and state dependence, where the latter arises from Bayesian learning. Our results are striking and suggest that measured brand beliefs can predict choices quite well and, moreover, that in the absence of such measured preference information, the Bayesian learning behavior for consumer packaged goods is vastly overstated. The inclusion of preference information significantly reduces evidence for aggregate-level learning and substantially changes the nature of individual-level learning. Using individual-level outcomes, we illustrate why ...
New: Disentangling Preferences and Learning in Brand Choice Models
Date Posted:Wed, 24 Oct 2012 11:46:07 -0500
In recent years there has been a growing stream of literature in marketing and economics that models consumers as Bayesian learners. Such learning behavior is often embedded within a discrete choice framework that is then calibrated on scanner panel data. At the same time, it is now accepted wisdom that disentangling preference heterogeneity and state dependence is critical in any attempt to understand either construct. We posit that this confounding between state dependence and heterogeneity ...
REVISION: Repositioning Dynamics and Pricing Strategy
Date Posted:Fri, 12 Oct 2012 00:50:35 -0500
We measure the revenue and cost implications to supermarkets of changing their price positioning strategy in oligopolistic downstream retail markets. Our estimates have implications for long-run market structure in the supermarket industry, and for measuring the sources of price rigidity in the economy. We exploit a unique dataset containing the price-format decisions of all supermarkets in the U.S. The data contain the format-change decisions of supermarkets in response to a large shock to ...
REVISION: Enriching Interactions: Incorporating Outcome Data into Static Discrete Games
Date Posted:Fri, 12 Oct 2012 00:42:21 -0500
When modeling the behavior of firms, marketers and micro-economists routinely confront complex problems of strategic interaction. In competitive environments, firms make strategic decisions that not only depend on the features of the market, but also on their beliefs regarding the reactions of their rivals. Structurally modeling these interactions requires formulating and estimating a discrete game, a task which, until recently, was considered intractable. Fortunately, two-step estimation ...
REVISION: Repositioning Dynamics and Pricing Strategy
Date Posted:Tue, 11 Sep 2012 11:47:57 -0500
We measure the revenue and cost implications to supermarkets of changing their price positioning strategy in oligopolistic downstream retail markets. Our estimates have implications for long-run market structure in the supermarket industry, and for measuring the sources of price rigidity in the economy. We exploit a unique dataset containing the price-format decisions of all supermarkets in the U.S. The data contain the format-change decisions of supermarkets in response to a large shock to ...
REVISION: Repositioning Dynamics and Pricing Strategy
Date Posted:Fri, 15 Jun 2012 16:18:24 -0500
We measure the revenue and cost implications to supermarkets of changing their price positioning strategy in oligopolistic downstream retail markets. Our estimates have implications for long-run market structure in the supermarket industry, and for measuring the sources of price rigidity in the economy. We exploit a unique dataset containing the price-format decisions of all supermarkets in the U.S. The data contain the format-change decisions of supermarkets in response to a large shock to ...
REVISION: Estimating Discrete Games
Date Posted:Tue, 31 Jan 2012 06:57:40 -0600
This paper provides a critical review of the methods for estimating static discrete games and their relevance for quantitative marketing. We discuss the various modeling approaches, alternative assumptions, and relevant trade-offs involved in taking these empirical methods to data. We consider both games of complete and incomplete information, examine the primary methods for dealing with the coherency problems introduced by multiplicity of equilibria, and provide concrete examples from the ...
REVISION: Enriching Interactions: Incorporating Outcome Data into Static Discrete Games
Date Posted:Tue, 31 Jan 2012 06:54:55 -0600
When modeling the behavior of firms, marketers and micro-economists routinely confront complex problems of strategic interaction. In competitive environments, firms make strategic decisions that not only depend on the features of the market, but also on their beliefs regarding the reactions of their rivals. Structurally modeling these interactions requires formulating and estimating a discrete game, a task which, until recently, was considered intractable. Fortunately, two-step estimation ...
REVISION: Repositioning Dynamics and Pricing Strategy
Date Posted:Tue, 31 Jan 2012 06:50:26 -0600
We measure the revenue and cost implications to supermarkets of changing their price positioning strategy in oligopolistic downstream retail markets. Our estimates have implications for long-run market structure in the supermarket industry, and for measuring the sources of price rigidity in the economy. We exploit a unique dataset containing the price-format decisions of all supermarkets in the U.S. The data contain the format-change decisions of supermarkets in response to a large shock to ...
REVISION: Repositioning Dynamics and Pricing Strategy
Date Posted:Tue, 29 Nov 2011 00:09:38 -0600
We measure the revenue and cost implications to supermarkets of changing their price positioning strategy in oligopolistic downstream retail markets. Our estimates have implications for long-run market structure in the supermarket industry, and for measuring the sources of price rigidity in the economy. We exploit a unique dataset containing the price-format decisions of all supermarkets in the U.S. The data contain the format-change decisions of supermarkets in response to a large shock to ...
Update: Disentangling Preferences and Learning in Brand Choice Models
Date Posted:Tue, 25 Oct 2011 07:59:37 -0500
In recent years there has been a growing stream of literature in marketing and economics that models consumers as Bayesian learners. Such learning behavior is often embedded within a discrete choice framework which is then calibrated on scanner panel data. At the same time it is now accepted wisdom that disentangling preference heterogeneity and state dependence is critical in any attempt to understand either construct. We posit that this confounding often carries through to Bayesian learning models.
New PDF Uploaded
REVISION: Estimating Discrete Games
Date Posted:Thu, 18 Aug 2011 05:01:55 -0500
This paper provides a critical review of the methods for estimating static discrete games and their relevance for quantitative marketing. We discuss the various modeling approaches, alternative assumptions, and relevant trade-offs involved in taking these empirical methods to data. We consider both games of complete and incomplete information, examine the primary methods for dealing with the coherency problems introduced by multiplicity of equilibria, and provide concrete examples from the ...
REVISION: Enriching Interactions: Incorporating Outcome Data into Static Discrete Games
Date Posted:Thu, 18 Aug 2011 04:59:37 -0500
When modeling the behavior of firms, marketers and micro-economists routinely confront complex problems of strategic interaction. In competitive environments, firms make strategic decisions that not only depend on the features of the market, but also on their beliefs regarding the reactions of their rivals. Structurally modeling these interactions requires formulating and estimating a discrete game, a task which, until recently, was considered intractable. Fortunately, two-step estimation ...
New: How Consumers' Attitudes toward Direct-to-Consumer Advertising of Prescription Drugs Influence Ad Effectiveness, and Consumer and Physician Behavior
Date Posted:Wed, 17 Aug 2011 18:33:35 -0500
Data from 1081 adults surveyed by the FDA were analyzed to explore consumers’ attitudes toward direct-to-consumer advertising (DTCA) of prescription drugs, and the relation between these attitudes and health-related consumption behaviors. We report the favorableness of consumers’ reactions to DTCA, and more importantly, demonstrate that consumers’ attitudes toward DTCA are related to whether they search for more information about a drug that is advertised, and ask their physician about the drug.
How Consumers' Attitudes toward Direct-to-Consumer Advertising of Prescription Drugs Influence Ad Effectiveness, and Consumer and Physician Behavior
Date Posted:Wed, 17 Aug 2011 17:53:53 -0500
Data from 1081 adults surveyed by the FDA were analyzed to explore consumers' attitudes toward direct-to-consumer advertising (DTCA) of prescription drugs, and the relation between these attitudes and health-related consumption behaviors. We report the favorableness of consumers' reactions to DTCA, and more importantly, demonstrate that consumers' attitudes toward DTCA are related to whether they search for more information about a drug that is advertised, and ask their physician about the drug. Finally, we document how consumers' attitudes towards DTCA relate to the prescription-writing behavior of their physicians. Mediation analyses that more fully explicate these findings are discussed.
REVISION: Enriching Interactions: Incorporating Outcome Data into Static Discrete Games
Date Posted:Thu, 28 Jul 2011 18:31:42 -0500
When modeling the behavior of firms, marketers and micro-economists routinely confront complex problems of strategic interaction. In competitive environments, firms make strategic decisions that not only depend on the features of the market, but also on their beliefs regarding the reactions of their rivals. Structurally modeling these interactions requires formulating and estimating a discrete game, a task which, until recently, was considered intractable. Fortunately, two-step estimation ...
REVISION: Estimating Discrete Games
Date Posted:Tue, 26 Jul 2011 18:37:07 -0500
This paper provides a critical review of the methods for estimating static discrete games and their relevance for quantitative marketing. We discuss the various modeling approaches, alternative assumptions, and relevant trade-offs involved in taking these empirical methods to data. We consider both games of complete and incomplete information, examine the primary methods for dealing with the coherency problems introduced by multiplicity of equilibria, and provide concrete examples from the ...
New: Disentangling Preferences and Learning in Brand Choice Models
Date Posted:Sat, 18 Jun 2011 17:05:54 -0500
In recent years there has been a growing stream of literature in marketing and economics that models consumers as Bayesian learners. Such learning behavior is often embedded within a discrete choice framework which is then calibrated on scanner panel data. At the same time it is now accepted wisdom that disentangling preference heterogeneity and state dependence is critical in any attempt to understand either construct. We posit that this confounding often carries through to Bayesian learning ...
Disentangling Preferences and Learning in Brand Choice Models
Date Posted:Sat, 18 Jun 2011 00:00:00 -0500
In recent years there has been a growing stream of literature in marketing and economics that models consumers as Bayesian learners. Such learning behavior is often embedded within a discrete choice framework which is then calibrated on scanner panel data. At the same time it is now accepted wisdom that disentangling preference heterogeneity and state dependence is critical in any attempt to understand either construct. We posit that this confounding often carries through to Bayesian learning models. That is, the failure to adequately account for preference heterogeneity may result in over- or underestimation of the learning process, since this heterogeneity is also reflected in the initial conditions. Using a unique dataset that contains stated preferences (survey) and actual purchase data (scanner panel) for the same group of consumers, we attempt to untangle the effects of preference heterogeneity and state dependence, where the latter arises from Bayesian learning. Our results are striking and suggest that measured brand beliefs can predict choices quite well and, moreover, that in the absence of such measured preference information the Bayesian learning behavior for consumer packaged goods is vastly overstated. The inclusion of preference information significantly reduces evidence for aggregate-level learning and substantially changes the nature of individual-level learning. Using individual-level outcomes, we illustrate why the lack of preference information leads to faulty ...
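For readers unfamiliar with the setup, the learning mechanism in this class of models is typically a normal-normal Bayesian update; the notation below is a generic textbook sketch, not necessarily the paper's exact specification. A consumer holds beliefs about brand j's quality, $Q_j \sim N(\mu_{jt}, \sigma^2_{jt})$, and each consumption occasion delivers a noisy signal $s_{jt} = Q_j + \varepsilon_{jt}$ with $\varepsilon_{jt} \sim N(0, \sigma^2_{\varepsilon})$:

```latex
\mu_{j,t+1} = \mu_{jt}
  + \frac{\sigma^2_{jt}}{\sigma^2_{jt} + \sigma^2_{\varepsilon}}
    \left( s_{jt} - \mu_{jt} \right),
\qquad
\sigma^2_{j,t+1} =
  \left( \frac{1}{\sigma^2_{jt}} + \frac{1}{\sigma^2_{\varepsilon}} \right)^{-1}.
```

The confound the abstract describes enters through the initial conditions $\mu_{j0}$: persistent taste heterogeneity across consumers and a fast-moving early belief path generate similar choice patterns, so omitting heterogeneity lets the estimated learning parameters absorb it.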
Update: Estimating Discrete Games
Date Posted:Thu, 09 Jun 2011 08:11:11 -0500
This paper provides a critical review of the methods for estimating static discrete games and their relevance for quantitative marketing. We discuss the various modeling approaches, alternative assumptions, and relevant trade-offs involved in taking these empirical methods to data. We consider both games of complete and incomplete information, examine the primary methods for dealing with the coherency problems introduced by multiplicity of equilibria, and provide concrete examples from the literature.
New PDF Uploaded
REVISION: Estimating Discrete Games
Date Posted:Mon, 21 Mar 2011 14:55:27 -0500
This paper provides a critical review of the methods for estimating static discrete games and their relevance for quantitative marketing. We discuss the various modeling approaches, alternative assumptions, and relevant trade-offs involved in taking these empirical methods to data. We consider both games of complete and incomplete information, examine the primary methods for dealing with the coherency problems introduced by multiplicity of equilibria, and provide concrete examples from the ...
Update: Enriching Interactions: Incorporating Outcome Data into Static Discrete Games
Date Posted:Mon, 21 Mar 2011 09:47:23 -0500
When modeling the behavior of firms, marketers and micro-economists routinely confront complex problems of strategic interaction. In competitive environments, firms make strategic decisions that not only depend on the features of the market, but also on their beliefs regarding the reactions of their rivals. Structurally modeling these interactions requires formulating and estimating a discrete game, a task which, until recently, was considered intractable. Fortunately, two-step estimation methods ...
New PDF Uploaded
Update: Repositioning Dynamics and Pricing Strategy
Date Posted:Mon, 21 Mar 2011 09:39:13 -0500
We measure the revenue and cost implications to supermarkets of changing their price positioning strategy in oligopolistic downstream retail markets. Our estimates have implications for long-run market structure in the supermarket industry, and for measuring the sources of price rigidity in the economy. We exploit a unique dataset containing the price-format decisions of all supermarkets in the U.S. The data contain the format-change decisions of supermarkets in response to a large shock to their ...
New PDF Uploaded
Estimating Discrete Games
Date Posted:Mon, 21 Mar 2011 00:00:00 -0500
This paper provides a critical review of the methods for estimating static discrete games and their relevance for quantitative marketing. We discuss the various modeling approaches, alternative assumptions, and relevant trade-offs involved in taking these empirical methods to data. We consider both games of complete and incomplete information, examine the primary methods for dealing with the coherency problems introduced by multiplicity of equilibria, and provide concrete examples from the literature. We illustrate the mechanics of estimation using a real world example and provide the computer code and dataset with which to replicate our results.
REVISION: Enriching Interactions: Incorporating Outcome Data into Static Discrete Games
Date Posted:Thu, 10 Mar 2011 16:15:09 -0600
When modeling the behavior of firms, marketers and micro-economists routinely confront complex problems of strategic interaction. In competitive environments, firms make strategic decisions that not only depend on the features of the market, but also on their beliefs regarding the reactions of their rivals. Structurally modeling these interactions requires formulating and estimating a discrete game, a task which, until recently, was considered intractable. Fortunately, two-step estimation ...
Enriching Interactions: Incorporating Outcome Data into Static Discrete Games
Date Posted:Thu, 10 Mar 2011 05:42:36 -0600
When modeling the behavior of firms, marketers and micro-economists routinely confront complex problems of strategic interaction. In competitive environments, firms make strategic decisions that not only depend on the features of the market, but also on their beliefs regarding the reactions of their rivals. Structurally modeling these interactions requires formulating and estimating a discrete game, a task which, until recently, was considered intractable. Fortunately, two-step estimation methods have cracked the problem, fueling a growing literature in both marketing and economics that tackles a host of issues from the optimal design of ATM networks to the choice of pricing strategy. However, most existing methods have focused on only the discrete choice of actions, ignoring a wealth of information contained in post-choice outcome data and severely limiting the scope for performing informative counterfactuals or identifying the deep structural parameters that drive strategic decisions. The goal of this paper is to provide a method for incorporating post-choice outcome data into static discrete games of incomplete information. In particular, our estimation approach adds a selection correction to the two-step games approach, allowing the researcher to use revenue data, for example, to recover the costs associated with alternative actions. Alternatively, a researcher might use R&D expenses to back out the returns to innovation.
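The two-step logic referenced here can be sketched as follows; this is a generic formulation for static games of incomplete information with logit-type private shocks, and the paper's exact estimator and selection correction may differ. Step one recovers rivals' conditional choice probabilities directly from observed play; step two plugs them into firm i's expected payoff and estimates the structural parameters from a pseudo-likelihood:

```latex
% Step 1: rivals' conditional choice probabilities, estimated from the data
\hat{p}_{-i}(a_{-i} \mid x)
% Step 2: expected payoff under those beliefs, then logit choice probabilities
\bar{\pi}_i(a_i, x; \theta)
  = \sum_{a_{-i}} \hat{p}_{-i}(a_{-i} \mid x)\, \pi_i(a_i, a_{-i}, x; \theta),
\qquad
P(a_i \mid x; \theta)
  = \frac{\exp\{\bar{\pi}_i(a_i, x; \theta)\}}
         {\sum_{a'} \exp\{\bar{\pi}_i(a', x; \theta)\}}.
```

The selection problem the abstract raises is that post-choice outcomes such as revenue are observed only for the action actually taken, so regressing outcomes on actions without a correction conflates payoffs with the unobservables that drove the choice; the proposed approach adds a selection-correction term to the outcome equation, in the spirit of a control function, before backing out costs.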
REVISION: Repositioning Dynamics and Pricing Strategy
Date Posted:Tue, 15 Feb 2011 15:24:44 -0600
We measure the revenue and cost implications to supermarkets of changing their price positioning strategy in oligopolistic downstream retail markets. Our estimates have implications for long-run market structure in the supermarket industry, and for measuring the sources of price rigidity in the economy. We exploit a unique dataset containing the price-format decisions of all supermarkets in the U.S. The data contain the format-change decisions of supermarkets in response to a large shock to ...
Repositioning Dynamics and Pricing Strategy
Date Posted:Tue, 15 Feb 2011 00:00:00 -0600
We measure the revenue and cost implications to supermarkets of changing their price positioning strategy in oligopolistic downstream retail markets. Our estimates have implications for long-run market structure in the supermarket industry, and for measuring the sources of price rigidity in the economy. We exploit a unique dataset containing the price-format decisions of all supermarkets in the U.S. The data contain the format-change decisions of supermarkets in response to a large shock to their local market positions: the entry of Wal-Mart. We exploit the responses of retailers to Wal-Mart entry to infer the cost of changing pricing formats using a "revealed-preference" argument similar in spirit to Bresnahan and Reiss (1991). The interaction between retailers and Wal-Mart in each market is modeled as a dynamic game. We find evidence that suggests the entry patterns of Wal-Mart had a significant impact on the costs and incidence of switching pricing strategy. Our results add to the marketing literature on the organization of retail markets, and to a new literature that discusses implications of marketing pricing decisions for macroeconomic studies of price rigidity. More generally, our approach, which incorporates long-run dynamic consequences, strategic interaction, and sunk investment costs, outlines how the paradigm of dynamic games may be used to empirically model firms' positioning decisions in marketing.
Update: A Structural Model of Sales-Force Compensation Dynamics: Estimation and Field Implementation
Date Posted:Wed, 21 Oct 2009 13:22:00 -0500
We present an empirical framework to analyze real-world sales-force compensation schemes. The model is flexible enough to handle quotas and bonuses, output-based commission schemes, as well as "ratcheting" of compensation based on past performance, all of which are ubiquitous in actual contracts. The model explicitly incorporates the dynamics induced by these aspects in agent behavior. We apply the model to a rich dataset that comprises the complete details of sales and compensation plans for a ...
New PDF Uploaded
New: A Structural Model of Sales-Force Compensation Dynamics: Estimation and Field Implementation
Date Posted:Tue, 29 Sep 2009 10:16:40 -0500
We present an empirical framework to analyze real-world sales-force compensation schemes. The model is flexible enough to handle quotas and bonuses, output-based commission schemes, as well as "ratcheting" of compensation based on past performance, all of which are ubiquitous in actual contracts. The model explicitly incorporates the dynamics induced by these aspects in agent behavior. We apply the model to a rich dataset that comprises the complete details of sales and compensation plans for ...
New: A Structural Model of Sales-Force Compensation Dynamics: Estimation and Field Implementation
Date Posted:Fri, 18 Sep 2009 04:20:29 -0500
We present an empirical framework to analyze real-world sales-force compensation schemes. The model is flexible enough to handle quotas and bonuses, output-based commission schemes, as well as "ratcheting" of compensation based on past performance, all of which are ubiquitous in actual contracts. The model explicitly incorporates the dynamics induced by these aspects in agent behavior. We apply the model to a rich dataset that comprises the complete details of sales and compensation plans for ...
A Structural Model of Sales-Force Compensation Dynamics: Estimation and Field Implementation
Date Posted:Fri, 18 Sep 2009 00:00:00 -0500
We present an empirical framework to analyze real-world sales-force compensation schemes. The model is flexible enough to handle quotas and bonuses, output-based commission schemes, as well as "ratcheting" of compensation based on past performance, all of which are ubiquitous in actual contracts. The model explicitly incorporates the dynamics induced by these aspects in agent behavior. We apply the model to a rich dataset that comprises the complete details of sales and compensation plans for a set of 87 salespeople for a period of 3 years at a large contact-lens manufacturer in the US. We use the model to evaluate profit-improving, theoretically preferred changes to the extant compensation scheme. These recommendations were then implemented at the focal firm. Agent behavior and output under the new compensation plan are found to change as predicted. The new plan resulted in a 9% improvement in overall revenues, which translates to about $0.98 million in incremental revenues per month, indicating the success of the field implementation. The results bear out the face validity of dynamic agency theory for real-world compensation design. More generally, our results fit into a growing literature that illustrates that dynamic programming-based solutions, when combined with structural empirical specifications of behavior, can help significantly improve marketing decision-making, and firms' profitability.
REVISION: Scheduling Sales Force Training: Theory and Evidence
Date Posted:Wed, 04 Feb 2009 10:12:47 -0600
To have a productive sales force, firms must provide their salespeople with sales training. But from a profit-maximizing perspective, there are also reasons to limit training: training is expensive, it has diminishing returns, and trained salespeople need to be compensated at a higher level since their value in the outside labor market has increased. For these reasons, the following inter-related questions are not straightforward to answer: (1) How much training should be provided and how ...
Contract Duration: Evidence from Franchise Contracts
Date Posted:Mon, 28 Apr 2003 23:26:36 -0500
This study provides evidence on the determinants of contract duration using a large sample of franchise contracts. We find that the term of the contract systematically increases with the franchisee's physical and human capital investments, measures of recontracting costs, and the franchisor's experience in franchising (which we argue is negatively related to uncertainty about optimal contract provisions). These results are consistent with the hypothesis that the optimal contract duration ...
Contract Duration: Evidence from Franchise Contracts
Date Posted:Mon, 10 Mar 2003 10:23:30 -0600
This study provides evidence on the determinants of contract duration using a large sample of franchise contracts. We find that the term of the contract systematically increases with the franchisee's physical and human capital investments, measures of recontracting costs, and the franchisor's experience in franchising (which we argue is negatively related to uncertainty about optimal contract provisions). These results are consistent with the hypothesis that the optimal contract duration involves a tradeoff between protecting the parties against potential hold-up of relationship-specific investment and reducing the flexibility that the parties have to respond to environmental changes.
Scheduling Sales Force Training: Theory and Evidence
Date Posted:Thu, 23 Jan 2003 00:00:00 -0600
To have a productive sales force, firms must provide their salespeople with sales training. But from a profit-maximizing perspective, there are also reasons to limit training: training is expensive, it has diminishing returns, and trained salespeople need to be compensated at a higher level since their value in the outside labor market has increased. For these reasons, the following inter-related questions are not straightforward to answer: (1) How much training should be provided and how should training be scheduled over time? (2) How should compensation vary with training? (3) Should salespeople be asked to pay for some or all of their training? An analytical model is developed and analyzed using optimal control theory to provide answers to these questions. Thereafter, an empirical investigation is undertaken that broadly corroborates the analytical findings.
Salesforce Design with Experience-based Learning
Date Posted:Sun, 09 Sep 2001 12:35:02 -0500
This paper proposes and analyzes an integrated model of salesforce learning, product portfolio pricing and salesforce design. We consider a firm with a pool of sales representatives that is split into separate salesforces, one for each product. The salesforce assigned to each product is faced with an independent stream of sales leads. The salesforce may also handle leads that overflow from other product salesforces. In addition, salespeople "learn by doing" over their tenure on the job. In ...
Salesforce Design with Experience-Based Learning
Date Posted:Sat, 08 Sep 2001 00:00:00 -0500
This paper proposes and analyzes an integrated model of salesforce learning, product portfolio pricing and salesforce design. We consider a firm with a pool of sales representatives that is split into separate salesforces, one for each product. The salesforce assigned to each product is faced with an independent stream of sales leads. The salesforce may also handle leads that overflow from other product salesforces. In addition, salespeople "learn by doing" over their tenure on the job. In particular, the more time they spend selling a particular product, the more productive the sales effort. The objective of the firm is to maximize profits by optimizing the size of all salesforces as well as the prices of all products. The results obtained from this model reveal some important insights into the structure and size of optimal salesforces in environments characterized by learning and product complexity. Numerical experiments with the model indicate that salesforce size increases with salesforce productivity and decreases with salesforce costs (per representative), product production costs and consumer price sensitivity. We also find that learning plays a complex role in determining optimal salesforce staffing. In particular, when calculating the value of an additional salesperson, we identify a tradeoff between the incremental revenue due to increased throughput and the incremental decrease in utilization, learning, and productivity for the entire salesforce. We also examine the ...
Companies have few good options for constructing accurate user profiles.
A panel of experts from academia and industry discusses how marketers can use A.I. responsibly.
Personalized fines would be a win-win for municipalities and their residents.