Working Papers

A Revealed Preference Approach to Identification and Inference in Consumer Models

(Job Market Paper)

This paper provides a new identification result for a large class of consumer problems using a revealed preference approach. I show that the utility maximization hypothesis nonparametrically identifies production functions via restrictions from the first-order conditions. In addition, I derive a nonparametric characterization of the class of models that operationalizes the identification strategy. Finally, I use a novel and easy-to-apply inference method for estimating the production functions. This method can be used to statistically test the model, can accommodate any type of latent variable (e.g., measurement error), and can be combined with standard exclusion restrictions. Using data on shopping expenditures from the Nielsen Homescan Dataset, I show that a doubling of shopping intensity decreases prices paid by about 15%. At the same time, I find that search costs more than double within the support of the data, thereby largely diminishing the net benefits of price search.

Download Paper (January 2023)
Send me an email at charles.gauthier@ulb.be for the latest version

This paper supersedes "Price Search and Consumption Inequality: Robust, Credible, and Valid Inference" (First version: March 2021)

Dynamic and Stochastic Rational Behavior (joint with Victor Aguiar, Nail Kashaev, and Martin Plávala)

We analyze choice behavior using the Dynamic Random Utility Model (DRUM). Under DRUM, each consumer or decision-maker draws a utility function from a stochastic utility process in each period and maximizes it subject to a menu. DRUM allows for unrestricted time correlation and cross-sectional heterogeneity in preferences. We fully characterize DRUM when panel data on choices and menus are available. Our results cover consumer demand with a continuum of choices as well as finite discrete choice setups. DRUM is linked to a finite mixture of deterministic behaviors that can be represented as the Kronecker product of static rationalizable behaviors. We exploit a generalization of the Weyl-Minkowski theorem that uses this link and enables conversion of the characterizations of the static Random Utility Model (RUM) of McFadden-Richter (1990) to its dynamic form. DRUM is more flexible than Afriat's (1967) framework and more informative than RUM. In an application, we find that static utility maximization fails to explain population behavior, but DRUM can explain it.

Download Paper (April 2023)

Robust Inference on Discount Factors

The exponential discounting model is a predominant tool for analyzing dynamic choice in applied work. Its attractiveness rests in that time preferences are summarized by a single parameter: the discount factor. This allows one to tractably analyze a decision maker's intertemporal choices, which is crucial in a vast range of applications. Accordingly, many studies have tried to recover this key time parameter. However, a common feature of this literature is the parametric specification of the consumer's preferences. This constitutes a potentially important limitation, as erroneously specifying preferences may lead to spurious estimates of the discount factor. This paper therefore provides set estimates of individual-specific discount factors by using the concavity of the utility function without relying on parametric assumptions. Furthermore, I develop a novel methodology that allows me to evaluate the sensitivity of discount factors with respect to measurement error in variables. Contrary to the experimental literature, my methodology is applicable to choices over multidimensional goods. Given observations on prices and demands from a checkout scanner panel data set, I find that accounting for unobserved heterogeneity is important, as observable characteristics fail to capture differences in discounting.

Download Paper (September 2021)

An Inquiry into Dynamic Consistency

This paper summarizes the empirical content of a wide range of dynamically consistent models and provides novel revealed preference conditions that are necessary and sufficient for time consistency. Using standard machine learning tools, I show how to find the parameters of a model that best fit the data. Moreover, I propose a simple likelihood-ratio test to compare the performance of nested models. The empirical application tests the additional restrictions of the quasilinear model, the exponential discounting model, and the quasi-hyperbolic discounting model against the model of static utility maximization.

Coming soon (2023)