The use of big data is likely to transform economic measurement in ways that we are only beginning to grasp (Cavallo and Rigobon 2016). Big data encompasses four fundamental shifts from standard datasets: volume, veracity, velocity, and variety. It is commonplace to focus on the first three. Clearly, scanner data offers more observations, fewer opportunities for human input errors to creep in, and the capacity to measure prices and sales almost instantaneously rather than waiting for monthly surveys. These benefits are creating opportunities to improve the timeliness and quality of existing measurement techniques. However, in a new paper, we argue that the most exciting part of the big data revolution is likely to come from the new varieties of data that have become available (Redding and Weinstein 2016).
One of the principal challenges in producing numbers like real GDP or real wages is that while nominal variables are easy to measure, the measurement of real variables requires a theory of economic behaviour. Just as accountants like to joke that “sales are a fact; profits are an idea”, in economics we face a similar conundrum: “consumer expenditures are a fact; real income is an idea”. While few people would disagree about what the nominal sales of any firm are or how much a consumer spends on a product, translating nominal numbers into real output or welfare is challenging (and was a key component of the path-breaking work of the recent Nobel Laureate Angus Deaton, as in Deaton and Muellbauer 1980).
Unfortunately, the problem is not that we don’t know how to convert nominal expenditures into welfare; it is that we know too many ways of doing it. Broadly speaking, the profession has settled on three mutually inconsistent approaches. First, macroeconomists typically assume there are no demand shifts when measuring real income movements. A foundational assumption in these models is that taste parameters never shift, so the utility function is constant. Economists make this assumption in order to derive a ‘money-metric’ utility function, which guarantees that welfare can be measured if one knows only income and prices. Second, applied microeconomists take a very different approach, assuming that there are time-varying demand and supply curves. Although it is not typically acknowledged, the existence of these time-varying demand curves is inconsistent with the macroeconomist’s assumption that taste parameters are fixed: a demand shift reflects the fact that a consumer now likes one product more than another, which in general means that utility is not money-metric. Finally, actual price and real output data is constructed by statistical agencies using formulas that differ from both approaches.
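To see what is at stake, consider a simple homothetic example (a sketch in our own notation, not the paper’s):

```latex
% With time-invariant homothetic preferences, indirect utility is money-metric:
% income w_t and prices p_t suffice to measure welfare.
\[ V_t = \frac{w_t}{P(p_t)} \]
% Once taste shifters \varphi_t enter preferences, the unit expenditure
% function depends on them too, and income and prices alone no longer
% pin down welfare:
\[ V_t = \frac{w_t}{P(p_t, \varphi_t)} \]
```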
The inconsistencies are so deep that the same assumptions that form the foundation of demand-system estimation can be used to prove that standard price indexes are incorrect. Conversely, the assumptions underlying standard price indexes invalidate demand-system estimation: if no demand parameter ever shifts, one can recover the demand elasticity without recourse to estimation. In other words, extant micro and macro welfare estimates are inconsistent with each other as well as with the data.
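To make the last point concrete, suppose demand is CES with a taste shifter for each good (a stylised sketch; the notation is ours):

```latex
% CES expenditure share of good k at time t, with taste shifter \varphi_{kt}:
\[ s_{kt} = \frac{(p_{kt}/\varphi_{kt})^{1-\sigma}}{\sum_{m}(p_{mt}/\varphi_{mt})^{1-\sigma}} \]
% If the \varphi_{k} never shift, then for any pair of goods j and k,
\[ \Delta \ln\!\left(\frac{s_{kt}}{s_{jt}}\right) = (1-\sigma)\,\Delta \ln\!\left(\frac{p_{kt}}{p_{jt}}\right) \]
% so \sigma can be read off directly from observed share and price changes:
% there is no error term and nothing left to estimate.
```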
One can ignore these problems in conventional datasets, because one typically does not observe all the key features of each transaction. Thus, one can assume errors in demand systems are due to unobserved quality changes, measurement error, or something other than a shift in demand. However, it is much harder to ignore these contradictions in barcode data because we observe prices and quantities sold of precisely defined products. While conventional price measurement is based on identifying ‘the price’ for heterogeneous bundles of goods (milk, carbonated beverages, computers) that contain substantial quality variation within product categories, barcode data eliminates the ambiguity in the definition of a product. Put differently, while one might be able to assume that shifts in demand for carbonated beverages are the result of unobserved quality upgrading, it strains credulity to make the same assumption for a 330mL can of Coke Zero.
The precision of the data impels us to be equally precise about our theoretical assumptions. To deal with this problem, we present a new empirical methodology, which we term the ‘unified approach’, nesting all the major price indexes used in welfare and demand-system analysis (Redding and Weinstein 2016). We show that the measures of welfare used by economists and statistical agencies can each be understood as special cases of an internally consistent approach, obtained by ignoring data or by dropping theoretical conditions that must hold in any coherent system. As shown in Figure 1, all of the major approaches to price (and therefore welfare) measurement are linked together via the unified approach.
Figure 1. The big picture
The first key insight of our unified approach is that any demand-system errors (e.g. taste shocks) must show up in the utility and unit expenditure functions, and therefore in the price index. However, all economically motivated price indexes are derived under the assumption that the demand parameter for each good is time invariant. Researchers make this assumption because it is a sufficient condition for the existence of a constant aggregate utility function, but it flies in the face of an overwhelming body of evidence that demand curves shift.
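In a CES setting, for example, the taste shifters enter the unit expenditure function symmetrically with prices (again a sketch in our own notation):

```latex
% CES utility with good-specific taste shifters \varphi_{kt}:
\[ U_t = \Bigl(\sum_{k} (\varphi_{kt}\, q_{kt})^{\frac{\sigma-1}{\sigma}}\Bigr)^{\frac{\sigma}{\sigma-1}} \]
% The corresponding unit expenditure function (the cost of one unit of utility):
\[ P_t = \Bigl(\sum_{k} (p_{kt}/\varphi_{kt})^{1-\sigma}\Bigr)^{\frac{1}{1-\sigma}} \]
% A rise in \varphi_{kt} acts exactly like a fall in p_{kt}, so a price
% index that ignores taste shocks mismeasures the cost of living.
```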
Our analysis shows that time-invariant preferences for each good are sufficient, but not necessary, for consistent comparisons of welfare over time when there are demand shocks for individual goods. To make such consistent welfare comparisons, one must obtain the same change in the cost of living between a pair of time periods whether one uses today's preferences for both periods, yesterday's preferences for both periods, or each period's own preferences. This necessary condition is (trivially) satisfied when preferences for each good are time invariant. One of our main contributions is to show that it can also be satisfied when shifts in demand cancel on average. This yields our ‘unified price index’, which is valid even when the set of goods is changing over time (due to product innovation, as in Feenstra 1994).
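Continuing the CES sketch above, the index follows from inverting the share equation and imposing that log taste shifts average to zero across the N goods common to both periods (the paper's full derivation also handles entering and exiting goods):

```latex
% Inverting s_{kt} = (p_{kt}/(\varphi_{kt} P_t))^{1-\sigma} gives
\[ \ln \varphi_{kt} = \ln p_{kt} - \ln P_t + \tfrac{1}{\sigma-1}\,\ln s_{kt} \]
% Averaging over the N common goods and imposing
% (1/N)\sum_k \Delta\ln\varphi_{kt} = 0 yields the common-goods index:
\[ \Delta \ln P_t = \frac{1}{N}\sum_{k}\Delta\ln p_{kt} \;+\; \frac{1}{\sigma-1}\,\frac{1}{N}\sum_{k}\Delta\ln s_{kt} \]
% i.e. a geometric mean of price relatives, corrected by a geometric
% mean of expenditure-share relatives.
```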
Our index enables us to identify a novel form of bias that arises from the assumption of time-invariant demand for each good in existing price indexes. ‘Consumer valuation bias’ arises whenever expenditure shares respond to demand shifts. Since conventional indexes assume that expenditure shares are only affected by price changes, they will be biased whenever expenditure share changes are correlated with demand shifts. For example, if higher consumer demand causes prices to rise, a conventional index will overstate cost-of-living changes because it will not adjust for the fact that some of the price increase is offset by the higher utility per unit associated with the demand shift.
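A back-of-the-envelope calculation illustrates the direction of the bias. All numbers and the value of `sigma` below are invented for illustration, and the ‘unified’ index is the common-goods version from the sketch above (CES, taste shocks cancelling on average):

```python
import numpy as np

# Two periods, three goods (invented data). Good 0's price rises 10%,
# yet its expenditure share jumps -- a positive demand (taste) shock.
p0 = np.array([1.0, 1.0, 1.0])        # period-0 prices
p1 = np.array([1.1, 1.0, 1.0])        # period-1 prices
s0 = np.array([1/3, 1/3, 1/3])        # period-0 expenditure shares
s1 = np.array([0.50, 0.25, 0.25])     # period-1 expenditure shares
sigma = 4.0                           # assumed elasticity of substitution

# Sato-Vartia index: log price relatives weighted by normalised
# logarithmic means of the shares; assumes no taste shocks.
lm = (s1 - s0) / (np.log(s1) - np.log(s0))
sato_vartia = np.exp((lm / lm.sum()) @ np.log(p1 / p0))

# Common-goods unified price index: equal-weight geometric means of
# price relatives and share relatives (taste shocks cancel on average).
upi = np.exp(np.log(p1 / p0).mean() + np.log(s1 / s0).mean() / (sigma - 1.0))

print(f"Sato-Vartia      : {sato_vartia:.4f}")   # ~1.040
print(f"Unified (common) : {upi:.4f}")           # ~1.013
```

The conventional index attributes good 0's rising weight entirely to its price change and overstates the increase in the cost of living; the unified index credits part of the share shift to tastes that raise utility per unit.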
Our second main contribution is a novel way of estimating the elasticity of substitution between goods. Extant approaches focus on identification from supply and demand systems. However, we show that one can also identify this parameter by combining information from the demand system and the unit expenditure function. One of the desirable properties of this ‘reverse-weighting’ estimator is that it minimises departures from money-metric utility, given the observed data on prices and expenditure and the assumed constant-elasticity-of-substitution utility function.
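To convey the flavour of this idea, here is a deliberately simplified toy version (our own construction, not the estimator in the paper): for a candidate sigma, back out the taste-shock changes implied by the CES demand system, then pick the sigma that makes those shocks come closest to cancelling under both periods' expenditure weights.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def implied_taste_shifts(sigma, p0, p1, s0, s1):
    """Log taste-shifter changes implied by CES demand, up to a constant.

    Inverting the CES share equation gives
      d ln(phi_k) = d ln(p_k) + d ln(s_k) / (sigma - 1) - d ln(P),
    where the cost-of-living term d ln(P) is common to all goods.
    """
    raw = np.log(p1 / p0) + np.log(s1 / s0) / (sigma - 1.0)
    return raw - raw.mean()   # normalise: shifts cancel on equal-weight average

def money_metric_gap(sigma, p0, p1, s0, s1):
    """Squared departure from money-metric utility: the implied taste shifts
    should also come close to cancelling under each period's share weights."""
    d_phi = implied_taste_shifts(sigma, p0, p1, s0, s1)
    return (s0 @ d_phi) ** 2 + (s1 @ d_phi) ** 2

# Toy two-period data (invented): prices and expenditure shares for 4 goods.
p0 = np.array([1.00, 1.00, 1.00, 1.00])
p1 = np.array([1.10, 1.05, 0.95, 1.00])
s0 = np.array([0.25, 0.25, 0.25, 0.25])
s1 = np.array([0.18, 0.22, 0.32, 0.28])

res = minimize_scalar(money_metric_gap, bounds=(1.01, 20.0), method="bounded",
                      args=(p0, p1, s0, s1))
print(f"sigma_hat = {res.x:.2f}")   # roughly 5 with these invented numbers
```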
Finally, we use barcode data to examine the properties of our unified price index and reverse-weighting estimator. We find that we obtain reasonable elasticity estimates in the sense that they are similar to those identified using other methodologies on the same data. Moreover, the consumer valuation biases in existing indexes appear to be quite substantial, suggesting that allowing for demand shifts is an economically important force in understanding price and real income changes.
Figure 2. Changes in cost of living for various indexes
We can see these differences at the aggregate level in Figure 2, which plots the expenditure-share-weighted average of the changes in the cost of living across product groups for each of the different index numbers over time, using initial-period expenditure share weights. Not surprisingly, the Fisher, Törnqvist, and Sato-Vartia indexes result in almost identical changes in the cost of living that are bounded by the Paasche and Laspeyres indexes. This similarity is driven by the fact that they all assume no demand shifts for any good. The distance between the Sato-Vartia index and the Common-Goods Unified Price Index tells us the importance of the consumer valuation bias, and the distance between the Sato-Vartia and the Feenstra-CPI indicates the value of the adjustment for changes in variety. In other words, big data suggests that standard methods of measuring welfare overstate cost-of-living increases by several percentage points per year because they ignore new goods and demand shifts.
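For concreteness, the conventional indexes in Figure 2 can all be computed from two periods of prices and expenditure shares. The sketch below uses standard textbook formulas on invented numbers (the figure itself is computed from the paper's barcode data):

```python
import numpy as np

def bilateral_indexes(p0, p1, s0, s1):
    """Standard bilateral price indexes from prices and expenditure shares."""
    r = p1 / p0                                   # price relatives
    laspeyres = s0 @ r                            # base-period share weights
    paasche = 1.0 / (s1 @ (1.0 / r))              # current-period share weights
    fisher = np.sqrt(laspeyres * paasche)         # geometric mean of the two
    tornqvist = np.exp(((s0 + s1) / 2.0) @ np.log(r))
    lm = (s1 - s0) / (np.log(s1) - np.log(s0))    # logarithmic means of shares
    sato_vartia = np.exp((lm / lm.sum()) @ np.log(r))
    return laspeyres, paasche, fisher, tornqvist, sato_vartia

# Invented two-period data for four goods.
p0 = np.array([1.00, 1.00, 1.00, 1.00])
p1 = np.array([1.10, 1.05, 0.95, 1.00])
s0 = np.array([0.25, 0.25, 0.25, 0.25])
s1 = np.array([0.18, 0.22, 0.32, 0.28])

names = ("Laspeyres", "Paasche", "Fisher", "Tornqvist", "Sato-Vartia")
for name, value in zip(names, bilateral_indexes(p0, p1, s0, s1)):
    print(f"{name:12s}: {value:.4f}")
# Fisher, Tornqvist and Sato-Vartia cluster together, bounded by
# Paasche below and Laspeyres above -- the pattern in Figure 2.
```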
References
Cavallo, A and R Rigobon (2016) “The Billion Prices Project: Using online data for measurement and research”, VoxEU, 24 April.
Deaton, A and J Muellbauer (1980) Economics and Consumer Behavior, Cambridge: Cambridge University Press.
Feenstra, R C (1994) “New product varieties and the measurement of international prices,” American Economic Review, 84(1): 157-177.
Redding, S J and D E Weinstein (2016) “A unified approach to estimating demand and welfare,” NBER Working Paper, no 22479.