Competition agencies and their classically trained economists continue to see “privacy” as a value distinct from their traditional domain (the pursuit of market power abuses). And because there is data protection regulation, with its associated privacy regulators, the typical posture is that privacy and data protection are the sole domain of specialist data protection agencies.1 But data protection regulators in Europe have consistently failed to enforce existing laws against large tech firms. Consumers are thus caught in a double failure: data protection regulators have failed to enforce, while the economic orthodoxy that prevails within the antitrust agencies views these concerns as (largely) not their problem. There is some recognition among antitrust practitioners that “privacy” could be seen as a dimension of digital product “quality”, but the traditional posture of the classically trained economists who populate the agencies (reinforced by the majority of classically trained economists in academe) is that more information is always good; therefore more data is good, and in the digital space combining and exploiting ever more data to build new products will typically generate social surplus.
While this is the traditional way of thinking about “information” in economic models, it is the wrong posture. Misuse of personal data is becoming ever more harmful. In this piece we make the case that (lack of) privacy is an (often unobservable) price of using digital platforms, and that (lack of) privacy facilitates mainstream antitrust harms such as exploitation and foreclosure by dominant digital platforms. As a starting point we need privacy and data protection experts to be heard by the antitrust experts. And we need privacy manipulation to be directly recognised as leading to antitrust harm. The objective is not so much to establish the “intersection” between privacy and antitrust (a common way of framing the issue, which we do not like) but rather to integrate data protection considerations as part of the antitrust assessment.
In what follows, we briefly survey the landscape, lay out our case, and then propose a few questions that competition enforcers should consider (as a starting point) in future competition cases involving data.
Recent cases are not encouraging
Remarkably, after decades of inaction the U.S. has leapt forward on antitrust enforcement over the last few months, and has managed to incorporate privacy considerations while doing so. Privacy is an important part of the recent State Attorneys General antitrust lawsuits against Facebook and Google. In particular, the December 2020 complaint against Facebook (filed by 48 State AGs, led by New York) says reduced consumer privacy is a form of monopoly rent. The complaint against Google filed days later (by 9 State AGs, led by Texas) raises the issue of “privacy fixing” between competitors.
Europe has had a data protection regulation (the GDPR) in force since 2018, but with the exception of Germany and its Facebook case this has not spurred the European competition agencies (notably the EC) to pursue actual cases treating data misuse as direct manipulation and extraction of market power.2 In the main, the antitrust orthodoxy has continued to rely on its traditional tools. In mergers, conventional “theories of antitrust harm” do not adequately consider the use of personal data. Competition agencies stick to “traditional” foreclosure concerns, such as whether the merged entity will restrict third parties’ “access to data” in future, and possibly seek to remedy these concerns by requiring commitments to preserve data access. At best, they stretch to theories of “loss of potential competition” where the merged entity could have become a significant player on a standalone basis. However, they shun as “too speculative” concerns that data could be used even by a hyper-dominant player to expand, exploit consumers, and build further dominance in other products and markets.
And yet, while traditional antitrust theories of harm are of course part of the story, there can be more to it. For instance, combining personal data can allow consumers to be targeted and discriminated against in novel ways, especially in delicate fields like personal health (or personal finance or employment). Monopoly makes these harms more acute, because consumers have no alternatives. Indeed, contrary to the orthodox view that “more data is good” (more information, greater efficiency), under dominance data combinations will generally not deliver efficiency gains; instead they allow a discriminating monopolist to extract most of the rents from “good” customers and jack up prices to “bad” ones, and these price effects cannot be competed away precisely because competition is lacking.
A now notorious example is the Google/Fitbit case,3 where the EC’s own justification for allowing Google to absorb Fitbit’s health data, and for rejecting warnings that Google would combine it with its unique demographic, interest, and location data in health-tech applications (including insurance), was that the concerns were “too speculative”. The EC focused instead on the potential for Google to use Fitbit’s data to enhance its targeted advertising (where Google is already dominant), and to restrict others’ existing access to the Fitbit data (in addition to restricting interoperability between rival wearables and the Android ecosystem). The EC Chief Economist responded to criticism from other economists (including the authors of this piece),4 consumer groups, data protection experts and civil society organisations by arguing that he “had not seen any evidence” to substantiate concerns around the use of data in digital health applications, asking why it would be bad for Google to get health data if it can get us more cool products (e.g., diagnostics, treatments, insurance rates), and concluding that if society does not like it, “it should regulate”.5 This seems to us extraordinary. Google has an established track record of adding ever more data to its hoard, enabling an internal free-for-all in order to cascade its dominance into new markets. We think this decision simply perpetuates a pattern of poor enforcement,6 a pattern which is much less forgivable in 2021, knowing what we now know.
Let’s see what happens next. The UK’s CMA, one of the world’s most enlightened and forward-thinking agencies on digital issues, and committed to a “holistic antitrust and privacy approach”,7 recently announced its decision to take Facebook/Giphy to Phase 2. The decision says that it is pursuing “theories of harm” about the “loss of potential competition in display advertising” (Giphy could have become a bigger and better competitor without the deal, as it could itself monetise its GIF library) and “foreclosure” (Facebook may, post-deal, make access to GIFs more difficult for rival platforms like Twitter and TikTok).8 Both theories are from the standard antitrust playbook. The decision also mentions the possibility that the deal could “increase Facebook’s data advantage in display advertising”, but sets this theory aside on the basis that “the additional data to which Facebook may gain access post-Merger [would not appear to] materially increase its existing data advantage”. This determination (albeit at the provisional stage) seems odd. The deal is likely motivated by Facebook’s desire to acquire Giphy’s tracking capabilities (Giphy embeds identifiers into its GIFs that track people across the web),9 and this should be at the core of the antitrust investigation, as it is equivalent to a direct increase in the price users pay not only for using Giphy, but also for using Facebook.
Meanwhile, the US FTC is investigating Facebook’s acquisition of Kustomer, while in Europe the Austrian authority (which has jurisdiction over the case) recently asked the EC to take over the investigation. The concern here is that through its “chat bot”, Kustomer may have gathered a large amount of data, including special category personal data. Questions have been raised about the use Facebook will put the data to, and it remains to be seen whether the agencies will pick this up.10
In the rare case where an enforcer has been brave enough to actively consider “excessive data collection” as an antitrust concern, the courts have been confused. The German Cartel Office has pursued Facebook’s tracking and data collection, but the Düsseldorf Higher Regional Court (the last in a round of appeals) has been unable to decide whether “the office appl(ied) the GDPR here? Or did it only use the value judgments of the GDPR to justify or illustrate a violation of antitrust law? Is this a problem of economic power (i.e. antitrust law) that is mirrored in too intensive data processing? Or is it about the overly intensive data processing itself (i.e. privacy law)?”.11 The case was referred to the European Court of Justice, which will take years to clarify the situation.
The competition consequences of (lack of) data protection need to be incorporated into contemporary antitrust analysis
Understanding the data protection dimension of a deal or a firm’s conduct is key to the formulation of the antitrust theory of harm. For example, consider Google’s “Privacy Sandbox” (a curious choice of terminology). It preserves Google’s internal data free-for-all, but limits how third parties can track people online for targeted advertising.12 Complaints about it have been filed with the UK CMA. How do we think about it? Is Google’s characterisation of the Privacy Sandbox as a pro-privacy initiative just “privacy theatre”, intended to preserve its first-party tracking while starving third-party tracking?13 Is it an antitrust violation in the form of self-preferencing?14 Do we want to support the complainants by agreeing to the “self-preferencing” antitrust characterisation, but preserve the external data free-for-all by doing so? No. Rather, we understand that all parties involved are in a toxic ecosystem of tracking people and selling data about them, imposing “prices” in the form of data misuse. We also observe that this unobserved price has remained high because consumers have few ways to say no.15
Is impending regulation of digital giants’ conduct likely to address these harms? The signs are uncertain. The current draft of the EC’s Digital Markets Act (“DMA”)16 is essentially a set of B-to-B rules drawn from past and current antitrust investigations, where the focus is on fairness and barriers to entry, but only between dominant platforms and their dependent businesses.17 There is not much focus on consumers. The traditional antitrust posture – again – would say “but that’s what we do: we look after competition, and both competitors and consumers benefit as a result”. Yes. However, there is now also an important trade – implicit or explicit – between dominant platforms and the consumers they extract data from. The asymmetry in that trade often represents a consumer harm. In addition, while several rules in the proposed DMA feature “data interoperability” as an antitrust remedy, this too involves questions of data protection that need to be addressed as part of the regulation.18
Why is the Antitrust Orthodoxy not engaging with these concerns?
Why are most practicing enforcers – and especially the economists – so reluctant to take on the issues that so animate privacy and data protection experts? We think there are several reasons.
First, traditionally-trained economists who hold sway at the agencies have canonical tools that rely on a set of established models based on prices, quantities, and other measurable factors. Data and/or data protection don’t enter into these, so economists just don’t see the direct harm to consumers from misuse of data as being “in their remit”.
Second, privacy attributes are largely unobservable, i.e. consumers are unaware of what is done with their personal data and cannot make informed choices. While there are new “behavioural” models that speak directly to harms from such “unobserved prices”,19 these come from a growing but still niche sub-field in economics, are therefore not in the canon, and thus are also (usually) left out.
Third, the economists in the main “stay in their lane” and do not push the agency hierarchy to be bold and visionary and make decisions even without the support of existing case law. Enforcers acknowledge the insights of major reports in this space (the Furman report in the UK, Crémer et al. for the EC, the Stigler report in the US), but they have been very slow to challenge, in actual cases, the manifest harm to consumers and competition from the growing exploitation of consumer data.
Our main message is that we need to think not just about the “intersection” of antitrust and data protection (a formulation that suggests distinct paths, briefly crossing at one point and then diverging again); but about the integration of data protection concerns into antitrust – both in the design of new regulations and in the implementation of enforcers’ existing tools.
A framework to incorporate Privacy and Data Protection considerations into Antitrust
A necessary step towards the goal of “integration” involves formulating and framing privacy and data protection into more “traditional” economic theories of harm. There are two big harms that competition economists are not properly capturing when considering conduct and transactions involving data in digital markets:
(A) (Lack of) privacy is a price – and it is time to treat it that way
Digital market power often results in greater collection, combination, use and sale of personal data, rather than in the conventional price increases typical of the exercise of market power in a non-digital market. When they choose to think about it at all, economists tend to treat privacy and/or data protection as a “quality characteristic”. But that framing is not the most helpful, because economists see the link between market power and quality provision as ambiguous (it can go either way depending on the shape of demand). We need to think instead of privacy features, and of the loss or lack of data protection that comes with their relaxation, as a price. When they become aware of how their data is collected, combined, and used, consumers express dissatisfaction, and do so heterogeneously, much as they have heterogeneous willingness to pay for products.20 Framing privacy attributes as prices helps clarify that deals or conduct that allow the greater collection, combination, and use of data are deals or conduct that simply raise consumer prices for those services.
Furthermore, because firms often “obfuscate by design”, consumers do not know how their data is collected, combined, used, and/or sold: (lack of) privacy is not just a price, it is an unobservable price. This framing is also useful because it allows enforcers to draw upon the substantial literature on the welfare consequences of unobserved prices when analysing privacy/data protection “price increases” and harms.21 As just one example, because data protection characteristics are unobserved, consumers cannot act on them, and competition will not work to “lower prices” (here, provide more data protection). Traditional remedies may therefore also not work, and we may need others better suited to the specifics of unobserved privacy attributes – for instance, purpose limitation conditions.
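The “unobserved price” logic can be made concrete with a minimal stylized model. This is purely our own illustrative sketch, not drawn from the cited literature, and all symbols are our assumptions:

```latex
% Stylized sketch: data extraction as an unobserved price (illustration only).
%   u          : consumer utility from a zero-money-price digital service
%   v          : gross value of the service to the consumer
%   d          : intensity of data collection, combination, and use
%   \delta > 0 : consumer disutility per unit of data extraction
\[
  u \;=\; v \;-\; \delta\, d
\]
% Observed case: demand D(d) is decreasing in d, so rivals can undercut on data
% extraction and competition disciplines d.
% Unobserved case: consumers act on a belief \hat{d}, so demand D(\hat{d}) does not
% respond to the firm's actual choice of d. With per-user revenue r(d) increasing
% in d, the firm solves
\[
  \max_{d}\; D(\hat{d})\,r(d)
  \quad\Longrightarrow\quad
  d^{*} = d_{\max},
\]
% i.e. the unobserved "price" is set at its maximum and cannot be competed away.
```

Under these (stylized) assumptions, competition cannot discipline the data “price” precisely because consumers cannot respond to it, which is why remedies aimed at the conduct itself, such as purpose limitation, may be needed.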
(B) Lack of privacy and data protection facilitate exploitation and foreclosure
Data is a special input which can threaten competition and consumer welfare for at least three reasons.
First, it can create a positive feedback loop: firms with larger and more comprehensive data sets can offer better products thanks to data-enabled learning, attracting more customers and accumulating yet more data. This can tip markets in some circumstances, enabling dominance and, with it, further increases both in privacy/data protection “prices” to consumers and in conventional prices to conventional customers (e.g. online advertisers). Second, because data can allow the personalization of product offers and prices, it also creates discriminating power: the firm can charge relatively high prices to “bad” customers and low prices to “good” ones. Third, consumer data is a “general-purpose” input: it can be applied widely across product markets, inducing complementarities that can be exploited to extend market power into adjacent markets (e.g. through practices such as tying, bundling, and below-cost pricing). Dominant digital platforms have notoriously cascaded their monopoly from one market to others, by “offensively leveraging” personal data that they collected for one purpose but applied to many others.22 Although this is at odds with the “purpose limitation principle” in data protection law, big tech firms’ internal data “free-for-alls” have not been stopped by either data protection or competition agencies.
Data monetization often underlies both discriminatory exploitation and foreclosure/envelopment strategies by dominant digital platforms. For example, motivated by the Google/Fitbit merger, Chen et al. (2020) analyse mergers between firms in complementary markets for data collection and data applications, and in particular the ability to combine data across markets to enable the personalization of offers.23 They show formally how this personalization enables exploitation of consumers, because only the merged entity can combine data in this way. It also enables foreclosure, because “the merged entity is secured a large base of non-contestable consumers under personalization and will compete aggressively for contestable consumers.” If there is power in the market for data collection, the merged entity is thus able to leverage this power into an adjacent application, building dominance there as well.
This recent paper is one of very few to focus explicitly on the role of data in powering mechanisms for foreclosure and exploitation. But it also points to something economists should understand well: because the exploitation of data in multiple markets involves complementarities, it brings directly into play the existing literature (from Choi and Stefanadis (2001) and Carlton and Waldman (2002)) on tying, bundling, and dynamic foreclosure effects in vertical and conglomerate settings.24 As a straightforward application, Condorelli and Padilla (2020) have adapted Carlton and Waldman to the case of supply-side (data) complementarity to show that a firm dominant in its market can engage in predatory pricing and “privacy-policy tying” to deter entry and lower consumer surplus.25
Papers such as these provide formal support for what is obvious to anyone who has watched dominant digital platforms exploit consumers and envelop market after market over the last decade. Still, formal analyses that focus on mechanisms specific to data and (lack of) privacy will be useful in persuading the antitrust orthodoxy to adopt this approach.
Issues to explore in digital competition cases involving data
As of now, competition agencies should consider adding at least the following questions to their analysis of future cases:
On (Lack of) Privacy as a Price:
1. As a preliminary matter, for each “processing purpose”26 for which the firm uses personal data, how were the data being processed obtained, and what lawful basis is claimed? Even if this amounts to no more than establishing whether the firm abides by the GDPR, it is a threshold question for the competition assessment. And if consent is the lawful basis, is the consent request transparent and fair?
2. Furthermore, if privacy and data protection are a “price” (and we think they are), then the data firms collect, and how those data are combined, used, and sold, can also be thought of as “prices”. Could competition agencies actively learn how consumers value these attributes of firms’ data activities? Where relevant (likely often), could they also examine the implications for market outcomes and consumer welfare of consumers’ inability to observe the data firms collect, combine, use, and sell?
3. Could the authorities also examine whether a deal or a form of conduct can create greater scope for discrimination and/or extraction of surplus (for example by creating incentives to exploit and leverage data in new applications)? Such exploitation is more likely when a firm has significant market power in one market or type of data, such that other firms are or will be unable to match its data combination capabilities and provide competition in these new data uses.
On (Lack of) Privacy as Facilitating Exploitation and Foreclosure:
4. Is it possible to itemise each processing purpose that the firm uses personal data for, and use this itemisation to investigate the extent to which a deal or conduct can leverage personal data into new (and possibly tied) markets or services, or create conflicts of interest?
5. Can the economists investigating data-related conduct or deals reframe the classic categories of antitrust concern (tying, leveraging, foreclosure, self-preferencing, loss of potential competition) around personal data as the relevant asset?
In all cases:
6. Could we pre-empt discrimination, leveraging and foreclosure concerns by making certain data available to rivals on a non-discriminatory basis? And if this is impossible under the data protection rules, can we credibly prevent an internal data “free-for-all” through siloing? Should we go further and impose purpose limitation rules, for instance as a condition for merger approval?
The antitrust orthodoxy needs to be open to dealing with the harms caused by the exploitation of consumer data. Data protection concerns need to be integrated holistically into the antitrust assessment. Privacy experts need a place at the antitrust table. Pursuing these few questions to start with, in cases involving data use, should be a necessary analytical step on a par with formulating and testing more canonical theories of antitrust harm.
Authors’ note: Caffarra has been involved in advisory work for multiple parties adverse to Google, and has consulted for Apple, Amazon, Microsoft, Uber and multiple other digital businesses. Crawford has done some work for Apple.
ACCC (2019), Digital Platforms Inquiry Final Report.
Bourreau, M et al. (2020) “Google/Fitbit will monetize data and harm consumers”, VoxEU.org, 30 September.
Caffarra, C and F Scott Morton (2021), “The European Commission Digital Markets Act: A translation”, VOX CEPR, 5 January.
Caffarra, C and T Valletti (2020), “Google/Fitbit review: Privacy IS a competition issue”, VoxEU.org, 4 March.
CMA (2020), A New Pro-Competition Regime for Digital Markets, Advice of the Digital Markets Taskforce, December.
CMA (2020), Online Advertising and Digital Markets Study, July.
European Commission (2020), “Proposal for a Regulation of the European Parliament and of the Council on contestable and fair markets in the digital sector (Digital Markets Act)”, Brussels, 15 December.
Furman, J et al. (2019), Unlocking Digital Competition, Report of the Digital Competition Expert Panel.
Geradin, D and D Katsifis (2019), "An EU competition law analysis of online display advertising in the programmatic age", European Competition Journal 15(1).
Scott Morton, F and D Dinelli (2020), Roadmap for a Digital Advertising Monopolisation Case Against Google, Omidyar Network Report.
Scott Morton, F and D Dinelli (2020), Roadmap for a Monopolization Case Against Google Regarding the Search Market, June.
Scott Morton, F and D Dinelli (2020), Roadmap for a Monopolization case against Facebook, June.
Srinivasan, D (2019a), “Why Google dominates advertising markets”, December.
Srinivasan, D (2019b), “The Antitrust Case Against Facebook”, February.
State of Texas et al., Complaint Against Google.
State of NY et al, Complaint Against Facebook.
Stigler Committee on Digital Platforms (2019), Final Report, September.
1 We are aware there is a current initiative of the International Competition Network (the global network of antitrust agencies) with a Steering Group Project on the Intersection Between Competition, Consumer Protection and Privacy. This is laudable but there is still a strong view of privacy as part of consumer protection, not antitrust. And it has not yet percolated through into the practice of agency economists in actual cases.
2 Further, Peukert et al (2020), “Regulatory exports and spillovers: How GDPR affects global markets for data,” VoxEU, describe how GDPR has, if anything, led to increased concentration in web tracking technologies.
3 Case M.9660, Google / Fitbit, approved on 17 December 2020. See press release at https://ec.europa.eu/commission/presscorner/detail/en/ip_20_2484
4 Bourreau et al. (2020), "Google/Fitbit will monetise health data and harm consumers", VoxEU.
5 “First, I have not seen any evidence of the vaunted synergies between the type of data controlled by Google and Fitbit data. The magnitude of such synergies would have to be established to proceed with such a theory of harm. Second, why is having more information on individual health status and habits harmful? It can allow for better diagnostics, better treatment and, even, fairer health insurance rates. Do people who exercise want to pay more because claims to healthy living are hard to verify? Third, if society feels that some type of personal (health) information ought not to be used to discriminate in the provision of health-related services, it should regulate. Giving lower driver-insurance rates to young women than to young men is no longer lawful in the US. One could as easily forbid the use of, say, existing conditions when pricing insurance. Finally, should the merger really be blocked, despite the remedies offered, in order to prevent Google from fusing its data with Fitbit’s and use this package in the health sector? If combining data in a manner that leads to more discrimination in the health market is undesirable, then why use merger review to prevent such combinations from Google only? Regulation would be far superior in that it would at least preserve a level playing field” (Pierre Regibeau, "Why I Agree with the Google-Fitbit decision", VoxEU, 13 March).
6 The FTC and others considered this issue in the Google/DoubleClick merger but did not require Google’s promises with respect to data access to be binding. Those promises thus held no weight, and Google’s subsequent foreclosure of access has been the source of the competition issues we are dealing with today (see the Texas AG Complaint). Similarly, see Facebook/WhatsApp and Facebook/Instagram (see the NY AG Complaint).
7 Andrea Coscelli, CMA CEO, 20 April 2021 at the Open Markets conference After Google and Facebook: The Future of Journalism and Democracy, interviewed by Julia Angwin.
8 Facebook Inc./ Giphy Inc. merger inquiry, CMA 1 April 2021, at https://www.gov.uk/cma-cases/facebook-inc-giphy-inc-merger-inquiry
9 Robert Bateman, Does Facebook Track You Using Gifs In Third-Party Apps?, Data Protection News, March 2021, see https://data-protection.news/blog/facebook-giphy-acquisition
11 Rupprecht Podszun, "Facebook: Next Stop Europe", D’Kart Antitrust Blog, 25 March 2021.
13 Gilad Edelman, "Google and the Age of Privacy Theater", WIRED, 18 March 2021.
14 Tim Cowen, “Privacy Fixing” After Texas et al v. Google and CMA v. Google (Privacy Sandbox): Approaches to Antitrust Considerations of Privacy, CPI Competition Policy International 26 January 2021.
15 Johnny Ryan and Cristina Caffarra, "Ending the ‘Data Free-For-All’: Antitrust Vs GDPR enforcement", Euractiv 22 January 2021.
16 Regulation of the European Parliament and of the Council on contestable and fair markets in the digital sector (Digital Markets Act), 15 December 2020, see https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020PC0842&from=en
17 See Cristina Caffarra and Fiona Scott Morton, "The European Commission Digital Markets Act: A translation", VoxEU, 5 January 2021.
18 Bergemann et al. (2020), “The economics of social data,” VoxEU, emphasize that the presence of data externalities means that regulations empowering individuals, even if enacted, are unlikely to be effective: an individual’s behaviour is predictive of others’ behaviour, and individuals will not take that externality into account in their own decision-making. See also Acemoglu et al (2019), “Can we have too much data?”, VoxEU, available at https://voxeu.org/article/can-we-have-too-much-data.
19 See, among others, Paul Heidhues and Botond Koszegi, Chapter 6 - "Behavioral Industrial Organization", Handbook of Behavioral Economics: Applications and Foundations 1, Volume 1, 2018, Pages 517-612.
20 For example, Lorenzo, P., Padilla, J., and Requejo, A (2020), “Consumer Preferences for Personal Data Protection in Social Networks: A choice modelling exercise,” Working Paper, conduct a conjoint analysis to identify consumer willingness to accept for the use of their data in online advertising.
21 See Heidhues and Koszegi (2018), op cit., supra note 19.
22 See, inter alia, Condorelli, D., “Data-driven Platform Envelopment with Privacy-Policy Tying,” OECD Competition Open Day Blog series #5, February 18, 2021.
23 See Chen, Z., Choe, C., Cong, J., Matsushima, N. (2020), “Data driven mergers and personalization,” ISER Discussion Paper 1108, Institute for Social and Economic Research, Osaka University.
24 Choi, J.P. and Stefanadis, C. (2001), “Tying, Investment, and the Dynamic Leverage Theory,” The RAND Journal of Economics, 32(1): 52–71; Carlton, D.W. and Waldman, M. (2002), “The Strategic Use of Tying to Preserve and Create Market Power in Evolving Industries”, The RAND Journal of Economics, 33(2): 194–220.
25 Condorelli, D. and J. Padilla (2020), “Data-driven Envelopment with Privacy-Policy Tying,” Working Paper. For a contrary view, see Tucker, C. (2019), "Digital data, platforms and the usual [antitrust] suspects: Network effects, switching costs, essential facility", Review of Industrial Organization 54(4): 683–694.
26 This is a principle of data protection law; see Article 5(1)(b) of the GDPR. The case law of the European Court of Justice says that the scope of an individual processing purpose is what the person concerned could reasonably foresee their data would be used for at the time it was collected.