Chillin'Competition

Relaxing whilst doing Competition Law is not an Oxymoron


Data protection and antitrust law


Regrettably I couldn't attend Concurrences' New Frontiers of Antitrust conference held last Friday in Paris, in spite of Nicolas Charbit's kind invitation. I hear that the conference was once again most interesting, so congrats again to Nicolas and the rest of the team at Concurrences.

Perhaps the most prominent topic in this year's program related to the interface between data protection and antitrust law. I'm sorry to have missed the discussions of this issue, for perhaps they would have enabled me to see where the substantive beef lies that justifies all the recent noise. Whereas I understand the practical reasons why this issue has conveniently become a hot one in certain academic circles, I confess my inability to see the specific features that make this debate so deserving of special attention.

The way I see it, personal data are increasingly a necessary input to provide certain online services, notably in two-sided markets. So far so good. But this means that personal data are an input, like any other one in any other industry, with the only additional element that the compilation and use of such input is subject to an ad hoc legal regime (data protection rules).

In my view, competition rules apply to the acquisition and use of personal data exactly in the same way that they apply to any other input, and then there's a specific layer of protection. I therefore understand that data protection experts have an interest in finding out about the basics of antitrust law to see how it may affect their discipline, but I fail to see the reasons why competition law experts and academics should devote their time to an issue which, in my personal view, raises no particularly significant challenges. [The only specificity may be that data protection practices may constitute a relevant non-price parameter of competition, for companies may compete on how they protect consumer data.] I would argue that this is a serious matter, but one for consumer protection laws to deal with, and in which competition policy may at most play a marginal role (I understand this was also the view expressed by Commissioner Almunia in a recent speech).

To compensate for my absence at Concurrences' conference, on Saturday morning I read some interesting "preliminary thoughts" published last week by Damien Geradin and Monika Kuschewsky: Competition Law and Personal Data: Preliminary Thoughts on a Complex Issue. The piece provides a contrarian view to the one I just expressed. Since I might very well be wrong (that's at least my girlfriend's default assumption in practically all situations…) I would suggest that you take the time to read it in order to make up your own mind. It won't take you long, but since behavioral economics (and the click-through rates to the links we show) tells us that many of you are of the lazy type, in the interest of a balanced debate here's a brief account of its content; my comments appear in brackets:



Written by Alfonso Lamadrid

25 February 2013 at 1:47 pm

My first piece as joint general editor of the Journal of European Competition Law & Practice



I am delighted to have joined the Journal of European Competition Law & Practice as joint general editor (together with my friend Gianni De Stefano, from Hogan Lovells; and an impressive team of editors). JECLAP has become a reference in a short time, and I am really excited to become involved in this venture (I just regret that I will not overlap with Judge Nihoul, who is stepping down).

I reproduce below my first piece published in my new capacity. The journal version is available here. I look forward to your comments, and to your submissions too (I have published in the journal a couple of times and can tell you first hand that JECLAP has the swiftest and most professional process I have seen around; I can also tell you that I will make sure it stays this way!).

I leave you with the editorial:

Changing Times for JECLAP, Changing Times for Competition Law

In less than 10 years, JECLAP has established itself as one of the (if not the) leading competition law journals in Europe. Thus, I felt honoured (and, why not say it, also somewhat overwhelmed) when I was asked to replace Paul Nihoul as one of the general editors—together with Gianni De Stefano. Needless to say, I gladly accepted. Inevitably, doing so made me think about changes in the enforcement of EU competition law since JECLAP’s creation, and about the role that the journal can play in these changing times.

This editorial is prepared at a time when EU competition law is undergoing a noticeable evolution. When JECLAP was founded in 2010, it looked like we were close to reaching the ‘end of history’ in the field. A landmark moment was the adoption of the Commission Guidance Paper on Article 102 TFEU enforcement—which was in fact the subject of an article by Giorgio Monti in the first issue. It looked like efficiency and consumer welfare were just about to become the keystones around which the interpretation and application of EU competition law would revolve.

Things look very different 7 years later. We have witnessed the emergence of new analytical frameworks and new ideas. Concepts like choice and, more recently, fairness have found their way into academic and policy discussions. Paul Nihoul himself has in fact been one of the most vocal proponents of choice as a guiding principle in EU competition law (see for instance 'Choice vs Efficiency', JECLAP (2012) 3(4): 315–316). What these developments show is that the efficiency-based framework has failed to win the hearts and minds of many lawyers. The consensus around consumer welfare that exists on the other side of the Atlantic has not materialised in Europe, and perhaps never will.

In addition, new developments in the field are pushing the boundaries of EU competition law. Looking back at the past decade, it appears that authorities in Europe have become less reluctant to interfere with the exploitation of intellectual property rights, to mention one example. Pay-for-delay settlements in the pharmaceutical sector, and the use of injunctions in the context of standard-essential patents (both abundantly discussed in the pages of JECLAP), are clear milestones in this sense. Similarly, discussions relating to the use of big data and the impact of algorithms on firms' ability and incentive to engage in collusive and/or discriminatory conduct have moved to centre stage.

What is, and can be, the role of JECLAP in reaction to these developments? Allow me to share a few thoughts with you.

Place the law at the centre of the analysis: If EU competition law is fascinating, this is in part because it sits at the intersection of many disciplines. The downside is that, for that very reason, we run the risk of forgetting that enforcement and policy-making are achieved through the law. It is my hope that JECLAP will contribute to ensuring that the law remains at the centre of discussions. This can be achieved in many ways. One way is to keep up to date with legal developments and to put them in their economic, regulatory and technological context—whether through current intelligence pieces or through the excellent surveys that have greatly contributed to the journal's name.

I also hope JECLAP will engage critically with new approaches to the interpretation and enforcement of EU competition law. As has been recently argued in these same pages, there is no reason to rule out fairness as a guiding principle for enforcement. At the same time, it is necessary to bear in mind that high-level objectives need to be made operational. If a principle is not, or cannot be, broken down into a set of practicable legal tests (that is, if it lacks a concrete content that can be anticipated in advance by stakeholders), it may open the door to arbitrary decision-making—and arbitrary decision-making is inherently unfair.

Economics? More of it! This said…: Economic tools are widely used in EU competition law. In fact, never in the history of the discipline has their use been more frequent and pervasive. On the other hand, it is impossible to ignore that the rise of economics has been received with scepticism, if not overt resistance, by some lawyers. Several factors can explain this reaction. The perceived 'imperialistic' inclinations of economics—that is, the tendency of economists to apply their approaches and techniques to phenomena that are studied by the other social sciences—is one of them. In this sense, it is hoped that JECLAP will continue to bridge the divide by encouraging dialogue between lawyers and economists.

It is also hoped that JECLAP will help understand that economic analysis makes some fundamental contributions to the discipline that are often ignored. Economics in competition law is not just about defining overarching benchmarks (namely efficiency and consumer welfare) and about econometric forecasting. It is also valuable as—if not primarily—a means to define boundaries on administrative action and, by the same token, as a tool that contributes to the clarity and predictability of the law.

A platform to deal with new (and old) ideas: It is exciting to see there is no shortage of new ideas in competition law, and JECLAP's editors would like the journal to contribute to their production and dissemination. At the same time, we would like to encourage discussions that do not miss the forest for the trees and that address transversal issues of the utmost relevance in practice. These are the sort of questions that require the combined skills of practitioners—who have a nose for relevant issues—and academically minded lawyers—who have developed the ability to see the big picture. JECLAP has been, and should continue to be, the preeminent forum for these exchanges.

In this sense, and as I write this editorial, I can think of some questions of fundamental importance that have not yet been clarified. For instance, there is still uncertainty as to what exactly is meant by an 'anticompetitive effect' in the case law. The same is true of other fundamental concepts, including that of the counterfactual—which is central to ongoing cases relating to the exploitation of intellectual property rights (just think of Lundbeck and Servier). Readers are hereby invited to contribute to these debates.

JECLAP is a success story. I have no choice but to work hard, together with the rest of the team, to ensure the success lasts many more years!

Written by Pablo Ibanez Colomo

14 August 2017 at 7:34 pm


The innovation offence (by Stephen Kinsella)



[We are happy to publish a guest post from one of the most respected and interesting practitioners in the EU market, Stephen Kinsella (he's of course best known for having been a speaker at the first Chillin'Competition conference and for being the husband of a great novelist who is currently crowdfunding her new novel). Below he gives his views on a very topical matter on which we have also commented before. As always, we will be happy to foster discussion and are open to publishing other views on the matter. Enjoy!]

Mergers tend to get more attention in press coverage than other antitrust activity in Europe.  That is partly because they have compressed timetables and obvious milestones to trigger stories (announcement, filing, enquiries, third party interventions etc) but also because they can be resolved to a binary choice between approval or block, with readily understandable consequences.  Not only shareholders but other financial players follow closely each twist and turn, placing bets on rumours of setbacks or “theories of harm”.

All this froth can sometimes mask a duller reality: at the EU level, of the roughly 350 deals notified in Brussels each year on average, fewer than one a year is ultimately prohibited. Of those that are permitted, only around 5% are subject to any modifications or commitments as the price of approval.

And there is a reason for this. The system is weighted in favour of approval.  It carries within it a presumption that deals will be cleared, and speedily, unless good evidence can be brought forward of some creation or strengthening of dominance causing harm to effective competition, to the detriment of consumers denied the benefits of choice.

Granted, merger control, like other aspects of antitrust, is not static.  It evolves in response to evidence, to greater learning about how markets behave and to developments in legal and economic thinking.  But it does so cautiously, trying to balance the risks of excessive intervention (in the jargon, Type 1 errors) against non-intervention (Type 2 errors).  The Type 1 errors could include not only hampering the ability of the merged entity to innovate, but also deterring those who would invest in creating products with the aim of selling them to another who is better able to exploit them.

One area in which we are seeing calls for such an evolution relates to “big data”.  Enforcers at EU and national level are asking themselves whether the mantra that “knowledge is power” literally means that the acquisition or accumulation of data, in particular about the behaviour of and relationships between large numbers of individuals, could confer the power to exploit and exclude.

This is not to be confused with concerns over privacy.  We have seen numerous statements, including from Commissioner Vestager, that competition law is not to be used to try to cure possible concerns that fall more properly in the realm of consumer (or data) protection.  Rather the question is a narrower one: whether a data set might be so special, valuable and non-replicable that its concentration in one undertaking would give it an overwhelming competitive advantage that could be checked only by regulatory intervention.

Such a theory is not controversial in principle if one looks at data as if it were an essential facility.  But it runs up against the objection that unlike a piece of infrastructure such as a port or a pipeline, the data (or substitutable data) may well be capable of being compiled by others, or already exist in the form of other accessible compilations.  Again to cite Commissioner Vestager in a recent speech, the data might quickly go out of date and need refreshing, and “we also need to ask why competitors couldn’t get hold of equally good information”.  And while we sometimes see reference to the question of whether the data is “unique”, the better way of expressing it, as recognised in the Franco-German discussion paper from May this year, is whether it is really “unmatched”.  This recognition has led to understandable caution.  A recent consultation exercise by the Commission is beginning to explore whether the merger rules need to be adjusted – though even here the focus is more on the jurisdictional thresholds that might be appropriate to ensure deals receive proper scrutiny, rather than suggesting that data poses particularly intractable problems.

Against this backdrop, the public discussion around the Microsoft – LinkedIn transaction is interesting, and rather curious [my firm has advised Microsoft on a range of antitrust issues but I am not acting on the notification of the LinkedIn deal]. I only have access to what is in the public domain, but it appears to be a case where a company acquires a target with which it is not in competition, and where there is no suggestion that the target will alter its commercial strategy in terms of its market behaviour or how it makes its data available to third parties.  According to press reports the acquirer has already given assurances to that effect, and those assurances do not seem to be seriously disputed.  It is therefore hard to see that any adverse change in the market that is "deal specific" will inevitably occur.

At the same time, though nobody disputes the value of the data held by LinkedIn, there are many other players in the market and apparently many other ways of obtaining similar or competitive data sets.  In fact an increasing number of companies hold substantial amounts of data regarding their customers or others with whom they interact; some use that data purely for internal purposes while others develop business models around exploiting it.  Unless there is convincing evidence that a particular data set is genuinely both non-replicable and uncontestable, it would place an unreasonable burden on competition enforcers if they were always obliged to analyse the impact on some rather nebulous "data market".

But horizontal concerns are not the end of the story.  There have been claims by opponents of the deal that in the future, in some unspecified manner, the two companies could combine their data and expertise. In doing so they would come up with some new product, for which there would be strong consumer demand, and with which third parties would struggle to compete (though I have seen no suggestion that those third parties would be forced to exit the market).  Such reasoning evidently includes a number of leaps and suppositions, but reduced to its essentials it seems to try to take merger theory even beyond the notion of an “efficiency offence” (always rejected by the Commission) into the realm of an “innovation offence”.

One can well understand why any regulator would be sceptical about such an approach.  Merger control has to be to some extent forward-looking, in that it must try to identify the suppression not only of actual but also of potential competition.  But when it is asked to go even further and tackle some speculative impact on a form of competition that, absent the transaction, would not have taken place anyway (when, on the complainants' own "worst case" scenario, the transaction would introduce a new element of competition through a new product), the levels of abstraction introduce too much uncertainty into the merger review process.

Moreover, it is not as if merger control is the last and only chance that the Commission has to protect competition.  If following a concentration some development occurred that put the new entity into a position of unassailable market power, there remains Article 102.  Indeed we saw relatively recently in the Thomson-Reuters case that the Commission, having cleared a merger, then opened a proceeding to verify the impact on the market of the merged firm’s unilateral behaviour and extracted a package of commitments that the Court subsequently ruled was sufficient to restore competition.

Those opposed to transactions will continue to innovate with theories of harm.  Regulators will continue to welcome and even encourage their contribution, while maintaining a healthy scepticism regarding their agenda.  But the threshold for intervention remains high.

Written by Alfonso Lamadrid

14 November 2016 at 12:08 pm


Materials on Competition and Regulation in Digital Markets


The slides of the conference Competition and Regulation in Digital Markets held at the University of Leeds on 9 September are now available here.

You will see some very interesting materials there (not my slides, which are a slightly modified version of my earlier presentations on the same topic: big data) [yawn intermission]. At least some of the jokes in my intervention (pictured below) were new…

Actually, if it weren’t for the minor issue that that the jokes aren’t really funny I would  consider joining Chicago Antitrust Professor Randy Picker in his stand-up comedy events.


By the way, an interesting development regarding the topic of my presentation took place last Friday, when the European Data Protection Supervisor published a new “Opinion on coherent enforcement of fundamental rights in the age of big data“.

The Opinion interestingly acknowledges that "it would be inappropriate for one area of regulation to look to another area to compensate for its own weaknesses. Authorities in each area have limited tools at their disposal, for example competition enforcement can only address abuse of dominance, cartel behaviour and mergers which are not in the consumer interest; abusive conditions of service are not necessarily an antitrust issue".

At the same time, however, it holds the (now more nuanced) view that "data protection authorities can help shed light on how and to what extent the control of personal data is so crucial for companies in markets. The synergies between the fields of law, which have been discussed intensively in the recent years, could propel closer cooperation between authorities, especially where there is neither guidance nor case law. It is not a question of 'instrumentalising' another area of law but rather of synchronising EU policies and enforcement activities, adding value where a supervisory authority lacks expertise or legal competence in analysing". The EDPS therefore offers "the expertise of independent data authorities in advising on how to assess the significance for consumer welfare in such proposed acquisitions".

One may or may not agree with the EDPS's views on this matter (and you know my take), but he and his team deserve credit for having made a popular issue out of this, thereby reviving some of the old (and most important) debates in EU competition law.

Written by Alfonso Lamadrid

27 September 2016 at 6:43 pm


eBook on Competition and Platforms



A colleague just congratulated me on an article included in an ebook on Competition and Platforms that was recently published. Interestingly, I knew neither that the book was out nor that it included my piece!

In any case, I suggest you download it and take a look. It's sponsored by our friends at CCIA and edited by a former colleague, Aitor Ortiz (now at Competition Policy International). It compiles a number of interesting pieces on multi-sided markets.

Mine (“The double duality of two-sided markets”) was initially written as a speech for the Pros and Cons conference in Stockholm and was later published in Competition Law Journal, so it is also multi-published and multi-used. Talk of multi-homing….

The ebook is available here. It features the following pieces:

-Understanding Online Platform Competition: Common Misunderstandings By Daniel O’Connor

-The Move to Smart Mobile and its Implications for Antitrust Analysis of Online Markets By David S. Evans, Hemant K. Bhargava & Deepa Mani

-Failed Analogies: Net Neutrality vs. "Search" and "Platform" Neutrality By Marvin Ammori

-Antitrust Regulation and the Neutrality Trap: A plea for a Smart, Evidence-Based Internet Policy By Andrea Renda

-Multisided Platforms, Dynamic Competition, and the Assessment of Market Power for Internet-Based Firms By David S. Evans

-The Double Duality of Two-Sided Markets by me.

-Should Uber be Allowed to Compete in Europe? And if so, How? By Damien Geradin (Juan M. Delgado & Anna Tzanakis, ed.)

-Online Intermediation Platforms and Free Trade Principles – Some Reflections on the Uber Preliminary Ruling Case By Damien Geradin

-Competition Policy in Consumer Financial Services: The Disparate Regulation of Online Marketplace Lenders and Banks By Thomas P. Brown and Molly E. Swartz

-Legal Boundaries of Competition in the Era of the Internet: Challenges and Judicial Responses By Zhu Li

-Can Big Data Protect a Firm from Competition? By Anja Lambrecht & Catherine E. Tucker

 

Written by Alfonso Lamadrid

8 June 2016 at 12:04 pm


Conferences (including the theme of the 2nd Chillin’Competition conference)



We have already decided on the topic of the next Chillin’Competition conference. The common thread will be “Neutrality Everywhere“. The dates are yet to be determined (not likely to happen until after the summer). If any of you have original ideas (for a panel, for a paper you would like to present, for sponsors or even for a venue), please send them our way!

And speaking of conferences:

On 27 May the Brussels School of Competition will host a morning briefing on the very timely topic of mobile network consolidation. For more, see here.

On 2 June there will be a couple of most interesting events in Brussels. First, Global Competition Review, Baker Botts and Shearman & Sterling will be holding the GCR Live 4th Annual IP and Antitrust event, and have managed to come up with a great program. That same day, a bit later, the Academy of European Law (ERA) will host a seminar under the title What's New in Art 102 TFEU? Latest Issues on Price and Non-price Related Conduct; for more info, see here.

On 8 June the GCLC and UCL have organized a conference under the title Competition Policy at the Intersection of Equity and Efficiency: Honoring the Scholarship of Eleanor Fox. The programme is available here.

On 10 June Pablo will follow in my footsteps 🙂 and will address the Association of European Competition Law Judges, which this time is meeting in Madrid. The conference will address the competition-IP interface.

On 13 June Concurrences will host the New Frontiers of Antitrust Conference in Paris. The conference has been promoted with a teaser-interview with Nicolas Petit, available here.

Also on 13 June there will be a conference on Competition Law and Competitiveness in the EU at the Reform Club in London featuring an impressive speaker line-up too.

On 14 June the College of Europe will hold the annual symposium organized by the ELEA (European Law and Economic Analysis) students. There will be a panel on geo-blocking that will feature big names such as Thomas Kramler and Mike Walker and small names like Pablo 😉

The big global event on 23 June will be the British referendum, I mean, my intervention in a symposium titled Online platforms, Big Data and privacy: What role for competition policy?. It will be hosted by the Centre for Studies on Media Information and Telecommunication (SMIT) and the Brussels Centre for Competition Policy (BCCP) at Vrije Universiteit Brussel (VUB), Brussels. Those of you interested can download the program here.

On 4-8 July I will also be teaching at the College of Europe's Summer Course on Competition Law taking place in Bruges. For more info, click here. I will likewise be lecturing at the College of Europe's Summer Competition Law School for Chinese officials, but I fear you may not be eligible for that one…

And during 2016/2017 (and beyond) Pablo will be… Actually, he can tell you himself.

Written by Alfonso Lamadrid

19 May 2016 at 10:44 am


More on the antitrust-privacy interface


In some previous posts we’ve commented on the interface between the competition rules and data protection/privacy regulation, which is one of the trendiest topics in international antitrust these days.

As you may recall, the European Data Protection Supervisor recently held a high-level workshop (high-level but for my intervention in it, that is) on Privacy, Competition, Consumers and Big Data. On Monday, the EDPS made available on its website a report summarizing what was discussed at the workshop (conducted under Chatham House rules). The EDPS' summary is available here: EDPS Report_Privacy, competition, consumers and big data.

A summary of my intervention at the workshop was published in two recent posts (here and here).

For more, you can re-read Orla Lynskey’s A Brave New World: The Potential Intersection of Competition Law and Data Protection Regulation as well as the interesting comment by Angela Daly on my latest post on the issue.

The German Monopolkommission has also added its voice to the debate by issuing a recent report ("A competitive order for the financial markets") which contains a section on data-related questions regarding the internet economy. The press release (in English here) expresses some concerns but notes that, according to the report, "an extension of the competition policy toolkit does not (yet) seem advisable on the basis of current knowledge and understanding".

Written by Alfonso Lamadrid

16 July 2014 at 9:33 am

Speaking engagements


Minutes after I published the post on endives' right to be forgotten I received a call from the European Data Protection Supervisor's office. At first I admit I thought it was someone returning the joke (my first suspect was that guy from 21stcenturycompetition because he'd read a draft of the endive thing; don't worry, Kevin, I won't disclose you thought it was serious), but it wasn't, and I got invited to speak next Monday at a most interesting (but closed-door) Workshop on privacy, consumers, competition and big data (to be held at the European Parliament and arranged in the wake of the EDPS report that we –actually Orla– discussed here).

I’d solemnly committed myself to have a life and not take on any more non-work (non-billable, that is) stuff in the coming weeks/months, but it was an offer I couldn’t refuse. My topic is Market Power in the Digital Economy.

Three days later, on Wednesday 5 June I’ll be providing an overview of the commitment decisions adopted by the Commission since the enactment of Regulation 1/2003 at the Brussels School of Competition’s annual conference. This event you really should attend (click here for info: Programme_Commitments in EU Competition Policy – 5 June 2014).

[I apologize in advance to all attendees at these two conferences: I've an important General Court deadline on Friday and then a bachelor party weekend, so preparing might be a challenge. Yes, this is the old expectation-lowering trick!]

Then on 8 July I'll be lecturing on EU competition procedure and on Special and Exclusive Rights (Art. 106) at the College of Europe's Competition Summer School for Chinese officials. Talking with Chinese officials about how competition law applies to public measures should be quite an interesting experience. And then on the 11th, the same procedural class in the context of the College's summer course on competition law.

And then, following my first paternity leave in September, I really plan to take on less of these commitments.

Well, on 28 November I'll be participating in the Swedish Competition Authority's annual and always excellent Pros and Cons conference, which in this edition will be devoted to Two-Sided Markets, but I couldn't say no to that either…

Written by Alfonso Lamadrid

28 May 2014 at 5:52 pm

Scale Effects – What We Can Learn From National Football Teams (by Stephen Lewis)

with 2 comments

by Stephen Lewis

What determines the quality of a national football team?  Other things being equal, we would expect countries with a large population to produce stronger teams than those with a smaller population.  They have more people to select from. It is therefore quite intuitive that football team quality must, to at least some extent, be positively impacted by population size.

This intuition seems to be borne out if we consider pairs of countries that have markedly different population sizes but are similar along other relevant dimensions.  For example, take Italy and San Marino. Italy has a population of 60 million, while San Marino has a population of less than 50,000.   The countries are otherwise (broadly) similar with respect to other factors that might determine football team quality, such as length of football tradition, the cultural significance of football, the relative popularity of alternative sports, climate, etc.  Italy last played San Marino in 2017 and won 8-0 (having won all previous encounters on record).  Results like this certainly cast doubt on any claim that there is no link between population and football team quality.  There may even be a “minimum efficient scale” below which a national football team cannot credibly compete with leading football nations (and perhaps San Marino is below that scale).

But the question is how strong the link between population size and football team quality is, and how small any minimum efficient scale might be.  Answer: surprisingly weak and surprisingly small.  This is obvious from a cursory review of the international football landscape.  The two most populous countries on the planet, China and India, have qualified for one World Cup between them (China in 2002).  Meanwhile, Croatia has achieved an all-time FIFA ranking high of 3rd (in 1999) and reached the World Cup final in 2018.  Croatia’s population is 4 million – smaller than the United Arab Emirates (10 million), which recently beat India 6-0.

Even ignoring the high-leverage outliers of India and China and considering clusters of countries in relatively close geographic proximity where football has a similar level of cultural significance, the effect of population on performance seems remarkably weak above a certain size.  Uruguay (population 3.5 million, FIFA ranking 9) is a match for much larger Argentina (population 45 million, FIFA ranking 8), which in turn is a match for much larger Brazil (population 220 million, FIFA ranking 3).  Similarly, Belgium (population 12 million, FIFA ranking 1) is evenly matched with France (population 65 million, FIFA ranking 2).  Indeed, today’s top 10 ranked teams include four countries with populations under 12 million (Belgium, Portugal, Uruguay and Denmark), while Germany (population 84 million) for the time being languishes in position 12.

And even amongst those countries with a very low population there are some standout national football teams, suggesting that if there is a minimum efficient scale, it may be very small indeed.  With a population of around 300,000, Iceland knocked England (population 55 million) out of Euro 2016, and reached an impressive FIFA ranking of 18 in 2018.

Quantitative studies support the view that population has weak explanatory power for football team quality.

A 2010 PwC study regressed total World Cup points against population, average income levels and a count variable for the number of times a country has hosted the competition (taking values 0, 1 or 2).  The sample included only the 52 countries that have played at least 5 World Cup finals matches (thereby excluding China and India).  Even within this football-playing sample, population is insignificant once the other variables are included.

Gelade (2007) finds that the relationship between FIFA ratings and (linear) total population is “vanishingly small”: in a sample of 204 countries, only 1% of the variation in FIFA ratings is explained by total population.  He notes that this counterintuitive finding has also been reported by other studies.
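To see what a “vanishingly small” relationship looks like in practice, here is a minimal sketch using entirely made-up, synthetic data (not the PwC or Gelade datasets): team quality is driven mostly by an unobserved factor (call it footballing tradition), with only a weak contribution from population, and a simple regression of quality on population recovers an R² of roughly the order Gelade reports.

```python
import numpy as np

# Synthetic illustration only: numbers and coefficients are invented.
rng = np.random.default_rng(0)
n = 204  # same sample size as Gelade (2007)

population = rng.lognormal(mean=15, sigma=2, size=n)  # skewed, like real populations
tradition = rng.normal(size=n)                        # unobserved driver of quality

# Quality: weak population effect, strong tradition effect, plus noise
quality = (0.1 * (population - population.mean()) / population.std()
           + 1.0 * tradition
           + rng.normal(scale=0.5, size=n))

# R^2 of a simple linear regression of quality on population
corr = np.corrcoef(population, quality)[0, 1]
r_squared = corr ** 2
print(f"R^2 (quality ~ population): {r_squared:.3f}")  # small, in the low single digits of percent
```

The point of the sketch is simply that a real and positive population effect can coexist with near-zero explanatory power once other drivers dominate.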

The discussion above has focused on the Men’s game but considering the relative performance of teams in Women’s football reinforces the idea that factors other than population size are important for explaining football team quality.  For example, the US is ranked 1st in the Women’s FIFA ranking and 20th in the Men’s, whereas the comparative advantage arising from having a large total population to select from is equivalent for both the Men’s and Women’s teams.

Now imagine a strange parallel universe where the only two countries are Brazil and Australia.  Brazil is 10 times bigger than Australia and consistently wins when they play football.  In this parallel universe, researchers are tempted to conclude that the relationship between population and football team quality is very strong.  Not only are there sound a priori grounds for believing a larger population should translate into better football team quality, but this seems to be borne out by the only two observations available.  But this inference is not valid.  Brazil and Australia differ along various dimensions that are critical determinants of football team quality, such as footballing tradition and competition for athletic talent from other sports (football is the national sport of Brazil, but football in Australia has to compete with other ball sports such as cricket, Aussie rules, rugby league and rugby union).  Of course, this would be obvious in a world with hundreds of observations available; far less so in our parallel universe with two.
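The parallel-universe problem can be put in statistical terms: with only two observations, a line through the data fits perfectly no matter what actually drives quality, so the fit tells you nothing about the true cause. A minimal sketch, with made-up quality scores and approximate populations:

```python
import numpy as np

# Hypothetical numbers: population in millions (approximate) and an
# arbitrary quality score for Brazil and Australia respectively.
population = np.array([214.0, 26.0])
quality = np.array([9.0, 3.0])

# Fit a straight line of quality on population and compute R^2
slope, intercept = np.polyfit(population, quality, 1)
predicted = slope * population + intercept
ss_res = np.sum((quality - predicted) ** 2)
ss_tot = np.sum((quality - quality.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 with n=2: {r_squared:.2f}")  # a line through two points always fits exactly
```

The same perfect fit would emerge if quality were driven entirely by footballing tradition and population were irrelevant, which is exactly why the two-country comparison cannot distinguish between the theories.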

What has this got to do with online search engines?  

I should start by making clear that I make no claim that the apparent weakness of population scale effects in national football has any bearing at all on the strength or otherwise of any scale effects affecting search engine quality.  The lesson from the football analogy is that researchers could be fooled into thinking that they can see a strong scale effect if they compare a small number of subjects that differ in scale and quality and do not take account of other factors that also affect quality.  

My claim is that when it comes to analysing the effect of scale on search quality, competition authorities have not got far beyond the following reasoning:

Query data is used to produce search results (people are used to produce football teams).  More query data is better than less query data (more people to select from is better than fewer people to select from).  Google has many times more queries than Bing (Brazil has many times more people than Australia).  Google has much higher search quality than Bing (Brazil has a much better football team than Australia).  Therefore, query scale is a crucial determinant of search quality (population is a crucial determinant of national football team quality).

Some competition authorities have gone deeper than others, for example, by examining query-level datasets to gain a better understanding of differences in the range and volumes of the distinct queries each search engine sees.  But a query-level comparison of Google and Bing just confirms the obvious – Google has a scale advantage over Bing.  This, entirely unsurprisingly, implies that for any given distinct query, Google is likely to receive higher query volumes than Bing.  It follows that queries that are rare for Bing are not rare for Google, while the converse tends not to be true.  But this just supports the existence of a scale advantage.  It does not shed light on how this translates to quality and the relative importance of scale compared to other factors.  This would be like a researcher going to some lengths to establish that not only does England have a higher population than Iceland, but also that for every left-footed person who can run fast (and who would therefore on paper make a good left wing-back) in Iceland, there are 100 such individuals in England, and that for every tall agile person (who would on paper make a good goalkeeper) in Iceland, there are 100 in England.  This deeper assessment of the nature of the scale advantage should not be confused with an assessment of the explanatory power of scale for performance.

Yet the reasoning in italics above is clearly faulty. 

Companies, much like countries, differ in their histories, cultures and priorities.  Just as national football team quality may be better explained by length of football tradition, cultural factors and presence of competing sports than by population size, the quality of a company’s search engine may be better explained by length of time trying to make incremental improvements to search algorithms, the importance of experimentation and measurable improvement in a company’s culture, and the general strategic centrality of search to the company as a whole, which impacts among other things investment and hiring priorities.

These factors clearly cannot be assumed to be similar across Google and Microsoft.  This means that the extent to which scale advantages drive quality requires some unpicking.  But no competition authority to date has made a serious attempt to do this unpicking. 

So why is Google better than Bing in a given national market for search, say, Belgium?  Of course, data-scale could in principle be a factor that explains the difference in quality, and it could be an important factor.  But there’s another plausible story: it is about how many engineering hours the company has poured into improving its search engine. 

Google entered Belgium in March 2002, launching a localised version of its search engine with French and Dutch language capabilities.  Bing entered Belgium in October 2013, over 11 years later.  If search engine quality in Belgium is a function of how many Wednesday-morning-meetings search engineers have had to discuss improving search quality in Belgium, then Google might be better than Bing simply because its engineers have had about 600 more Wednesday-morning-meetings than Bing.  

So there are competing theories as to why Google is better than Bing in Belgium – is it data or is it the number of Wednesday-morning-meetings?  Both are consistent with a scale gap (under one theory the scale gap drives a quality difference and under the other it is caused by a quality difference).  Analysis of the extent of the scale advantage, even when based on granular query level data, cannot distinguish between these two competing theories.

Indeed, trying to unpick which theory is more plausible (or how much weight to place on each) is an area where competition authorities have yet to really scratch the surface.  They are still trying to make inferences on the importance of population for football team quality by comparing Brazil and Australia.

Written by Alfonso Lamadrid

21 June 2021 at 4:30 pm

Posted in Uncategorized

Beyond a bananas approach to antitrust: Understanding competition in tech (by Renato Nazzini)

Now then. Let me come clean on this. I fear this is the first blog I have ever written. Nothing to do with my age, I assure you. There were already personal computers when I went to Uni (just about). But there had to be a first time – there is one for most things, or at least there ought to be. And I am glad that the occasion came in the shape of an invitation to comment on Nicolas Petit’s forthcoming book on his moligopoly theory. Surely, this is a timely book – on the digital economy no less – and written by an author who can be trusted, a household name, so to speak. Surely, there is loads going on about the digital economy. And the book is original and adopts a new, and robust, approach to empirical evidence, building a theory of the boundaries and nature of competition among big tech based on verifiable market perceptions and assessments rather than abstract models or political positioning. We find a wealth of information in the book about big tech, how the “giants” of the digital economy became what they are now, how they compete against each other and with firms outside the club. We are also presented with operational policy recommendations that are thoughtful and such that they should trigger new research to test their theoretical and empirical bases, their feasibility and their implications. But these are not the main reasons why I enjoyed this book. I will now tell you what they are.

The first is, I guess, that for the first time I have read in Nicolas’ book a theoretically sound explanation of what competition between “ecosystems” means. We have heard the phrase many times – from big tech themselves and from market analysts and pretty much anybody who has a technical or commercial understanding of the digital space – but this has always remained a vague concept. A concept hanging mid-air between the instinctively sound intuition that this is exactly what happens and the equally sound scepticism of the pragmatic lawyer or official, who needs to rely on a verifiable framework to reach conclusions on, say, whether a merger should be cleared or a business model is anti-competitive. Now Nicolas gives us this framework. Competition between ecosystems means that big tech compete in an integrated space that is a combination of a structural monopoly in a core product/service and oligopolistic dynamics in neighbouring or related markets. Hopefully, this framework will give competition authorities the confidence to move away from their static, narrow understanding of market definition to a more holistic and nuanced understanding of the digital space.

The second must be, surely, the reliance on the concept of “competitive pressure” as a guiding principle or, perhaps I should say, a yardstick in the analysis of competition in the digital space. I have always tried to explain market definition to my students as a way to get a first understanding of the short-term (mainly) demand-side competitive pressure on a focal product or service. As such, market definition is part of a continuum that moves on to look at supply-side pressure, both short-term (generally included in market definition, but who cares where you place it, what matters is just that you look at it!), and then moves on to short- to medium-term pressures that include entry, dynamic competition, innovation, and so on. The obsession with the “apples and bananas” approach to market definition may lead to both over-enforcement – seeing monopolies in narrow spaces that make no economic and commercial sense – and under-enforcement. Cases like Facebook/WhatsApp and Facebook/Instagram were not cleared because very perceptive competition authorities saw, researched and established competitive harm but, alas! an antiquated law prevented them from prohibiting the mergers. And so the law must be changed at all costs and quickly to avoid further harm – as the politicians and their appointees to competition authorities claim louder and louder. Those mergers – and others – were cleared because, assuming, without conceding, that there was harm, the competition authorities did not see it. Did they not see it because they had not understood moligopolistic competition?

The third – and there are many more but I stop here (I am told blogs are short and punchy, you see … ) – is the clear articulation of the idea that competition law cannot enforce rivalry. Data sharing, interoperability and break-ups are likely to be sterile remedies, at best, or, more likely, damaging political stitch-ups that will harm consumers, businesses and, ultimately, the economy (these are my words, not Nicolas’!). And also, I always thought that competition law needs to protect rivalry. It is not its job to try and “create” competition. Businesses create competition, not bureaucrats and politicians. Competition authorities are not – or, if they are or think they are … an increasingly real risk – should not be central planners or the long arm of a dirigiste, illiberal state that has fed to the public opinion the fable that the problems of today’s world are the fault of big tech. My hope is that competition authorities will continue to be – as they have been so far, chapeau! – the bulwark of impartiality, rigour and evidence-based analysis in dealing with that reasonably narrow – but by no means unimportant – set of economic problems that are brought about by restrictions or reductions of competition. My hope is that this book will give them a tool, among others, to do so.

Written by Pablo Ibanez Colomo

26 November 2020 at 7:33 pm

Posted in