We are having a great day at the Chillin’Competition conference. We will be reporting on the substance in due course, but here is a teaser video.
By the way, we will be having post-conference drinks at around 18.30 at the Red Monkey (Rue de l’Aqueduc 109); feel free to join even if you did not register for the conference. The first round is on us ;)
As we start to prepare for our upcoming 2nd Chillin’Competition conference, we wanted to take a minute to publicly express our gratitude to all of you for the interest (selling out in 6 minutes is quite a feat!), to our speakers (particularly to Commissioner Vestager for making the time) and to our sponsors.
We also apologize to all those of you who could not make it off the waitlist; we hope to make up for that in future events.
Special thanks go to our sponsors; their support is what has enabled us to put together a conference with an exceptional line-up of speakers that is free for all attendees [who will surely all appreciate it, if only for the hot food and wine… 😉]
The final programme is available here: chillin-competition-conference-2016-final-programme
Our friend, longtime colleague and founder of this blog, Professor Nicolas Petit, has coined a new word to describe tech giants’ rivalry: “moligopolism”. As he has explained to us, the length of this paper (76 pages) is a very good metric for the opportunity cost of running a daily blog… We’re happy to see that Nicolas is on top of his game. The paper conveys a very original approach to competition in the digital economy and is well worth a read.
Here’s the link to his paper https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2856502. This is the abstract:
“This paper shows that the technology giants that antitrust agencies tend to characterize as entrenched monopolists can also be seen as firms engaged in a process of vibrant oligopolistic competition. Those firms – we refer to them as “moligopolists” – compete against the non-consumption in search of new and low-end market footholds. The failure of the antitrust structure to see that rivalry – its intensity may vary from one company to another – originates both in mainstream economics and applied competition theory. We believe those defects can be cured with a rechanneling of antitrust policy towards certain types of restraints, in certain types of market settings“.
For the past several years, antitrust/competition law practitioners from more than 30 countries have participated in surveys developed by fellow practitioners, who serve as Non-Government Advisers to the International Competition Network (“ICN”). Past surveys were done to inform the ICN Agency Effectiveness Working Group and its Investigative Process Project (in 2014, the survey work was awarded NGA Contribution of the Year honors by the ICN).
The ICN has since issued a Guidance on Investigative Process report. As such, the 2016 practitioner survey will examine to what degree the ICN Guidance is followed in 14 markets: Australia, Brazil, Canada, E.U., France, Germany, India, Japan, Korea, Mexico, South Africa, Taiwan, the United Kingdom and the United States.
If you would like to take the survey, you can do so via this link. Responses should focus on your experiences before a specific competition enforcement agency. If a question doesn’t apply to your jurisdiction or you don’t know the answer, you can skip it.
The identities of survey participants will be kept confidential, and care will be taken to present survey answers in a general manner. In addition, respondents will have an opportunity to review the summary report in draft form before it is made public.
The deadline for response is Wednesday, November 23.
Last week I also did what Pablo calls a “combo”: speaking first (on Tuesday) on State aid and taxation at a Lexxion seminar, and then (on Thursday) at iTechLaw’s European Conference in Madrid (pictured above). I guess it was useful to try to clear my head of what was happening in the real world.
At iTechLaw’s conference I talked about geoblocking of intangible content protected by copyright, focusing in particular on the Pay TV investigation (in which, as you know, I am involved representing the UK’s independent producers, and am therefore perhaps not objective). I didn’t really say anything you haven’t read from me before on this blog, so I won’t sum it up again.
The slides are available here: itechlaw-madrid-november-2016_lamadrid
At Lexxion’s seminar on fiscal State aid I talked about two streams of joined cases in which my firm and I are involved, regarding the Spanish financial goodwill (Santander/Autogrill) and tax lease cases. The first saga has attracted a lot of attention, as commentators (and the Commission) have tried to tie its fate to that of the investigations into tax rulings. I had avoided commenting publicly on the case, but I feel more at liberty to do so now that the ECJ has announced that it will deliver its Grand Chamber Judgment on 21 December; no one can argue now that we are trying to influence the Court via our writings.
Pablo did write a comment on AG Wathelet’s Opinion (which proposed that the ECJ quash the General Court’s Judgment annulling the Commission’s decision), noting that the AG was suggesting nothing short of a revolution (see here). One of the comments to that post suggested that this was not the case, as what would have allegedly created this revolution was the General Court’s Judgment itself. In essence, the crux of the discussion boils down to whether, in order to identify a given measure as “selective”, the Commission needs to identify one or more categories of benefitted undertakings.
AG Wathelet thought that this requirement would open the possibility for tax benefits to artificially escape State aid control (paras 89-90 of the Opinion). The reason why I cannot be convinced is not only that there are a few Judgments that the Opinion directly contradicts (mentioned by Pablo, and distinguished in a pretty striking manner in the Opinion itself) but also, and perhaps more tellingly, that no one has yet given us an example of a single aid measure identified in the past 60 years that would have escaped State aid control under the test verbalized by the General Court in the Santander and Autogrill Judgments. In reality those cases (as well as the tax lease stream of cases) do not bring about anything new; they only verbalize what was already implicit (given its obviousness) in the case law and the decisional practice.
The only reason why it was necessary to verbalize the principle in these two cases had to do with the way in which the Commission investigated the measure. In the goodwill case the Commission thought it would be able to identify de facto selectivity, and when it realized that this was impossible it felt forced to stretch the law by concluding that a measure can be selective even when it is open, both de iure and de facto, to anyone. The only way out following the opening of the case was to endorse the tautology that a measure is selective when it gives an advantage to some (regardless of who those are or how they are selected), thus blurring, once again, the notions of advantage and selectivity.
Another problem with this approach (a perhaps even more serious one, not discussed in the Opinion but importantly underlined by AG Kokott in a parallel case) is that it would turn virtually every fiscal measure in every Member State into a selective measure and would thereby turn the Commission into a tax co-legislator. In the tax lease case, the original sin contaminating the conclusion has to do with the last-minute contrivances to limit the effects of the recovery order to investors (for more of my views on this case see the comments below this post). If the measure was not selective as regards the investors, one cannot claim that they were the beneficiaries (much less the sole ones) simply because they received a small fraction of a fiscal advantage generated by the measure.
My slides for this talk are available here: lamadrid_-goodwill-tax-lease
Btw, last week LexisNexis also published an all-you-need-to-know practice note on State aid and corporate taxation that I co-wrote with my colleague Miguel Angel Bolsa; it’s available here.
[We are happy to publish a guest post from one of the most respected and interesting practitioners in the EU market, Stephen Kinsella (he’s of course best known for having been a speaker at the first Chillin’Competition conference and for being the husband of a great novelist who is currently crowdfunding her new novel). Below he gives his views on a very topical matter on which we have also commented before. As always, we will be happy to foster discussion and are open to publishing other views on the matter. Enjoy!]
Mergers tend to get more attention in press coverage than other antitrust activity in Europe. That is partly because they have compressed timetables and obvious milestones to trigger stories (announcement, filing, enquiries, third-party interventions, etc.), but also because they can be reduced to a binary choice between approval and prohibition, with readily understandable consequences. Not only shareholders but other financial players closely follow each twist and turn, placing bets on rumours of setbacks or “theories of harm”.
All this froth can sometimes mask a duller reality: at the EU level, of the roughly 350 deals notified in Brussels each year, fewer than one a year is ultimately prohibited. Of those that are permitted, only around 5% are subject to any modifications or commitments as the price of approval.
And there is a reason for this. The system is weighted in favour of approval. It carries within it a presumption that deals will be cleared, and speedily, unless good evidence can be brought forward of some creation or strengthening of dominance causing harm to effective competition, to the detriment of consumers denied the benefits of choice.
Granted, merger control, like other aspects of antitrust, is not static. It evolves in response to evidence, to greater learning about how markets behave and to developments in legal and economic thinking. But it does so cautiously, trying to balance the risks of excessive intervention (in the jargon, Type 1 errors) against non-intervention (Type 2 errors). The Type 1 errors could include not only hampering the ability of the merged entity to innovate, but also deterring those who would invest in creating products with the aim of selling them to another who is better able to exploit them.
One area in which we are seeing calls for such an evolution relates to “big data”. Enforcers at EU and national level are asking themselves whether the mantra that “knowledge is power” literally means that the acquisition or accumulation of data, in particular about the behaviour of and relationships between large numbers of individuals, could confer the power to exploit and exclude.
This is not to be confused with concerns over privacy. We have seen numerous statements, including from Commissioner Vestager, that competition law is not to be used to try to cure possible concerns that fall more properly in the realm of consumer (or data) protection. Rather the question is a narrower one: whether a data set might be so special, valuable and non-replicable that its concentration in one undertaking would give it an overwhelming competitive advantage that could be checked only by regulatory intervention.
Such a theory is not controversial in principle if one looks at data as if it were an essential facility. But it runs up against the objection that unlike a piece of infrastructure such as a port or a pipeline, the data (or substitutable data) may well be capable of being compiled by others, or already exist in the form of other accessible compilations. Again to cite Commissioner Vestager in a recent speech, the data might quickly go out of date and need refreshing, and “we also need to ask why competitors couldn’t get hold of equally good information”. And while we sometimes see reference to the question of whether the data is “unique”, the better way of expressing it, as recognised in the Franco-German discussion paper from May this year, is whether it is really “unmatched”. This recognition has led to understandable caution. A recent consultation exercise by the Commission is beginning to explore whether the merger rules need to be adjusted – though even here the focus is more on the jurisdictional thresholds that might be appropriate to ensure deals receive proper scrutiny, rather than suggesting that data poses particularly intractable problems.
Against this backdrop, the public discussion around the Microsoft/LinkedIn transaction is interesting, and rather curious [my firm has advised Microsoft on a range of antitrust issues but I am not acting on the notification of the LinkedIn deal]. I only have access to what is in the public domain, but it appears to be a case where a company acquires a target with which it is not in competition and where there is no suggestion that the target will alter its commercial strategy in terms of its market behaviour or how it makes available its data to third parties. According to press reports the acquirer has already given assurances to that effect and those assurances do not seem to be seriously disputed. It is therefore hard to see that any adverse change in the market will inevitably occur that is “deal specific”.
At the same time, though nobody disputes the value of the data held by LinkedIn, there are many other players in the market and apparently many other ways of obtaining similar or competing data sets. In fact, an increasing number of companies hold substantial amounts of data regarding their customers or others with whom they interact; some use that data purely for internal purposes, while others develop business models around exploiting it. Unless there is convincing evidence that a particular data set is genuinely both non-replicable and uncontestable, it would place an unreasonable burden on competition enforcers if they were always obliged to analyse the impact on some rather nebulous “data market”.
But horizontal concerns are not the end of the story. There have been claims by opponents of the deal that in the future, in some unspecified manner, the two companies could combine their data and expertise. In doing so they would come up with some new product, for which there would be strong consumer demand, and with which third parties would struggle to compete (though I have seen no suggestion that those third parties would be forced to exit the market). Such reasoning evidently includes a number of leaps and suppositions, but reduced to its essentials it seems to try to take merger theory even beyond the notion of an “efficiency offence” (always rejected by the Commission) into the realm of an “innovation offence”.
One can well understand why any regulator would be sceptical about such an approach. Merger control has to be to some extent forward-looking, in that it must try to identify the suppression not only of actual but also of potential competition. But when a regulator is asked to go even further and tackle some speculative impact on a form of competition that, absent the transaction, would not have taken place anyway (when, moreover, on the complainants’ “worst case” scenario the transaction would introduce a new element of competition through a new product), the levels of abstraction introduce too much uncertainty into the merger review process.
Moreover, it is not as if merger control is the last and only chance that the Commission has to protect competition. If, following a concentration, some development occurred that put the new entity into a position of unassailable market power, there remains Article 102. Indeed, we saw relatively recently in the Thomson/Reuters case that the Commission, having cleared a merger, then opened a proceeding to verify the impact on the market of the merged firm’s unilateral behaviour and extracted a package of commitments that the Court subsequently ruled was sufficient to restore competition.
Those opposed to transactions will continue to innovate with theories of harm. Regulators will continue to welcome and even encourage their contribution, while maintaining a healthy scepticism regarding their agenda. But the threshold for intervention remains high.
If Alfonso’s way of dealing with the unexpected (and potentially catastrophic) is to get his thoughts off his chest, mine is to focus, insofar as possible, on life as usual. And nothing says ‘life as usual’ more than another post (as if two were not enough) about Lundbeck.
The Brussels School of Competition organised a morning briefing on the case a couple of weeks ago. David Hull, Luc Gyselen and I had a lively discussion on the case and its implications. Even better, the audience engaged with us and did not hesitate to challenge our views. Their slides, and mine, can be found here.
My presentation put Lundbeck in its broader context. This case, like some other recent ones, suggests that the balance between competition law and intellectual property is changing. In the past few years, the Commission has become less deferential to IP regimes.
How is the balance between competition law and IP changing? Back in 2004, the Commission was of the view that there is no potential competition when market entry requires the infringement of an IP right (see para 29 of the old Guidelines on technology transfer agreements).
Lundbeck shows that this view no longer reflects the approach of the Commission. Its decision in the case is based on the idea that potential competition may exist even when entry requires an infringement of an IP right. What is the logic of the new approach? Well, an IP right does not preclude entry if it is not exercised or if, when exercised, it is declared invalid.
This new logic explains Lundbeck. My guess is that it also explains pending cases like Pay TV. The Pay TV case is unusual. What prevents Sky from offering online content outside the UK is not the agreement with the major studios, but the copyright system. Why would it be a competition law issue, then? Is it not a copyright problem instead? Well, one could argue – à la Lundbeck – that it is a competition law issue if copyright is never exercised against infringing acts.
Testing new approaches is what a competition authority should do. There is nothing wrong with it. If anything, it should be welcome. It would be disastrous if authorities did not seek to respond to emerging challenges. At the same time, new approaches need to be ultimately validated by the Court.
Alas, I am not convinced that the emerging new balance between competition law and IP will win the day. It seems to be at odds with the case law. I believe that paras 473-474 of Lundbeck capture the tension between the new approach and the case law particularly well (Luc Gyselen made a similar point during the event). These paragraphs read as follows:
‘473. The examination of a hypothetical counterfactual scenario — besides being impracticable since it requires the Commission to reconstruct the events that would have occurred in the absence of the agreements at issue, whereas the very purpose of those agreements was to delay the market entry of the generic undertakings […] — is more an examination of the effects of agreements at issue on the market than an objective examination of whether they are sufficiently harmful to competition […].
474. Accordingly, even if some generic undertakings would not have entered the market during the term of the agreements at issue, as a result of infringement actions brought by Lundbeck […], what matters is that those undertakings had real concrete possibilities of entering the market at the time the agreements at issue were concluded with Lundbeck, with the result that they exerted competitive pressure on the latter. […]’
Why am I of the opinion that these paragraphs are at odds with the case law?
The General Court appears to claim that the objective purpose – i.e. the object – of an agreement can be established without considering the counterfactual. I believe the case law is fairly clear in this regard, and it contradicts this view: it is only possible to figure out the objective purpose of an agreement by considering what would have happened in its absence.
- In fact, the Commission has already conceded that a restriction by object cannot be established without looking at the counterfactual. According to the Guidelines on vertical restraints, for instance, an agreement that restricts active and passive selling into a particular territory is not caught by Article 101(1) TFEU when the analysis of the counterfactual suggests that market entry would not have taken place in its absence.
- I also mentioned a venerable precedent, Remia, in my presentation. If you think about it, Remia is a case where the seller of an undertaking receives a payment to stay out of the relevant market. In spite of this fact, the Court held that the non-compete clause may fall outside the scope of Article 101(1) TFEU. Why? The Court understood that, in the absence of the non-compete obligation, the transaction may have never taken place.
- If the counterfactual shows that the agreement does not restrict competition that would otherwise have existed, one can safely presume that it serves a pro-competitive purpose. If the agreement is not capable of restricting competition, how can one claim that it has an anticompetitive object? This insight is apparent from a case like Micro Leader.
I also pointed out that paras 473-474 are in contradiction with other parts of the judgment. The GC examines the counterfactual at length in the judgment. The analysis of the counterfactual is after all indispensable to determine whether there are ‘real, concrete possibilities’ for generic producers to enter the market.
Thus, the GC examines the counterfactual and, at the same time, denies its relevance. Often, a contradiction of this kind suggests that something is amiss with the reasoning. What paras 473 and 474 reveal, first and foremost, is that there are two possible counterfactuals: one in which generic producers lack the ability to enter the market and one in which they are in a position to do so. In this sense, Lundbeck is different from recent cases like Hitachi and Toshiba.
This aspect of the case suggests, in my view, that the agreements are not only different from, but also more complex than, a market-sharing cartel. It also suggests that the rationale for the agreements considered in Lundbeck is not necessarily anticompetitive. The ‘by object’ label, as a result, does not seem appropriate (at least if one accepts the principle that the scope of the ‘by object’ category should be interpreted restrictively).