Protecting the competitive process, not a competitive structure (by Fred Marty)
The pandemic that hit our societies in early 2020 was, for a while, seen as potentially putting an end to the Tech Lash. Big Tech firms had the opportunity to demonstrate the gains they bring to consumers and citizens, particularly through their investments in research and development. Indeed, these investments have contributed to the relative resilience of our economies and have largely eased the constraints associated with successive lockdowns. However, Big Tech has not moved past the significant public outcry that has developed in recent years and that has found, particularly in the United States, real support in the academic and political spheres. Although for a decade it was the European Commission that stood at the center of the discussion regarding the application of competition rules to Big Tech, since last year the debate has been particularly intense in the United States and has not been extinguished by the crisis.
After all, the situation is quite similar to the one that existed 130 years ago when the Sherman Act was enacted: something had to be done about the trusts, in the same way that something has to be done about Big Tech today. William Letwin perfectly described this mood in a 1956 paper published in The University of Chicago Law Review[1]:
“No one denies that Congress passed the Sherman Act in response to real public feeling against the trusts, but at this distance it is difficult to be sure how hostile the public was and why […] In fact, though the public sentiment may not have been so intense as some believed, yet it was more deeply rooted than many have noticed, and sufficient in any event to persuade Congress that something had to be done; but since the public, despite its hostility, did not and could not suggest any specific solution for the problem, Congress was left very much to its own devices in deciding what was to be done”.
The Sherman Act was probably not a consumer welfare prescription[2] in the intentions of its drafters; otherwise, they would have defended the trusts themselves, which, in terms of allocative efficiency, were at the very least difficult to blame[3]. A political agenda is rarely based on economic efficiency concerns.
Big Tech is now part of such an agenda. In February 2020, the FTC initiated a backward-looking investigation into the acquisitions made by the Big Tech companies (in this case, five of them: Google, Apple, Facebook, Amazon and Microsoft[4]), thus contributing to the debate on Big Tech acquisitions, whether killer acquisitions or, at the very least, consolidating ones. The month of October began with the publication of the Judiciary Committee’s report on the investigation of competition in digital markets[5] and ended with the complaint filed by the Antitrust Division of the DoJ against Google[6].
These initiatives demonstrate the need to question the justification of such actions and to assess their potential effects. Nicolas Petit invites us to do so in his book, which is anything but “a plaidoyer for big tech” or an invitation to a conservative conception of the enforcement of competition rules. On the contrary, it sets out to investigate competition in the digital age in order to draw up rules for sound enforcement. Whether the purpose of competition rules is defined by the search for allocative efficiency or by the preservation of the competitive process, they rely on a case-by-case implementation grounded in the specific circumstances of each case. It is not a question of applying any given theory but, on the contrary, of appraising the specific circumstances of the case[7]. Nicolas Petit proposes a pragmatic approach, one that presupposes a thorough analysis of the competition between the Big Tech companies. This approach is all the more essential as the competition law and economics debate relies more and more on conceptual frameworks defined a priori. The comments we develop below from Big Tech and the Digital Economy testify to the great value of the analysis and to the paths it opens for both competition law practitioners and academics.
In a first part, we confront the molygopoly hypothesis with the arguments of the neo-structuralist movement. In a second part, we consider the molygopoly hypothesis from the perspective of neo-Austrian economics. In a third part, we focus on the responses that can be made to these new competitive challenges.
I – The Revival of Structuralism: Should competition be evaluated as a situation of effective rivalry between firms?
As Nicolas Petit points out, the antitrust debate of the last five years has been marked by the growing influence of the neo-structuralist or neo-Brandeisian movement[8]. This movement examines the phenomena of increasing concentration of economic power, particularly in the field of digital technology. This concentration is called into question on two registers: the first economic, the second political. Concentration can have economic effects such as increased market power towards both consumers and trading partners, an impairment of the development of start-ups, or a decrease in incentives to innovate. The underlying idea is that the concentration of economic power has effects not only in terms of distribution (i.e. the distribution of well-being among economic agents) but also in terms of economic efficiency.
In other words, the ability of some agents to capture rents would have negative effects on economic efficiency in the medium term. To illustrate this point with the case of innovation, which is central to Nicolas Petit’s work, we could consider that agents holding economic power have the capacity but no longer the incentives to invest in innovation, whereas agents in a situation of dependence would have every reason to do so (to escape their dependence) but no longer have the means.
Concentration can also have political effects: the concentration of economic power can give rise both to strategies on the part of its holders to perpetuate their positions[9] and to demands from other stakeholders for public intervention, if not to curb it, at least to regulate it. Even beyond this dimension, the capacity of large digital platforms to influence – voluntarily or not – the construction of public opinion generates calls for some form of regulation. The concomitant emphasis on the political and economic stakes involved in the concentration of private economic power resonates with the debates of the first third of the 20th century led by Louis Brandeis. Brandeis, who was one of the leading figures of progressivism within the Supreme Court, first alongside Holmes and then Cardozo, published a pamphlet in 1934 entitled The Curse of Bigness[10]. This book – whose title would later be taken up by Tim Wu[11] – contrasted with a view still held by some American institutional economists according to which concentration was a necessary evil in terms of productive efficiency. For these economists, only large firms could amortize high fixed costs and invest efficiently; their economic power, however, had to be contained through public regulation. Brandeis’ approach was somewhat different. Not only was any dominant position in the market seen as the inexorable result of anti-competitive practices, but concentration was also seen as a source of inefficiencies. Thus, concentration had to be resisted not only for political reasons but also for economic ones.
Brandeis’ conception of competition was to protect small firms for their own sake. This led him to regret that the question of the size of large firms was not addressed as such by the antitrust laws, and to defend coordination between small firms in order to compensate for their competitive disadvantage vis-à-vis large firms. The concern was not the defence of the competitive process itself. In this perspective, the natural result of the market process can be opposed if it leads to a concentration that is deemed excessive. For Brandeis, the price of such an intervention is not paid in terms of efficiency, since he considers that concentration generates inefficiencies. Similarly, he considers that coordination between small firms does not work to the disadvantage of consumers in terms of prices and quantities produced, insofar as small firms are not price-makers. However, we should not omit the fact that, in his logic, the dispersion of economic power and the limitation of the size of large firms respond first and foremost to a political purpose.
It should be noted that at exactly the same time, at the University of Chicago, Henry Simons was reaching comparable prescriptions, but on very different bases, with his Positive Program for Laissez-Faire[12]. Although Simons called for public policies to counteract the concentration of economic power, it was to protect the process of competition – which he believed was tied to a situation of effective rivalry between firms – and to prevent government interference. Even more significantly, his opposition to concentration was based on political arguments. It was above all a question of avoiding the implementation of any form of public regulation, whose inefficiencies he feared and whose risks of capture he anticipated[13]. The least bad solution for Simons was structural remedies, whatever the cost in terms of efficiency. In other words, deconcentration was seen as a necessary evil, whereas Brandeis considered that concentration was inherently inefficient.
Does our digital economy fit this framework? Can we use these structural criteria to assess the actual concentration of markets in sectors characterised by high fixed costs, strong network externalities and, above all, the development of ecosystems around a keystone player? Are the phenomena of ultra-dominance consubstantial with the digital economy and, above all, are they durable? The response that public authorities can provide to these issues can be put into perspective with the well-known debate on the relative cost of errors in antitrust. In a situation of uncertainty, how can one arbitrate between the risk of false negatives and the risk of false positives? Considering the latter to be more costly in terms of welfare entails exposing ourselves to the risk of a definitive consolidation of economic power that will be costly in terms of long-term efficiency. Accepting the risk of false positives can deprive the consumer of the gains associated with given market practices or a given market structure, in both the short and the long term.
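This arbitration between error types can be summarized as a comparison of expected error costs. The following display is a minimal sketch in our own notation; the probability and cost terms are illustrative placeholders, not figures taken from the book or from the literature cited here:

```latex
% p_FN, p_FP : probabilities of a false negative (wrongly tolerating
%              harmful conduct) and of a false positive (wrongly
%              condemning benign conduct)
% C_FN, C_FP : the corresponding welfare costs of each error
\[
  \mathbb{E}[\text{cost of tolerance}] = p_{FN}\, C_{FN},
  \qquad
  \mathbb{E}[\text{cost of intervention}] = p_{FP}\, C_{FP}
\]
% A stricter enforcement rule is justified ex ante only when
\[
  p_{FN}\, C_{FN} \;>\; p_{FP}\, C_{FP}
\]
```

Expressed in these terms, the neo-structuralist claim is that digital markets have raised both the probability and the cost of false negatives (through tipping and irreversibility), thus shifting the inequality in favour of intervention.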
A possible structuralist bias may lead to allocative inefficiencies in the short term and thus reduce consumer welfare. It can also lead to dynamic inefficiencies by negatively affecting the capacities and incentives of large firms to innovate. How could this possible bias materialize? It can take two forms. The first, the most extreme, is a no-fault antitrust bias; the second, more problematic because at least partially necessary, is a softening of the criteria commonly used in competition cases.
No-fault antitrust is rooted in the history of US antitrust. When, in 1979, the Supreme Court in the Sonotone judgment[14] endorsed the definition of antitrust as a consumer welfare prescription, this choice stood in opposition to tendencies, particularly strong in post-war antitrust, that aimed at transforming the Sherman Act into a tool for deconcentration. Such approaches advocated an enforcement that would address the existence of monopoly more than the question of monopolization.
The debate about excessive concentration in the US economy is a recurring one. It is, of course, particularly acute at the present time[15], but it was just as intense in the immediate post-war period. It should also be noted that the Chicago School’s position on the question of concentration evolved during this period. Simons regarded concentration as a potential source of efficiency gains, but feared its permanence. His successors eventually took a more positive view[16]: if there are no (regulatory) barriers to entry, no dominant position is sustainable in the long run.
The evolution of Stigler’s position on this issue is emblematic here[17]. Initially, George Stigler had defended a no-fault conception, largely inspired by Henry Simons’ views:
“The Sherman Act […] cannot cope effectively with the problem posed by big business […]. The dissolution of big businesses is … necessary to increase the support for a private, competitive enterprise economy, and reverse the drift toward government control[18].”
Stigler’s position on the issue of bigness evolved progressively, as did those of his colleagues of the Second Chicago School[19]. His advocacy of structural remedies to tackle the issue of bigness declined[20]. In 1968, George Stigler indicated in his hearing before the Neal Commission (which we present below) that, despite his initial positions, no-fault monopoly liability and deconcentration measures made no sense from an economic point of view:
“I personally have serious misgivings about the Neal proposals for deconcentration. I worry about the fact that where we have substantial large economies of scale, deconcentration puts burdens on us. Where the economies are not large, private rivals have a tendency to enter and eliminate (excess) profits themselves…. There was a time … when I was enthusiastic for [deconcentration] scheme[s]. I no longer am[21].”
However, the views of the Chicago School were still marginal[22], and in the 1970s a movement towards the deconcentration of the American economy was developing. The Neal Report, published in 1968[23], had proposed, among other things, measures to deconcentrate American industry. This report opened a decade that Harry First described as that of Woodstock Antitrust[24].
From the perspective of no-fault monopolization (or no-conduct monopolization), if a firm is able to hold a position of ultra-dominance on a lasting basis without this position being eroded by its competitors, this can be considered a structural market failure[25]. It must then be corrected by the competition rules, even if the market position stems only from the firm’s past merits within the meaning of the Supreme Court’s Grinnell jurisprudence[26]. The concept of no-fault monopoly was, in this context, the subject of a proposal to insert into the Sherman Act a section 2A specifying that “every person who is found in a government proceeding to possess monopoly power in any relevant market would be subject to an appropriate remedy[27]”.
The arguments then put forward were close to those we know today, in particular the proposal to complement section 2 of the Sherman Act that appears in the Judiciary Committee’s report in order to tackle issues related to abuses of dominant position. These debates are all the more interesting for us because the question of the contestability of dominant positions, and thus of the capacity of the competitive process to erode them, was central. In the minds of the promoters of Woodstock Antitrust, the length of time a dominant position has been held can be used as proof of the inability of market forces alone to challenge the monopolist[28]. In other words, the barrier to entry is inferred from the persistence of the monopoly position. In this perspective, the monopoly position (or the dominant position) can be addressed in itself, without any monopolization practice having to be characterized.
A recap of these debates is useful to grasp the stakes of competition between digital ecosystems as described by Nicolas Petit. A structuralist application of competition rules would be all the more difficult to implement as the strategy of firms leads them to structure themselves into multi-sided platforms. The latter are characterized by inter-relations between different activities that lend themselves far less to structural divestitures than was previously possible in the framework of vertical or conglomerate expansion strategies. A strategy for dismantling such platforms would ignore the complementarities between activities and could be particularly costly in terms of welfare. Only horizontal integration configurations, in which a company operates competing services, would be worth considering.
On the other hand, questioning vertical integration phenomena could have significant impacts in terms of efficiency, unless dominant operators were obliged to comply with a principle of speciality such as the one to which holders of exclusive rights under French public law were bound before the liberalization of the network industries. The aim was to prevent the diversification of the companies concerned, as it was impossible to guarantee a level playing field: holders of exclusive rights in one market could use their monopoly rents to cross-subsidize. The reference to the rules applied to the network industries is not purely historical: a large part of today’s proposals go in the direction of a regulation of digital ecosystems that takes up the logic of activities affected with a public interest. This doctrine of affectation was the subject of numerous debates in the United States in the first third of the twentieth century, in particular to decide whether it could be applied beyond the network industries[29].
A final set of responses could involve a relaxation of the criteria used in competition matters. This relaxation can be read as a reasonable adaptation to the evolution of the relative probability and relative cost of the two types of errors described above, namely false positives and false negatives. If the latter become more probable than in the past, and if they prove more costly because they carry systemic risks or irreversible damage to competition, it may be legitimate to revise the current rules. But where can this revision lead?
A first answer obviously relates to the burden and standard of proof. Reversing the former can help to limit the risk of false negatives[30]. According to Jacques Crémer, Yves-Alexandre de Montjoye and Heike Schweitzer:
“We propose that competition law should not try to work with the error cost framework on a case by case basis. Rather, competition law should try to translate general insights about error costs into legal tests. The specific characteristics of many digital markets have arguably changed the balance of error cost and implementation costs, such that some modifications of the established tests, including allocation of the burden of proof and definition of the standard of proof, may be called for. In particular, in the context of highly concentrated markets characterised by strong network effects and high barriers to entry (i.e. not easily corrected by markets themselves), one may want to err on the side of disallowing potentially anti-competitive conducts, and impose on the incumbent the burden of proof for showing the pro-competitiveness of its conduct. This may be true especially where dominant platforms try to expand into neighbouring markets, thereby growing into digital ecosystems, which become ever more difficult for users to leave. In such cases, there may be, for example, a presumption in favour of a duty to ensure interoperability. Such a presumption may also be justified where dominant platforms control specific competitively relevant sets of user or aggregated data that competitors cannot reproduce”
Moreover, the application of a rule of reason could be contested as being costly in practice for the various stakeholders and as leading too easily to decisions favouring the defendant, hence proposals to return to per se rules[31]. Indeed, according to Rohit Chopra and Lina Khan:
“But in practice, the exclusive reliance on case-by-case adjudication has yielded a system of enforcement that generates ambiguity, drains resources, privileges incumbents, and deprives individuals and firms of any real opportunity to participate in the process of creating substantive antitrust rules”.
A second response, particularly noticeable in the Judiciary Committee’s report, consists in broadening the theories of harm that may be retained and in making some jurisprudential standards evolve. The case of predatory pricing, with the requirements of proving below-cost pricing and the possibility of recoupment that result from Supreme Court case law, is emblematic of such proposals.
“The Subcommittee’s investigation identified several instances in which a dominant platform was pricing goods or services below-cost in order to drive out rivals and capture the market. […] Predatory pricing is a particular risk in digital markets, where winner-take-all dynamics incentivize the pursuit of growth over profits, and where the dominant digital platforms can cross-subsidize between lines of business. Courts, however, have introduced a “recoupment” requirement, necessitating that plaintiffs prove that the losses incurred through below-cost pricing subsequently were or could be recouped. Although dominant digital markets can recoup these losses through various means over the long term, recoupment is difficult for plaintiffs to prove in the short term. Since the recoupment requirement was introduced, successful predatory pricing cases have plummeted”
“The Subcommittee recommends clarifying that proof of recoupment is not necessary to prove predatory pricing or predatory buying […]”
A third response may be a loosening of the rules concerning the definition of relevant markets. Relevant markets are in fact particularly difficult to define in the case of multi-sided platforms, and many proposals for the regulation of large platforms move in the direction of regulation at the level of each ecosystem. The proposal made by the Judiciary Committee, however, amounts to dispensing with this decisive step:
“Clarifying that market definition is not required for proving an antitrust violation, especially in the presence of direct evidence of market power”.
A fourth response is the substitution of market inquiries for competition litigation, based on the British model introduced by the 2002 Enterprise Act, which may be one of DG COMP’s current sources of inspiration for the future EU Commission New Competition Tool. However, despite its merits, such a procedure may lead to disproportionate remedies and impair the judicial review of antitrust decisions. Further, such a model may lead to constructivist approaches to competition law enforcement: it would no longer exclusively be a matter of sanctioning anticompetitive practices but of building “more competitive” markets:
“Market Investigations can also address markets which have become ‘stuck’ in bad equilibria, which are good for neither firms nor society, but where some form of intervention is required to make the shift to a better equilibrium[32]”.
II – Assessing competitive intensity in practice: competition for the market and competition in the market
Depending on their implementation, these different options may lead to an under- or over-enforcement of competition rules, which could be all the more detrimental since the very economic model of the major players in the digital industry cannot be effectively grasped with industrial organisation models that are, in some respects, obsolete and unable to capture the new business models. It is one of the major contributions of Nicolas Petit’s work to reintegrate the insights of the neo-Austrian approach, with dimensions that are decisive for understanding the dynamics of these industries: time, uncertainty and the coordination of investments[33].
This approach may, under some conditions, rehabilitate market structures characterised by monopoly situations. For instance, following Jean-Luc Gaffard in a Schumpeterian perspective, “monopoly practices, which limit competitive investments, and price rigidities, far from being the cause of a misallocation of resources, appear to be the means of capturing productivity gains”, especially in a context of incomplete information and uncertainty regarding technological dynamics[34]. Such a perspective may support co-operation among competitors, echoing the model of digital ecosystems. As Jean-Luc Gaffard states, it can be a matter of creating “incentives for firms to engage in co-operation, which is the key to the viability of such a complex process as innovation, which is characterized by interaction among multiple actors. This is not meant to eliminate the competitive character of the market, but to strengthen the co-ordinating role of competition […] This requires creating the conditions [that] take the form of market connections or restraints that limit competitive investments”. In other words, the coordination role of keystone players in digital ecosystems can be seen both as a competitive imperfection and as a necessity for achieving dynamic efficiency by securing the investments of its different participants[35].
Combining this Schumpeterian approach with an analysis of the economic power relationships between the different actors in ecosystems enables us to understand how they function. Nicolas Petit shows us how Big Tech can play the role of investment coordinators within their respective ecosystems. They make it possible to reconcile external uncertainty with internal visibility for stakeholders. The molygopoly hypothesis is central to the consideration of these interactions. Indeed, as Nicolas Petit points out, the Big Tech firms, or keystone players of each of the digital ecosystems, are both “monopolies” and competing firms. Each ecosystem is in fact in competition with the others, and dominance remains questionable in this respect. At this point in the analysis of Nicolas Petit’s book, it is worth emphasizing the various contributions he has made to the understanding of the functioning of large digital ecosystems.
The first key dimension is that of time. Nicolas Petit highlights it several times in his work. It is essential in that it is the only way to grasp competition not as an equilibrium that could be improved (by rebalancing market power, as the quotation above on market inquiries proposes) but as a dynamic process that is perpetually out of equilibrium. The analysis of firms’ strategies allows us to grasp the persistence of competitive threats and the possible vulnerability of prevailing dominant positions.
This dimension must first be grasped through the long-term strategy pursued by the keystones and financial investors. This temporality can be conceived as part of a predation strategy (in the sense of an investment aimed at eventually acquiring market power); it can also be explained by long-term competitive incentives in the context of competition between different ecosystems in existing as well as future markets. The analysis of the 10-Ks developed by Nicolas Petit is particularly illuminating in this respect.
The second dimension, which flows naturally from the consideration of the dynamic dimensions of competition, is radical uncertainty. This radical uncertainty does not turn molygopolists into rois fainéants enjoying, in the words of John Hicks, a quiet life. Nor does it make them market makers, in other words actors capable of deciding on prices and on the dynamics of innovation. Radical uncertainty bears on technologies, on the strategies of other ecosystems and on the possible upheavals that could be induced by shifts in demand or the entry of mavericks.
However, it should always be kept in mind that if this structuring power (control of prices, investments, technology…) cannot be achieved on the market as a whole, it can be achieved to a certain extent within each of the ecosystems (as long as the latter form silos in which complementors and users are – at least partially – locked in). Nevertheless, radical uncertainty and the impossibility of tasting the delights of Capua (i.e. enjoying a monopoly rent without investing to protect one’s dominance) make it possible to present these molygopolists as firms in a dominant position, but unable to abuse it because of the competitive constraints that continue to be exerted on them.
The third dimension is that this uncertainty (competitive and technological), which marks the impossibility of a stable equilibrium over the long term, also imposes on the firms in question a constant diversification, which is both a risk for competition (extension and consolidation of ecosystems) and a perpetual source of friction between ecosystems, and thus a reinforcement of the potential competition between them. Future equilibria are not predictable… they are, at best, multiple; we could even say that market dynamics will unfold out of equilibrium.
The fourth dimension highlighted by Nicolas Petit also stems from the radical uncertainty in which firms operate: the technological discontinuities that are always likely to reshuffle the cards in the competitive game. These discontinuities stem largely from the modularity of innovations in the digital world.
This may lead us to question the notion of harm to innovation. Competition between Big Tech companies may not, a priori, have a depressing effect on incentives to innovate. Inter-ecosystem competition and the functioning of digital ecosystems explain the persistence of the innovation endeavor despite the apparent monopoly position of each Big Tech firm over its ecosystem. One could, however, wonder, in the continuity of Nicolas Petit’s work, about possible distortions of the dynamics of innovation, in terms of both its pace and its structure. It could be interesting to take into account the power imbalances between the different members of the ecosystems, in the sense of the University of Nice School of Law[36]. Extending Nicolas Petit’s analysis of the incentives and capacities to innovate with an analysis of the bargaining power between the complementors and the keystone would be compelling. Indeed, problems arise in particular in the field of vertical restraints, namely non-negotiable default contractual provisions or abuses of technical or economic dependence. The keystones’ strategy in terms of innovation can thus be thought of not only from the perspective of inter-ecosystem competition but also from the perspective of intra-ecosystem coopetition.
A last fascinating point emerges from Nicolas Petit’s analysis of the Big Tech strategy; it deals with time and with the absence of a short-term profit-maximization strategy. We can see this as a long-run predatory strategy, a limit-pricing strategy or the erection of barriers to entry, but we can also interpret it, as Nicolas Petit does, as an arbitrage between exploration decisions and exploitation decisions, of the kind performed by a machine-learning algorithm. This trade-off is consubstantial with the competitive pressure between ecosystems and with the way ecosystems function: it involves generating data on customers to predict their needs, identifying competitive threats, diversifying data flows to improve algorithm performance, but also bringing into play the economies of scale and scope that are central to this economy.
Emphasis should be placed on exploration. As Nicolas Petit points out: “When exploration is applied to innovation choices, it denotes innovation that is not goal-oriented. The firm puts research money in a black box. It commits to an open-ended innovation process, rather than to a set technological outcome”.
Exploration decisions mean that platforms give up exploitation decisions in order to better understand the market and therefore to better predict its evolution. This has an opportunity cost for the platform, which could be analysed according to an investment logic based on real options. This culture of exploration specific to Big Tech, combined with their technical and financial capacities and with the modular nature of digital innovation, can however also be seen as a means of mastering the future: an admittedly imperfect means, but one that is asymmetrical compared with what players with lower-quality data can do.
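The exploration/exploitation arbitrage that Nicolas Petit borrows from machine learning can be made concrete with a short epsilon-greedy bandit sketch, in which a firm repeatedly allocates funds between innovation projects of unknown payoff. This is a purely illustrative toy model, assuming hypothetical project payoffs and an arbitrary exploration rate; it is not drawn from the book:

```python
import random

# Epsilon-greedy trade-off: with probability epsilon the firm "explores"
# (funds a project at random to learn about its payoff); otherwise it
# "exploits" (funds the project with the best observed average payoff).
def epsilon_greedy(true_means, epsilon=0.1, rounds=10_000):
    n_arms = len(true_means)
    counts = [0] * n_arms        # times each project was funded
    estimates = [0.0] * n_arms   # observed average payoff per project
    total_payoff = 0.0
    for _ in range(rounds):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
        payoff = random.gauss(true_means[arm], 1.0)               # noisy return
        counts[arm] += 1
        estimates[arm] += (payoff - estimates[arm]) / counts[arm] # running mean
        total_payoff += payoff
    return estimates, total_payoff

# Three hypothetical innovation projects; the third is the best, but the
# firm only discovers this by sacrificing some short-term profit.
estimates, total = epsilon_greedy([0.2, 0.5, 0.8])
print(estimates, total)
```

Every exploratory draw forgoes the best currently known payoff; this forgone profit is precisely the opportunity cost, akin to a real-option premium, that the platform accepts in order to learn about the market.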
As Nicolas Petit points out, quoting Levinthal and March: “Power allows an organization to change its environment rather than adapt to them. Thus, firms with strong market positions impose their policies, products, and strategies on others, rather than learn to adapt to an exogeneous environment[37]”. However, the characteristic of molygopolistic competition is to counteract this entropy: there is no incentive to slacken one’s efforts. This is undoubtedly favourable in the short term (innovation does not diminish), but it can also play a negative role in that the self-regulating character of the market can be thwarted when keystone players are led to identify nascent competitive threats at their very inception.
Thus, competition here is competition in a broad sense, not limited to a relevant market defined geographically or according to substitutability relationships. The competitive forces exerting pressure on molygopolists are other molygopolists, firms outside Big Tech’s lines of business, and competitors not yet identified or not yet present on the market. Nicolas Petit summarizes perfectly the effects of the persistence of incentives to innovate in the framework of molygopolistic interactions: “[…] the rate and direction of tech giants’ innovation investment is determined by the vulnerability of their monopoly positions and the inefficiency of large corporate organizations”. Except that nowcasting capabilities might limit the risk of missing a competitive or technological breakthrough, as happened to the dominant players in technology markets during the last century.
Once the molygopolistic model is established, Nicolas Petit discusses the implications for competition policy. His proposals reside in a differential treatment between markets that have tipped and those that have not yet done so. This second category still needs to be clearly delineated. As Nicolas Petit shows, it is necessary to be wary of a structuralist bias that would lead one to consider that the persistence of competitive pressure depends on the maintenance of effective rivalry on the market. Everything will depend, of course, on the assessment that is made of the barriers to entry and on the possibility of public intervention to lower these barriers in the markets concerned in order to encourage new entries. In markets that have not tipped, on the contrary, consumers should not, according to Nicolas Petit, be deprived of efficiency gains by restricting the opportunities for Big Tech to diversify and thus to compete.
III – Revisiting the current antitrust debates from the perspective of molygopolistic competition
Having completed this overview, it is worth turning back to the question at the core of current debates: that of redefining the competitive framework for digital ecosystems. For the time being, it seems to be moving inexorably towards a mix of specific regulation of ecosystems, do-and-don’t rules laid down ex ante, and a stricter application of competition rules through the reversal of presumptions and the opening up of structural remedies. What can be retained from the history of competition law enforcement and from Nicolas Petit’s proposals?
A first reference, which we have already cited, is the position Henry Simons stated in 1936 in the American Economic Review. It is all the more interesting because Simons makes public regulation the worst possible solution, owing to its informational difficulties and its high vulnerability to capture phenomena. It should be noted, however, that his understanding of regulation was not that of ex ante rules defining obligations and prohibitions, but of day-to-day sectoral supervision.
A second reference, which builds a bridge between Henry Simons and Nicolas Petit, is Richard Posner’s 1969 contribution[38]: he shows that regulation can be avoided by shifting the focus from competition in the market to competition for the market. We know from the economics of long-term contracts, including public-private partnership contracts and concession agreements, that competition for the market is an effective substitute for competition in the market[39]. However, the cards have to be reshuffled frequently. In public contracts, this is done by re-tendering the concession contract when it expires. As far as digital ecosystems are concerned, the question is that of the contestability of the dominant position once tipping has taken place. This is one of Nicolas Petit’s key points when he distinguishes between the rules to be applied to markets that have tipped and those that have not yet done so.
Even beyond this point, several dimensions remain to be considered.
The first dimension is that of the possible limitations to be placed on diversification and consolidation strategies. It might be interesting to compare Nicolas Petit’s approach with the work carried out by Kamepalli et al.[40] on the notion of the Kill Zone and on the difficulty innovative companies face in finding financing if they potentially compete with the pivotal firm of an ecosystem or if one of their competitors has just been acquired.
The second dimension is the competitive treatment of inter-ecosystem links. Nicolas Petit rightly points out that collusive behaviour is unlikely. However, several avenues could be followed. The first, in the continuity of the DoJ’s complaint against Google, could relate to the default installation agreements for search engines on Internet browsers. The second could be based on the combined set of contractual clauses vis-à-vis the other stakeholders in the ecosystems (complementors, employees, etc.).
Finally, the third dimension is the appraisal of remedies. Nicolas Petit expresses his doubts about the net effects of open standards and interoperability. This question is all the more decisive since these remedies are the ones most often presented as answers to the competitive challenges posed by Big Tech. This position, which runs counter to a quasi-consensus, echoes the one defended by Richard Posner in 1969: on the one hand, “the control of a monopolist’s profits is not a proper antitrust function”; on the other hand, “it is more effective to promote competition for the market than competition in the market[41]”.
As a conclusion to this all-too-brief and necessarily superficial presentation of Nicolas Petit’s work, it is possible to link the different options open to us in terms of Big Tech regulation with the different positions that structured the debates, in the early days of the American republic, on the links between the economy and public action.
The position of Brandeis and his epigones is resolutely Jeffersonian. It considers that it is not the behavior of firms that is problematic but their very size, whether in terms of economic efficiency or of political freedoms. Louis Brandeis’ dissenting opinion in the Liggett decision of 1933, cited by Nicolas Petit, is particularly evocative of this logic[42]: “Through size, corporations, once merely an efficient tool employed by individuals in the conduct of private business, have become an institution — an institution which has brought such concentration of economic power that so-called private corporations are sometimes able to dominate the state”. From this perspective, it is therefore a question of acting against the concentration of economic power in itself, a concentration for which the size of firms serves as a proxy. “Businesses may become as harmful to the community by excessive size as by monopoly or the commonly recognized restraints of trade. If the state should conclude that bigness in retail merchandising as manifested in corporate chain stores menaces the public welfare, it might prohibit the excessive size or extent of that business as it prohibits excessive size or weight in motor trucks or excessive height in the buildings of a city”.
The choice of regulation could be part of a Hamiltonian logic in which concentration is tolerated as a necessary evil (to achieve efficiency) but must be neutralized by strong regulation: in other words, Big Business must be controlled by Big Government. The vision defended by Nicolas Petit through his molygopolistic competition is of Madisonian inspiration: factions, like keystones, must be able to express themselves and develop while counterbalancing one another.
[1] Letwin W. L., (1956), “Congress and the Sherman Antitrust Law: 1887-1890”, University of Chicago Law Review, 23(2), pp. 221-258.
[2] Bork R., (1978), The Antitrust Paradox: A Policy at War with Itself, Basic Books.
[3] Orbach B., (2013), “How Antitrust Lost its Goal”, Fordham Law Review, 81(5), pp. 2253-2277.
[4] https://www.ftc.gov/news-events/press-releases/2020/02/ftc-examine-past-acquisitions-large-technology-companies
[5] https://judiciary.house.gov/uploadedfiles/competition_in_digital_markets.pdf
[6] Marty F., (2020), “The Complaint brought by the DoJ against Google under Section 2 of Sherman Act: Some possible transatlantic convergences?”, Competition Forum, October 2020, art. n° 0003, https://www.competition-forum.com/
[7] Following Richard Posner “A realistic approach to interpretation is an approach that is analytically simple, that shifts the judicial focus to factual inquiry […]”.
Posner R.A., (2013), Reflections on Judging, Harvard University Press, p.233.
[8] As Nicolas Petit indicates, the rebirth of this movement can be dated to 2010, with the publication of Barry Lynn’s book:
Lynn B.C., (2010), Cornered: The New Monopoly Capitalism and the Economics of Destruction, Wiley & Sons.
[9] Zingales L., (2017), “Towards a Political Theory of the Firm”, Journal of Economic Perspectives, volume 31, n°3, summer, 113-130.
[10] Brandeis L., (1934), “The Curse of Bigness”, in Hovenkamp H.J. and Crane D.A. (eds), (2013), The Making of Competition Policy: Legal and Economic Sources, Oxford University Press, pp. 185-189.
[11] Wu T., (2018), The Curse of Bigness: Antitrust in the New Gilded Age, Columbia Global Reports, New York.
[12] Simons H.C., (1934), A Positive Program for Laissez-Faire: Some Proposals for a Liberal Economic Policy, University of Chicago Press.
[13] Simons H.C., (1936), “The Requisites of Free Competition”, American Economic Review, 26(1), pp. 68-76.
[14] Reiter v. Sonotone Corp., 442 U.S. 330 (1979)
[15] Gutiérrez G. and Philippon T., (2018), “How EU Markets Became More Competitive Than US Markets: A Study of Institutional Drift”, NBER Working Paper, n°24700, June.
[16] Van Horn R., (2010), “Chicago’s Shifting Attitude toward Concentrations of Business Power (1934-1962).” Seattle University Law Review, 34, pp.1527-1544.
[17] Lao M., (2020), “No-Fault Digital Platform Monopolization”, Williams and Mary Law Review, 61(3), pp.755-814.
[18] Stigler G.J., (1952), “The Case against Big Business”, Fortune, May.
[19] Bougette P., Deschamps M., Marty F., (2015), “When Economics met Antitrust: The Second Chicago School and the Economization of Antitrust Law”, Enterprise and Society, volume 16, issue 2, June, pp.313-353
[20] Stigler G.J., (1988), Memoirs of an Unregulated Economist, Basic Books.
[21] See Brozen Y., (1977), “The Concentration Collusion Doctrine”, Antitrust Law Journal, 46(3), pp.826-873
[22] Hovenkamp H., (2009), “The 1968 Neal Report: An Introduction and Reprint”, CPI Journal, Competition Policy International, vol. 5
According to Posner: “In some quarters the Chicago school was regarded as little better than a lunatic fringe. Kaysen and Turner’s Antitrust Policy, the classic statement of the Harvard school, published in 1959, contains virtually no trace of any influence of the Chicago school”.
Posner R.A, (1979), “The Chicago School of Antitrust”, University of Pennsylvania Law Review, 127, pp.925-948.
[23] Kovacic W., (1989), “Failed Expectations: The Troubled Past and Uncertain Future of the Sherman Act as a Tool for Deconcentration”, Iowa Law Review, 74, pp.1105-1150.
[24] First H., (2018), “Woodstock Antitrust”, CPI Antitrust Chronicle, April
[25] Williamson O., (1972), “Dominant Firms and the Monopoly Problem: Market Failure Considerations”, Harvard Law Review, volume 85, p. 1512.
[26] United States v. Grinnell Corp., 384 U.S. 563 (1966)
[27] Hart K. et al., (1980), “Comments on the Proposal of Professor John J. Flynn on No-Fault Monopoly”, Antitrust Law Journal, 48(3), pp. 897-905.
[28] Turner D.F., (1969), “The Scope of Antitrust and Other Regulatory Policies”, Harvard Law Review, 82, p. 1207 et seq.
[29] Hamilton W., (1930), “Affectation with Public Interest”, Yale Law Journal, 39(8), pp. 1089-1112.
[30] Crémer J., de Montjoye Y.-A. and Schweitzer H., (2019), Competition Policy for the Digital Era, European Commission, DG Competition, Brussels, 133 p.
[31] Chopra R. and Khan L., (2020), “The Case for ‘Unfair Methods of Competition’ Rulemaking”, University of Chicago Law Review, 87(2), pp.357-379.
[32] Fletcher A., (2020), “Market Investigations for Digital Platforms: Panacea or Complement?”, CCP Working Paper 20-06.
[33] Gaffard J.L., (2008), “Innovation, competition, and growth: Schumpeterian ideas within a Hicksian framework”, Journal of Evolutionary Economics, 18, pp.295-311.
[34] See also, on the coordination of investments in a competitive situation:
Richardson G.B., (1960), Information and investment, Oxford University Press, Oxford
Richardson G.B., (1998), The economics of imperfect knowledge, E. Elgar, Cheltenham
[35] Gaffard, J.L. and Quéré, M., (2006), “What’s the aim for competition policy: optimizing market structure or encouraging innovative behaviors?”, Journal of Evolutionary Economics, 16, pp.175-187
[36] Jourdain-Fortier C., (2013), « Hommage à l’école niçoise de droit économique », Revue Internationale de Droit Economique, 2013/4, volume 27, pp.407-408.
[37] Levinthal D.A. and March J.G., (1993), “The Myopia of Learning”, Strategic Management Journal, 14.
[38] Posner R.A., (1969), “Natural Monopoly and Its Regulation: A Reply”, Stanford Law Review, 22, pp. 540-546.
[39] Crain W.M. and Ekelund R.B., (1976), “Chadwick and Demsetz on Competition and Regulation”, Journal of Law and Economics, 19(1), pp.149-162.
[40] Kamepalli S.K., Rajan R.G. and Zingales L., (2020), “Kill Zone”, University of Chicago, Becker Friedman Institute for Economics, Working Paper, No. 2020-19
[41] Posner R.A., (1969), op. cit.
[42] Louis K. Liggett Co. v. Lee, 288 U.S. 517 (1933)