The report distinguishes three types of situations in which the use of algorithms may cause problems:
1. Algorithms used to implement a traditional cartel
To begin with, algorithms can be used to facilitate the implementation of classical cartel agreements, such as price-fixing and market or customer sharing. In these cases, the algorithm only comes into play at a second stage, to support or facilitate the implementation, monitoring, enforcement or concealment of the cartel. Algorithms can, for example, be used to monitor whether retailers comply with fixed, minimum or recommended prices. The report concludes that, although developing a case-specific understanding of the algorithm may still be advisable, such situations can be dealt with under the current competition rules without difficulty. In fact, the UK Competition and Markets Authority (CMA) and the European Commission have already imposed fines for horizontal price-fixing and resale price maintenance implemented (at least partially) through algorithms.
2. Situations involving a third party
In the second category, a third party, e.g. an external consultant or software developer, provides the same algorithm, or somehow coordinated algorithms, to competitors. The authorities note that there is still limited case law on such situations. The relevant question is whether the competitors were aware of the third party's anticompetitive acts, or could at least reasonably have foreseen them. In the aforementioned cartel case, the CMA in fact suggested that the software developer that provided the relevant algorithm could potentially also be fined, but it decided not to investigate this on prioritisation grounds.
3. Situations involving a self-learning algorithm
In the third situation, there is no human intervention: the anti-competitive outcome is reached solely through the actions of self-learning algorithms. The authorities refer to a number of studies suggesting that this could be possible. From a legal point of view, the authorities emphasise that situations in which an algorithm merely unilaterally observes, analyses and reacts to the publicly observable behaviour of competitors' algorithms might have to be categorised as intelligent adaptation to the market rather than coordination. Such adaptation is allowed under competition law. However, if the algorithms were actually to start exchanging secret, competitively sensitive information, or even to conclude formal or informal cartel agreements, the authorities see two potential scenarios in which companies could be held liable for their algorithms' behaviour. In the first scenario, companies could be held liable simply for introducing and using an algorithm that engages in anti-competitive behaviour. Under a less interventionist approach, by contrast, a company could be held liable for the behaviour of its algorithm(s) only if a reasonable standard of care and foreseeability is breached. In any event, the authorities advise companies to consider how they can ensure antitrust compliance when using pricing algorithms.
4. Conclusion
The authorities conclude that it is as yet unclear which types of cases competition authorities will face in the future. Consequently, they believe it is not yet possible to predict whether there is a need to reconsider the current legal regime and methodological toolkit and, if so, in what way.
In fact, we reached a similar conclusion in our book Digital Competition Law in Europe: A Concise Guide (Kluwer). This book, first published in September 2019, provides an overview of where European digital competition law currently stands and where it is likely to head in the future. The study once again demonstrates that digital competition is top of mind for competition authorities around the globe. You may therefore rest assured that the Digital Competition Team of Loyens & Loeff will continue to stay on top of all these exciting developments. We will continue to inform and advise you on everything that lies ahead…