AI-based applications raise new, as yet unresolved legal questions, and consumer law is no exception. The use of self-learning algorithms in Big Data analysis gives companies the opportunity to gain a detailed, individual insight into a customer’s personal circumstances, behavior patterns and personality. On this basis, companies can tailor not only their advertising but also their prices and contract terms to the respective customer profile and – drawing on the findings of behavioral economics – exploit the consumer’s biases and/or willingness to pay. AI-based insights can also be used in scoring systems to decide whether a specific consumer may purchase a product or take up a service.
The use of AI in consumer markets thus leads to a new form of power and information asymmetry. Usually, consumers do not even know that advertising, information, prices or contract terms have been personalized according to their profiles. If a contract is not concluded, or is only offered on unfavorable conditions, because of a certain score, consumers are usually unable to understand how that score was arrived at. This is not only because the algorithms used are well-guarded trade secrets. Rather, the specific characteristics of many AI technologies – such as opacity (“black box effect”), complexity, unpredictability and semi-autonomous behavior – can also make effective enforcement of EU consumer legislation difficult, as the decision cannot be traced and therefore cannot be checked for legal compliance (European Commission, White Paper on AI, COM(2020) 65 final, p. 14).
The use of AI in products and services also creates new risks and liability issues for consumers – due to the connectivity and the high degree of automation – aspects which are at present not explicitly covered by EU legislation (cf. European Commission, Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics, COM(2020) 64 final).
In its White Paper on AI, the European Commission therefore considers possible adjustments to existing EU legislative frameworks, including consumer law (White Paper, p. 14). For “high-risk AI applications”, the Commission is even considering a new legal instrument. This instrument could lay down binding legal requirements, in particular as regards the quality of training data, the keeping of records and data as evidence to trace back AI-based actions and decisions, information duties regarding the use of AI systems, requirements regarding the robustness and accuracy of AI systems, human oversight, and specific requirements concerning the processing of biometric data (White Paper, p. 20 et seq.). In order to enforce these standards, the Commission envisages an objective, prior conformity assessment, which could include procedures for testing, inspection or certification (White Paper, p. 27).
Given this risk-based approach, it is essential for the sake of legal certainty that the criteria for “high-risk AI applications” be clearly defined. On this point, the Commission’s White Paper on AI unfortunately remains very vague. In principle, the European Commission wants to classify an AI application as “high-risk” only if both the sector and the intended use involve significant risks. The sectors identified as high-risk are health, transport, energy and parts of the public sector. Consumer law would therefore not be covered. However, the Commission points out that there may be exceptional instances where the use of AI for certain purposes would always be considered high-risk, irrespective of the sector concerned, and where the above requirements would nevertheless apply. If this criterion is applied, consumer law would be at least partially covered: among the possible exceptions, the White Paper lists not only AI-based recruitment procedures, biometric identification methods and other intrusive surveillance technologies, but also “specific applications affecting consumer rights” (White Paper, p. 18). However, the White Paper does not specify which applications the Commission has in mind.
From the perspective of consumer law, the future discussion should therefore first of all focus on identifying the deficits of the current (national and European) legal framework governing the use of AI systems, in order to determine the corresponding need for regulation in specific areas and for specific risks. In this regard, the focus should not be on consumer law alone. Rather, the European Commission should also take into account neighboring fields of law which are interlinked with consumer law, especially data protection, anti-discrimination and media law, as well as competition law.
In any case, it should not be forgotten that AI applications are not per se harmful to consumers. AI-based systems can also be used to strengthen consumer rights. AI-based personalization can help to ensure that both the information given to consumers and the contracts they conclude are tailored to the wishes and needs of the individual consumer. Future “personalized” information – based on customer preferences, needs and capabilities, derived from the analysis of the massive data stored by businesses – could pave the way to more individualized products and services, moving beyond the one-size-fits-all approach. In addition, AI systems can be used for automated compliance monitoring and for the enforcement of consumer regulations. LegalTech companies, in particular, enable consumers to enforce their rights more quickly, easily and cost-effectively than was previously possible.
As with any dual-use technology, the regulation of AI will therefore require a closer look at the opportunities and risks that arise for consumers in particular use cases.