Producer Liability for AI-Based Technologies in the European Union



Introduction
The EU is strongly concerned with the regulation of different aspects of robotics and AI, as highlighted by the Report on "Liability for Artificial Intelligence and other emerging digital technologies" prepared by the Expert Group on Liability and New Technologies - New Technologies Formation (28.11.2019), the "Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics" [COM(2020) 64 final, 19.2.2020], the White Paper "On Artificial Intelligence - A European approach to excellence and trust" [COM(2020) 65 final, 19.2.2020] and the "Draft Report on Civil Liability Regime for Artificial Intelligence" (INL 2020/2014, May 2020). One of the most relevant documents is, in my opinion, the first report quoted. As is well known, on 16 February 2017 the EU Parliament adopted a Resolution on Civil Law Rules on Robotics with recommendations to the Commission [P8_TA(2017)0051]. In this Resolution it asked the Commission to submit a proposal for a legislative instrument providing civil law rules on the liability of robots and AI. 2018 was a very active year in this domain. Indeed, in February, the EPRS (European Parliamentary Research Service) published a study on "A common EU approach to liability rules and insurance for connected and autonomous vehicles" (available at: http://www.europarl.europa.eu/RegData/etudes/STUD/2018/615635/EPRS_STU(2018)615635_EN.pdf). On 25 April 2018, the Commission published a Staff Working Document on "Liability for emerging digital technologies" [SWD(2018) 137 final] accompanying a document on "Artificial Intelligence for Europe" [COM(2018) 237 final]. One of the EU's concerns is the subject matter that I am interested in: producer liability for AI-based technology.

Definition of Artificial Intelligence
The starting point is the definition of AI that has been suggested, based on what has been stressed by computer science scholars, by the High-Level Expert Group on AI (AI HLEG, "A definition of AI: Main capabilities and disciplines", 8 April 2019. Available at: https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines. Date of consultation: 3 August 2020): "Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behavior by analyzing how the environment is affected by their previous actions". For its part, the White Paper, simplifying the AI HLEG's notion, defines AI as "the combination of data, algorithms and computer power" (White Paper, 19.02.2020).
As a scientific discipline, AI embraces different approaches and techniques such as machine learning (deep learning), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search and optimization) and robotics (which encompasses control, perception, sensors and actuators as well as the integration of other techniques within the cyber-physical system). Thus, it moves away from "deterministic" systems, where the computational logic does not pose special problems because it is a formal "if ... then" logic, towards unpredictable complex systems, which constantly learn from the environment, adapt to it by analyzing an increasing amount of data, and make decisions or suggest actions unpredictable to humans, almost unthinkable by them (Zech, 2018). All these systems share the same core element: an algorithm (Barfield, 2018; Ebers & Navas, 2020). These emerging AI technologies are characterized as complex, opaque (Wachter, Mittelstadt & Floridi, 2016; Scherer, 2016), open, autonomous, unpredictable, data-driven and vulnerable (NTF, "Liability", 2019; BEUC, 2020).

Some Controversial Issues Regarding Producer Liability for AI-Based Technologies
Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (PLD; OJ L 210, 7.8.1985) is based on the principle that the producer is liable for damage caused by a defect in a product he has put into circulation. It is a risk-based liability, that is, strict liability. Although the NTF and the EU Commission reports evaluating the directive's performance [Commission Staff Working Document, Evaluation of Council Directive 85/374/EEC, SWD(2018) 157; Report on the Council Directive, COM(2018) 246 final, 7.05.2018] have stated that its regime continues to serve as an effective tool and contributes to enhancing consumer protection, innovation and product safety, some key concepts and rules adopted in 1985 are challenged by the potential risks of emerging digital technologies. These are, among others, the notion of "product" (3.1.), that of "defect" (3.2.), the "grounds of liability" (3.3.), the notion of "producer" (3.4.) and the "burden of proof" (3.5.). I will present them in the following sections.

The Notion of "Product"
Under the PLD, products are defined as movable goods, even when incorporated into another movable or immovable object (Art. 2). AI systems challenge this notion of product. Firstly, because in AI systems products and services interact and it is difficult to draw a crystal-clear distinction. Secondly, it is also questionable whether software is covered by the legal concept of product or only as a component part of a product. Thirdly, it is unclear whether updates and upgrades or other data feeds are included in the concept of "product" and, finally, whether the legal answer differs depending on whether one is dealing with embedded or non-embedded software.

Stand-Alone Software. Updates and Upgrades
If one takes into consideration, for instance, a robot, it should be noted that in most cases the robot is viewed as a "movable good" that, furthermore, may be classified as a "product". When the robot is not a computer program embedded in a good, in which case there is no problem in applying the norms of liability for damage caused by products, but a "virtual robot", i.e. a stand-alone software, the question that arises is whether those rules can be applied (Koch, 2018). This issue has been highlighted by scholars in the debate about liability for damage that could be caused by the operation of AI systems insofar as they are based on computer programs (Machnikowski, 2016). In general, it is understood that the PLD allows a broad interpretation of the notion of product, including stand-alone software as a product that may be defective (BEUC, 2020), when it deals with standardized software. If we take into consideration bespoke software, then the more accurate qualification would be software-as-a-service. Actually, there is no clear-cut distinction between a product and a service in this field.
Another aspect is whether a stand-alone software may cause the type of damage covered by the PLD. The damage that can be compensated to the victim under this norm is the damage caused by death or bodily injury, and the damage to, or destruction of, any item of property other than the defective product itself, provided that the good is of a type ordinarily intended for private use or consumption and the injured party has used it mainly for his or her own private use or consumption (Art. 9). It is hard to conceive that the defects of a stand-alone software could give rise to these types of damage, with the exception of health cases. It can cause pure economic loss, as would be the case with high-frequency trading algorithms, which issue orders and counterorders to the market. Nevertheless, this kind of damage does not clearly fall within the scope of the PLD or that of the national rules. In any case, the PLD should cover damage to digital assets, in particular, to data (BEUC, 2020). Therefore, in order to apply the existing rules of producer civil liability in the case of defective products, the software must be a "component" of the corporeal good (AI technology) in question, taking into consideration that the designer of this component can be held liable as "producer" (Art. 3), unless he proves that the defect is attributable to the design of the product in which the component has been fitted (Art. 7 lit. f). The AI technology is not a finished product but constantly evolves even once it has been put into circulation on the market. There are constant updates, upgrades and adaptations of the digital content, with the result that the notion of product, when it comes to emerging technology, does not correspond to the legal concept of the product. These updates and upgrades are computer programs that would fall both within the notion of product and within that of a component part.

Embedded Digital Content
If the software is embedded in a good so that the absence of this digital content prevents the good from performing its functions, the good that embeds the digital content may present a "defect of safety" (Art. 6 para. 1 PLD) related to the design of the software, which, in addition, may imply that it cannot be used for its intended purpose. This is the case of the so-called "ineffective products". Apart from a defect that causes damage under the rules on product liability, there may also be a lack of conformity, which allows the victim to bring a claim for such damage against the seller (Arts. 6 to 8 Directive 2019/771).

The Concept of "Defect"
The second key element of the PLD regime is the concept of "defect". Article 6 para. 1 provides that a product is defective when it does not provide the safety which a person is entitled to expect (Marco, 2007), although from the consumers' side the concept of defect should embrace all kinds of AI risks in a future PLD review (BEUC, 2020). This abstract notion of defectiveness given by Art. 6 - known as the "consumer expectation test" and interpreted by the CJEU as the "reasonable expectations of the public at large" - shall take all circumstances of the case into account: first of all, the presentation of the product (CJEU Joined Cases C-503/14 and C-504/14 Boston Scientific Medizintechnik GmbH v. AOK Sachsen-Anhalt - die Gesundheitskasse), secondly, the use to which it could reasonably be expected that the product would be put and, thirdly, the time when the product was put into circulation (Solé, 1997). This point in time, which is the cornerstone of producer liability, raises the question of who is held liable if updates or upgrades are defective but the product was already put into circulation (NTF, 2019). In my view, updates and upgrades should be considered products themselves, as long as a stand-alone software is, in a broad meaning of "product", covered by the PLD. Moreover, they are embedded in a good. Regarding the relevant time for checking defectiveness, the time at which the updates and upgrades are put into circulation should be considered, rather than the time when the product, that is, the AI technology, was put into circulation. Thus, there are different "moments of circulation" to be taken into account. On the other hand, consumers should expect that producers take care of their products on an ongoing basis (BEUC, 2020). In fact, the continuous changes to the product have an impact on its safety.
AI systems should integrate safety and security-by-design mechanisms [Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and Robotics, COM(2020) 64 final].
The NTF also questions whether, when a sophisticated AI system with self-learning capabilities makes an unpredictable decision deviating from its expected path, such deviation can be treated as a "defect" (NTF, 2019). It could be treated as a defect if the unpredictability was not contemplated when the AI system was designed. In this case, it should be evaluated as a design defect. Both scholars and case law are willing to distinguish between types of defects, since, in practice, the victim, when proving the defect of the product that caused the damage, will refer to the time or phase of manufacturing of the good in which the defect is located and to what caused it.

Grounds of Liability According to Types of Defects
First of all, I will set out the NTF recommendations (3.3.1.). Then I will present the types of defects (3.3.2.).

NTF Recommendations
From the EU documents published so far, it appears that producer strict liability will remain the main liability rule, but combined - as the NTF suggests - with a fault-based liability rule in case of breach of a duty of care (NTF, 2019). This approach leaves an open question, namely, how to properly combine both grounds of liability in the field of products that cause damage.
Along with this main rule, the NTF Report introduces a new term, that of "operator", which refers to a person who is in control of the risk connected with the operation of the AI system and who benefits from such operation ("risk-management rule"). The term embraces traditional concepts such as owner, keeper or user. The Draft Report on Civil Liability Regime for AI refers to the "operator" as "deployer" (Draft Report, 2020). With AI systems there is often more than just one person who "operates" the technology; for instance, the user operates the autonomous vehicle, but there is also another person who provides support services, updates the software, defines the features of the technology or supervises the machine learning system. The former is considered the "frontend operator" and the latter the "backend operator". The experts found that strict liability should fall on the one who has more control over the risks posed by the operation. In my opinion, the experts were thinking that the frontend operator is usually the user and the backend operator is the producer, the designer, an online service provider or anyone else. This backend operator should fall, in the understanding of the Draft Report, under the same liability rules as the producer, manufacturer or developer (Draft Report, 2020).
In addition, operators of AI systems should have to comply with a series of duties of care, whose infringement triggers a fault-based liability. For all categories of "operators", the duties are: a) choosing the right system for the right task and skills, b) monitoring the system and c) maintaining the system (NTF, 2019). Producers, whether or not they incidentally act as operators, should have to design, describe and market products in a way that effectively enables operators to comply with those duties, and to monitor the product after putting it into circulation. Both grounds of liability (strict and fault-based) lead to some questions that are not easy to solve, when the producer is held liable as backend operator and also for the violation of a duty of care. Moreover, the user or frontend operator could be held liable, besides the producer, on the basis of his fault in choosing the system for the task or because of his lack of skill in operating the system. Furthermore, a contract between frontend and backend operators is frequently concluded. From the victim's side there is a concurrence of claims against multiple tortfeasors (subjective uncertainty, alternative causes) that national legislators should consider when amending their legal systems.

Types of Defects
In my view, the liability regime suggested by the NTF is far more complicated than the one distinguishing the three types of defects that are often stressed: the manufacturing defect (i), the design defect (ii) and the information defect (iii).

(i) Manufacturing defects
Manufacturing defects occur when a particular item departs from the design characteristics that, conversely, the other copies of the same series do meet (e.g. an incorrect installation of the software in an autonomous vehicle or the presence of viruses in the software that cause accidents). The determining factor is the comparison of the product with other copies of the same series (Solé, 1997). In addition, the consequences of a defect in the manufacture of an autonomous system, such as an autonomous vehicle or drone, can be very serious both for people (the passengers of the autonomous vehicle, since we are no longer able to refer to its driver) and for items of property. Verifying that the product departs from the other copies of the same series, the standard of conduct used in the production and marketing of the product being irrelevant, would be sufficient to hold the producer liable. The manufacturer will be held liable on the basis of strict liability, although he may raise as a defence that the state of scientific and technical knowledge at the time the product was put into circulation did not allow the existence of the defect to be discovered (Art. 7 lit. e PLD).
In fact, the evaluation report of the Machinery Directive states that the development risks rule concerning intelligent robots and autonomous systems has to be reconsidered, in the sense that it should probably not be applied [Commission Staff Working Document, Evaluation of the Machinery Directive, SWD(2018) 161 final, 38]. In this line of thought, the application of the development risks rule to updates and upgrades is questionable, the NTF report being against it (NTF, 2019).

(ii) Design defects
In relation to design defects, consumer expectations contribute nothing new or different to what the manufacturer's duty to inform about the risks and to give instructions for the correct handling of the AI-based technology, as of any product in general, already achieves. If all this information exists, consumer expectations should not be disappointed. Instead, the "risk/benefit test" for design defects imposes on the manufacturer the duty to develop the safest design even when the risks are known and the consumer could be warned. Thus, the "reasonable alternative design test" is the most appropriate in the case of design defects. This means the application of a fault-based liability regime. However, taking into account the difficulties the injured party faces in proving a reasonable alternative design, a reversal of the burden of proof could be suggested for the benefit of the victim. Thus, the manufacturer would have to prove that there was no "reasonable alternative design" [concerning American law, see § 6(c) Restatement (Third); Conk, 2002] at the time the technology was put into circulation that would have prevented the damage. In this case, he could not escape liability by invoking the development risks rule as a defence, as it would be sufficient to prove the absence of an alternative design (Marco, 2007).

(iii) Information defects
The progressive sophistication of smart machines calls for more precise information and instructions that the manufacturer must provide to the purchaser of the smart technology. More information, but also more technical information, sometimes requiring specific knowledge on the part of the owner of the intelligent machine to be fully understood. Information in this context becomes more complex, which means that the information defect will be, along with the design defect, a more frequent type of defect than the manufacturing defect when dealing with intelligent machines. It must be taken into account that a product can be correctly designed and manufactured and still present risks that are impossible to eliminate, because there is no manufacturing process or alternative design that allows the product to be manufactured more safely according to the state of the art existing at that time. Therefore, information becomes more and more relevant.
Assessing whether there is an information defect means questioning the manufacturer's behavior, because it was he who supplied the information and decided how to provide it. Information defects make more sense in a system of liability based on fault than in one of strict liability (Salvador & Solé, 1999), which, in order to protect the injured party, should be complemented with a reversal of the burden of proof in favor of the victim; that is, the manufacturer should prove the completeness and readability of the information supplied. This would mean that the manufacturer should only be held liable for the lack of information on reasonably foreseeable risks; for risks unknown to him because, according to the state of science and technology, he could not know of them and, consequently, could not report them to the injured party, he should not be held liable, that is, he should be able to defend himself according to the state-of-the-art rule.

The Notion of "Producer". The "Designer" as "Producer"
The legal concept of "producer" should be reviewed. According to Art. 3 PLD, the producer is liable for damage caused to third parties by a defective product. A model that places liability solely on the producer, even where the defect is not strictly a manufacturing defect and some individually identified persons or a research team have been involved in the design, may disincentivise investment. If, in the case of smart machines, one considers that a range of defects may be due to design, the concept of "producer" should be broadened to encompass the "engineer-designer" (Machnikowski, 2016; Koch, 2018; Spindler, 2015; against: Wagner, 18/2018). However, to the extent that the software is viewed as a fundamental component part of the product, the producer of the finished product may not be held liable if the defect is due to the design of that component. Thus, the designer could be held liable directly as "manufacturer of a component part" of the robot for the damage caused. In the NTF Report the "designer" of the AI system could be considered an "operator"; in particular, a backend operator. If he has the control of, and benefit from, the operation, he could be held directly strictly liable [NTF, 2019; EU Commission in the "Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics", COM(2020) 64 final, 19.2.2020].

The Burden of Proof of Causation. Mechanisms to Alleviate It
The burden of proof means that the plaintiff and the defendant shall prove the facts on which they base their claim or their defense, respectively. Thus, Art. 4 PLD states that the victim must prove the damage, the defect and the causal link between them. Practical difficulties in proving causation "beyond any reasonable doubt" lead to the adoption of mechanisms that relieve the victim of proving it. One of them is the presumption ("evidentiary burden") that allows the judge to infer a fact from the evidence presented. "Proof by presumptions", as it is known, is widely applied by national courts in civil liability matters. The standard of proof varies, as is known, according to the liability regime. Therefore, the plaintiff's position is not the same where the claim is based on fault, in which case the plaintiff bears the burden of proving negligence, as where it is based on strict liability rules, in which case the plaintiff need only prove the defect, the damage and the causal link. Along with proof by presumptions, there are some mechanisms, integrated into the evidentiary process, which lead to the certainty of some facts. In the case of product liability, proof of the defect and of causation is particularly important. These mechanisms are, on the one hand, the so-called legal presumption and, on the other hand, the res ipsa loquitur principle, well known to the courts, by virtue of which there are events that usually do not occur except when a certain fact intervenes. This principle allows the judge to rely on the damage and the circumstances that surrounded it to infer from its mere existence or occurrence the behavior, or the causal link between such behavior and the damage caused (Marco, 2007; Luna, 2008). Damages, in any case, must be proven by the plaintiff.
Mechanisms to alleviate the burden of proof can be very useful to overcome the difficulties in proving causation between the behavior of an expert system and the damage caused, without reversing the burden of proof in order to protect the victim (Martín, 2018). The AI's black box or "logging by design" (NTF, 2019) could play a very significant role.
For its part, the NTF considers, as a general rule, that the victim is required to prove what caused the damage, although it admits, depending on the specific case, a reversal of the burden of proof or the adoption of mechanisms that alleviate the proof of the causal link (NTF, 2019). Apart from that, the burden of proving fault should be reversed where there are disproportionate difficulties and costs in establishing the relevant standard of care and in proving its violation (NTF, 2019; BEUC, 2020).

Outcomes
In terms of liability for damage caused by defects of intelligent machines, an important review should be carried out. Although the NTF Report "Liability for Artificial Intelligence and other emerging digital technologies" (NTF, 2019) is based on the finding that the application of the PLD has proven effective for consumer protection and product safety, it notes that new technologies challenge key concepts such as the product, its updates and upgrades, the defect, or the grounds of liability when the producer maintains a certain level of control over the expert system (product) once it has been put into circulation. The very concept of "time of circulation" should be reviewed.
On the other hand, the so-called development risks exception is in the spotlight. The trend is to prevent the manufacturer from invoking it (BEUC, 2020). In my view, it should not be abolished across the board; rather, as I have stressed, types of defects should be distinguished in order to decide in which cases this exception should not apply.
Moreover, the rule of "proportional liability" regarding causation and uncertainty in an environment operated by expert systems that interact with humans must be taken into account when considering a prospective regulation of civil liability for the damage caused, in particular, in order to hold the designer liable as producer. Nevertheless, the Draft Report on the Civil Liability Regime states "joint liability or solidarity" as the general rule (Draft Report, 2020).
In any case, future "personalized" information based on customer preferences, needs and capabilities, by way of analysis of the massive data stored by the manufacturer, may allow liability to be "personalized" (Busch & De Franceschi, 2018; Busch, 2019).