
I. Introduction

A. A working definition of artificial intelligence

The term artificial intelligence (AI) is an old one, and its meaning is subject to constant change. At the commercial level, very simple systems used to be called AI, while today the term is mostly used as a synonym for systems based on deep neural networks. Depending on the country or institution, the definition contains different core elements.[1] For the purposes of this article, AI therefore refers generally to a software-based system capable of solving complex problems previously reserved for humans.[2] Beyond this, there are at least two elements that, without being part of a general definition, carry great weight in current legal literature: the ability to learn,[3] and the ability to act with a certain degree of autonomy.[4] In order not to disregard relevant practical aspects, no narrower definition of AI is used. This article focuses on certain legal aspects of the use of AI that are of interest for clarifying liability issues.

B. Practical problems for liability issues

Regardless of a definition of AI, I believe there are several characteristics commonly attributed to this technology that are relevant to liability cases. These can be grouped into four categories: (i) the ability to be used flexibly; (ii) the ability to act autonomously; (iii) the ability to learn continuously; and (iv) the ability to draw conclusions from a large amount of data.

1. The ability to be applied flexibly

Classical computer programmes are very prescriptive: every possible option must be explicitly programmed into the software for it to work. Roughly speaking, if the programme encounters an unexpected situation, it stops working. In AI-based solutions, on the other hand, the AI learns by example to generalise a task, so that it can handle an unlimited number of different situations flexibly without every single possibility having to be anticipated and pre-programmed.[5] In the development of AI models,[6] a system is trained and tested several times until a satisfactory correctness rate is achieved. If perfection is achieved during training, however, this is almost always a sign of a fundamental error, namely "overfitting": the model has learned the examples "by heart", or has learned a very rigid approach, and may therefore mishandle future cases that differ even slightly from the examples used during training.[7] Since this generalisability is ultimately a very complex statistical approach, the disadvantage is that one has less control over how borderline, new or exceptional cases are treated. So while with classical programmes one can (and must) say with certainty in advance how each case will be treated, with AI-based solutions there is always a margin of error that must be taken into account. To mitigate the fact that pure AI is inherently error-prone or potentially unpredictable (although admittedly often less error-prone than the average human), it is therefore coupled with classical coding, which supports its functionality.
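The overfitting problem described above can be illustrated with a deliberately toy-sized sketch (invented for this article, not taken from any AI product): a "model" that learns its examples by heart is perfect on the training data yet has no answer for an input that differs only slightly, while even a crude generalising rule copes.

```python
# Illustrative sketch: a classifier that "learns the examples by heart"
# versus one that generalises. All names and data are invented.

def train_memoriser(examples):
    """Learn the examples verbatim: a pure lookup table."""
    table = dict(examples)
    return lambda x: table.get(x)  # no answer at all for unseen inputs

def train_threshold_rule(examples):
    """A crude generalising rule: compare the input with the mean of the examples."""
    boundary = sum(x for x, _ in examples) / len(examples)
    return lambda x: "high" if x >= boundary else "low"

training = [(1, "low"), (2, "low"), (8, "high"), (9, "high")]
memoriser = train_memoriser(training)
rule = train_threshold_rule(training)

# Perfect on the training data ...
assert all(memoriser(x) == label for x, label in training)
# ... but helpless on a case that differs only slightly from a training example:
print(memoriser(8.5))  # -> None: the memoriser has no stored answer
print(rule(8.5))       # -> "high": the generalising rule still copes
```

The sketch also shows the flip side discussed in the text: the generalising rule is only a statistical approximation, so its behaviour on borderline inputs near the learned boundary carries an inherent margin of error.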

2. The ability to act autonomously

One of the basic functions of computerisation and computer programmes has always been to speed up and automate human tasks. Thanks to the aforementioned ability of AI-based systems to deal flexibly with inputs, and thus to perform very complex tasks in a way that resembles or even surpasses the capabilities of a human, machines are increasingly trusted to decide autonomously on the best course of action to complete the assigned tasks. Because of this notion that the computer programme is autonomous, the causal chain between the actions of the AI-based system and its user is also perceived to be weaker.[8]

3. The ability to learn on an ongoing basis

With traditionally programmed computer programmes, every adjustment must be thought through in advance and explicitly programmed in. If the circumstances in which the software is used change, the computer programme must be adapted manually. AI-powered systems, however, can learn from experience, so the information gathered while performing their assigned tasks can be used to teach these systems to perform the tasks better and adapt to changing circumstances. This further weakens the perceived link between the programmer and the outcome of the AI-based system. [9]

4. The ability to draw inferences from a large volume of data

One of the transformative capabilities of AI technology is the ability to automatically draw inferences and connections from a large volume of data and to use this knowledge to accomplish a task. This allows tackling problems that were previously considered too difficult because of their complexity. [10] AI is able to independently find solutions to certain problems or to make predictions about the future from data with greater certainty. Since the approach of an AI system differs from that of a human, the same system can achieve a high degree of correctness and still make mistakes that are obvious even to a layperson. It soon became apparent that the data used by AI for this purpose not only has the potential to reveal basic causal relationships and scientific rules of the problem being solved, but also contains a large amount of insignificant correlations, embedded biases, misconceptions and bad habits. These affect the functioning of AI systems and can lead to undesirable results. [11] This realisation fundamentally changes the meaning of data. Unlike classical computer programmes, where coding lies at the centre, the development of AI-based systems also requires a hitherto little-known discipline: the maintenance and curation of the underlying data.

II. Liability issues

In practice, AI can be used in very different ways and touches most areas of our lives. If it causes damage, in most cases there is a contractual relationship between the injured party and the responsible party, or the facts of the case are governed by specific legal liability standards, so that the specific circumstances of the individual case must be taken into account. Nevertheless, it is worthwhile to examine the special liability issues of tort liability, as these cover most of the basic legal issues of liability law.

A. Tort liability

Tort liability under Art. 41 CO is considered a catch-all standard for civil liability in tort law. In order to establish liability, the following points must be proven: damage, the unlawfulness of the infliction of damage, the (adequate) causal connection, the wrongful conduct of the tortfeasor and his or her fault.

1. Damage and illegality

In the context of tort liability, relevant damage is usually defined as damage to an absolutely protected legal asset, i.e. persons and property. Also relevant are pure financial losses for which there is a protective norm. [12] These two aspects of liability law are not very special in the case of damage caused by AI and will therefore not be discussed further.

2. Causal link

a. Autonomy of an AI system

In AI applications, identifying the responsible person, and thus determining a tortfeasor, is particularly difficult. A first aspect to consider is the autonomy of systems that can act autonomously (i.e. act in a legally relevant way according to their own programming). In these cases, the question arises whether an AI system can achieve such a degree of autonomy that no one can be held responsible for its actions. The answer is in principle negative, since at least one person can always be identified who is responsible for the creation of an AI-based system and its capabilities, and one who is responsible for its use. [13] Various scenarios can be distinguished that describe the degree of human control over autonomous AI systems when they are deployed and can assist in attributing liability:[14]

- Human in the loop: Here, a human is involved in the system as the person in charge and decision-maker. This means that the AI system can be divided into two parts. The first part, which is characterised by the AI, is a decision support system that proposes the best course of action or options for action; the person then makes the decision to take or not to take a particular action. The second part of the system is the subsequent execution of the decided action, which should be predictable and thus corresponds to the imputability issue of a classical computer programme. If, despite the involvement of a person, the consequences of that person's decision cannot be determined because an AI system is again involved, this second part of the case corresponds to one of the next categories.

- Human on the loop: Here, a human is involved in deploying and monitoring the activities of the AI system. The AI system can basically carry out the task assigned to it independently. A person does not have to approve or determine every single action, but regularly monitors the activity of the AI-based system and can intervene if necessary when problems are detected. The fundamental difference between "in the loop" and "on the loop" is thus that in the first case the human has a decision-making function, while in the second case he or she has a reactive monitoring function.

- Human off the loop: In this case, the AI system runs completely autonomously in every operation, so that the possibility of a person intervening is limited. In principle, the human does not intervene in the execution of the assigned activity. This is the case when the system acts on the basis of a pre-trained AI, but also when the AI is independently trained from the accumulated experience during the operation.
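The three oversight configurations above can be sketched as a simple control flow. This is purely illustrative: the function names (`ai_propose`, `human_approve`, `human_monitor`, `execute`) and the return strings are invented stand-ins, not terms from the legal literature.

```python
# Hypothetical sketch of the three human-oversight modes described above.
# All callables and labels are illustrative assumptions.
from enum import Enum

class Oversight(Enum):
    IN_THE_LOOP = "human decides each action"
    ON_THE_LOOP = "human monitors and may intervene"
    OFF_THE_LOOP = "system acts fully autonomously"

def run_step(mode, ai_propose, human_approve, human_monitor, execute):
    """Carry out one task step under a given oversight mode."""
    action = ai_propose()
    if mode is Oversight.IN_THE_LOOP:
        # Decision-making function: nothing is executed without approval.
        if not human_approve(action):
            return "vetoed by human"
        return execute(action)
    result = execute(action)  # the system acts on its own
    if mode is Oversight.ON_THE_LOOP and human_monitor(result):
        # Reactive monitoring function: intervene once a problem is detected.
        return "interrupted by supervisor"
    return result  # OFF_THE_LOOP: no human intervention foreseen
```

The sketch makes the liability-relevant difference visible in the control flow: "in the loop" gates every action on a human decision, "on the loop" only reacts after execution, and "off the loop" never consults a human at all.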

With "human in the loop", an identifiable person is in a position to determine the action of the AI-based system. This is usually the case when the AI-based system does the preliminary work for the human. In practice, this type of use is chosen when the AI is only to perform simple mechanical preliminary work, such as automatically extracting information from a document; in situations where the risk of harm is high, such as when making a diagnosis; or in complex situations where the solution requires a mix of machine and human skills, such as when analysing a complex market situation to make an investment decision. The user's liability may be mitigated or excluded if the programme is flawed[15] or if the user is unable to recognise the consequences of their decision or to examine the solution proposed by the AI critically enough. [16] The reasons for the latter often do not lie in the nature of the AI technology used (since it has already made its contribution before the person decides), but rather in a lack of information or training of the person who is "in the loop", or in the negligence of a person who, despite sufficient training or education, does not have the necessary skills to fulfil their role.

In the case of "human on the loop", the system is basically capable of acting independently but is supervised by a person. This is usually done to address residual risks that may arise despite use in accordance with the intended purpose of the AI system, or in the context of reasonably foreseeable misuse. [17] The imputability of harm depends on the supervising person's ability to influence it. The supervisor may be at fault if, despite a duty and an opportunity to act, he or she fails to interrupt or prevent the misconduct of the AI-based system and to mitigate or even immediately reverse the harm. [18] If, however, damage is caused by the AI's misconduct despite dutiful behaviour of the supervising person, the fault, and thus a responsible party, must be sought elsewhere. This is usually the provider of the AI solution. If there is no evidence that the provider is responsible for an AI error, the general risk theory doctrine may come to the rescue, [19] so that the user or the owner of the system, depending on who would have had the best opportunity to avoid the damage, may nevertheless be considered the responsible person.

With "human off the loop", the system is basically left to itself to carry out its task. This constellation is often discussed in the context of autonomous weapons and the associated efforts to ban them, [20] but is better known from autonomous vehicles and the automation of certain processes. Here, human intervention during the operation of the AI-based system is in principle not foreseen, unless it serves to adjust the machine's target (think of an autonomous car that allows the destination to be changed before the end of the journey). The user of the AI-supported system, i.e. the person who initiated its activity, can in principle only be at fault if the intended goal or the use of the fully autonomous AI system itself is in breach of a duty. For example, the use of such a system is in breach of duty if the AI-based system cannot be used safely (e.g. a vehicle is not able to reliably detect and avoid humans), [21] if necessary safety measures are not provided for (e.g. there is no limit on the transaction capability of an automated trading system), or if the use of the AI system is enabled by persons who are not able to recognise and appropriately eliminate the dangers (e.g. provision of a diagnostic system to persons who are not professionally trained). [22] In my view, the use of a fully autonomous AI-based system cannot simply be assumed to be permissible merely because no explicit requirement applies; reasonable care with regard to the potential for harm must still be exercised.

It may be relatively easy to use the aforementioned categorisation to establish the causal link between the AI application and its user or provider. [23] However, it is more difficult in individual cases to develop this causal link further, that is, to identify a possible technical fault as the cause. In particular, it is necessary to analyse what can happen in the causal chain in order to identify further persons who can potentially be held responsible for the harm.

b. Transparency of an AI software

In general, when the functioning of a classical computer programme depends directly on its coding, the person causing the damage can be identified as either the user or the creator of the programme. [24] An AI application, on the other hand, is the result of the combination of software (which corresponds to classical coding) and the data used to train it. Both factors play a crucial role, and they can often be attributed to at least two different people. This means that the circle of possible tortfeasors widens considerably, which in turn means that the facts to be clarified become more complex. [25]

On the data side, because individual data points influence only a fraction of the functioning of the AI application, it is difficult to determine which data set, and to what extent, caused the system to produce the infringing result. [26] Such data are difficult to locate and identify. The basis of an AI system may be a pre-trained general model (or models) provided by a third party, for which the datasets used for training are not disclosed. Proprietary datasets may then be used to adapt the model to one's own purposes. Finally, the model is often trained and updated using newly collected data. For data protection reasons, the data used for the continuous training of the system is often not available, or only available in encrypted form. [27]

In terms of coding, AI development today, as with all modern software, usually relies on many different publicly available components of great complexity. [28] Even the code that makes a working application possible can run to millions of lines from different sources, so that in individual cases tracing a bug can be almost impossible. [29]

Recently, the discussion about the explainability and comprehensibility of AI applications has advanced to the point that numerous good practices and services are emerging that attempt to make the decision-making factors of an AI transparent and thus to explain how it works (so-called "explainable AI" and "interpretable AI"). [30]
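One family of such explainability techniques measures how much a model's performance degrades when one input feature is scrambled (so-called permutation importance). The following is a minimal, invented sketch of the idea; the toy model, the data, and the use of a deterministic reversal in place of a random permutation are all simplifying assumptions.

```python
# Illustrative sketch of permutation-style feature importance:
# how much does accuracy drop when one input column is scrambled?
# Model and data are invented toy stand-ins.

def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def feature_importance(model, rows, labels, feature_idx):
    """Accuracy drop when one feature column is scrambled (here simply
    reversed, a deterministic stand-in for a random permutation)."""
    scrambled = [list(r) for r in rows]
    column = [r[feature_idx] for r in rows][::-1]
    for row, value in zip(scrambled, column):
        row[feature_idx] = value
    return accuracy(model, rows, labels) - accuracy(model, scrambled, labels)

model = lambda r: r[0] > 0  # toy model: only looks at feature 0
rows = [(-2, 5), (-1, -3), (1, 7), (3, -1)]
labels = [False, False, True, True]

print(feature_importance(model, rows, labels, 0))  # 1.0: decisive feature
print(feature_importance(model, rows, labels, 1))  # 0.0: ignored feature
```

A large drop marks a feature the model depends on; a drop near zero marks one it ignores. This is the kind of transparency signal that, as discussed below, can help an injured party narrow down where in the system a fault originated.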

Despite this welcome development, it is almost impossible for the injured party to prove the actual cause of the damaging event, as this requires unrestricted access to all existing log files, to the coding of the software and, if nothing else is available to ensure the transparency of the AI, to the data used for training. [31]

In the absence of transparency of the system, the general risk theory doctrine comes to the rescue, since the exact cause of the harm cannot be stated. [32] According to this doctrine, the person who creates, maintains or is otherwise legally responsible for a dangerous condition must take all necessary and reasonable protective measures suitable to prevent the impairment of protected legal interests.

If one follows the argument that AI, by its very nature, has the potential to act unexpectedly and that this can have negative effects on absolutely protected legal interests (such as property, life or health), then the user or the provider has a duty to mitigate the damage. In my opinion, the injured party is therefore not obliged to prove the causal chain beyond the provider in order to assert a claim under Art. 41 CO. He must only be able to prove that the provider did not take the necessary due diligence measures to mitigate the damage.

This approach presupposes that security restrictions must be imposed regardless of how successful an AI-based system has otherwise been, an attitude confirmed by the ongoing discussion about the use of AI. The limit of this liability is the doctrine of permissible risk. According to this doctrine, "a danger to the legal interests of others that does not go beyond the general risk of life cannot be prohibited, but only compliance with a certain minimum degree of care and consideration can be demanded. In the case of a permissible risk, the prohibition of any danger is replaced by the requirement to limit the danger to the minimum that cannot be excluded at all or only with disproportionate effort, if the corresponding activity is to be permitted at all. This concerns the question of which risks are generally to be accepted and not a reduction of the due diligence requirements. A breach of the duty of care is only to be assumed if the perpetrator could have foreseen or should have foreseen a risk to the legal interests of the victim. In answering this question, the standard of adequacy applies. According to this, the conduct of the offender must be suitable, according to the usual course of events and the experiences of life, to bring about or at least favour a result such as the one that occurred. For the occurrence of the result to be attributable to the offender, its mere foreseeability is not sufficient. Rather, the further question arises whether it was also avoidable. For this purpose, a hypothetical causal course is examined and it is checked whether the success would not have occurred if the offender had acted dutifully. For the attribution of the result, it is sufficient if the conduct of the offender was the cause of the result with at least a high degree of probability or with probability bordering on certainty".[33]

3. Fault

a. Fault of the user

In order to establish fault, a duty of care must have been breached.[34] In this context, the question arises as to who between the user and the provider of an AI solution is actually liable if the breach of the duty of care consists in the failure to take necessary measures to avoid damage. In particular, the question arises whether a duty to act can be established and how the injured party can prove its breach.

1. Human in the loop

In AI-based systems where a person retains the decision-making function, both the objective and subjective sides of fault must be met for the system's actions to be attributable to that person. [35]

On the subjective side, the person must be able to understand the consequences of their actions and to act differently. [36] This aspect may be relevant if a person uses an AI product without being able to critically evaluate the options proposed by the AI system; in that case, the person is unable to recognise the potential for harm. However, this only applies as long as the situation is not self-inflicted (so-called fault in assumption), i.e. as long as the person has not taken on an activity for which he or she is not sufficiently qualified. [37]

On the objective side, the user must exercise the care that a reasonable third party would have exercised in the same situation in order to avoid the foreseeable damage. The user would therefore have to exercise reasonable care in selecting the solution and the provider, as well as in applying the AI-based solution. This includes, among other things, a risk analysis of the use of the AI-based system and the implementation of the resulting reasonable measures, which may differ greatly depending on whether the user is a consumer or a large company. [38] It is important to note that the user does not have to be an AI or IT specialist if he uses external solutions. However, he must be able to explain his requirements to the AI provider to the extent that the latter can initiate the necessary technical steps and ensure adequate training. [39] Since the user must be in the decision-making position, he must also ensure that he is in a position either to assess the correctness of the functioning of the AI system or to have an appropriate security framework in place to prevent possible damage. If this is not the case, he may be responsible on account of fault in assumption. [40]

These requirements can be difficult to meet when implementing an AI system that is supposed to find a better solution than a human, but for which no comprehensible explanation is available. In such cases, the human should be used as a decision-maker to correct errors of the AI system that are obvious to a human but not to the machine: the human then decides whether the proposed solution is acceptable. The extent to which trust in a machine is exculpatory depends, in my opinion, on both the proven effectiveness of the system and the safety measures applied, including the qualifications of the decision-maker. The greater the risk of a wrong decision, the higher the demands on the supervising person.

2. Human on the loop

A similar situation arises when a person has only a monitoring function. Here, however, the person is often not responsible for checking every individual decision of the system, but only for recognising a situation that deviates from the intended one and for intervening to avert possible harm from a wrong decision. The person therefore does not need to be competent in the same domain as the AI system, but only able to recognise harmful behaviour or situations with potential for harm and to intervene accordingly.

3. Human off the loop

If the AI-supported system is fully autonomous, the user must objectively have taken all reasonable safety measures to avoid harm. What these measures look like in concrete terms depends on the circumstances of use.

On the subjective side, the person must be able to understand the consequences of their actions and to act differently. This aspect can be relevant if a person uses an AI product and cannot be expected to anticipate a malfunction of the AI-supported system. This could be the case, for example, if a drone equipped with an AI control function flies into a person because it does not recognise them as an obstacle. In such cases, either product liability or another special liability regime that allocates the risk of physical harm differently from ordinary tort liability often comes into play.

b. Fault of the provider

Of course, it is not only the user who can be at fault, but also the provider who takes responsibility for the development of the AI solution. The provider must ensure that reasonable care has been taken in the development of the AI solution.

The degree of care to be objectively applied depends primarily on the intended use as well as the promised properties.

Since AI is still a young and rapidly evolving discipline, it is difficult to identify best practices that can be applied in a given case and serve as a basis for determining sufficient due diligence in development. [41] Since AI is software, established standards, such as ISO standards, can be used as a tool. [42] Currently, there is an extensive discussion on the regulation of AI, among others in the EU, where regulation has been proposed to establish principles for the requirements of sufficient care, flanked by ethical guidelines that set ethical requirements for the development of AI. There are also standardisation initiatives, such as those of ISO, that deal specifically with AI. [43] Since these standards often do not have the force of law, the question arises how quickly and to what extent an AI provider or developer should implement their requirements, as tracking and implementing them involves considerable effort, which in turn can hinder the development and application of AI-based systems. Again, this is a value judgement that depends on the circumstances.

In general, the provider of the AI solution would need to exercise due diligence to ensure that it followed the best practices in place at the time (and those in place today, if it is responsible for maintenance of the software, which is the rule rather than the exception these days) when developing its tool, so that the AI application is not dangerously flawed, and that it has taken all reasonable security measures to mitigate the potential for harm from a flaw of the AI solution.

It should be noted that in the development of AI-based solutions, two principles with diametrically opposed effects compete with each other. On the one hand, as seen, the pure AI algorithm is inherently error-prone, which is widely acknowledged, and security measures such as hard-coded or physical boundaries are therefore used. On the other hand, a risk minimisation principle that aims to avoid all hazards comes up against the fact that it is impossible to avoid every conceivable risk while still providing viable and useful solutions. The principle of permissible risk must therefore be taken into account. The decisive factor in these cases is a question of values that cannot be answered in advance.

The question of the culpable defectiveness of an AI application raises various problems. First, the fault to be proven depends on the damage: in tort liability, owing to the limitation of covered damages, a defect is only relevant if it is causal for the damage, which also presupposes that the AI application is used in compliance with its intended use. [44] Secondly, as already mentioned, the success of AI applications lies in the fact that they learn a generally valid solution from the training data and apply it flexibly. Experience teaches, however, that there will always be exceptional cases, so that errors can never be completely avoided. Furthermore, an AI application can be considered defective under various aspects.

A first aspect that can be used to determine the defectiveness of an AI system is the general success rate in completing the assigned task: the exact threshold of the success rate to determine any defectiveness depends on the circumstances and cannot be determined objectively, but must be considered relatively. In the discussion about AI, humans are often taken as the benchmark. But which human being or group of human beings should be taken as the benchmark? And should the bar for AI applications be set higher than that for humans? And considering that no human is perfect and everyone makes mistakes,[45] is the human the right benchmark for this measurement? These highly sensitive ethical questions depend on the practical scope of application, the stage of development of the technology, and the legal interests at stake. So while one can be less stringent about the ability of an AI-based irrigation system for plants in a flat, one will want to achieve near-absolute safety for vital medical devices. [46]

Assuming that this benchmark has been identified and defined, and that it can be shown on the basis of various events that the actual success rate is lower than the established benchmark, one can speak of a faulty AI solution. However, even if the system is generally considered to be non-defective because each error is within the accepted margin of error, this does not yet mean that the individual error is not due to an attributable breach of a due diligence duty.

A second aspect is therefore the specific success rate: it is difficult to classify individual events as lying within or outside the accepted margin of error. [47] In order to test the success rate of an AI-based system, one must have the necessary statistical data; if this is lacking, a more specific breach of the duty of care must be proven. As mentioned above, the topic of "explainable AI" has become widespread in recent years, so that techniques and services are being developed that attempt to make transparent the most important factors behind an AI's decision and its functioning. [48] However, these techniques often need to be applied in advance, as retrospective use is not always possible. A single error may then be the result of a problem in the training data from which the AI system has learned an incorrect rule (known as "bias"). If it is proven that the harmful result is the consequence of the injection of bad data (e.g. because due care was not taken in the selection and preparation of this data, or because someone was able to inject falsifying data because adequate countermeasures were not taken), responsibility can be assigned on the basis of these errors. It should be noted that the AI provider can be held responsible for failing to take the necessary due diligence measures to avoid erroneous data even if the damaging event results from the accumulation of different data sets from different sources, each of which by itself could not have sufficiently influenced the model. [49]
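Where operating statistics are available, the comparison of an observed error count with an accepted error rate is a standard statistical exercise. The following sketch uses an exact binomial tail probability; the accepted rate and the case figures are invented for illustration and say nothing about any real system or legal threshold.

```python
# Illustrative sketch: is an observed error count statistically compatible
# with an accepted error rate? All figures are invented.
from math import comb

def binomial_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the probability of seeing k or
    more errors in n cases if the true error rate were the accepted rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

accepted_rate = 0.02      # hypothetical benchmark: at most 2% errors accepted
cases, errors = 1000, 35  # hypothetical operating statistics

p_value = binomial_tail(cases, errors, accepted_rate)
print(f"probability of >= {errors} errors in {cases} cases "
      f"at rate {accepted_rate}: {p_value:.4f}")
# A very small value suggests the system performs worse than the benchmark;
# it does not by itself show which duty of care was breached, or by whom.
```

As the final comment notes, such a calculation can only support the general-success-rate inquiry; attributing an individual error still requires the more specific evidence discussed in the text.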

The same applies if an error is found in the code. Here it is necessary to check whether there are deficiencies in the development of the AI architecture (e.g. wrong signals are filtered out or wrongly weighted) or in the coding of a software or process in which the AI solution is embedded (e.g. in the absence of reasonable security limits).

In order to work out the precise criteria for determining the practical requirements for the AI system and the people who interact with it, it is important to remember that the use of these AI-based solutions does not take place in a legal vacuum, so that, depending on the area of application, there are often already developed technology-neutral safeguards and duties of care designed to prevent the infliction of harm. The uniqueness of an AI solution in this respect is that the applicable duties of care often need to be considered before the AI solution is deployed, i.e. during development and planning, rather than only during the operation of the system, as its effects are widespread and therefore more noticeable than in previous methods.

B. Product liability

Damage to physical legal assets caused by an AI-supported system often falls under product liability, as the AI must be embodied in order to achieve such a result. Notably, this liability cannot be limited under product liability law. [50]

Unlike in tort liability, however, instead of fault only a defect in the product must be proven under Art. 4 Product Liability Act. As advocated by a large part of the doctrine, the creator of an AI solution is therefore often liable under the Product Liability Act for property damage and personal injury.[51] This is certainly the case if the AI is a component of a movable object. More difficult is the situation when the AI-based software and the object on which the AI runs are separate, so that the manufacturer of the object cannot bear any responsibility for the software component. This is increasingly the case with the current trend of offering physical platforms for which third parties can develop and offer their own software components.[52] Although controversial, and despite the potential for improvement in regulation, in these situations part of the doctrine advocates considering pure software as a product as well. [53]

It should be borne in mind that the Product Liability Act limits liability for property damage to privately used objects, so that for AI-based systems used for business purposes, only death or personal injury is covered by the Product Liability Act. [54] However, these issues are usually regulated in the contract between the provider and the user.

C. Contractual liability

As mentioned above, in most cases an AI system is also used to fulfil contractual obligations or is even the subject of a contract. The manifestations of such a contract are very different, and each of them can raise particular issues in the use of an AI.

We will examine here only one particular manifestation of the use of AI in the performance of a contractual obligation. Because of its timeliness and the nature of the issues it raises, I believe it illustrates well the liability questions that arise when AI is used in the performance of a contract.

Let us assume, concretely, that a specialist doctor makes a wrong diagnosis because he relied on the recommendation of AI software in an uncertain case. Here, the doctor is contractually liable to the patient. The contract between patient and doctor is a mandate, so the proper performance of the contract requires the application of the duty of care owed. [55]

1. As a service provider using AI

The first question that can be asked is whether the use of AI software in fulfilling the mandate violates the required duty of care. It is now widely recognised that AI offers great opportunities in the medical field as well, and hospitals are increasingly using this technology, both in diagnosis and in determining the best therapy. [56] Deontologically, a doctor is obliged to seek another opinion if he or she believes that he or she has reached the "limits of the doctor's capacity" and that the patient's welfare requires it. [57] AI-based diagnostic systems are now often regarded as practical second opinions, so that their use - provided the system has been carefully designed and the user is aware of its limitations - at least does not breach the duty of care.

The converse question, whether the use of such a tool is necessary, is more difficult to answer and can change rapidly given the speed of progress in this field. The degree of care required depends in particular on the circumstances, such as the nature of the treatment, the risks involved, the degree of discretion, the resources and time available to the doctor in each case, and the doctor's training and skills. Given doctors' obligation to undergo continuing training,[58] the increased need for support in the analysis of patient data, and the great interest in AI technologies, I believe we will increasingly move towards requiring the most efficient and effective of these applications to be part of the basic equipment of specialists. It will then be reasonable to expect a doctor in these situations to have a carefully selected AI-enabled tool, the non-use of which could be an indication of a breach of the duty of care. Depending on the reliability of the system, the need to substantiate the diagnosis would then be higher if the doctor acted against the advice of the system. This applies, however, only if the advice of the AI system can be considered an expert opinion. According to case law, reports and expert opinions have probative value if they appear conclusive, are comprehensibly substantiated and free of contradictions, and there are no indications against their reliability. [59] Based on this obligation to substantiate, the use of an AI-based system could increase the requirements for substantiating a diagnosis only to the extent that the system can explain its result in a comprehensible manner, as this explanation should then be included in the substantiation of the diagnosis. [60] This is often not the case today, which also explains the delay in the adoption of AI-based diagnostic systems.
In the absence of an explanation on the part of the AI system, the doctor is, in my opinion, only obliged to verify the correctness of his own conclusion (a so-called double check) in order to be able to prove sufficient care.

This consideration reaches its limit when the reasoning behind an AI system's result becomes incomprehensible. This refers to the situation in which the AI system can in principle explain the correctness of its conclusion, but the explanation is so complicated that even the average expert cannot understand it. The user would then no longer be able to contradict the AI system or recognise possible errors. In my opinion, such a constellation is comparable to a medical device whose use requires an approval procedure. This procedure may require additional safety measures that deviate from traditional medical practice, but which must be fulfilled in order to avoid erroneous results on the part of the AI system.

Thus, although the use of AI today generally constitutes neither proof of adequate care per se nor a breach of the duty of care, it must be examined in each individual case whether a misdiagnosis is based on a breach of the duty of care, which in individual cases may require the physician to use an AI-based support system and thus also place increased justification requirements on the diagnosis.

How the provider of an AI-enabled product or service is liable for the outcome and any damages is then a matter of interpretation of the underlying contract between provider and user.

2. As an AI provider

If the AI-based solution is provided to the service provider under a purchase or service contract, the service provider or user is primarily liable for the results of the application to its own contractual partners.

If only the software is sold, the seller is liable both for the existence of warranted characteristics and for ensuring that the object of purchase has no defects that impair its usability. However, since AI systems learn from data, the buyer of a learning-capable AI system essentially becomes responsible for the data it is fed and the consequences thereof. Liability of the AI provider for learned behaviour will then be difficult to establish, unless this was explicitly agreed in the contract or was otherwise to be expected according to the state of the art at the time the system was developed.

Due to these problems, as well as other peculiarities of modern software development practice in general, an AI solution is often offered as a service (so-called Software as a Service, or "SaaS"). [61] The AI provider is then able to continuously develop the solution and to train the AI on a larger amount of data. Responsibility for defects in the AI system then lies with the provider, unless otherwise agreed.

III. Conclusion

The attribution of liability for damage caused by an AI-based system raises various problems in determining the applicable duties of care, owing to the special design of this technology. Because of its more flexible applicability, which carries an inherent risk, the use of an AI-based solution requires new, specific due diligence measures on the part of the user, who could otherwise be held liable under the doctrine of the general risk theory. If the user has exercised reasonable care, fault may lie not only in the coding of the AI-based solution, but also in the selection and preparation of the data used for its training. Despite advances in this area, fault is very difficult to prove in these cases.

N.B. This article was published on HAVE 04/2021. The article has been slightly tweaked in certain passages from its original version in order to improve readability.

1 Andrea Bertolini, Artificial Intelligence and Civil Liability, Brussels 2020, 15 ff.

2 Report of the interdepartmental working group on artificial intelligence to the Federal Council, Bern 2019, 7. This definition is closer to the EU proposal for a law on artificial intelligence.

3 Melinda F. Lohmann, A Future Liability Framework for Artificial Intelligence, HAVE 2021, 111 ff, 111 f.

4 Erdem Büyüksagis, Responsabilité pour les systèmes d'intelligence artificielle, HAVE 2021, 12 ff, 12; Silvio Hänsenberger, Die Haftung für Produkte mit lernfähigen Algorithmen, Jusletter 2018, para. 7 f.

5 Arun Das/Paul Rad, Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey, arXiv 2020, 1.

6 An AI model is a file that has been trained to recognise certain types of patterns, visited 22.9.2021.

7, visited 22.9.2021.

8 Alexander Campolo/Kate Crawford, Enchanted Determinism: Power without Responsibility in Artificial Intelligence, Engaging Science, Technology, and Society, January 2020, 1 ff., 12 f.

9 Campolo/Crawford (fn. 8), 12 f.

10 One of the latest great achievements of AI is the ability to predict the three-dimensional shape of a protein from its composition, visited 22.9.2021.

11 Das/Rad (fn. 5), 1.

12 BGE 141 III 527 E. 3.2.

13 Lohmann (fn. 3), 112.

14 Jerome Gurtner, Les nouvelles technologies et la responsabilité des avocats, in: Responsabilité civile et nouvelles technologies, Genève/Zurich 2019, 45 ff, 73 f.; Jermy Prenio/Jeffery Yong, Humans keeping AI in check - emerging regulatory expectations in the financial sector, BIS 2021, available at, visited on 22.9.2021.

15 See II.A.2.b.

16 These correspond in particular to a case of interruption of causality due to third-party fault, or lack of the subjective element of fault.

17 See the EU proposal for a law on artificial intelligence, available at ?uri=CELEX:52021PC0206&from=EN, visited on 22.9.2021.

18 In the absence of an express protective norm establishing a guarantor status for the supervising person, one can fall back on the doctrine of the peril theorem in the case of absolutely protected goods: Hardy Landolt, Haftung für rechtmässige Schadenverursachung, HAVE 2014, 3 ff, 6; Judgement of the Federal Supreme Court 4A_104/2012 of 3 August 2012 E. 2.1.

19 See II.A.2.b.

20 See, for example, UN work in this regard: the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/, visited 22.9.2021.

21 See II.A.2.b. in discussing the doctrine of the general risk theory.

22 This conclusion is reached in particular by the discussed principles of the general risk theory, product liability and contractual liability.

23 The problem of today's reality, where software is often offered as a service, and where this can often also represent a combination of various independent "microservices", is not addressed here. The fragmentation of offers, which is a consequence of the welcome opportunity for people to use their own data, poses a challenge in determining the facts and the parties involved.

24 The creator of the programme can nowadays be a complex network of independent contributors. See in this respect Christiane Wendehorst, Safety and Liability Related Aspects of Software, Luxembourg 2021, 16 ff.

25 Concerning the discussion about a need to adapt regulation in order to reduce this complexity for reasons of justice and efficiency, see e.g. Lohmann (fn. 3) and Wendehorst (fn. 24).

26 Das/Rad (fn. 5), 9 ff.

27 Wendehorst (fn. 24), 82.

28 Wendehorst (fn. 24), 22 ff.

29 Although it is relatively easy to create a neural network (the GPT-2 source code is 174 lines of code, see https://, visited 22.9.2021), the underlying libraries that enable the easy creation of neural networks consist of millions of lines of code. TensorFlow, for example, comprises nearly 3 million lines of code., visited 22.9.2021.

30 Das/Rad (fn. 5), 1 f.

31 Cf. Lohmann (fn. 3), 117.

32 Landolt (fn. 18), 5; BGE 124 III 297 E. 5b: "The general risk theory is [...] to be applied when the causal connection or unlawful cause between an omission and the damage that has occurred is to be assessed."

33 BGE 134 IV 193 E. 7.2 f.

34 Landolt (fn. 18), 5.

35 Landolt (fn. 18), 11.

36 Idem.

37 Judgement of the Federal Supreme Court 6B_1341/2015 of 25 February 2016 E. 4.3.3.

38 The risk analysis referred to here is a manifestation of the objective duty of care, which requires a weighing of the consequences of one's own actions that is appropriate to the circumstances (see Judgement of the Federal Supreme Court 6B_1341/2015 of 25 February 2016 E. 4.3.1).

39 Idem.

40 Judgement of the Federal Supreme Court 6B_1341/2015 of 25 February 2016 E. 4.3.3.

41 Lohmann (fn. 3), 117.

42 E.g. ISO/IEC/IEEE 12207:2017 "Systems and software engineering - Software life cycle processes".

43 See in this regard Stefano Nativi, AI Watch: AI Standardisation Landscape, Luxembourg 2021, 20 ff.

44 See Gianni Fröhlich-Bleuer, Softwareverträge, Bern 2014, para. 2256.

45 BGE 105 II 284 E. 1.

46 Cf. Wendehorst (fn. 24), 35 ff.

47 Wendehorst (fn. 24), 82 f. For example, whether a single wrong decision is part of the 1% margin of error or whether it is attributable to something else.

48 For a technical overview of the current status: Das/Rad (fn. 5).

49 Judgement of the Federal Supreme Court 4A_606/2017 of 30 April 2018 E. 3.1.1.

50 Art. 8 PrHG.

51 Art. 1 PrHG.

52 Wendehorst (fn. 24), 65; Lohmann (fn. 3), 115.

53 Lohmann (fn. 3), 115; Fröhlich-Bleuer (fn. 44), para. 2251 ff.; Wendehorst (fn. 24), 65 f.

54 Art. 1 PrHG.

55 Judgement of the Federal Supreme Court 4A_432/2020 of 16 December 2020 E. 6.2.

56 "One important area is imaging, i.e. X-ray, CT or MRI images. Here it is often a question of seeing the 'needle in the haystack' in many sectional images of an organ. Today, software can be trained to assess such images as well as experienced radiologists and to recognise possible problem areas. Furthermore, AI can be used for chronic diseases, i.e. diseases that cannot be cured, but for which rapid aggravation can be avoided with the right therapy.", zentren/inst_caim/content/e998130/e998135/e1025836/e1097065/Gluckspost_KIundArztalsPower-Duo_RaphaelSznitman_Juni2021_eng.pdf, visited on 22.9.2021.

57 Art. 10 Code of Ethics of the FMH.

58 Art. 40 MedBG and Art. 9 Continuing Education Ordinance.

59 Judgement of the Federal Supreme Court 8C_608/2015 of 17 December 2015 E. 3.3.3.

60 Judgement of the Federal Supreme Court 9C_195/2015 of 24 November 2015 E. 3.3.1.

61 Wendehorst (fn. 24), 81.