Liability for harm caused by AI in healthcare: an overview of the core legal concepts

Dane Bottomley and Donrich Thaldar*

School of Law, University of KwaZulu-Natal, Durban, South Africa

Frontiers in Pharmacology (ISSN 1663-9812), Review. doi: 10.3389/fphar.2023.1297353

Edited by: Athanasios Alexiou, Novel Global Community Educational Foundation (NGCEF), Hebersham, Australia

Reviewed by: Matjaž Perc, University of Maribor, Slovenia

Eike Buhr, University of Oldenburg, Germany

*Correspondence: Donrich Thaldar, ThaldarD@ukzn.ac.za
Received: 19 September 2023; Accepted: 27 November 2023; Published: 14 December 2023.

Copyright © 2023 Bottomley and Thaldar.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

The integration of artificial intelligence (AI) into healthcare in Africa presents transformative opportunities but also raises profound legal challenges, especially concerning liability. As AI becomes more autonomous, determining who or what is responsible when things go wrong becomes ambiguous. This article reviews the legal concepts relevant to the issue of liability for harm caused by AI in healthcare. While some suggest attributing legal personhood to AI as a potential solution, the feasibility of this remains controversial. The principal–agent relationship, in which the physician is held responsible for AI decisions, risks reducing the adoption of AI tools because of the potential liabilities it places on physicians. Similarly, using product law to establish liability is problematic because the dynamic, learning nature of AI deviates from static products; this fluidity complicates traditional definitions of product defects and, by extension, where responsibility lies. Among the alternatives, risk-based determinations of liability, which focus on potential hazards rather than on specific fault assignments, emerge as a potential pathway, although they too present challenges in assigning accountability. Strict liability has been proposed as another avenue. It can simplify the compensation process for victims by focusing on the harm rather than on the fault. Yet concerns arise over the economic impact on stakeholders, the potential for unjust reputational damage, and the feasibility of a global application. As an alternative to liability-based approaches, reconciliation holds much promise as a means of facilitating regulatory sandboxes. In conclusion, while the integration of AI systems into healthcare holds vast potential, it necessitates a re-evaluation of our legal frameworks. The central challenge is how to adapt traditional concepts of liability to the novel and unpredictable nature of AI, or whether to move away from liability towards reconciliation. Future discussions and research must navigate these complex waters and seek solutions that ensure both progress and protection.

Keywords: artificial intelligence, liability, Africa, healthcare, harm

Section at acceptance: ELSI in Science and Genetics



      1 Introduction

Modern artificial intelligence (AI) is the cornerstone of the fourth industrial revolution. Advances in data availability, algorithm design, and processing power (Craglia et al., 2018) have enabled AI systems to make dramatic impacts in disparate sectors, including transportation, education, agriculture, public services, finance, and healthcare (Artificial Intelligence for Africa: An Opportunity for Growth, Development, and Democratisation, 2018).

The varying degrees of autonomy with which AI systems can operate distinguish them from other emerging technologies. The advantage of AI lies in its ability to process massive amounts of varied information, and thereby perform valuable functions or draw useful conclusions from its interpretation of that information. However, the essence of its usefulness is also its most challenging feature. For example, machine learning is a common approach to AI system design in medicine. Instead of programming the system for all possible scenarios with specific instructions, developers using machine learning set a broad goal, which the system pursues by forming its own instructions through repeated experimentation (Rachum-Twaig, 2020). As it processes information, the AI system adjusts the parameters by which it judges inputs to produce more accurate outputs, effectively programming itself (Townsend, 2020). These approaches usually produce more accurate systems while requiring less human control (Grimm et al., 2021). Alarmingly, this and similar approaches to AI system design remove the human element at key stages of development in a way which may complicate inquiries into the attribution of responsibility and liability. This becomes especially pronounced where the AI system is so complex that its operations become inscrutable to humans. These so-called "black-box" algorithms lack the transparency needed to fully audit how they came to their conclusions. In response to this issue, some developers have endeavoured to design "explainable" AI systems and ways of ensuring transparency which would foster an environment of accountability and responsibility and create better evidence when determining liability (Ali et al., 2023).
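To make this concrete, the sketch below shows the machine-learning loop described above in miniature: the developer specifies only a goal (minimising prediction error), and the system repeatedly adjusts its own parameters from data. This is a deliberately toy illustration and does not represent any particular medical AI system.

```python
# Minimal, illustrative machine-learning loop (not a real medical system).
# The "goal" is to minimise prediction error on toy data; the system
# adjusts its own parameters (w, b) rather than following fixed rules.

data = [(x, 2 * x + 1) for x in range(10)]  # toy data following y = 2x + 1

w, b = 0.0, 0.0        # parameters the system tunes itself
learning_rate = 0.01

for epoch in range(1000):          # repeated "experiments"
    for x, y in data:
        error = (w * x + b) - y
        # Nudge each parameter in the direction that reduces the error.
        w -= learning_rate * error * x
        b -= learning_rate * error

print(f"learned w={w:.2f}, b={b:.2f}")  # converges towards w=2, b=1
```

No human specifies the final decision rule; it emerges from the data, which is precisely what complicates later inquiries into who is responsible for the rule the system ends up with.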

Determining responsibility will be important in dealing with the social challenges of AI integration. Perc et al. (2019) investigate how AI systems will likely have to choose between acting in favour of one party's interest over another's in certain contexts, and how this may influence how the technology evolves. Developers may be incentivised to produce systems which favour owners' interests above users' in order to drive sales. The solution may be to require that AI systems act in the interests of the broader community; however, this policy may create its own issues: it could disincentivise people from buying AI systems which do not protect their interests outright, leading to lower adoption of and investment in AI systems overall. Such an approach may then fail to fully realise the safety gains to be had from increased AI usage. Of course, as Perc et al. (2019) consider, another approach may be to leave such decisions for the AI system to make itself, or simply to leave them to chance. This approach, however, suffers from a lack of clear answers to questions of responsibility and liability for the outcomes of decisions. Robust regulation and thoughtful juristic approaches to AI challenges will be necessary to provide adequate responses to questions of responsibility in these cases. This will be vital to supporting the benefits of AI integration whilst properly addressing the risks of the technology. Specifically, in healthcare, AI systems show impressive potential to increase the overall efficiency of healthcare systems and to manage disease outbreaks (Owoyemi et al., 2020). Furthermore, these systems can increase the reach of public health initiatives while supplementing an already overburdened sector (Pepper and Slabbert, 2011). However, healthcare institutions deal with patients at their most vulnerable, where an incorrect decision could prove fatal. In addition, healthcare practitioners are required to abide by particularly high ethical and legal standards to which AI systems may not easily conform. In particular, the black-box nature of some algorithms may prevent physicians from providing enough information to their patients to satisfy the requirements of informed consent; the emergent abilities of AI systems raise questions as to how they will be considered in relation to the usual standard of care expected of physicians; and medical liability may need to be redefined for AI use.

Many jurisdictions already have laws and regulations which would encompass AI technologies; however, the specific challenges of AI may mean that these regulations do not produce desirable results when relied upon. In response, many jurisdictions outside Africa have begun drafting AI-specific laws and regulations (Sallstrom et al., 2019). A proper response to the issues posed by AI use in healthcare is essential to providing legal certainty to all stakeholders. This will allow them to order their interactions with AI systems and create an environment of trust in relation to AI use (European Commission, Directorate-General of Communications Networks, Content and Technology, 2019). This trust will be important for the future of AI, as a lack of trust could permanently harm the reputation of AI in healthcare, or lead to additional costs through inefficient regulation or repeated amendment (Floridi et al., 2018).

The aim of this article is to set the stage for legal development and policy initiatives in Africa by exploring the legal concepts relevant to the attribution of liability for AI harm. We begin by describing current developments and the use of AI in healthcare in Africa in Section 2. We then discuss the concept of liability broadly in Section 3. In Section 4, we describe how AI presents novel challenges to liability determination, particularly the concept of personal liability. In Section 5, we review the different approaches to determining liability. We provide our concluding thoughts in Section 6.

      2 Artificial intelligence in healthcare in Africa

AI systems in healthcare can perform tasks normally requiring human physicians (Joshi and Morley, 2019). Most current uses are in diagnosis and screening; however, future systems could scan images, discover new drugs, optimise care pathways, predict positive treatment outcomes, and provide preventative advice (Joshi and Morley, 2019). Increased use of AI allows physicians to focus on tasks where, given the current state of technology, they cannot be replaced. Furthermore, AI could broaden public health initiatives by increasing access and tracking disease outbreaks, while lowering the cost of care (Joshi and Morley, 2019).

For example, DeepMind's AlphaFold is an AI system which accurately predicted the protein structures of the COVID-19 virus, an important step in creating a vaccine (Jumper et al., 2020). Such use could greatly reduce vaccine response times in the future. IBM's Watson for Oncology is another system which has been able to analyse patients' genomic data in light of medical knowledge from vastly more journals than a person could process, thereby providing more personalised treatments with high accuracy rates (Chung and Zink, 2018).

While other jurisdictions are considering policy-level AI implementation in healthcare systems (Joshi and Morley, 2019), Africa has had relatively little meaningful interaction with AI in healthcare, both academically (Tran et al., 2019) and clinically (Owoyemi et al., 2020), and African countries are currently at a nascent stage in their AI regulatory policies (Townsend et al., 2023). This is despite AI's utility in developing countries, where AI systems could lead to better utilisation of resources and enable new, effective treatments and treatment management systems (Sallstrom et al., 2019). Furthermore, AI systems can provide overarching and effective treatment options that improve standards of living, improve direct patient care, maximise supply-chain efficiencies, reduce administrative tasks, and streamline and improve compliance measures (Sallstrom et al., 2019).

Even though relatively limited, there has been some AI system use in Africa. In South Africa, Vantage, a machine learning-based system developed by BroadReach Healthcare, was used to assess clinics' performance and provide staffing and operational recommendations in HIV clinics in KwaZulu-Natal (Singh, 2020). Further, DrConnect, an application by Discovery Health, uses AI technology, together with information from wearable devices such as smartwatches, to provide personal assessments of medical symptoms, remote support, and medical and lifestyle advice (Singh, 2020). In Ghana, MinoHealth AI Labs has used AI systems for automated diagnostics, forecasts, and prognostics, and BareApp is using specialised AI technology to diagnose skin disease and suggest treatments (Eke et al., 2023). In Uganda, AI is being merged with other technologies to develop a specialised system for the management of female chronic diseases (Eke et al., 2023). In Nigeria, Ubenwa is using AI to improve the diagnosis of birth asphyxia in low-resource settings (Owoyemi et al., 2020). Also in Nigeria, AI is proving effective in the identification of fake drugs (Owoyemi et al., 2020).

These examples illustrate the growing use and development of AI systems in Africa. As this use grows, it will be vital that African countries position themselves to take full advantage of AI's benefits. Legal regulation will be especially important in directing AI system use and development, providing legal certainty through the formation of proper policies and regulations. A central concern, though, will be the determination of liability for AI harm.

      3 Understanding liability

The nature of emerging technologies is that we need time to understand them and to develop policies and regulations which will encourage equitable use (Calo, 2015). AI in healthcare is no different. While AI has the potential to positively influence healthcare, its implementation must be coupled with appropriate safeguards to minimise risks of harm (European Commission, Directorate-General for Justice and Consumers, 2019). Specific to AI, unforeseeable risks may still arise in apparently well-trained systems whose performance is being improved (World Health Organisation, 2021). As it currently stands, when such risks materialise, our existing policies and regulations will be the basis for determining who is responsible and liable for the harm caused. It will be important to assess whether these policies and regulations are sufficient for this task, as the attribution of responsibility plays an important role in grounding legal liability for AI conduct and in garnering trust in AI usage more broadly. Currently, this will largely depend on civil liability rules.

Generally, civil liability serves the dual purpose of providing a means for victims of harm to be compensated, while also providing an economic incentive for those held liable to avoid continuing harmful conduct (Buiten et al., 2021). Accordingly, these rules are an important means of protecting patients and providing clarity to businesses on how they may innovate and operate their products (Buiten et al., 2021). However, the varying complexity of AI systems, system updates, algorithms which change from environmental input, and cyber-security concerns may make it difficult to justify claims for compensation and to provide clear pathways for victims to bring claims (European Commission, Directorate-General for Justice and Consumers, 2019). It is also unclear whether the rationale behind current liability regimes will be effective in dealing with AI harm. For example, where AI systems make decisions, it may be difficult for a plaintiff to find a suitable defendant, or for a court to determine the standard of care to be expected from an AI system. It is therefore currently unclear how existing liability regimes will deal with AI harm in healthcare.

Proper liability policy formation will consider the outcomes of current liability rules, but it must also consider the impact which the policy will have on the development and use of AI in the future. This means tailoring policy towards managing AI-specific risks while encouraging positive uses. For example, a lack of legal certainty and fear of unreasonable legal penalties for relying on AI recommendations may discourage healthcare practitioners from using AI systems as active participants in treatment, relegating AI systems' role to the mere confirmation of decisions made by healthcare practitioners (World Health Organisation, 2021). Conversely, removing penalties may encourage AI system use; however, this position may be tenable only where existing issues of accountability and responsibility are properly considered.

Of particular concern in healthcare should be determining how an AI system will form part of the standard of care. Such a determination will be essential for providing sufficient information for physicians and patients to make decisions about relying on the technology (World Health Organisation, 2021). The decision of the physician is important, as they will also likely be responsible for the proper operation, monitoring, and maintenance of the technology (Bertolini and Episcopo, 2021), and their decision could be consequential for their employer through vicarious liability (World Health Organisation, 2021).

A concern specific to Africa is that many policy frameworks which would guide the development of AI systems are created in environments outside of Africa. Moreover, a lack of access to high-quality data sets and limitations in infrastructure could lead to the use of algorithms which are predominantly developed outside of Africa. These could prove prejudicial, as they may not be properly designed to work in low-resource environments (World Health Organisation, 2021). Therefore, liability policies will need to consider that developers may be situated outside of Africa, and that algorithms may be adapted for, rather than designed for, the African context.

      The role of an AI policy framework should be to prevent AI harm and to promote AI innovation, following a risk-based, rights-preserving, agile, adaptive, and innovation-supporting regulatory approach (Townsend et al., 2023). Robust and effective regulation will provide important guiding principles for the development and implementation of AI systems in healthcare in Africa (World Health Organisation, 2021). Legal certainty will provide routes for compensation for patients and ensure accountability and responsibility through integration and innovation in the healthcare system.

      4 Challenging our understanding of liability: AI and personhood

AI systems' successful imitation of qualities normally associated with humans has bolstered the inquiry into AI personhood (Abbott and Sarch, 2019). A crucial development in support of AI personhood has been the ability to program generalised goals into AI systems. This approach is markedly different from traditional software, as the AI system is programmed to decide what steps to take to achieve its goal, instead of being programmed with specific, step-by-step instructions (Bostrom and Yudkowsky, 2014). This goal-directed behaviour is what powered IBM's Deep Blue chess computer. Its programmers surpassed their own chess skills by encoding the rules of the game into Deep Blue and relying on its superior processing power to find ways of "winning" which the developers themselves could not (Bostrom and Yudkowsky, 2014). Should this be enough to draw the necessary philosophical conclusions on AI personhood, it is clear that the legal implications would be substantial (Solum, 1992). Where AI systems are considered persons, even in limited form, they may be held responsible for their actions in their own capacity.
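The contrast between goal-directed search and step-by-step instructions can be illustrated in miniature. In the sketch below, the programmer encodes only the rules of a trivial take-away game and the goal of winning; the search itself then "discovers" the best move. This is a didactic toy, vastly simpler than Deep Blue's actual search.

```python
# Toy game: players alternately take 1 or 2 stones; whoever takes the
# last stone wins. Only the rules and the goal are encoded; the search
# finds winning moves the programmer never spelled out.

def moves(stones):
    return [m for m in (1, 2) if m <= stones]

def minimax(stones, maximising):
    if stones == 0:
        # No stones left: the player who is *to move* has already lost.
        return -1 if maximising else 1
    scores = [minimax(stones - m, not maximising) for m in moves(stones)]
    return max(scores) if maximising else min(scores)

# From 7 stones, the search discovers that taking 1 stone forces a win.
best_move = max(moves(7), key=lambda m: minimax(7 - m, False))
print(best_move)  # -> 1
```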

However, the utility of recognising AI personhood should not replace thoughtful policy formation. An AI system fulfilling roles normally delegated to humans does not mean that personhood necessarily follows (Thaldar and Naidoo, 2021). This may be illustrated by the recent granting of a patent in South Africa where the sole inventor was an AI system. Although some would consider "inventing" to be a human characteristic, without the ability to fully experience human emotion and the capacity to engage in relationships, it is difficult to see such an AI system as more than a "special species of legal object that has the ability to invent" (Thaldar and Naidoo, 2021). As AI becomes more autonomous, legal rules can be developed to allow for special treatment of AI systems, comparable to the legal rules that provide for the special treatment of animals (Thaldar and Naidoo, 2021).

While it is generally agreed that current AI systems are not capable of being considered legal persons, more sophisticated, generalised, and autonomous systems may change this assumption (Solum, 1992). Current systems can be changed, created, or completely deleted like any other software, but where AI systems enjoy a degree of personhood, our relationship with them may become far more complicated. Legally, the granting of AI personhood would aid victims of AI harm in that they could gather evidence from the AI system through its examination as a witness (Chung and Zink, 2018). However, this benefit may be somewhat limited in systems that lack transparent reasoning.

      More definitively, some scholars insist that a separate legal personality for AI systems will never be necessary (European Commission, Directorate-General for Justice and Consumers, 2019). They contend that even fully autonomous systems’ actions are better attributed to individuals or other legal persons than to the system itself (European Commission, Directorate-General of Communications Networks, Content and Technology, 2019).

An important consideration is that AI systems' lack of abstract thought limits their comparison to human personhood and decision-making, particularly in healthcare. Whereas human decision-making in healthcare is largely justified by morality, AI systems lack moral input in decision-making (Chung and Zink, 2018). Moral considerations become vitally important in healthcare and resource-scarce environments, where circumstances require difficult decisions to be justified, usually with reference to moral ideals. We therefore suggest that, lacking moral capacity, AI systems could be limited in how they can be held accountable if considered persons, and may lack the prerequisites for making decisions in moral contexts.

      For scholars who consider AI more than a tool, the lack of moral input is an issue they contend with (Bashayreh et al., 2021). Dignum (2017) suggests that even AI systems acting as assistants may inherit a moral framework for decision-making through incorporating the values of their engineers. However, a mere copy of an engineer’s morals may not necessarily lead to satisfactory results as AI systems may not apply moral lessons to their environments in the same way as humans (Bostrom and Yudkowsky, 2014). Dignum contends that identifying and analysing these imbued values will nevertheless improve system performance (Dignum, 2017). This would also ensure that incorporated morals are interpreted in an acceptable way, meaning that, as these systems become more autonomous and powerful, moral assessment may become an essential component of their decision-making, especially in a field such as healthcare (Dignum, 2017).

Accordingly, there is some possibility of future AI systems bearing some form of personhood (Solum, 1992). However, conferring even a limited form of personhood on AI systems presents further practical difficulties. For example, as is commonly suggested, a limited form of personhood may be conferred on AI systems through the extension of the principal–agent relationship. However, it would remain unclear which standards should apply when adjudicating AI system conduct, and under what circumstances AI systems would be considered liable for that conduct. This is discussed further in the section on the principal–agent relationship below.

A final practical issue of attributing liability directly to AI systems is that it leaves no clear pathway for the compensation of victims (Bashayreh et al., 2021). As AI systems are currently incapable of ownership, there are no assets that a victim could claim. To remedy this situation, some scholars have suggested the introduction of an insurance scheme funded by developers from which victims may claim (European Commission, Directorate-General of Communications Networks, Content and Technology, 2019). However, such a scheme may not adequately replace clear and fair liability rules and could lead to high administrative costs, defeating the cost-saving benefits of a clear claim process (European Commission, Directorate-General of Communications Networks, Content and Technology, 2019). Furthermore, there is a lack of guidance on the value of AI insurance policies, as there are no standards against which to assess risk or begin a cost analysis (Bertolini and Episcopo, 2021).

      5 Approaches to attributing liability

      The subsections below discuss the main approaches to the attribution of liability for harm caused by AI systems in healthcare. Section 5.1 broadly considers the extension of the principal–agent relationship to include AI systems and the consequences of such an extension. Section 5.2 deals with AI as a product and how consumer protection law standards may be applied to AI system harm. We then comment on current fault-based liability regimes as they apply to AI systems in Section 5.3. This leads to a discussion of efforts to use strict liability to attribute liability for AI harm in Section 5.4. In Section 5.5, we consider an approach to AI harm focusing on improving AI system use in healthcare through reconciliatory forums.

      5.1 Principal–agent relationship

Most current AI systems in healthcare act as assistants to healthcare practitioners (Joshi and Morley, 2019). Accordingly, some scholars have suggested extending principal–agent rules to govern liability (Rachum-Twaig, 2020). This approach is mostly modelled after the doctor–medical student relationship, whereby a medical student performs tasks under the authority and supervision of a doctor, while the doctor attracts liability for harm which occurs during the student's duties (Chung and Zink, 2018). IBM's Watson operated under a similar regime, whereby the system would assist physicians by making recommendations, while the physician carried responsibility for the final decision (Chung and Zink, 2018). This approach would ensure that there is always an identifiable human in the decision-making process and would be in line with the AI design philosophy of "human-in-the-loop" (HITL) systems (Dignum, 2017). HITL ensures proper oversight of system decisions, while creating a clear party to hold accountable by making a human ultimately responsible for decisions (Dignum, 2017).
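A minimal sketch of the HITL pattern is given below: the AI system only recommends, and a named human must approve before anything is acted upon, so there is always an identifiable accountable party. All names, values, and the approval rule are invented for illustration; no real clinical system is depicted.

```python
# Minimal sketch of a human-in-the-loop (HITL) flow: the AI system only
# recommends, and a named human must approve before anything is acted
# upon. All names and values are invented for illustration.

from dataclasses import dataclass

@dataclass
class Recommendation:
    treatment: str
    confidence: float

def ai_recommend(patient_record: dict) -> Recommendation:
    # Stand-in for a real model; returns a canned recommendation.
    return Recommendation(treatment="drug A, 10 mg daily", confidence=0.87)

def hitl_decide(patient_record: dict, physician_id: str, approve) -> dict:
    rec = ai_recommend(patient_record)
    approved = approve(rec)                 # the human decision point
    return {
        "recommendation": rec.treatment,
        "approved": approved,
        "responsible_party": physician_id,  # a human is always on record
    }

# The lambda below merely simulates the physician's review so the
# example runs; in a real HITL system this would be an actual person.
decision = hitl_decide({"patient": "example"}, physician_id="dr-0001",
                       approve=lambda rec: rec.confidence > 0.8)
print(decision)
```

The design choice is that the record always names a responsible human, which is exactly what makes HITL attractive for liability purposes.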

Although this approach provides a justification for attributing liability to a specific person, it may disincentivise practitioners from following system recommendations, as they would bear the risk of harm. The tension arises where physicians may not be able to understand how the system came to its decisions and are therefore unable to assess the risk of harm themselves. They will nevertheless likely justify considering AI recommendations on the basis of AI's profound ability to consider vastly more information than they could. This could potentially lead to increased costs of medical care and slower treatments as practitioners seek alternative means of validating their decisions to follow or reject AI system recommendations. This may remain so until there is guidance as to AI systems' position in the standard of care. Should AI systems form part of the standard of care, there may be an expectation that physicians follow AI recommendations, unless they have a clear professional duty to act otherwise.

Furthermore, similar to the criticism of AI personhood, critics of HITL argue that it is difficult to determine the correct standard against which to compare the conduct of the AI system (Kingston, 2016). Initial systems may be comparable to humans; however, as systems begin to outperform humans, another standard may need to be considered (European Commission, Directorate-General of Communications Networks, Content and Technology, 2019). In addition, as systems become more sophisticated, there remains uncertainty as to how disagreements between AI system recommendations and human practitioner recommendations should be resolved. Current norms suggest that claims for damages will favour standard care pathways, even where AI systems recommend non-standard treatments (Tobia et al., 2021). This seems to be true regardless of the outcome of treatment, and healthcare practitioners are more likely to attract liability where they do not follow these standards (Price et al., 2019). This initial bias against non-standard care could limit the growth of AI technology use in healthcare, which could in turn limit future AI development, as there will be a lack of testing in medical environments and a lack of opportunity to build trust (World Health Organisation, 2021).

Importantly, healthcare practitioners could be less willing to implement recommendations from AI systems which deviate from standard care procedures where they face liability for acting on those recommendations. However, as AI systems become more common in healthcare, the bias against their inclusion could shift, especially where AI systems become part of the standard of care (World Health Organisation, 2021). Liability may then be attributed to the developer, who, as the creator of the system, may be in the best position to prevent harmful outcomes (Lövtrup, 2020).

      5.2 Product liability

Townsend et al. (2023) found that eleven of the twelve African countries surveyed provide for strict liability for harmful or defective goods in their consumer protection laws. Therefore, anyone in the supply chain could in principle be held strictly liable for AI harm to the patient. However, are these consumer protection laws sufficiently equipped to deal with AI-specific risks? Core to consumer protection law is the concept of a product defect. To establish strict liability, it must be proven that the product had a defect. However, the inherent unpredictability of AI systems makes it difficult to define what constitutes a defect in the context of AI (Bashayreh et al., 2021). The South African Supreme Court of Appeal has held that a consumer claiming in terms of South Africa's Consumer Protection Act (South African Government, 2009) must prove not only the existence of a defect, but also that the defect is material (Motus Corporation, 2021). Furthermore, it is difficult to prove that a defect caused harm, or that the developer was responsible for the defect (European Commission, Directorate-General of Communications Networks, Content and Technology, 2019). When multiple systems are used together, as is common in healthcare, attributing fault may be impossible (European Commission, Directorate-General of Communications Networks, Content and Technology, 2019). Most current regulations were drafted before the AI boom and are therefore unlikely to have properly considered AI-specific issues (Lövtrup, 2020). Accordingly, patients who have suffered harm caused by AI are likely to face a considerable evidentiary burden when seeking redress through product liability law.

      In the United States, software has generally been considered a tool and courts have been hesitant to extend product liability to healthcare software developers (Gerke et al., 2020). In Europe, the “developmental risk defence” allows a producer to avoid liability on the basis that scientific knowledge at the time of production was unable to detect the existence of a defect in the product (Holm et al., 2021). Sihlahla et al. (2023) note that in South Africa, a healthcare practitioner or a healthcare establishment sued in terms of the Consumer Protection Act (South African Government, 2009) for harm caused by AI would have a complete defence if they can show that they could not reasonably have been expected to have discovered the defect.

      5.3 Fault-based remedies

Generally, fault-based liability is based on a person's intentional or negligent conduct which causes harm wrongfully and culpably (Mukheibir et al., 2010). Liability is attributed based on a determination of who should justly compensate the plaintiff for their damages (Marchisio, 2021). Currently, there is no case law to guide the application of fault-based liability principles, particularly in cases where the AI system suffers from an unknown flaw which was not reasonably foreseeable (Donnelly, 2022).

Accordingly, key elements of such remedies, specifically causation and fault, are difficult to prove in AI system cases. Causation is difficult to establish, as it may be hard to show that a flawed algorithm was the cause of harm (European Commission, Directorate-General of Communications Networks, Content and Technology, 2019). As with product liability, it may be difficult to determine what constitutes a flaw, or at what point the flaw was created if the system was developed by multiple parties (European Commission, Directorate-General of Communications Networks, Content and Technology, 2019). Even where a flaw is identified, demonstrating foreseeability for negligence-based claims remains difficult (Holm et al., 2021). Furthermore, establishing vicarious liability would be complicated as, currently, there is no means of determining whether an AI system "acted negligently" or what degree of control a medical practitioner should exert over an AI system (Donnelly, 2022). Accordingly, where there is no causation on the part of the physician, a patient may be left with no recourse (Donnelly, 2022).

Fault-based liability is an important means of deterrence (Buiten et al., 2021). Defendants who are penalised are incentivised to prevent harm in the future (Marchisio, 2021). This is justified on the basis that the defendant should be the party best placed to assess and avoid risk (Marchisio, 2021). However, AI systems' inherent unpredictability may make it impossible for any particular party to act to prevent harm, as the harm would be unforeseeable.

Therefore, it has been suggested that liability should, as a rule, be shared among the technical and medical stakeholders, reflecting their joint contribution to the risk of harm in the use of the system (Smith and Fotheringham, 2020). This could take the form of joint and several liability or proportional liability (European Commission, Directorate-General of Communications Networks, Content and Technology, 2019), using a person's choice to develop or implement the system as the justification for establishing causation (Bashayreh et al., 2021).

An extension of this idea is a risk-sharing approach (Bashayreh et al., 2021). Owners and developers would bear liability proportionate to the risk each has accepted in their role in the AI lifecycle, except in cases of wilful misconduct or gross negligence (Bashayreh et al., 2021). Importantly, developers would need to disclose all risks and potential deficiencies of the system, including the degree to which the system's decisions can be explained and all of the system's built-in values (Bashayreh et al., 2021). In addition, owners would disclose their intended use of the product and the environment in which it will be deployed (Bashayreh et al., 2021). In the event of harm, liability could be apportioned by a court adjudicating on the facts in light of the relevant disclosures.
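As a purely hypothetical illustration of how such apportionment might work, the sketch below divides a damages award according to pre-agreed risk shares. The parties, percentages, and amount are invented; the literature does not prescribe specific values.

```python
# Hypothetical apportionment of damages under a risk-sharing model.
# Each stakeholder bears liability in proportion to the share of risk
# it accepted through its disclosures. All figures are invented.

damages = 1_000_000  # total award to the harmed patient (hypothetical)

risk_shares = {      # disclosed/agreed in advance; must sum to 1
    "developer": 0.5,   # built and trained the system
    "owner": 0.3,       # deployed it and defined its operating environment
    "physician": 0.2,   # supervised its clinical use
}

assert abs(sum(risk_shares.values()) - 1.0) < 1e-9

for party, share in risk_shares.items():
    print(f"{party} owes {share * damages:,.0f}")
```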

The creation of responsibilities at different stages of the AI system's lifecycle remains a common approach in the literature to justifying liability under fault-based regimes. Current fault-based standards already attach responsibilities to people based on special relationships they may have with an object, such as where a person is in control of a potentially dangerous animal or thing (Marchisio, 2021). Where the animal acts unpredictably, the person controlling it could be held liable (Bashayreh et al., 2021). Failure to fulfil responsibilities to protect others from harm in this type of relationship justifies the attribution of liability. This approach may be useful for AI through the prescription of minimum rules to establish wrongfulness and fault (European Commission, Directorate-General of Communications Networks, Content and Technology, 2019). Where these standards are not upheld, the burden of proof may shift in favour of the victim. Accordingly, Rachum-Twaig (2020) suggests the creation of "safe harbours": points in the AI lifecycle where a party is responsible for ensuring certain minimum standards. Where the party fails to uphold these standards, they are more likely to incur liability, and current fault-based remedies can be employed. Approaches like this form part of a movement towards risk-based liability replacing the foreseeability element of many fault-based regimes (Calo, 2015).

      5.4 Strict liability

The clear issues that arise in justifying the attribution of liability to certain stakeholders have encouraged some scholars to suggest no-fault or "strict" liability systems as better means of attributing liability (Holm et al., 2021). No-fault liability makes it significantly easier for victims to claim compensation by providing clear pathways to settle claims and removing the necessity of proving fault (European Commission, Directorate-General of Communications Networks, Content and Technology, 2019). This eases the burden on claimants, who are already victims of harm, when reporting errors and provides better hope of reconciliation (Holm et al., 2021). No-fault systems also separate the compensation and liability claims (Holm et al., 2021). They remove the need for victims to access information to prove fault, which is a particular concern with inscrutable AI systems. The occurrence of harm, rather than proof of fault, becomes the centre of the claim.

Concerns raised about this approach have focused on the future development of AI systems (European Commission, Directorate-General of Communications Networks, Content and Technology, 2019). First, strict liability would subject stakeholders to material burdens with no fair opportunity to avoid them (Abbott and Sarch, 2019). Normally, strict liability applies to unexpected harms, but where AI systems are implemented, it is difficult to determine how unexpected harms would be defined, as the systems are necessarily programmed to be unpredictable (European Commission, Directorate-General for Justice and Consumers, 2019). Second, stakeholders would be at risk of reputational damage resulting from the occurrence of harm which is not otherwise foreseeable (Abbott and Sarch, 2019). Stakeholders would thus be subject to significant burdens without an opportunity to take effective measures against the realisation of these harms.

To ease the potential economic impact stakeholders may experience under strict liability, it has been suggested that a stakeholder-funded scheme be created to compensate victims of AI harm (European Commission, Directorate-General for Justice and Consumers, 2019). This may further simplify the pathways for victims to claim; however, a mixed fund would lead to innocent parties effectively being held liable for harm they did not cause (European Commission, Directorate-General for Justice and Consumers, 2019). Furthermore, the burden on blameworthy parties would be eased, as they would pay only a portion of any damages claims for harm caused by their systems. This reduction would compound the perceived loss of the deterrent effect that arises because litigation is no longer available to claimants (European Commission, Directorate-General for Justice and Consumers, 2019). One suggested solution is to follow the New Zealand model, whereby no-fault systems have been implemented in certain medical matters but claims are limited to unusual injuries (European Commission, Directorate-General for Justice and Consumers, 2019).

Practically, strict liability could prove more expensive than litigation once administrative costs are coupled with more patients being eligible to claim (Holm et al., 2021). Also, a strict liability system may not be capable of being applied cross-jurisdictionally or globally (Rachum-Twaig, 2020). This has led some scholars to suggest that a mixture of fault and no-fault rules could provide equitable AI regulation (Marchisio, 2021).

      5.5 Reconciliation

The adversarial nature of the approaches to liability outlined above may be counter-productive to the proper regulation of AI technology, at least during its nascent stage. Naidoo et al. (2022) argue that instead of prioritising questions such as "Who acted?" and "Was the act wrongful?", which cause the persons involved to become antagonistic and defensive, the focus should shift to (a) learning how to better use AI in healthcare, and (b) actively developing guidelines for AI developers and healthcare professionals who use AI systems. The authors suggest that (a) and (b) can best be attained by establishing a sui generis dispute resolution institution for harm caused by AI in healthcare. This institution would replace litigation in the courts, hold broad investigative powers to access all relevant information, resolve disputes through reconciliation, award financial redress to victims of AI-driven harm in healthcare, and, importantly, learn and develop guidelines. In essence, the authors argue for reconciliation to replace litigation, as they view reconciliation as more conducive to the learning element of a regulatory sandbox.

This approach could draw inspiration from current alternative dispute resolution structures, principally the South African Commission for Conciliation, Mediation and Arbitration (CCMA). The compensation structure could draw lessons from the operation of the South African Road Accident Fund, which compensates victims of accidents on public roads for bodily harm. This system could adopt a more inquisitorial approach than litigation, whereby all parties are enabled to share information, with the institution taking a more active role in discovery through its investigative powers. A thoughtful use of the institution's powers to adjudicate matters could help ensure that power disparities between the parties are mitigated while providing for a just outcome.

The guidelines developed by the sui generis dispute resolution institution can over time either become customary law in the field or be solidified in legislation, depending on the preferences and traditions of the relevant jurisdiction. This would signal that AI technology and its regulation have reached a stage of maturity, at which point the sui generis dispute resolution institution would have served its purpose, and a return to a liability-based approach can be considered.

      6 Conclusion

The assimilation of AI technologies into the African healthcare sector marks an unprecedented juncture in the continent's journey towards equitable and advanced medical care. As AI solutions make inroads into African medical establishments, they bring with them a multitude of autonomy and opacity issues, challenging the longstanding ethical pillars and legal norms ingrained in the diverse cultures of the continent. The quintessential medico-legal principle of informed consent is now juxtaposed against the intricate algorithms of AI, challenging the very essence of transparency and patient understanding. Similarly, the increasing autonomy of AI systems amplifies the intricacies of liability, pushing the boundaries of traditional legal frameworks.

In this article, we have sought to provide the reader with an overview of the legal concepts relevant to the issue of AI and liability in healthcare. We started with the contemplation of AI personhood. While captivating, the notion poses substantial challenges in an African context, particularly when addressing tangible redress mechanisms for AI-induced mishaps. Next, the principal–agent framework, although providing a modicum of accountability, could inadvertently stifle the rate of AI adoption by placing considerable responsibilities upon local medical practitioners. While product liability law offers another plausible approach, it struggles to fit the continually evolving nature of AI into the static confines of conventional product definitions. Alternative strategies, such as risk-based liability, may offer clearer paths in contexts where fault determination proves onerous; yet they too grapple with ensuring specificity and justice. Strict liability, while offering more transparent compensation mechanisms, raises concerns about economic implications, reputational risks and, most critically, the challenge of harmonising such policies across Africa's diverse legal landscapes.

      An approach based on reconciliation rather than liability potentially provides the best environment for a regulatory sandbox; however, reconciliation in the context of AI-driven harm in the healthcare context lacks the same level of scholarship as the approaches based on liability. We suggest that reconciliation offers much potential and deserves more academic attention.

      In distilling these insights, it is evident that Africa’s AI journey in healthcare is not solely a scientific or medical transition. It also requires profound legal reflection and evolution.

      Author contributions

DB: Conceptualization, Writing – original draft, Writing – review and editing. DT: Conceptualization, Funding acquisition, Supervision, Writing – review and editing.

      Funding

The authors declare that financial support was received for the research, authorship, and/or publication of this article. The first author wishes to acknowledge the support of the National Research Foundation of South Africa (Grant Number: 131307). The opinions, findings, and conclusions or recommendations expressed in the publication are those of the author(s) alone, and the NRF accepts no liability whatsoever in this regard. The second author acknowledges the support of the US National Institute of Mental Health and the US National Institutes of Health (award number U01MH127690). The content of this article is solely the authors' responsibility and does not necessarily represent the official views of the US National Institute of Mental Health or the US National Institutes of Health.

The authors acknowledge the use of ChatGPT-4 from OpenAI to improve the language and readability of the abstract, introduction, and conclusion sections of this article.

      Conflict of interest

      The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

      Publisher’s note

      All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Abbott R., Sarch A. (2019). Punishing artificial intelligence: legal fiction or science fiction. UC Davis Law Rev. 53, 323–384. doi: 10.2139/SSRN.3327485

Ali S., Abuhmed T., El-Sappagh S., Muhammad K., Alonso-Moral J. M., Confalonieri R. (2023). Explainable artificial intelligence (XAI): what we know and what is left to attain trustworthy artificial intelligence. Inf. Fusion 99, 101805. doi: 10.1016/j.inffus.2023.101805

Artificial Intelligence for Africa: An Opportunity for Growth, Development, and Democratisation (2018). Access Partnership. Pretoria: University of Pretoria. Available at: https://www.up.ac.za/media/shared/7/ZP_Files/ai-for-africa.zp165664.pdf

Bashayreh M., Sibai F. N., Tabbara A. (2021). Artificial intelligence and legal liability: towards an international approach of proportional liability based on risk sharing. Inf. Commun. Technol. Law 30 (2), 169–192. doi: 10.1080/13600834.2020.1856025

Bertolini A., Episcopo F. (2021). The expert group's report on liability for artificial intelligence and other emerging digital technologies: a critical assessment. Eur. J. Risk Regul. 12, 644–659. doi: 10.1017/err.2021.30

Bostrom N., Yudkowsky E. (2014). "The ethics of artificial intelligence," in Cambridge handbook of artificial intelligence. Editors Ramsey W., Frankish K. (Cambridge, UK: Cambridge University Press), 316–334. doi: 10.1017/CBO9781139046855.020

Buiten M., de Streel A., Peitz M. (2021). EU liability rules for the age of artificial intelligence. SSRN Electron. J. doi: 10.2139/ssrn.3817520

Calo R. (2015). Robotics and the new cyberlaw. Calif. L. Rev. 103, 513–563. doi: 10.2139/ssrn.2402972

Chung J., Zink A. (2018). Hey Watson – can I sue you for malpractice? Examining the liability of artificial intelligence in medicine. Asia Pac. J. Health L. Ethics 11, 30.

Craglia M., Annoni A., Benczúr P., Bertoldi P., Delipetrev B. T., De Prato G. (2018). Artificial intelligence: a European perspective. Luxembourg: Publications Office of the European Union.

Dignum V. (2017). Responsible autonomy. Available at: http://arxiv.org/abs/1706.02513 (Accessed January 19, 2021).

Donnelly D.-L. (2022). First do no harm: legal principles regulating the future of artificial intelligence in health care in South Africa. Potchefstroom Electron. Law J. 25 (1), 1–43. doi: 10.17159/1727-3781/2022/v25i0a11118

Eke D. O., Chintu S. S., Wakunuma K. (2023). "Towards shaping the future of responsible AI in Africa," in Responsible AI in Africa: social and cultural studies of robots and AI. Editors Eke D. O., Wakunuma K., Akintoye S. (Cham: Springer International Publishing), 169–193. doi: 10.1007/978-3-031-08215-3_8

European Commission, Directorate-General for Justice and Consumers (2019). Liability for artificial intelligence and other emerging digital technologies. Luxembourg: Publications Office. doi: 10.2838/573689

European Commission, Directorate-General of Communications Networks, Content and Technology (2019). Ethics guidelines for trustworthy AI. Luxembourg: Publications Office. Available at: https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html

Floridi L., Cowls J., Beltrametti M., Chatila R., Chazerand P., Dignum V. (2018). AI4People – an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach. (Dordr) 28, 689–707. doi: 10.1007/s11023-018-9482-5

Gerke S., Minssen T., Cohen G. (2020). "Ethical and legal challenges of artificial intelligence-driven healthcare," in Artificial intelligence in healthcare. Editors Bohr A., Memarzadeh K. (Cambridge, MA: Academic Press), 295–336. doi: 10.1016/B978-0-12-818438-7.00012-5

Grimm P. W., Grossman M. R., Cormack G. V. (2021). Artificial intelligence as evidence. Northwest. J. Technol. Intellect. Prop. 19.

Holm S., Stanton C., Bartlett B. (2021). A new argument for no-fault compensation in health care: the introduction of artificial intelligence systems. Health Care Anal. 29, 171–188. doi: 10.1007/s10728-021-00430-4

Joshi I., Morley J. (2019). Artificial intelligence: how to get it right. Putting policy into practice for safe data-driven innovation in health and care. London, United Kingdom: NHSX.

Jumper J., Tunyasuvunakool K., Kohli P., Hassabis D., the AlphaFold Team (2020). Computational predictions of protein structures associated with COVID-19. DeepMind. Available at: https://www.deepmind.com/open-source/computational-predictions-of-protein-structures-associated-with-covid-19 (Accessed September 29, 2021).

Kingston J. K. C. (2016). "Artificial intelligence and legal liability," in Research and development in intelligent systems XXXIII. Editors Bramer M., Petridis M. (Cham: Springer International Publishing), 269–279. doi: 10.1007/978-3-319-47175-4_20

Lövtrup M. (2020). In brief: artificial intelligence in healthcare. Swed. Counc. Med. Ethics 2. Available at: https://smer.se/wp-content/uploads/2020/06/smer-2020-2-in-brief-artificial-intelligence-in-healthcare.pdf

Marchisio E. (2021). In support of "no-fault" civil liability rules for artificial intelligence. SN Soc. Sci. 1, 54. doi: 10.1007/s43545-020-00043-z

Motus Corporation (Pty) Ltd and Another v Wentzel (1272/2019) [2021] ZASCA 40 (13 April 2021).

Mukheibir A., Niesing L., Perumal D. (2010). The law of delict in South Africa. Editors Loubser M. M., Midgley R. (Cape Town, South Africa: Oxford University Press Southern Africa).

Naidoo S., Bottomley D., Naidoo M., Donnelly D., Thaldar D. W. (2022). Artificial intelligence in healthcare: proposals for policy development in South Africa. South Afr. J. Bioeth. Law 15 (1), 11–16. doi: 10.7196/SAJBL.2022.v15i1.797

Owoyemi A., Owoyemi J., Osiyemi A., Boyd A. (2020). Artificial intelligence for healthcare in Africa. Front. Digit. Health 2, 6. doi: 10.3389/fdgth.2020.00006

Pepper M. S., Slabbert M. N. (2011). Is South Africa on the verge of a medical malpractice litigation storm? S. Afr. J. Bioeth. Law 4, 29–35.

Perc M., Ozer M., Hojnik J. (2019). Social and juristic challenges of artificial intelligence. Palgrave Commun. 5, 61. doi: 10.1057/s41599-019-0278-x

Price W. N., Gerke S., Cohen I. G. (2019). Potential liability for physicians using artificial intelligence. JAMA 322 (18), 1765–1766. doi: 10.1001/jama.2019.15064

Rachum-Twaig O. (2020). Whose robot is it anyway? Liability for artificial-intelligence-based robots. Univ. Ill. Law Rev. 2020, 1141–1176.

Sallstrom L., Morris O., Mehta H. (2019). Artificial intelligence in Africa's healthcare: ethical considerations. Observer Res. Found. Issue Brief 312, 12.

Sihlahla I., Donnelly D.-L., Townsend B., Thaldar D. (2023). Legal and ethical principles governing the use of artificial intelligence in radiology services in South Africa. Dev. World Bioeth., 1–11. doi: 10.1111/dewb.12436

Singh V. (2020). AI and data in South Africa's health sector. Policy Action Netw. 6.

Smith H., Fotheringham K. (2020). Artificial intelligence in clinical decision-making: rethinking liability. Med. Law Int. 20, 131–154. doi: 10.1177/0968533220945766

Solum L. B. (1992). Legal personhood for artificial intelligence. North Carol. Law Rev. 70, 1231.

South African Government (2009). Consumer Protection Act. Available at: https://www.gov.za/sites/default/files/32186_467.pdf

Thaldar D., Naidoo M. (2021). AI inventorship: the right decision? S. Afr. J. Sci. 117. doi: 10.17159/sajs.2021/12509

Tobia K., Nielsen A., Stremitzer A. (2021). When does physician use of AI increase liability? J. Nucl. Med. 62, 17–21. doi: 10.2967/jnumed.120.256032

Townsend B. A. (2020). Software as a medical device: critical rights issues regarding artificial intelligence software-based health technologies in South Africa. J. South Afr. Law/Tydskrif vir die Suid-Afrikaanse Reg (4), 747–762.

Townsend B. A., Sihlahla I., Naidoo M., Naidoo S., Donnelly D.-L., Thaldar D. W. (2023). Mapping the regulatory landscape of AI in healthcare in Africa. Front. Pharmacol. 14, 1214422. doi: 10.3389/fphar.2023.1214422

Tran B., Vu G., Ha G., Vuong Q.-H., Ho M.-T., Vuong T.-T. (2019). Global evolution of research in artificial intelligence in health and medicine: a bibliometric study. J. Clin. Med. 8, 360. doi: 10.3390/jcm8030360

World Health Organisation (2021). Ethics and governance of artificial intelligence for health: WHO guidance. Geneva, Switzerland: World Health Organisation. Available at: https://www.who.int/publications/i/item/9789240029200
日韩丝袜美臀巨乳在线 av无限吧看 就去干少妇 色艺无间正面是哪集 校园春色我和老师做爱 漫画夜色 天海丽白色吊带 黄色淫荡性虐小说 午夜高清播放器 文20岁女性荫道口图片 热国产热无码热有码 2015小明发布看看算你色 百度云播影视 美女肏屄屄乱轮小说 家族舔阴AV影片 邪恶在线av有码 父女之交 关于处女破处的三级片 极品护士91在线 欧美虐待女人视频的网站 享受老太太的丝袜 aaazhibuo 8dfvodcom成人 真实自拍足交 群交男女猛插逼 妓女爱爱动态 lin35com是什么网站 abp159 亚洲色图偷拍自拍乱伦熟女抠逼自慰 朝国三级篇 淫三国幻想 免费的av小电影网站 日本阿v视频免费按摩师 av750c0m 黄色片操一下 巨乳少女车震在线观看 操逼 免费 囗述情感一乱伦岳母和女婿 WWW_FAMITSU_COM 偷拍中国少妇在公车被操视频 花也真衣论理电影 大鸡鸡插p洞 新片欧美十八岁美少 进击的巨人神thunderftp 西方美女15p 深圳哪里易找到老女人玩视频 在线成人有声小说 365rrr 女尿图片 我和淫荡的小姨做爱 � 做爱技术体照 淫妇性爱 大学生私拍b 第四射狠狠射小说 色中色成人av社区 和小姨子乱伦肛交 wwwppp62com 俄罗斯巨乳人体艺术 骚逼阿娇 汤芳人体图片大胆 大胆人体艺术bb私处 性感大胸骚货 哪个网站幼女的片多 日本美女本子把 色 五月天 婷婷 快播 美女 美穴艺术 色百合电影导航 大鸡巴用力 孙悟空操美少女战士 狠狠撸美女手掰穴图片 古代女子与兽类交 沙耶香套图 激情成人网区 暴风影音av播放 动漫女孩怎么插第3个 mmmpp44 黑木麻衣无码ed2k 淫荡学姐少妇 乱伦操少女屄 高中性爱故事 骚妹妹爱爱图网 韩国模特剪长发 大鸡巴把我逼日了 中国张柏芝做爱片中国张柏芝做爱片中国张柏芝做爱片中国张柏芝做爱片中国张柏芝做爱片 大胆女人下体艺术图片 789sss 影音先锋在线国内情侣野外性事自拍普通话对白 群撸图库 闪现君打阿乐 ady 小说 插入表妹嫩穴小说 推荐成人资源 网络播放器 成人台 149大胆人体艺术 大屌图片 骚美女成人av 春暖花开春色性吧 女亭婷五月 我上了同桌的姐姐 恋夜秀场主播自慰视频 yzppp 屄茎 操屄女图 美女鲍鱼大特写 淫乱的日本人妻山口玲子 偷拍射精图 性感美女人体艺木图片 种马小说完本 免费电影院 骑士福利导航导航网站 骚老婆足交 国产性爱一级电影 欧美免费成人花花性都 欧美大肥妞性爱视频 家庭乱伦网站快播 偷拍自拍国产毛片 金发美女也用大吊来开包 缔D杏那 yentiyishu人体艺术ytys WWWUUKKMCOM 女人露奶 � 苍井空露逼 老荡妇高跟丝袜足交 偷偷和女友的朋友做爱迅雷 做爱七十二尺 朱丹人体合成 麻腾由纪妃 帅哥撸播种子图 鸡巴插逼动态图片 羙国十次啦中文 WWW137AVCOM 神斗片欧美版华语 有气质女人人休艺术 由美老师放屁电影 欧美女人肉肏图片 白虎种子快播 国产自拍90后女孩 美女在床上疯狂嫩b 饭岛爱最后之作 幼幼强奸摸奶 色97成人动漫 两性性爱打鸡巴插逼 新视觉影院4080青苹果影院 嗯好爽插死我了 阴口艺术照 李宗瑞电影qvod38 爆操舅母 亚洲色图七七影院 被大鸡巴操菊花 怡红院肿么了 成人极品影院删除 欧美性爱大图色图强奸乱 欧美女子与狗随便性交 苍井空的bt种子无码 熟女乱伦长篇小说 大色虫 兽交幼女影音先锋播放 44aad be0ca93900121f9b 先锋天耗ばさ无码 欧毛毛女三级黄色片图 干女人黑木耳照 日本美女少妇嫩逼人体艺术 sesechangchang 色屄屄网 久久撸app下载 色图色噜 美女鸡巴大奶 好吊日在线视频在线观看 透明丝袜脚偷拍自拍 中山怡红院菜单 wcwwwcom下载 骑嫂子 亚洲大色妣 成人故事365ahnet 丝袜家庭教mp4 幼交肛交 妹妹撸撸大妈 日本毛爽 caoprom超碰在email 关于中国古代偷窥的黄片 第一会所老熟女下载 wwwhuangsecome 狼人干综合新地址HD播放 变态儿子强奸乱伦图 强奸电影名字 2wwwer37com 日本毛片基地一亚洲AVmzddcxcn 暗黑圣经仙桃影院 37tpcocn 持月真由xfplay 好吊日在线视频三级网 我爱背入李丽珍 电影师傅床戏在线观看 96插妹妹sexsex88com 豪放家庭在线播放 桃花宝典极夜著豆瓜网 安卓系统播放神器 美美网丝袜诱惑 人人干全免费视频xulawyercn av无插件一本道 全国色五月 操逼电影小说网 good在线wwwyuyuelvcom www18avmmd 撸波波影视无插件 伊人幼女成人电影 会看射的图片 小明插看看 全裸美女扒开粉嫩b 国人自拍性交网站 萝莉白丝足交本子 七草ちとせ巨乳视频 摇摇晃晃的成人电影 兰桂坊成社人区小说www68kqcom 舔阴论坛 久撸客一撸客色国内外成人激情在线 明星门 欧美大胆嫩肉穴爽大片 www牛逼插 性吧星云 少妇性奴的屁眼 人体艺术大胆mscbaidu1imgcn 最新久久色色成人版 l女同在线 小泽玛利亚高潮图片搜索 女性裸b图 肛交bt种子 最热门有声小说 人间添春色 春色猜谜字 樱井莉亚钢管舞视频 小泽玛利亚直美6p 能用的h网 还能看的h网 bl动漫h网 开心五月激 东京热401 男色女色第四色酒色网 怎么下载黄色小说 黄色小说小栽 和谐图城 乐乐影院 色哥导航 特色导航 依依社区 爱窝窝在线 色狼谷成人 91porn 包要你射电影 色色3A丝袜 丝袜妹妹淫网 爱色导航(荐) 好男人激情影院 坏哥哥 第七色 色久久 人格分裂 急先锋 撸撸射中文网 第一会所综合社区 91影院老师机 东方成人激情 怼莪影院吹潮 老鸭窝伊人无码不卡无码一本道 av女柳晶电影 91天生爱风流作品 深爱激情小说私房婷婷网 擼奶av 567pao 里番3d一家人野外 上原在线电影 水岛津实透明丝袜 1314酒色 网旧网俺也去 0855影院 在线无码私人影院 搜索 国产自拍 神马dy888午夜伦理达达兔 农民工黄晓婷 日韩裸体黑丝御姐 屈臣氏的燕窝面膜怎么样つぼみ晶エリーの早漏チ○ポ强化合宿 老熟女人性视频 影音先锋 三上悠亚ol 妹妹影院福利片 hhhhhhhhsxo 午夜天堂热的国产 强奸剧场 全裸香蕉视频无码 亚欧伦理视频 秋霞为什么给封了 日本在线视频空天使 日韩成人aⅴ在线 日本日屌日屄导航视频 在线福利视频 日本推油无码av magnet 在线免费视频 樱井梨吮东 日本一本道在线无码DVD 日本性感诱惑美女做爱阴道流水视频 日本一级av 汤姆avtom在线视频 台湾佬中文娱乐线20 阿v播播下载 橙色影院 奴隶少女护士cg视频 汤姆在线影院无码 偷拍宾馆 业面紧急生级访问 色和尚有线 厕所偷拍一族 av女l 公交色狼优酷视频 裸体视频AV 人与兽肉肉网 董美香ol 花井美纱链接 magnet 西瓜影音 亚洲 自拍 日韩女优欧美激情偷拍自拍 亚洲成年人免费视频 荷兰免费成人电影 深喉呕吐XXⅩX 操石榴在线视频 天天色成人免费视频 314hu四虎 涩久免费视频在线观看 成人电影迅雷下载 能看见整个奶子的香蕉影院 水菜丽百度影音 gwaz079百度云 噜死你们资源站 主播走光视频合集迅雷下载 thumbzilla jappen 精品Av 古川伊织star598在线 假面女皇vip在线视频播放 国产自拍迷情校园 啪啪啪公寓漫画 日本阿AV 黄色手机电影 欧美在线Av影院 华裔电击女神91在线 亚洲欧美专区 1日本1000部免费视频 开放90后 波多野结衣 东方 影院av 页面升级紧急访问每天正常更新 4438Xchengeren 老炮色 a k福利电影 色欲影视色天天视频 高老庄aV 259LUXU-683 magnet 手机在线电影 国产区 欧美激情人人操网 国产 偷拍 直播 日韩 国内外激情在线视频网给 站长统计一本道人妻 光棍影院被封 紫竹铃取汁 ftp 狂插空姐嫩 xfplay 丈夫面前 穿靴子伪街 XXOO视频在线免费 大香蕉道久在线播放 电棒漏电嗨过头 充气娃能看下毛和洞吗 夫妻牲交 福利云点墦 yukun瑟妃 疯狂交换女友 国产自拍26页 腐女资源 百度云 日本DVD高清无码视频 偷拍,自拍AV伦理电影 A片小视频福利站。 大奶肥婆自拍偷拍图片 交配伊甸园 超碰在线视频自拍偷拍国产 小热巴91大神 rctd 045 类似于A片 超美大奶大学生美女直播被男友操 男友问 
你的衣服怎么脱掉的 亚洲女与黑人群交视频一 在线黄涩 木内美保步兵番号 鸡巴插入欧美美女的b舒服 激情在线国产自拍日韩欧美 国语福利小视频在线观看 作爱小视颍 潮喷合集丝袜无码mp4 做爱的无码高清视频 牛牛精品 伊aⅤ在线观看 savk12 哥哥搞在线播放 在线电一本道影 一级谍片 250pp亚洲情艺中心,88 欧美一本道九色在线一 wwwseavbacom色av吧 cos美女在线 欧美17,18ⅹⅹⅹ视频 自拍嫩逼 小电影在线观看网站 筱田优 贼 水电工 5358x视频 日本69式视频有码 b雪福利导航 韩国女主播19tvclub在线 操逼清晰视频 丝袜美女国产视频网址导航 水菜丽颜射房间 台湾妹中文娱乐网 风吟岛视频 口交 伦理 日本熟妇色五十路免费视频 A级片互舔 川村真矢Av在线观看 亚洲日韩av 色和尚国产自拍 sea8 mp4 aV天堂2018手机在线 免费版国产偷拍a在线播放 狠狠 婷婷 丁香 小视频福利在线观看平台 思妍白衣小仙女被邻居强上 萝莉自拍有水 4484新视觉 永久发布页 977成人影视在线观看 小清新影院在线观 小鸟酱后丝后入百度云 旋风魅影四级 香蕉影院小黄片免费看 性爱直播磁力链接 小骚逼第一色影院 性交流的视频 小雪小视频bd 小视频TV禁看视频 迷奸AV在线看 nba直播 任你在干线 汤姆影院在线视频国产 624u在线播放 成人 一级a做爰片就在线看狐狸视频 小香蕉AV视频 www182、com 腿模简小育 学生做爱视频 秘密搜查官 快播 成人福利网午夜 一级黄色夫妻录像片 直接看的gav久久播放器 国产自拍400首页 sm老爹影院 谁知道隔壁老王网址在线 综合网 123西瓜影音 米奇丁香 人人澡人人漠大学生 色久悠 夜色视频你今天寂寞了吗? 菲菲影视城美国 被抄的影院 变态另类 欧美 成人 国产偷拍自拍在线小说 不用下载安装就能看的吃男人鸡巴视频 插屄视频 大贯杏里播放 wwwhhh50 233若菜奈央 伦理片天海翼秘密搜查官 大香蕉在线万色屋视频 那种漫画小说你懂的 祥仔电影合集一区 那里可以看澳门皇冠酒店a片 色自啪 亚洲aV电影天堂 谷露影院ar toupaizaixian sexbj。com 毕业生 zaixian mianfei 朝桐光视频 成人短视频在线直接观看 陈美霖 沈阳音乐学院 导航女 www26yjjcom 1大尺度视频 开平虐女视频 菅野雪松协和影视在线视频 华人play在线视频bbb 鸡吧操屄视频 多啪啪免费视频 悠草影院 金兰策划网 (969) 橘佑金短视频 国内一极刺激自拍片 日本制服番号大全magnet 成人动漫母系 电脑怎么清理内存 黄色福利1000 dy88午夜 偷拍中学生洗澡磁力链接 花椒相机福利美女视频 站长推荐磁力下载 mp4 三洞轮流插视频 玉兔miki热舞视频 夜生活小视频 爆乳人妖小视频 国内网红主播自拍福利迅雷下载 不用app的裸裸体美女操逼视频 变态SM影片在线观看 草溜影院元气吧 - 百度 - 百度 波推全套视频 国产双飞集合ftp 日本在线AV网 笔国毛片 神马影院女主播是我的邻居 影音资源 激情乱伦电影 799pao 亚洲第一色第一影院 av视频大香蕉 老梁故事汇希斯莱杰 水中人体磁力链接 下载 大香蕉黄片免费看 济南谭崔 避开屏蔽的岛a片 草破福利 要看大鸡巴操小骚逼的人的视频 黑丝少妇影音先锋 欧美巨乳熟女磁力链接 美国黄网站色大全 伦蕉在线久播 极品女厕沟 激情五月bd韩国电影 混血美女自摸和男友激情啪啪自拍诱人呻吟福利视频 人人摸人人妻做人人看 44kknn 娸娸原网 伊人欧美 恋夜影院视频列表安卓青青 57k影院 如果电话亭 avi 插爆骚女精品自拍 青青草在线免费视频1769TV 令人惹火的邻家美眉 影音先锋 真人妹子被捅动态图 男人女人做完爱视频15 表姐合租两人共处一室晚上她竟爬上了我的床 性爱教学视频 北条麻妃bd在线播放版 国产老师和师生 magnet wwwcctv1024 女神自慰 ftp 女同性恋做激情视频 欧美大胆露阴视频 欧美无码影视 好女色在线观看 后入肥臀18p 百度影视屏福利 厕所超碰视频 强奸mp magnet 欧美妹aⅴ免费线上看 2016年妞干网视频 5手机在线福利 超在线最视频 800av:cOm magnet 欧美性爱免播放器在线播放 91大款肥汤的性感美乳90后邻家美眉趴着窗台后入啪啪 秋霞日本毛片网站 cheng ren 在线视频 上原亚衣肛门无码解禁影音先锋 美脚家庭教师在线播放 尤酷伦理片 熟女性生活视频在线观看 欧美av在线播放喷潮 194avav 凤凰AV成人 - 百度 kbb9999 AV片AV在线AV无码 爱爱视频高清免费观看 黄色男女操b视频 观看 18AV清纯视频在线播放平台 成人性爱视频久久操 女性真人生殖系统双性人视频 下身插入b射精视频 明星潜规测视频 mp4 免賛a片直播绪 国内 自己 偷拍 在线 国内真实偷拍 手机在线 国产主播户外勾在线 三桥杏奈高清无码迅雷下载 2五福电影院凸凹频频 男主拿鱼打女主,高宝宝 色哥午夜影院 川村まや痴汉 草溜影院费全过程免费 淫小弟影院在线视频 laohantuiche 啪啪啪喷潮XXOO视频 青娱乐成人国产 蓝沢润 一本道 亚洲青涩中文欧美 神马影院线理论 米娅卡莉法的av 在线福利65535 欧美粉色在线 欧美性受群交视频1在线播放 极品喷奶熟妇在线播放 变态另类无码福利影院92 天津小姐被偷拍 磁力下载 台湾三级电髟全部 丝袜美腿偷拍自拍 偷拍女生性行为图 妻子的乱伦 白虎少妇 肏婶骚屄 外国大妈会阴照片 美少女操屄图片 妹妹自慰11p 操老熟女的b 361美女人体 360电影院樱桃 爱色妹妹亚洲色图 性交卖淫姿势高清图片一级 欧美一黑对二白 大色网无毛一线天 射小妹网站 寂寞穴 西西人体模特苍井空 操的大白逼吧 骚穴让我操 拉好友干女朋友3p