This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Artificial intelligence (AI) is a pivotal player in the emergence of the Fourth Industrial Revolution (‘4IR’). Although no harmonised definition of AI exists, we take the broadly functionalist perspective that AI enables a machine or mechanical device to function or behave in a manner that would be called intelligent were a human to behave in that manner (
While the disrupting power of such technologies brings with it unprecedented opportunities for the transformation of society and healthcare in particular, there are also concerns about the way in which AI is designed, developed, and deployed. These concerns range from issues concerning data quality and privacy to explainability and transparency of the algorithms, and issues of social and distributive justice (
Research studies have been conducted on the mapping of global AI ethics guidelines, on the ethical challenges presented by AI-driven technologies in healthcare, and on emergent ethical and rights-based approaches to values and principles for AI adoption and global AI governance. However, limited, if any, research has been done on ascertaining the current AI regulatory landscape in the Global South, and in Africa, in particular (
This article aims to complement current literature by mapping the regulatory landscape of AI in the health context in Africa, with reference to 12 selected countries. By mapping the regulatory landscape, we mean conducting a scoping review of the most relevant regulatory instruments, that is, those instruments we identify and reference as bearing on AI in health. We consider
First, we investigate whether the selected countries have
Our investigation follows the style of scoping reviews described in
To ensure a comprehensive and systematic search process, searches of various websites as described in
From the search results, the researchers downloaded digital copies of all documents relevant to one or more of the study’s five themes. The criteria for inclusion in the scoping review were that the document be one of the following types: 1) a national statute currently in force; 2) gazetted regulations; 3) a draft Bill; 4) a published government policy or strategy document; or 5) an ethics code/guideline/policy issued by a health sector regulatory body, or an international or regional legislative, regulatory or policy instrument; and that it be applicable in one or more of the 12 jurisdictions. The study excluded private sector documents, documents not publicly available on the internet, and draft documents under discussion. The extracted documents were saved in a shared Google Drive folder and classified in sub-folders by country and thematic area. Duplicates and documents replaced/repealed by a more recent document were then manually removed. A total of 118 documents (listed in
While a full and comprehensive account of the regulatory position in the relevant areas of the law is offered, we do not claim to have captured every provision that may be relevant to AI in health or in health research. The consequences of AI adoption are far-reaching, touching many areas of the law. We have therefore narrowed our enquiry to the areas of the law that are most relevant. Nor was it our intention to capture the numerous and varied ethics instruments and other non-regulatory governance measures that may apply to AI in healthcare.
There are no
Regulatory frameworks form part of the toolkit for assessing the maturity of AI in health. Thus, the absence of clear AI regulatory guidelines and policies may, in certain instances, impede the uptake of AI in the healthcare sector (
The World Health Organisation has implemented an integrated African Health Observatory initiative, together with National Health Observatories aimed at providing an informative digital health platform (
An analysis of statutes governing medical devices in the selected African countries shows that no single piece of legislation explicitly mentions AI or algorithms within the definition of a medical device. Furthermore, when compared to the definition of AI provided by the OECD, the definition of software included in current medical device regulations does not specifically and adequately address novel features of AI software. From the scrutiny of the provisions, ‘software’ is included in the definition of medical devices in three jurisdictions (South Africa, Kenya, and Uganda) (
Reported cases demonstrate that AI has been instantiated in clinical practice in certain countries under investigation. For instance, South Africa used an AI-driven chest X-ray diagnosis application during the COVID-19 pandemic (
Most countries under study have standalone digital health policies in place, except for Rwanda and The Gambia where such policies are embedded in the broader healthcare policy. Kenya is the only country that has a standalone E-health Bill that regulates digital health. The implementation and monitoring processes of digital health are, however, sporadic and partly a result of a lack of infrastructure and resources (
However, more research is needed, as telemedicine solutions increasingly leverage AI, as do new modalities of delivering healthcare services in under-resourced areas–such as chatbots and mobile applications–that assist community health workers. Regulation that is outdated, not context-specific, and not culturally appropriate can thus also act as a barrier to digital technology adoption and innovation. In South Africa, for example, there has been a low uptake of telemedicine by healthcare practitioners (
The development and adoption of AI in healthcare relies heavily on availability and access to high-quality clinical health data gained from digital health and health research (
In sum, the regulation of AI adoption in healthcare in the countries under study is undeveloped. None of the studied countries has adopted a proactive approach to the development of legislation governing AI in healthcare. The immaturity of regulatory systems for AI in healthcare is exacerbated by further impediments, including a lack of financial resources, limited computing resources and infrastructure, and inadequate technical expertise. Unfortunately, these factors stand to delay the implementation of digital health in low- to middle-income countries–including the countries under study (
There is a close link between data and AI. AI systems rely on vast quantities of accurate, complete, representative, and quality datasets to train, test, and validate the system. Such data is typically personal–and sometimes sensitive or special category–data, and is often ‘research’ data. AI systems also collect, generate, process, and share data–often on a large scale. Good AI regulation is thus intrinsically shaped by good data regulation. The increasing use and processing of such datasets gives rise to many possible privacy challenges, including issues associated with collection, standardisation, anonymity, transparency, data ownership, and changing conceptions of informed consent.
AI-enhanced technologies pose risks to data privacy in two ways: first, in the unlawful collection, use, and sharing of a person’s personal data; and second, in not providing persons with access, control, and autonomy over their data and its use. Legal tensions centre on the growing need to access curated, quality datasets, set against the inherent sensitivity of such data–in particular personal information–and its vulnerability to unethical or unlawful sourcing, use, and disclosure. The use and processing of personal data, and in particular sensitive health data and electronic health records, are well described, as are the challenges of securing and protecting large-scale datasets against unauthorised collection, access, processing, storage, and distribution (
Regional developments in Africa have primarily been instantiated through the African Union Convention on Cyber Security and Personal Data Protection, which was adopted in June 2014, and which introduced substantive claims to information privacy in Africa (
The AU Convention sought to harmonise African cyber legislation and to elevate the rhetoric of ‘protection of personal privacy’ to an international level. Moreover, it establishes a normative framework consistent with the African legal, cultural, economic, and social environment, and seeks to balance the use of information and communication technologies with the protection of the privacy of individuals, while guaranteeing the free flow of information across borders. The AU Convention enjoins state parties to establish legal and institutional frameworks for data protection and cybersecurity, encompassing three central issues: electronic transactions, personal data protection, and cybercrimes (
A further development leading to data protection integration, strengthening collaboration in Africa, and facilitating cross-border data transfers occurred in February 2022 with the endorsement of the AU Data Policy Framework (
In addition, subregional frameworks and agreements as created by the Economic Community of West African States (ECOWAS), the East African Community (EAC), the Economic Community of Central African States (ECCAS/CEMAC), the Intergovernmental Authority on Development (IGAD), and the Southern African Development Community (SADC), have contributed to the protection of the right to privacy and to promoting cyber security and fighting cybercrime (
Africa once lagged in the development of data protection laws, but it has recently remedied this position. Until recently, few, if any, data protection policies had been developed in Africa (
Data protection in Rwanda is governed by Law No 058/2021 of 2021 relating to the protection of personal data and privacy. Interestingly, Rwandan law contains a provision in Article 19 giving the data subject the right to request a data controller or data processor to stop processing their personal data which ‘causes or is likely to cause loss, sadness or anxiety to the data subject’ and a provision in Article 25 permitting a data subject to designate an heir to their personal data. In South Africa, data is protected by the Protection of Personal Information Act No 4 of 2013, which came into effect on 1 July 2020; in Uganda, by the Data Protection and Privacy Act of 2019; and in Zimbabwe, by the Data Protection Act No 5 of 2021. Tanzania enacted its first Personal Data Protection Law in late 2022, which provides for the conduct of transfer impact assessments and stipulates that data collectors submit their privacy policies to the Tanzanian Data Protection Commission for approval.
Although not all countries have specific data protection legislation in place, all countries under investigation have data or privacy protection in some form or another, often embedded in other legislation. Cameroon, for example, has no specific law relating to data protection, although a degree of protection is provided by Law No 2010/012 of 21 December 2010 Relating to Cyber Security and Cyber Criminality in Cameroon, by Law No 2006/018 of 29 December 2006 to Regulate Advertising in Cameroon, and by Law No 2010/013 of 21 December 2010 Regulating Electronic Communications in Cameroon. Moreover, the Constitution of the Republic of Cameroon provides for the privacy of all correspondence, and Decree No 2013/0399/PM of 27 February 2013 on the modalities of consumer protection in the electronic communication sector states that “consumers in the electronic communication sector have the right to privacy … in the consumption of technologies, goods and services in the electronic communication sector.” Cameroon has ratified certain instruments that protect privacy, including the sub-regional CEMAC Directive No 07/08-UEAC-133-CM-18.
In The Gambia, certain data protection and privacy rules relating primarily to information and communications service providers are set out in the Information and Communications Act, 2009, and the 2019 Data Protection and Privacy Policy sets out the legal framework for data protection and privacy. Although Malawi does not have any specific data protection laws, a Data Protection Bill, 2021, has been drafted; it promotes data security and provides for data protection and related matters, while the Electronic Transactions and Cyber Security Act 33 of 2016 contains data protection-related provisions. We have included a comprehensive list of data protection laws in
The debate about AI has focused on data protection requirements and soft law ethics instruments. While general AI regulation remains necessary, it is also vital to address the relationship between AI software as goods that can be sold and the patient as a consumer of the AI product or of a healthcare service provided using the AI. Traditional fault-based liability regimes are difficult to apply to harm caused by AI technologies, as healthcare practitioners are required to foresee an error and take reasonable steps to meet the required standard of care (
All 12 countries provide for consumer protection in relation to the sale of goods. Botswana, Cameroon, The Gambia, Kenya, Malawi, South Africa and Zimbabwe have enacted standalone statutes regulating consumer protection. The position is different elsewhere, where it is regulated alongside (Nigeria and Rwanda) or embedded in (Tanzania) fair competition legislation. While both Ghana (
Eleven out of the twelve countries provide for strict product liability for harmful or defective goods in their consumer protection regimes. This means that anyone in the supply chain for the AI product (the goods) can be held strictly liable for harm to the patient (the consumer) if the product does not perform safely or as intended. It is not necessary to prove that the harm arose from any negligence (fault) on the part of the developer or the doctor. Cameroon deviates from this general trend, as the imposition of product liability is negligence-based, that is, a determination of fault is necessary to impose liability (
Within current legislation, liability may be wholly or partly imposed on a number of different parties in the distribution chain, such as: the supplier, producer, manufacturer, importer, distributor, trader, seller, retailer, or provider of services (The Gambia, Malawi, and Nigeria). In South Africa, for example, the term supplier is wide enough to include the developer of the AI product and the healthcare establishment or practitioner providing a service using the AI product. Where health researchers intend to commercialise an AI product that they have developed, they too would need to be aware of the legal obligations imposed by consumer protection legislation. In addition, Rwanda’s legislation contains a unique provision in terms of which strict product liability for unsafe or defective goods supplied by an enterprise is imposed upon the regulatory body that approved the product for sale.
A consideration of what types or aspects of technology may be included in the definition of goods is necessary. This becomes especially relevant to AI, given the recent CJEU finding that where the supply of software by electronic means is accompanied by a grant of perpetual licence, this will constitute the sale of goods (
Definitions of what constitutes a consumer also vary. Seven countries–Botswana, Cameroon, Malawi, Nigeria, Tanzania, Uganda, and Zimbabwe–explicitly exclude persons who purchase goods and services for the purpose of reuse in the production and manufacture of other goods or services for sale, and in Rwanda the Act applies only to goods ordinarily acquired for personal and domestic use. This is particularly noteworthy, given that statistically-based machine learning models used in the healthcare context will invariably be acquired for reuse in the production/manufacturing of other goods (e.g., drug discovery) and services (e.g., disease prediction, patient diagnoses, population health monitoring). Thus, those acquiring data-driven AI technologies for the purposes of health research or use in healthcare practice–where the objective is the sale of a good/service–are not themselves defined as consumers and are thus unlikely to find much protection under consumer legislation. As to enforcement, eight countries–Cameroon, The Gambia, Malawi, Nigeria, Rwanda, South Africa, Tanzania, and Zimbabwe–allow the relevant consumer protection authority to issue a recall of any goods considered a risk to the public or harmful to human or public health. The Gambia and Tanzania differ in that the supplier or relevant party in the distribution chain is responsible for recalling harmful or defective goods. Furthermore, both The Gambia and Malawi provide an additional safeguard against harmful technology, goods, and services: producers or suppliers must attach easily noticeable warnings to products considered harmful or hazardous to human health, so that use takes place under the strongest possible safety conditions.
In addition, electronic communications and transactions and the protection of e-consumers are regulated in a number of jurisdictions in other legislation. These statutes, which do not refer in specific terms to AI, also do not contain any provisions that could clarify the attribution of liability or address many of the other significant consumer protection concerns that arise from the use of AI in healthcare. In addition, some jurisdictions have laws regulating cybercrimes, content control measures and service provider liability. These safeguards also do not directly address the issue of providing civil redress to individual consumers harmed by an AI application in the healthcare setting.
Before one can engage with research, one must first understand the regulatory environment–importantly, including the schemes of protection for the fruits of research: intellectual property. In this section we outline the mechanisms and bodies relevant to obtaining such protection. Multiple layers of intellectual property (‘IP’) protection can apply to a single AI product or process. For this research study we focused on only two IP rights: patents and copyright. These IP rights inform data flow, affect AI research and development, and are critical for AI innovation. Patents generally apply to product inventions (such as AI technologies embedded within products, for example, smartwatches). Copyright applies to literary works, which include the datasets used to test, train, and validate AI systems. Regional IP frameworks were identified, as was national legislation in each of the selected African countries, to denote the relevant avenues of protection and the mechanisms of protection operating at each level.
The current members of the African Regional Intellectual Property Organisation (ARIPO) include Botswana, The Gambia, Ghana, Kenya, Malawi, Rwanda, the United Republic of Tanzania, Uganda, and Zimbabwe (
All of the countries under study have enacted patent and copyright statutes which are similar in many ways. The legislation is captured in
Patent protection is available in all selected African countries for AI applications such as core inventions relating to novel advances in model architectures or to the techniques themselves. Other patentable innovations include: novel ways of generating a training set or model; trained models (the most common being AI as a tool to solve a particular problem); and smart AI-enhanced products and health monitoring devices.
None of the jurisdictions has yet established authorities or oversight mechanisms mandated to regulate AI. However, regulatory bodies and authorities overseeing data protection, ICTs, and medical devices will play a role in the regulation of AI systems and applications in healthcare. The establishment of such authorities is set out in
Three of the twelve countries have established relevant committees to guide the uptake of emerging technologies, each of which has produced 4IR strategy documents. In 2018, the Kenyan Cabinet Secretary for ICT appointed the
In addition, Rwanda and South Africa have established Centres for the Fourth Industrial Revolution–multi-stakeholder initiatives intended to focus on data governance, AI and machine learning (
This work demonstrates that in the 12 selected African countries, AI in healthcare, including in health research, is regulated. However, progress is diverse and fragmented, indicating that significant work is yet to be done. Certain selected African countries have made limited progress, and all 12 are at an early stage in their AI regulatory journey. Moreover, where regulatory developments are found, they are often either of general application to all technology or adapted from older digital technologies.
Encouragingly, certain sectors that inform AI development such as data protection have seen increased development in recent years. This is to be welcomed as exchanging and sharing knowledge, data, and efficiencies between African countries is transformative and can help to build common AI capacity across Africa. This is of particular importance in health research. We have identified the AI-relevant regulators and regulations–and instances where regulatory bodies and regulation are either absent or require strengthening. What is now required is a concerted effort by those regulators to engage with each other, and with health sector stakeholders and health researchers, to address gaps and deficiencies through domestic legal reform and policy development.
Importantly, where a regulatory framework exists, its role, we suggest, should be two-fold: to prevent AI-related harm and to promote AI innovation across Africa. However, whether extant regulation achieves this, and is suitable in the selected target countries for the purposes of AI adoption, remains unclear. Where digital health policies and professional guidelines are absent or inadequate, they need to be adopted or amended to enable the responsible development and deployment of AI, both in face-to-face patient care and in telemedicine solutions, without stifling innovation. On AI innovation, generative AI tools promise to produce value. However, questions arise about whether their products qualify for intellectual property rights, given the ongoing argument over whether they are created by a human or by AI. African countries can certainly benefit from providing guidance on this important matter. In addition, there is limited African scholarship on AI ethics and policy, which makes for important and necessary future research in Africa.
Accordingly, Africa stands to gain from the proliferation of international and sector-specific ethical standards, guidelines, and policies developed in response to calls for “trustworthy,” “transparent,” and “responsible” AI (
Africa can certainly draw on these perspectives and benefit from more general and broader policy guidelines and regulation on AI, and specifically on AI in healthcare and health research. The African Union too can play a role in directing such initiatives. The post-colonial reach of digitised data and AI creates challenges to Africa’s quest for digital sovereignty. However, Africa, and indeed most of its nation states, has been slow to agree on key digital and data governance measures. For example, as the uptake of the African Union Convention on Cyber Security and Personal Data Protection has demonstrated, progress is often slow (
We identify a role for local communities and African society in establishing principles and in participating and engaging in regulatory policy-making. The AI ecosystem is global, necessitating greater international collaboration and agreement on standards, frameworks, and guidance. Thus, the need exists to align the African position with international standards. However, while the Global North can inform African regulatory development and work at a global level to implement effective AI standards for safety, for example, and can bind countries to certain rules (
Notwithstanding the emerging global approaches, we recommend that AI regulation in Africa is best served by being pro-innovation while addressing the many AI practices that carry unacceptable or high risks to health, safety, and human rights. A framework for AI regulation in Africa, we suggest, should follow a cautious yet proactive and balanced regulatory approach–one that is risk-based, rights-preserving, agile, adaptive, and innovation-supportive. In addition, we suggest that an effective African governance approach should combine various governance tools–both hard and soft law–including: 1) mechanisms to capture AI due diligence; 2) principles of transparency, explainability, and accountability; 3) human-centricity; and 4) provision for AI auditing, assessment, and review. We recommend that an African approach be both risk-based and rights-based. This is premised on the understanding that AI systems have certain characteristics (
Regulators in Africa have an increasing responsibility to address the immediate and significant concerns of algorithmic bias and fairness in the adoption of AI in Africa. AI stands, not only to potentially produce biased outcomes, but also to amplify and perpetuate patterns of general systemic and structural social bias, such as race- and gender-discrimination (
Better or worse futures in the region will be determined, we suggest, in large part by clearly understanding and articulating the perspectives of previously marginalized and silenced voices and allowing them to be part of the AI conversation. Zimmermann et al. argue that “algorithmic injustice is not only a technical problem, but also a moral and political one, and that addressing it requires deliberation by all of us as democratic citizens.” Accordingly, accountability for addressing these injustices becomes shared, rather than only “offloaded and outsourced to tech developers and private corporations” (
The overarching idea too is that the higher the risk level, the greater the need for obligations to be placed on the AI system (and on those developing and deploying it) and for human protection. Due regard should also be given to activities that should be prohibited or otherwise curtailed–for example, amongst others, those outlined in the EU AI Act, that is, the use of systems that manipulate human behaviour and/or exploit persons’ vulnerabilities, and social scoring systems. While AI systems pose many immediate risks, they also pose broader, longer-term social harms and large-scale, highly consequential risks that are often difficult to predict ex ante (
The original contributions presented in the study are included in the article/
All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
We acknowledge the support by the US National Institute of Mental Health and the US National Institutes of Health (award number U01MH127690). The work of BT was funded by the UKRI project EP/V026747/1 “Trustworthy Autonomous Systems Node in Resilience.” The content of this article is solely our responsibility and does not necessarily represent the official views of the US National Institute of Mental Health or the US National Institutes of Health.
The completion of the tables in the Annex was fact checked by advice received from: Keneilwe P. Mere (Moribame Matthews, Botswana); Eleng Mugabe (Desai Law Group, Botswana); Hyacinthe Fansi (NFM Avocats Associés, Cameroon); Naa Asheley Ashittey (ÁELEX Ghana Unlimited, Ghana); Susan-Barbara Kamapley (Bentsi-Enchill, Letsa and Ankomah, Ghana); Benedict Nzioki (African Law Partners, Kenya); Frances Obiago (ÁELEX Nigeria Unlimited, Nigeria); Sumbo Akintola (Aluko and Oyebode, Nigeria); Zackiah Nandugwa (K-Solutions and Partners, Rwanda); Karl Blom (Webber Wentzel, South Africa); and Ronald Mutasa (Manokore Attorneys, Zimbabwe). We gratefully acknowledge the assistance of the academic collaborators on the DSI-Africa Law project: Dr Paul Ogendi (University of Nairobi, Kenya); Dr Peter Munyi (University of Nairobi, Kenya); Dr Lukman Abdulrauf (University of Ilorin, Nigeria); Dr Aishatu Adaji (University of Ilorin, Nigeria); Ms Elisabeth Anchancho (University of KwaZulu-Natal, South Africa) and Ms Amy Gooden (University of KwaZulu-Natal, South Africa). In addition, we acknowledge the inputs from the project research assistants: Kiara Munsamy (University of KwaZulu-Natal, South Africa); Jodie de Klerk (University of KwaZulu-Natal, South Africa); Roasia Hazarilall (University of KwaZulu-Natal, South Africa); and Naseeba Sadak (University of KwaZulu-Natal, South Africa). All errors and omissions remain the authors’ responsibility.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
The Supplementary Material for this article can be found online at: