E-book: Responsible AI in Africa: Challenges and Opportunities
In the last few years, a growing and thriving AI ecosystem has emerged in Africa. Within this ecosystem, local tech spaces as well as a number of internationally driven technology hubs and centres established by big tech companies such as Twitter, Google, Facebook, Alibaba Group, Huawei, Amazon and Microsoft have significantly increased the development and deployment of AI systems in Africa. While these tech spaces and hubs are focused on using AI to meet local challenges (e.g. poverty, illiteracy, famine, corruption, environmental disasters, terrorism and health crises), the ethical, legal and socio-cultural implications of AI in Africa have largely been ignored. To ensure that Africans benefit from the attendant gains of AI, the ethical, legal and socio-cultural impacts of AI need to be robustly considered and mitigated. At the global level, a number of national, regional and international bodies, think-tanks, research institutions and private companies have developed or are in the process of developing ethical principles and guidelines for AI (Jobin et al., 2019; Ulnicane et al., 2021). The emerging principles that shape global AI ethics discourse, such as transparency, justice and fairness, non-maleficence, responsibility and privacy, are informed by ethical perspectives and traditions from Western Europe, North America and East Asia (Gupta and Heath, 2020). Ethical narratives, perceptions and principles from the Global South, particularly Africa, are glaringly missing from the global discussion of AI ethics. There is a general belief that socio-cultural and political contexts shape expectations of AI and the challenges and risks it poses. It is therefore safe to assume, as Hagerty and Rubinov (2019) suggested, that AI ethics concepts such as 'bias', 'human rights', 'privacy', 'justice', 'solidarity', 'trust', 'transparency', 'openness' and 'fairness' mean different things to different people.
The meaning and scope of these concepts emerge from the cultural contexts in which they are discussed. Citing the example of Nordic AI policies, Robinson (2020) notes the fundamental influence that cultural values have on the way these concepts are conceptualised in national and regional policies. As he points out, cultural values contribute to value-laden technology policies in ways that can address societal concerns and interests that differ from place to place. This is at the heart of responsible AI: the idea of developing AI systems that are not only compliant with laws (including human rights provisions) but also socially and culturally sensitive, acceptable and ethically responsible.