Beyond Data: Human Rights, Ethical and Social Impact Assessment in AI
All AI applications rely on large datasets, to create algorithmic models, to train them, to run them over huge amounts of collected information and extract inferences, correlations, and new information for decision-making processes or other operations that, to some extent, replicate human cognitive abilities.

These results can be achieved using a variety of different mathematical and computer-based solutions, which are included under the umbrella term of AI.1 Although they differ in their technicalities, they are all data-intensive systems, and it is this factor that seems to be the most characteristic, rather than their human-like results.

We already have calculators, computers and many other devices that perform typical human tasks, in some cases reproducing our way of thinking or acting, as demonstrated by the spread of machine automation over the decades. The revolution is not so much the ‘intelligent’ machine, which we had already (e.g. expert systems), but the huge amount of information these machines can now use to achieve their results.2 No human being is able to process such an amount of information in the same way or so quickly, reach the same conclusions (e.g. disease detection through diagnostic imaging) with the same accuracy (e.g. image detection and recognition) as AI.

These data-intensive AI systems thus undermine a core component of the individual’s ‘sovereignty’ over information:3 the human ability to control, manage and use information in a clear, understandable and ex post verifiable way.

This is the most challenging aspect of these applications, often summed up with the metaphor of the black box.4 Neither the large amounts of data – we have always had large datasets5 – nor data automation for human-like behaviour are the most significant new developments. It is the intensive nature of the processing, the size of the datasets, and the knowledge extraction power and complexity of the process that is truly different.

If data are at the core of these systems, then to address the challenges they pose and draft some initial guidelines for their regulation, we have to turn to the field of law that most specifically deals with data and control over information, namely data protection.

Of course, some AI applications do not concern personal data, but the provisions set forth in much data protection law on data quality, data security and data management in general go beyond personal data processing and can be extended to all types of information. Moreover, the AI applications that raise the biggest concerns are those that answer societal needs (e.g. selective access to welfare or managing smart cities), which are largely based on the processing of personal data.

This correlation with data protection legislation can also be found in the ongoing debate on the regulation of AI where, both in the literature and the policy documents,6 fair use of data,7 the right to explanation,8 and transparent data processing9 are put forward as barriers to potential misuse of AI.

Here we need to ask whether the existing data protection legislation, with its long and successful history,10 can also provide an effective framework for these data-intensive AI systems and mitigate their possible adverse consequences.