Revue Européenne du Droit
AI vs Human Dignity: When Human Underperformance is Legally Required
Ginevra Le Moli

Revue européenne du droit, Summer 2022, n°4

1. Introduction 

When mathematics Professor John McCarthy and his colleagues introduced the term ‘Artificial Intelligence’ (AI) in 1956, the AI problem was defined as ‘making a machine behave in such a way that would be called intelligent if a human were so behaving’ 1. This definition took human intelligence as the measure of and standard for what AI does 2. Today, massive public and private investment is going into AI tools 3, which are being actively incorporated into a range of decision-making processes in areas such as legal processes and medical diagnosis. Both AI’s practical achievements and its ‘ideological role as a technological paradigm for the reconstruction of capitalism’ have attracted interest in advanced capitalist societies 4. The restructuring of the global capitalist system has indeed been facilitated by technological developments in telecommunications, microelectronics and computers 5, which, in turn, have reduced the need to rely on human labour 6.

However, despite AI’s increasing use in commercial, military, and scientific applications, AI systems are being deployed in the absence of specific and effective national and international legal regulatory frameworks 7. Yet AI does raise important normative challenges in various areas, such as human rights, global health and international labour law, among others. At a time when AI systems are bound to be a key component of capital, this essay examines when human action is normatively required as a matter of human dignity, irrespective of whether AI systems may outperform humans in a given task. I argue that the delegation of decision-making powers to AI is normatively constrained and that human dignity provides an organizing framework to guide the delegation process.

While AI is being increasingly deployed and is driving vast investment in computational techniques 8, its public perception is far from uniformly positive 9. For its enthusiasts, AI will eliminate the need for fallible human intellect and ‘will worry about all the really important problems for us (for us, not with us)’ 10. By extending ‘mathematical formalization into the realm of social problems’, AI is said to have ‘brought with it a sense of new-found power, the hope of technical control of social processes to equal that achieved in mechanical and electronic systems’ 11. Thus, optimists emphasize the opportunity to democratize legal services and render decision-making more efficient and foreseeable 12. Others, by contrast, remain sceptical 13 and denounce a range of risks, including intrusive social control, arbitrariness, and inequality 14.

From an ethical perspective, AI systems raise the familiar tension between two forms of normativity: consequentialism, i.e. normativity based on the end-results of an action, and deontologism, i.e. rules-based normativity. This essay highlights that a consequentialist ethics of performance must not undermine the normative reasons why human ‘underperformance’ is ultimately desirable. There is a pressing need to build what some call a ‘good AI society’, crafted through public and private efforts to adopt a holistic approach based on universal values and foundations 15. In this context, human rights have a major role to play. As noted by the UN Secretary-General in a 2018 report, ‘AI tools, like all technologies, must be designed, developed and deployed so as to be consistent with the obligations of States and the responsibilities of private actors under international human rights law’ 16. Although such a role is not always straightforward 17, the normative concept of human dignity provides a legal key. This essay focuses on this particular perspective: it first presents examples of cases in which AI is considered to provide better performance, together with the related challenges, and subsequently examines why, irrespective of the high performance that AI can offer, the legal implications of human dignity constrain the deployment of AI in decision-making.

2. AI Technologies’ Performance and Normative Challenges 

Despite the challenges posed by data-driven, machine learning-based algorithms 18, evidence of better performance has led to an increased use of AI in various sectors.

In the legal sector, AI is increasingly adopted by agencies and courts for different purposes 19. First, AI is used to organize information. One example is ‘eDiscovery’, a method of document investigation used by courts in the United States and the United Kingdom 20, which is considered faster and more precise than manual file research. Second, AI tools are adopted to advise professionals. The Solution Explorer system of the Civil Resolution Tribunal (CRT) in British Columbia, Canada 21, was set up to deal with disputes relating to strata property, subsidised housing and personal injury resulting from collisions. The Solution Explorer is the first step in the CRT dispute resolution process. It provides people with clear legal information, as well as free self-help tools to resolve their dispute without having to file a CRT claim. Third, AI systems are used to better predict possible outcomes of judgments 22. For instance, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system is used by courts in the United States to predict recidivism in criminal cases 23, while in New Zealand a computer-based prediction model supports the handling of claims under the accident compensation scheme 24. In addition, there is a growing use of live facial recognition (LFR), which is considered necessary, for instance, when the nature of the crime requires it, as ‘the social needs associated with preventing murder will be much higher than those associated with detecting petty theft.’ 25

Another broad area of application of AI concerns welfare and healthcare. Governments and private sector organizations are digitalizing the welfare state to ‘automate, predict, identify, surveil.’ 26 Nudging people’s behaviour through AI applications has grown in healthcare and education. For instance, the University of Pennsylvania set up the Penn Medicine Nudge Unit in 2016, which uses ‘default options to increase generic prescribing and reduce opioid prescribing, using active choice to increase influenza vaccination, and using peer comparison feedback to increase statin prescribing and reduce unnecessary antibiotic prescribing.’ 27 Similarly, e-learning and EduTech platforms deploy nudging cognitive technologies in the field of learning and education 28.

Yet another area where AI-enabled applications are increasingly deployed is intelligence, surveillance and reconnaissance (ISR), including cyber defence 29. For example, NATO uses AI applications in the ISR context to ‘identify patterns and trends in support of situational awareness and operational decision-making’ 30, as well as in the cyber defence context, to carry out pre-emptive patching and take corrective action faster and with greater accuracy.

The increasing use of AI in these and other contexts is driven by considerations of performance. Yet, even when such higher performance is recognized, the use of AI has come under criticism, sometimes on grounds of occasional underperformance (e.g. rigidity or arbitrariness), but at other times precisely because of the normative implications of high performance in certain tasks, including discrimination and intrusiveness. As regards the use of AI by courts, there is a potential for arbitrariness and discrimination 31, together with issues of legal accuracy 32, lack of transparency over algorithm-based methods and a widening of the justice divide inter partes 33. Moreover, bias in data may produce discrimination 34, both in predictive policing tools 35 and in LFR applications 36. Public criticism of LFR uses has led to legal challenges in court, as happened in the United Kingdom 37 and in the United States 38. Moreover, the transformation of social protection and assistance, for instance by way of automated eligibility assessments, calculation of benefits, fraud detection and risk scoring, has given rise to various concerns 39: lack of accuracy 40; challenges to the right to social security, whether by reducing welfare budgets or the beneficiary pool or by enhancing sanctions 41, and to access to social services due to digital illiteracy 42; or loss of reasoning, proportionality and discretion 43. Similarly, the use of AI technologies in military contexts can pose difficulties, such as ‘fueling pressure for inappropriately accelerated action’ 44, thereby cutting against traditional decision-making processes.

3. Human Dignity as a Legal Constraint on AI decision-making

In order to determine the human rights implications of AI decision-making, a human rights assessment would have to encompass an evaluation of AI systems’ compliance with fundamental rights 45. Such risk assessments cover respect for human dignity; the freedom of the individual; respect for democracy, justice and the rule of law; equality, non-discrimination and solidarity; and citizens’ rights 46. In particular, even though respect for human dignity is today challenged by the rapid development of new technologies, its general meaning remains intact: every human being possesses an ‘intrinsic value’, which should never be endangered or repressed by others. Violations of human dignity are acts ‘incompatible with the dignity and worth of the human person’ 47. Such acts are internationally condemned because they are harmful practices, a ‘denial of dignity’ 48, carrying ‘a negative impact’ on dignity and moral integrity 49. In this sense, human dignity is protected in constitutional 50, regional and international legal frameworks 51, and it can play a central role in constraining the extent of delegation to AI systems, as shown by its use in various instances 52.

Human dignity is indeed a guiding principle in international law, and it acts both as a ‘mother-right’ (operating as a source of rights and as a qualification of rights) and as a source of obligations 53. In particular, the dignity of the individual is today considered ‘the fundamental guiding principle of international human rights law’ 54. There is an established ‘core human rights principle[s] of human dignity’ 55. For this reason, human rights instruments not only ‘reaffirm’, in the words of the Charter of the United Nations, ‘faith … in the dignity’ 56 of the human person, but also acknowledge human dignity as the foundation of four different concepts. Human dignity, in primis, ‘is the foundation of freedom, justice and peace in the world’, as declared for the first time in the Universal Declaration of Human Rights (UDHR) 57. Moreover, all human beings have ‘equality in dignity’ 58, in line with Article 1 of the UDHR 59, and they have the right to pursue their ‘spiritual development’ in a condition of dignity 60. For this reason, ‘social progress and development’ are founded on respect for dignity 61. Thus, there is an overall recognition that human dignity grounds the concepts and principles of ‘freedom, justice and peace’, of ‘equality’, of ‘spiritual development’ and of ‘social progress and development’. Even when an international instrument is silent on this principle, human dignity remains of ‘central importance’ for personal autonomy 62 and as a foundational objective. It is generally agreed that respecting human dignity is ‘a fundamental and universally applicable rule’ 63.

By way of illustration, the Human Rights Committee has deplored incompatibility with this principle in various contexts, such as with regard to the prohibition of torture and ill-treatment 64, since ‘[t]he humane treatment and the respect for the dignity of all persons deprived of their liberty is a basic standard of universal application’ 65, or the misuse of scientific and technical progress 66. The ECtHR, for its part, has referred to conduct incompatible with the principle of human dignity in its assessment of infringements such as discrimination on account of gender 67, ethnicity 68 or race 69.

The operation of the principle of human dignity in legal practice shows that the considerations of efficiency and effectiveness justifying resort to AI technologies remain subject to important legal constraints, and that underperformance may be normatively required to ensure the protection of human dignity. Human dignity presupposes that people deserve to be treated with respect. AI systems must be designed and set up in a way that protects human beings 70, their physical and mental health but also their cultural sense of identity 71. Importantly, at the EU level, Data Protection Authorities have recognized that human dignity may be undermined in the context of data processing 72 in various ways. There is a direct connection between data protection and AI systems, because the latter are increasingly used to guide or control information-gathering systems as well as to process the data gathered. First, human dignity can be endangered by constant and intrusive monitoring, such as video surveillance (or other similar technologies) 73, as well as by data-intensive systems gathering mobility data 74. Second, human dignity is considered to be harmed when video surveillance or other monitoring systems are installed in areas where a high degree of privacy is expected, such as changing rooms or toilets 75, owing to the personal embarrassment this may cause. Third, the use of sensitive data can endanger human dignity, as has been recognized in cases of invasive information requests by employers 76, of the use of wearable and Internet of Things (IoT) devices to collect sensitive data (e.g. health data) or profiling information 77, and of biometric data collection 78. Finally, human dignity can also be affected by the publication of personal information which can cause distress to the individuals concerned, as in the case of the disclosure of evaluative judgments, such as employee ratings 79 or exam results 80, private debt reports 81, or the use of services of the so-called reputation economy 82.

Similarly, there is a risk of discrimination, and a related impact on human dignity, arising from the use of algorithms in different contexts 83. For instance, an automated, algorithm-based social security system such as the one implemented in the UK, despite improving the cost-efficiency of the payment system, imposes digital barriers on access to social security and may thus exclude individuals without (or with low) digital literacy 84. This, in turn, can affect vulnerable people’s fundamental human rights, such as the rights to work, food and housing 85. Moreover, predictive analytics, which may also be adopted in child safeguarding 86, can raise issues of privacy and discrimination 87.

These examples illustrate how compatibility and consistency with human dignity are viewed as benchmarks for assessing technology-assisted surveillance and other automated practices. Human dignity is therefore directly relevant to understanding the consistency of AI technologies with fundamental rights.

4. Concluding observations

This essay has attempted to show how human dignity can act as a limit and legal constraint on the decision to delegate a task to an AI tool (or not), in order to facilitate human rights compliance. In an AI context, a human dignity approach resolves the tension between consequentialism and deontologism in favour of the latter and of human action, irrespective of the high performance and results that could in principle be ensured by the use of AI technologies. It also helps to ensure that an AI system’s operation does not generate unfairly biased outputs and is as inclusive as possible. Importantly, human dignity sets a standard of deployment and plays a role in the decision-making process, thereby establishing the framework within which possible harm can be assessed.

Notes

  1. J. McCarthy, M.L. Minsky, N. Rochester, and C.E. Shannon, ‘A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955’, (2006) 7(4) AI Magazine, 11. 
  2. Ibid. See also S. J. Russell and P. Norvig, Artificial intelligence: a modern approach, Prentice Hall, 2009, 3rd ed., 2-5. See also J.A. Perez, F. Deligianni, D. Ravi, G.-Z. Yang, ‘Artificial Intelligence and Robotics’, UK-RAS, 2018, 2-4.
  3. See, e.g., W. Alschner, ‘The Computational Analysis of International Law’, in R. Deplano and N. Tsagourias, Research Methods in International Law, Edward Elgar, 2021, 224-228; A. Deeks, ‘High-Tech International Law’, (2020) 88(3) Geo. Wash. L. Rev., 574-653.
  4. B.J. Berman, ‘Artificial Intelligence and the Ideology of Capitalist Reconstruction’, (1992) 6 AI & Soc 103, 103, see also 110. According to Christopher Freeman, ‘a technological revolution represents a major change of paradigm, affecting almost all managerial decisions in many branches of the economy’, and the new ‘techno-economic paradigm’ is ‘a new set of guiding principles, which become the managerial and engineering ‘common-sense’ for each major phase of development’, see C. Freeman, ‘Prometheus Unbound’, (October 1984) 16(5) Futures 494, 499. 
  5. Berman, supra n 4, 107.
  6. See R. Kaplinsky, Automation: the Technology and the Society, Longman, 1984; H. Shaiken, Work Transformed: Automation and Labour in the Computer Age, Lexington Books, 1985; H. Schiller, Information and the Crisis Economy, Oxford University Press, 1986; V. Mosco, The Pay-Per Society: Computers and Communication in the Information Age, Garamond Press, 1989; T. Roszak, The Cult of Information, Pantheon, 1986, 44-45. See also G. Raunig, Dividuum: Machinic Capitalism and Molecular Revolution, MIT Press, 2016.
  7. A. Deeks, ‘Introduction To The Symposium: How Will Artificial Intelligence Affect International Law?’, (2020)114 AJIL Unbound 138, 138.
  8. J. Kaplan, ‘Artificial Intelligence: What Everyone Needs to Know’, Oxford University Press, 2016; J. Copeland, ‘The Essential Turing: The Ideas that Gave Birth to the Computer Age’, Oxford University Press, 2005.
  9. Berman, supra n 4.
  10. M. Boden, ‘The Social Impact of Thinking Machines’, in T. Forester (ed.), The Information Technology Revolution, MIT Press, 1985, 103.
  11. P. Edwards, ‘The Closed World: Systems Discourse, Military Strategy and Post WWII American Historical Consciousness’, (1988) 2(3) AI & Society 245, 252.
  12. R. Susskind, Tomorrow’s Lawyers: An Introduction To Your Future, Oxford University Press, 2017, 2nd ed.
  13. H. Dreyfus, What Computers Still Can’t Do: A Critique of Artificial Reason, MIT Press, 1992. See also Berman, supra n 4.
  14. See F. Pasquale, ‘A Rule of Persons, Not Machines: The Limits of Legal Automation’, (2019) 87(1) Geo. Wash. L. Rev. 1.
  15. C. Cath et al., ‘Artificial Intelligence and the “Good Society”: The US, EU, and UK Approach’, (2017) 24(2) Science and Engineering Ethics 505, 507–508; J. Turner, Robot Rules. Regulating Artificial Intelligence, Palgrave Macmillan 2019, 209-210. See also M. Latonero, ‘Governing Artificial Intelligence: Upholding Human Rights & Dignity’, Report, Data & Society, 10 October 2018.
  16. UN GA, Promotion and protection of the right to freedom of opinion and expression, Note by the Secretary-General, A/73/348, 29 August 2018, paras. 19–20; see also UN HRC, Report of the Special Rapporteur to the General Assembly on AI and its impact on freedom of opinion and expression, (2018).
  17. See D. Murray, ‘Using Human Rights Law to Inform States’ Decisions to Deploy AI’, (2020) 114 AJIL Unbound 158, 158; T.L. Van Ho and M.K. Alshaleel, ‘The Mutual Fund Industry and the Protection of Human Rights’, (2018) 18(1) Hum. Rts. L. Rev. 1.
  18. R. Braun, ‘Artificial Intelligence: Socio-Political Challenges of Delegating Human Decision-Making to Machines’, IHS Working Paper 6, April 2019, 3-4. See also T. Gillespie, ‘The Relevance of Algorithms’, in T. Gillespie, P. Boczkowski, and K. Foot (eds.), Media technologies: Essays on communication, materiality, and society, MIT Press, 2014, 167-193; B. Lepri et al., ‘Fair, Transparent, and Accountable Algorithmic Decision-making Processes: The Premise, the Proposed Solutions, and the Open Challenges’, (2017) 31 Philosophy & Technology 611; M. Willson, ‘Algorithms (and the) everyday’, (2016) Information, Communication & Society 137; R.V. Yampolskiy, Artificial Intelligence Safety and Security, CRC Press, 2018.
  19. A.D. Reiling, ‘Courts and Artificial Intelligence’ (2020) 11(2) IJCA 8.
  20. In the United States: this methodology was considered valid for the first time in Anti-Monopoly, Inc. v. Hasbro, Inc., 1995 WL 649934 (S.D.N.Y., Nov. 3, 1995); Da Silva Moore v. Publicis Groupe & MSL Group, No. 11 Civ. 1279 (ALC) (AJP) (S.D.N.Y., Feb. 24, 2012); on the technology assisted review (TAR) of documents, see  Rio Tinto PLC v. Vale S.A., et al., 2015 WL 872294 (S.D.N.Y., Mar. 2, 2015); Hyles v. City of New York, et al., No. 10 Civ. 3119 (AT) (AJP) (S.D.N.Y., Aug. 1, 2016). In the United Kingdom, see High Court of Justice Chancery Division, U.K. (2016), Pyrrho Investments Ltd v. MWB Property Ltd [2016] EWHC 256 (Ch).
  21. British Columbia Civil Resolution Tribunal (2019), <https://civilresolutionbc.ca>.
  22. D. Katz et al., ‘A General Approach for Predicting the Behavior of the Supreme Court of the United States’, (2017) 12(4) PLoS ONE; M. Medvedeva, M. Vols, M. Wieling, ‘Judicial Decisions of the European Court of Human Rights: Looking into the Crystal Ball’, (2018) Proceedings of the Conference on Empirical Legal Studies in Europe 1. On possible attempts by parties to get an advantage by adopting data-driven legal research and prediction, see Deeks, supra n 3, 600-622; N. Aletras et al., ‘Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective’, (2016) 2 PeerJ Computer Science 1.
  23. B. Green, “‘Fair’ Risk Assessments: A Precarious Approach for Criminal Justice Reform”, 5th Workshop on Fairness, Accountability, and Transparency in Machine Learning, 2018.
  24. J. Yoshikawa, ‘Sharing the Costs of Artificial Intelligence: Universal No-Fault Social Insurance for Personal Injuries’, (2018-2019) 21 Vand. J. Ent. & Tech. L. 1155.
  25. Murray, supra n 17, 160-161. See Joined Cases C-203/15, C-698/15, Tele2 Sverige AB v Post- och telestyrelsen and Secretary of State for the Home Department v. Watson and Others, ECLI:EU:C:2016:970, para. 102 (Dec. 21, 2016).
  26. Report of the Special Rapporteur on Extreme Poverty and Human Rights, A/74/48037, para. 8 (Oct. 11, 2019) [hereinafter Alston Report].
  27. L. Devillers, F. Fogelman-Soulié, and R. Baeza-Yates, ‘AI and Human Values. Inequalities, Biases, Fairness, Nudge, and Feedback Loops’, in B. Braunschweig and M. Ghallab (eds.), Reflections on Artificial Intelligence for Humanity, Springer, 2021, 83; see also J. Harrison, M. Patel, ‘Designing nudges for success in health care’, (2020) 22(9) AMA J. Ethics E796-801.
  28. M. Damgaard, H. Nielsen, ‘Nudging in education’, (2018) Econ. Educ. Rev. 64, 313–342.
  29. S. Hill, ‘AI’s Impact on Multilateral Military Cooperation Experience From NATO’, (2020) 114 AJIL Unbound 147, 150.
  30. Ibid.
  31. See D. Boyd, K. Levy and A. Marwick, ‘The Networked Nature of Algorithmic Discrimination’, Open Technology Institute, October 2014; BW Goodman, ‘A Step Towards Accountable Algorithms?: Algorithmic Discrimination and the European Union General Data Protection’, 29th Conference on Neural Information Processing Systems, Barcelona, Spain, 2016, 3–4; Green, supra n 23; L. McGregor, D. Murray and V. Ng, ‘International Human Rights as a Framework for Algorithmic Accountability’, (2019) 68(2) Int’l & Comp. L.Q. 309.
  32. T. Sourdin, ‘Judge v Robot? Artificial Intelligence and Judicial Decision-Making’, (2018) 41(4) UNSW L.J. 1114.
  33. M. Langford, ‘Taming The Digital Leviathan: Automated Decision-Making and International Human Rights’, (2020) 114 AJIL Unbound 141, 144.
  34. Devillers et al., supra n 27, 81-82. See also E. Pitoura et al., ‘On measuring bias in online information’, (2018) 46(4) ACM SIGMOD Record 16.
  35. R. Richardson, J. Schultz, K. Crawford, ‘Dirty data, bad predictions: how civil rights violations impact police data, predictive policing systems, and justice’ (2019) NYULR 192.
  36. See also M. Merler, N. Ratha, R.S. Feris, J.R. Smith, Diversity in faces, 29 January 2019, arXiv:1901.10436; J. Buolamwini, T. Gebru, ‘Gender shades: intersectional accuracy disparities in commercial gender classification’, Conference on Fairness, Accountability and Transparency, 2018, 77-91. 
  37. See R (on the application of Edward Bridges) v The Chief Constable of South Wales [2019] EWHC 2341, 4. Sept., 2019. 
  38. Janecyk v. International Business Machines (IBM), Case 1:20-cv-00783 (N.D. Ill.); Vance v. IBM, Case 1: 20-cv-577 (N.D. Ill.); Mutnick v. Clearview AI, et al., Case 1:20-cv-00512 (N.D. Ill.); Hall v. Clearview AI, et al., Case 1:20-cv-00846 (N.D. Ill). See also Murray, supra n 17, 160-161. 
  39. Alston Report, supra n 26, paras. 1, 78. J. Redden, ‘Democratic Governance in an Age of Datafication: Lessons from Mapping Government Discourses and Practices’, (2018) 2 Big Data & Society 1; C. Sheppard and J. Raine, ‘Parking Adjudications: The Impact of New Technology’, in M. Harris and M. Partington (eds.), Administrative Justice in the 21st Century, Hart Publishing, 1999, 326-334. See also Langford, supra n 33, 142.
  40. See T. Carney, ‘Robo-Debt Class Action Could Deliver Justice for Tens of Thousands of Australians Instead of Mere Hundreds’, The Conversation (Sept. 17, 2019) ; Alston Report, supra n 26, para. 28.
  41. UN Comm. on Econ., Soc. & Cultural Rights, General Comment No. 19 on the Right to Social Security, E/C.12/GC/19, para. 11 (Feb. 4, 2008).
  42. P. Larkin, ‘Universal Credit, “Positive Citizenship”, and the Working Poor: Squaring the Eternal Circle?’, (2018) 81 Mod. L. Rev. 114; M. Burton, ‘Justice on the Line? A Comparison of Telephone and Face-to-Face Advice in Social Welfare Legal Aid’, (2018) 40 J. Soc. Welfare & Family L. 195.
  43. C. Harlow & R. Rawlings, ‘Proceduralism and Automation: Challenges to the Values of Administrative Law’, in E. Fisher, J. King and A. Young eds., The Foundations And Future of Public Law (in Honour of Paul Craig), Oxford University Press 2019.
  44. Hill, supra n 29, 150.
  45. Independent High-Level Expert Group (HLEG) on AI, ‘Ethics Guidelines for Trustworthy AI’, European Commission, 2020, 15–16. See also, ‘Algorithms and Human Rights. Study on the human rights dimensions of automated data processing techniques and possible regulatory implications’, Council of Europe study DGI, 2017, 12, 40.
  46. HLEG AI, supra n 45, 9–11.
  47. Vienna Declaration and Programme of Action, Part I, 15 (para 2); UNESCO Convention Against Discrimination in Education, 14 December 1960, Art. 1(1)(d); Preamble, Supplementary Convention on the Abolition of Slavery.
  48. Joint General Recommendation (GR) 31 CEDAW/GC 18 CRC, para 16.
  49. Ibid., para 15.
  50. See A. Barak, Human Dignity: The Constitutional Value and the Constitutional Right, Cambridge University Press 2015.
  51. See G. Le Moli, Human Dignity in International Law, Cambridge University Press, 2021, 216-260. See also Marcus Düwell, Jens Braarvig, Roger Brownsword and Dietmar Mieth (eds), The Cambridge Handbook of Human Dignity. Interdisciplinary Perspectives, Cambridge University Press, 2014.
  52. Ibid.
  53. See Le Moli, supra n 51.
  54. General Comment (GC) 13, The right to education (Art. 13 of the Covenant), E/C.12/1999/10, 8 December 1999, CESCR, para 41. See also GC n. 8, The right of the child to protection from corporal punishment and other cruel or degrading forms of punishment (arts. 19; 28, para 2; and 37, inter alia), CRC/C/GC/8, 2 March 2007, CRC, para 16.
  55. GR 35, Combating racist hate speech, CERD/C/GC/35, 26 September 2013, CERD, para 10.
  56. Preamble, Charter of the UN, 24 October 1945, 1 UNTS XVI (UN Charter). 
  57. Preamble, para 1, UDHR.
  58. The formulation ‘all human beings are born free and equal in dignity’ can be found in various documents, see Le Moli, supra n 51.
  59. See also, by way of example, Fifty-first session (1997), GR 23 (XXIII) on the rights of indigenous peoples, CERD, para 4; GC 10 (2007), Children’s rights in juvenile justice, CRC/C/GC/10 25 April 2007, para 13.
  60. Preamble, C122 Employment Policy Convention, ILO 1964; in an equal wording, see Preamble, Discrimination (Employment and Occupation) Convention, ILO 1958 (No. 111); Preamble, Indigenous and Tribal Populations Convention, ILO 1957 (No. 107).
  61. See Art. 2, Declaration on Social Progress and Development, GA resolution 2542 (XXIV) of 11 December 1969; GC 13, supra n 54, para 4.
  62. GC 36, Art. 6 of the ICCPR, CCPR/C/GC/36, 30 October 2018, CCPR, para 9.
  63. See, for instance, GC 21, Art. 10 (Humane treatment of persons deprived of their liberty), HRI/GEN/1/Rev.9 (Vol. I), 13 March 1993, CCPR, para 4.
  64. See GC 20, Art. 7 (Prohibition of torture, or other cruel, inhuman or degrading treatment or punishment), 30 September 1992, CCPR, para 2; A.H.G. and M.R. v Canada (CCPR/C/113/D/2091/2011), 5 June 2015, para 10.4.
  65. See, for instance, GC 8, Article 9 (Right to Liberty and Security of Persons), 30 June 1982, CCPR, para 1.
  66. See, for instance, GC 17, The right of everyone to benefit from the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he or she is the author (Art. 15, paragraph 1 (c), of the Covenant), E/C.12/GC/17, 12 January 2006, CESCR, para 35.
  67. S.A.S. v. France, [GC], No. 43835/11, Judgment, 1 July 2014, para 120.
  68. Perinçek v. Switzerland, No. 27510/08, 15 October 2015, paras 155 and 280.
  69. Abdu v. Bulgaria, No. 26827/08, 11 March 2014, para 38; see Ananyev and Others v. Russia, Nos. 42525/07 and 60800/08, 10 January 2012, paras 139–142.
  70. D.E. Harasimiuk, T. Braun, Regulating Artificial Intelligence, Routledge, 2021, 63.
  71. E. Hilgendorf, ‘Problem Areas in the Dignity Debate and the Ensemble Theory of Human Dignity’ in D. Grimm, A. Kemmerer, C. Möllers (eds.), Human Dignity in Context. Explorations of a Contested Concept, Hart Publishing, 2018, 325 ff.
  72. See Council of Europe, Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, ETS n. 108, 28 January 1981, Preamble.
  73. On workers’ video surveillance, see Information Commissioner’s Office (ICO), The employment practices code, 2011, Part. 3; Garante per la protezione dei dati personali (GPDP), 4 April 2013, n. 2439178; GPDP, 30 October 2013, n. 2851973; Commission de la protection de la vie privée (hereinafter CPVP), avis, n. 8/2006, 12 April 2006; Commission Nationale de l’Informatique et des Libertés (CNIL) n. 2014-307, 17 July 2014. On invasive monitoring activities, see GPDP, 25 January 2018, n. 7810766.
  74. Such as, for example, GPS, Radio Frequency Identification (RFID) technologies, Wi-Fi tracking devices, “event data recorder” devices or Intelligent Transport Systems. See Agencia Española de Protección de Datos (AEPD), resolución R/01208/2014; CNIL n. 2010-096, 8 April 2010; CPVP, recommandation n. 01/2010, 17 March 2010; ICO, ‘Wi-fi location analytics’ (2016); Article 29 Data Protection Working Party (ART29WP), ‘Working document on data protection issues related to RFID technology’, WP 105 (2005); ART29WP, ‘Opinion 03/2017 on Processing personal data in the context of Cooperative Intelligent Transport Systems (C-ITS)’, WP 252 (2017).
  75. See, for instance, GPDP, 4 December 2008, n. 1576125; GPDP, 24 February 2010, n.; AEPD, expediente n. E/01760/2017; ICO, ’In the picture: A data protection code of practice for surveillance cameras and personal information’ (2017); ART29WP, ‘Opinion 4/2004 on the Processing of Personal Data by means of Video Surveillance’, WP 89 (2004). 
  76. Such as health conditions, religious beliefs, criminal records, and drug and alcohol use. See GPDP, 21 July 2011, n. 1825852; GPDP, 11 January 2007, n. 1381620. 
  77. See ART29WP, ‘Opinion 8/2014 on the Recent Developments on the Internet of Things’, WP 223 (2014).
  78. See GPDP, 1 August 2013, n. 384, n. 2578547; CNIL n. 2008-492, 11 December 2008; CPVP, avis n. 17/2008, 9 April 2008; ART29WP, ‘Opinion 3/2012 on developments in biometric technologies’, WP193 (2012).
  79. See GPDP, 13 December 2018, n. 500, n. 9068983.
  80. See ICO, ‘Publication of exam results by schools’ (2014); ART29WP, ‘Opinion 2/2009 on the protection of children’s personal data’, WP 160 (2009); GPDP, ‘Scuola: Privacy, pubblicazione voti online è invasiva’, n. 9367295 (2020).
  81. See GPDP, 28 May 2015, n. 319, n. 4131145; AEPD, procedimiento n. A/00104/2017.
  82. For instance, platforms which display and manage product and service reviews, as well as tax or criminal information. See GPDP, 24 November 2016, n. 488, n. 5796783.
  83. See supra n 31. 
  84. UN Special Rapporteur on Extreme Poverty and Human Rights Philip Alston, ‘Statement on Visit to the United Kingdom’, 16 November 2018.
  85. Ibid.
  86. On Xantura’s Early Help Profiling System (EHPS), see London Councils, ‘Understanding who the most vulnerable people in your locality are’, at <https://www.londoncouncils.gov.uk/our-key-themes/our-projects/london-ventures/current-projects/childrens-safeguarding> [last access 20/01/2022].
  87. N. McIntyre and D. Pegg, ‘Councils Use 377,000 People’s Data in Efforts to Predict Child Abuse’, The Guardian, 16 September 2018.