This blog series takes a look at issues we will be discussing at our upcoming Istanbul Innovation Days (IID), an annual gathering of partners to explore innovative approaches to development and policy making.
In March this year, the UN Independent International Fact-Finding Mission on Myanmar reported that hate speech and incitement to violence on social media, particularly on Facebook, substantively contributed to the escalation of violence against Rohingya Muslims.
Artificial intelligence is improving our lives in many areas, from faster diagnoses of illnesses to smarter homes to increasing road safety through self-driving cars. But should we worry about its impact on the respect and protection of our human rights?
The right to equality and non-discrimination: While AI helps personalize online content and navigate the Internet, it also prioritizes conversations with close friends and prevents us from hearing a diversity of opinions. This reinforces biases, enables radicalization, and incentivizes inflammatory content and disinformation. The use of automation in decision making can also deepen polarization and amplify existing bias and discrimination against certain groups, often marginalized and vulnerable communities, as in the case of Myanmar.
Right to privacy: AI-driven systems rely on vast amounts of data, including personal data, which may contain sensitive information such as individuals' sexual orientation, family relationships, religious views, political associations and health conditions. During collection, people are rarely asked to give their consent to share this data and often lack the ability to control how it is used. Hence, AI raises fundamental questions of accountability for outcomes, as well as accountability for the collection and use of massive quantities of personal data.
Freedom of expression: AI algorithms can help deliver information to us, but they can also be manipulated to restrict access to online content and restrain free speech. States may arbitrarily block or censor digital content in the name of national security to an extent that threatens freedom of expression. In his recent report to the General Assembly, David Kaye, UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, describes how AI negatively impacts the fulfillment and protection of the rights to expression and opinion.
A new human rights framework?
With the wider use of technologies like AI, experts are calling for ethical standards and new legislative frameworks to protect citizens from potential threats. Hence, new regulatory frameworks are in development:
- The Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems, launched on May 16, 2018 at RightsCon, reaffirms the role of human rights law and standards in protecting individuals and groups from discrimination and inequality when developing ethical frameworks for machine learning.
- Responding to personal data protection concerns, the European Union has adopted a new data protection framework, the General Data Protection Regulation (GDPR), to strengthen data privacy regulation across every sector, including healthcare, banking and many others.
- The UN Guiding Principles on Business and Human Rights provide a set of standards, which also need to be applied to AI, for preventing and addressing human rights violations linked to business activities.
- The Council of Europe's European Commission for the Efficiency of Justice (CEPEJ) has recently set up a multidisciplinary team of experts to prepare guidelines for the ethical use of algorithms within justice systems, including predictive justice.
At the end of the day, it's not about AI. It's about the fundamental values that should be respected when AI is used. This calls on us to reflect on whether international human rights law is fit for this purpose, as we celebrate the 70th anniversary of the Universal Declaration of Human Rights (UDHR).