Comparative Legal Analysis of the Role of Artificial Intelligence in Human Rights Protection: Prospects for Europe and the Middle East
Susanna Suleimanova[1]

Abstract
Artificial intelligence’s threats to human rights can offset its significant benefits for human welfare. This makes it essential to analyse the current status and existing practices in developing the regulatory framework for artificial intelligence (AI).
This paper aims to conduct a comparative legal analysis of the role of AI in ensuring human rights in Europe (using the example of the European Union) and the Middle East (using the example of Israel). The article uses comparative legal, formal legal and descriptive methods. The analysis shows that AI may harm the enjoyment of several human rights.

Existing legislative initiatives (in particular, the EU Artificial Intelligence Act (AI Act) and the Council of Europe’s AI Convention) do not fully protect human rights from the impact of artificial intelligence, owing to gaps in the regulation of the private sector and national security, as well as effects on the transparency of decisions in criminal law.
The main problem is the inadequate regulation of the development and use of AI in national security and the private sector. This creates loopholes through which AI can cause significant harm to human rights and lead to violations. Further research can determine how the shortcomings identified in this paper may affect human rights and what safeguards can be put in place.
Keywords: artificial intelligence, human rights, offences, Artificial Intelligence Act (AI Act), the Council of Europe’s AI Convention, European Union, Israel
- Introduction
Digitalisation fundamentally changes people’s lives, as new technologies promote progress and innovation that improves social welfare (Habibi & Zabardast, 2020; Myovella et al., 2020; Kwilinski et al., 2022). At the same time, there are growing concerns about the negative impact of new technologies on human rights, democracy and the rule of law (Almeida et al., 2020; Shneiderman, 2020; Završnik, 2020).
The main problems include interference with the privacy of individuals, increased surveillance, threats to individual autonomy, spreading disinformation, and influencing the electoral process (Nagy, 2023). These threats increase support for policies establishing general principles and regulating technologies like artificial intelligence (AI). AI technologies such as language models and generative AI have quickly taken over the world.
However, more transformative technologies are currently being developed – AI agents – systems that perform complex tasks with high autonomy and limited human supervision (Kolt, 2024). This pace of development underscores the need for increased attention to AI regulation. Still, government measures in various countries often differ, and establishing standard rules causes significant controversy (Geist, 2021).
This makes conducting a comparative analysis of legislative practices in different countries regarding AI regulation essential. In 2019, countries adopted ethical principles for AI at the international level (the OECD AI Principles, the non-binding G20 AI principles) (Leslie et al., 2021).
However, growing concerns about the dangers posed by AI are forcing countries to implement clearly defined, legally binding frameworks for the development and use of AI (Ben-Israel et al., 2020; Gstrein, 2022; Ravia & Hammer, 2023). The human rights-based approach to the development and use of AI, which is universal, has the potential to become the leading global driver that lays the foundation for AI regulation (Salgado-Criado & Fernández-Aller, 2021).
However, implementing this approach is complicated by conflicting interests of different stakeholders, national priorities, competition, uncertainty about the future of AI, and the lack of representation of certain social groups in the discussion of AI regulation.
The purpose of the study is to conduct a comparative legal analysis of the role of AI in ensuring human rights in Europe (using the example of the European Union – EU) and the Middle East (using the example of Israel). The objectives of the study are: to identify which human rights are affected by AI; to analyse the EU’s approach to ensuring human rights in the context of enhanced AI development; and to use Israel as an example to analyse the Middle Eastern approach to ensuring human rights in the context of enhanced AI development.

- Literature review
The role of AI in the field of human rights protection is a prominent research topic. Smuha (2020) notes that human rights can act as a significant ‘moral compass’, forming the basis for the AI regulatory system. Rodrigues (2020) concludes that AI can significantly impact legal and ethical aspects. The researcher suggests that the primary source of problems is ignoring potential consequences at the design stage.
Imran et al. (2023) note that the development of AI could pose a significant threat to the rule of law due to the slow adaptation of traditional law to rapid technological developments. Latonero (2018) is convinced that integrating values into a sociotechnical system will always be challenging. Putting human rights at the centre of the AI debate will not solve all problems, and gaps between principles, rights, development, design, deployment and use will remain.
The emergence of new “digital” human rights in the context of AI development is a topical issue, intensifying discussions around adopting such rights. Often, researchers disagree on how offline and online human rights should be related. Dror-Shpoliansky and Shany (2021) express their view on the widespread concept of human rights regulation in cyberspace. This concept assumes that the same human rights that a person has offline should be protected online.
However, the researchers have identified and provided evidence that the specifics of cyberspace call this approach into question. Adequate protection of rights in cyberspace cannot be ensured by relying solely on existing international human rights law, which therefore requires adaptation. Muller (2020) expands the list of new or adapted human rights.
They include the right to autonomy, agency and supervision over AI; the right to transparency and explanation of the results provided by AI; the right to psychological, physical and moral integrity in the context of AI development; the right to privacy and protection from mass surveillance using AI; and the right to protection from online tracking, among others.
Shaelou and Razmetaeva (2024) propose to supplement the list of digital human rights with the right not to be subject to automatic decisions, the right not to be manipulated, the right to influence one’s digital footprint, the right to meaningful human contact, the right to be neutrally informed online, the right not to be evaluated, analysed or trained. Also, Allegri (2022) notes the “right to be forgotten in the digital world”.
Ulnicane (2022) examines the EU’s position on regulating the use of AI. According to the study, the EU repeats the competition discourse despite emphasising a regulatory and ethical approach to AI development. In the researcher’s view, such a discourse can be dangerous, leading political and financial considerations to take priority over other areas. In particular, the AI Act, proposed in April 2021, sparked a wave of criticism from ethicists.
Gstrein (2022) discusses the proposal ‘laying down harmonised rules on artificial intelligence’ (Artificial Intelligence Act – AI Act). This law lays down a framework for regulating the development and use of AI in various possible scenarios. The central topic of discussion in that article is that the AI Act focuses mainly on the standardisation and harmonisation of the single market, whereas many experts expected the law to be more focused on ethical aspects.
Van Kolfschooten and Shachar (2023) identify the advantages and disadvantages of another new regulatory instrument on AI – the Draft Convention on AI, Human Rights, Democracy and the Rule of Law (from now on referred to as the COE AI Convention). The authors note that the COE AI Convention and the AI Act exist in different legal frameworks: the former is designed to protect fundamental human rights, while the latter aims to optimise AI products. Thus, although not without drawbacks, the AI Convention can fill ethical gaps.

Rizk (2020) explores the challenges and opportunities accompanying AI development in MENA countries. The researcher notes that AI technologies can empower citizens. However, control over these technologies can also strengthen the power of established regimes. In particular, the region is characterised by a more significant expansion of economic freedoms than civil liberties.
Paltieli (2022) examines the creation of Israel’s national AI programme. The researcher demonstrates how Israel’s vision of innovation was reflected in the approach to AI regulation, particularly the adoption of a more flexible AI programme instead of a strategy.
The literature review results show that society has not formed a unified approach to regulating AI in the context of human rights at the present stage. At the same time, most countries have already faced the need to introduce specific regulatory frameworks for AI, but defining them is complex. This study presents an approach that identifies the problems and prospects of AI’s impact on human rights based on analysing the differences between European and Middle Eastern approaches to regulating this technology.

Objectives
The purpose of the study is to conduct a comparative legal analysis of the role of AI in ensuring human rights in Europe (using the example of the European Union – EU) and the Middle East (using the example of Israel). Objectives of the study:
- to identify which human rights are affected by AI;
- to analyse the EU’s approach to ensuring human rights in the context of enhanced AI development;
- to use Israel as an example to analyse the Middle Eastern approach to ensuring human rights in the context of enhanced AI development.
- Methodology
Research procedure
The research procedure is shown in Figure 1.

Sampling
The study is based on the legislative approaches of the EU and Israel. The main criterion for sampling was the active development of the legislative framework for AI regulation in the studied regions over the past five years, as well as the increased attention to human rights in the context of AI development. This allowed us to analyse the most relevant approaches to AI regulation and identify best practices and challenges that can significantly contribute to the further development of AI regulation.
The primary studied legislative documents and initiatives include:
- The EU Artificial Intelligence Act – as the basis of the legislative framework for AI regulation in the EU;
- The Convention on AI, Human Rights, Democracy and the Rule of Law – as the primary document that sets out ethical principles for the regulation of AI;
- The National Initiative for Secured Intelligent Systems (Israel), the Telem Report (Israel), and Israel’s Policy on Artificial Intelligence Regulation and Ethics (Ministry of Innovation, Science & Technology, 2023) – as the policy documents that consistently demonstrate the evolution of the Israeli approach to AI regulation.
Methods
An essential method in the context of the study was the formal legal method, which made it possible to outline the main provisions of the legislative initiatives selected for the study. The primary research method was comparative legal analysis, which helped to identify common and distinctive aspects of the legislative initiatives under study. In turn, this allowed us to identify the differences and main priorities in the general approaches of the studied jurisdictions to AI regulation.
In addition, a descriptive method was used to help identify the areas of AI’s impact on human rights. The statistical analysis method allowed us to identify the areas of AI that generate the most significant revenue and may violate human rights.
- Results
The role of AI in the protection of human rights in the EU
AI can significantly disrupt the enjoyment of several human rights: respect for human values, individual liberty, equality, non-discrimination and solidarity, and social and economic rights. The effect on these and other rights can be confirmed by studying the areas of AI application in various fields. Figure 2 shows these areas, with a note on the projected revenue from AI in each of them.
For example, the use of AI in only one of these areas (recognition, classification and tagging of static images) may violate the right to privacy and autonomy, threaten data security, and lead to discrimination. However, the issue of the need to regulate the technology has emerged relatively recently. In the EU, this process began with the creation of a group of experts in March 2018.
The group members represented various stakeholders’ interests, and one of the most critical areas of the group’s activities was the development of recommendations – ethical, political, investment, assessing the reliability of AI, etc. However, the existing problems related to bias, discrimination, lack of explanation of decisions, over-simplification of social issues, etc., have made it necessary for the EU to develop rules that will not only be advisory but also legally binding.

In April 2021, the European Commission presented a proposal known as the AI Act (Council of the European Union, 2024), which is intended to regulate the development and use of AI in various scenarios. The proposal caused a wave of criticism due to insufficient attention to ethical issues, as it was aimed primarily at regulating standardisation and harmonisation.
At the same time, the final version of the law, as adopted in 2024, contains provisions banning specific AI systems. This is an essential achievement from an ethical perspective, as it prevents the use of several technologies that may violate human rights. Figure 3 shows the AI practices subject to prohibition. The prohibition applies to placing such technologies on the market, putting them into service, and using them.
Such systems distort human behaviour by impairing the ability to make informed decisions, in ways that cause or are likely to cause significant harm to that person or another person.


The AI Act should not be viewed as the final legislative framework, as efforts to improve AI regulation are underway in other areas. In particular, work is underway to implement the AI Convention, a document designed to ensure human rights, democracy and the rule of law in AI development.
The review shows that the EU actively addresses ethical and other issues in AI development. However, some documents are still under development or approval, and much work remains to be done, so it is not easy to draw definitive conclusions about their effectiveness. Nevertheless, it is necessary to identify the bottlenecks in such documents, because the sooner the problems are identified, the sooner work can begin on resolving them.

The first document to be reviewed is the AI Act. The main shortcomings of its final text include:
- high-risk AI systems must meet accessibility requirements, but this does not apply to medium- and low-risk systems;
- government agencies or organisations acting on their behalf that develop high-risk systems are required to register such systems, but the private sector and security agencies are not subject to this requirement;
- the document contains provisions for assessing the impact on fundamental rights, which increases transparency. However, it lacks a substantive assessment and commitment to prevent negative consequences and provisions for mandatory stakeholder engagement. In addition, the transparency exemptions for law enforcement and migration authorities are questionable;
- the lack of a precise definition of “victim” and of specific rights and remedies for victims;
- double standards regarding the rights of people outside the EU – for example, there is no ban on EU companies exporting AI systems that are banned in the EU to other countries;
- the law does not apply to systems developed or used exclusively for national security purposes, regardless of the company’s ownership. This creates a loophole for automatically exempting specific AI systems from verification (Article19, 2024).
The draft of the COE AI Convention can be distinguished by such advantages as the possibility of filing a complaint, great attention to ensuring human dignity and individual autonomy, and its attention to public consultation processes. However, this document is not without drawbacks, primarily related to the lack of guarantees of fundamental human rights:
- covering the public and private sectors equally would contribute to improving human rights. However, for the private sector, the document outlines a different approach, which involves optional regulation;
- the exclusion of AI systems used to ensure national security is a disadvantage, since these systems may pose a risk to human rights;
- the absence of such components as human oversight of AI systems and transparent criteria for the prohibited use of AI;
- unclear wording, such as “strive to ensure”, “where possible” or “following national law”, creates additional loopholes and makes it challenging to fulfil obligations (ENNHRI, 2024).
Thus, both documents have significant shortcomings that overlap in essential aspects. In the author’s opinion, special attention should be paid to the limited regulation of AI in the private sector and the limited regulation of systems designed to ensure national security. These shortcomings may outweigh the main advantages of the documents and pose a significant threat to human rights.

The role of AI in the protection of human rights in the Middle East (in the example of Israel)
Israel’s vision of AI development differs from the EU’s in certain aspects. Israel positions itself as a country of innovation. It avoids planning for the distant future, which may explain its choice to adopt a more flexible national programme instead of a national strategy.
In 2018, Israel launched the National Initiative for Secured Intelligent Systems (from now on referred to as the Initiative), designed to create a national AI strategy. The ethical provisions in the document were based on those proposed by the EU, but there were differences. In particular, in the EU’s vision, AI systems should be human-centric, whereas the Israeli Initiative does not give citizens effective tools to enforce their rights. At the same time, the authors note that the development of AI should be based on trust, although the document also lacks mechanisms for building trust.
However, several members of the Initiative’s steering committee opposed the conclusions drawn in the report. The idea of creating a new AI Directorate, as envisaged in the strategic plan, was rejected. In a new report (the Telem report), the comprehensive vision proposed by the Initiative was changed to focus on four aspects: infrastructure, talent, regulation, and access to data.
Instead of a strategy, the Telem report recommended creating a national programme. The new report lists the following ethical principles: transparency, clarity, confidentiality, data protection, and cybersecurity. Both documents’ principles are somewhat abstract, and no tools are provided for assessing potential harm. Also, judging by the content of both documents, the Israeli approach lacks any significant involvement of citizens. That is, citizens do not have adequate tools to control the use of technology.
A subsequent document, Israel’s Policy on Artificial Intelligence Regulation and Ethics 2023 (Ministry of Innovation, Science & Technology, 2023), published by the Israeli Ministry of Innovation, Science and Technology, emphasises the need for a unified national regulatory policy on AI. Table 1 compares the ethical principles in this document with those presented in the European AI Act.

Ultimately, the ethical and responsible AI principles in Israel and the EU are consistent. Each principle proposed in the AI Act has a counterpart in the Israeli approach with certain modifications. In particular, the Israeli approach specifically emphasises promoting Israel’s leadership in innovation alongside sustainable development. The corresponding principle in the EU is more general and proclaims the pursuit of social and environmental well-being.
- Discussion
The analysis in this paper shows significant gaps in the existing regulatory proposals for AI regulation. These gaps leave loopholes for using AI in ways that may lead to human rights violations.
Smuha (2020) argues that, for human rights to become a valid basis for AI regulation, it is necessary to clearly define the applicability and vulnerability of human rights in the AI system, to concretise legal interpretations of human rights where they are too abstract, and to evaluate mechanisms for enforcing such rights. Noting that human interaction with AI requires both caution and boldness in the right balance, Shaelou and Razmetaeva (2024) outline the following principles for harmonious human and AI coexistence: renewed fundamental rights, core values embedded in AI design, and an uncompromising regulatory framework for the protection of human rights, democracy, and the rule of law. However, their study lacks specificity. As the author’s work has shown, even the principles of human rights, democracy and the rule of law enshrined in legislative documents do not guarantee their observance in various scenarios, leading to violations.
Some studies have shown that ensuring human rights in digital space requires adding many new rights to the list of such rights. Dror-Shpoliansky and Shany (2021) argue that human rights in cyberspace cannot be fully ensured by relying solely on the existing legal framework to protect human rights. For example, such a human right in cyberspace as the right not to be subject to an automatic decision has no corresponding analogues among offline rights.

Also, when it comes to the right to access the Internet, offline law lacks provisions to prevent violations such as interference with Internet access. When examining the impact of AI on the rule of law, Imran et al. (2023) also cite its integration into automated decision-making systems as an example. Such systems can be used, for instance, by judges. Scholars note the lack of transparency and fairness in making such decisions. However, the author’s article found that the existing gaps in legislative proposals concern not only new digital rights but also call into question the implementation of fundamental offline rights.
Muller (2020) believes that ensuring human rights in the context of AI development is possible by establishing specific prohibitions, in particular on certain forms of biometric recognition, mass surveillance, personal surveillance, hidden AI systems, and deep fakes. Muller would allow the use of such capabilities only under exceptional control for national security or medical purposes. As the author’s article reveals, regulating AI in national security is one of the most controversial issues, and the lack of proper regulation in this area may harm human rights.
In identifying the main challenges related to AI, Rodrigues (2020) touches upon political and legal issues, technical shortcomings, and the lack of multiple stakeholders. The researcher believes political and legal issues can be resolved, but technical shortcomings require significant attention, as the main problems and biases are formed at the design stage.
Regarding the stakeholder deficit, the researcher notes that certain groups of people are underrepresented in the global AI discourse, and other stakeholders promote their own, sometimes opposing, interests (innovation, profit, ethical standards). While agreeing with the importance of the researcher’s highlighted problem of the lack of multiple stakeholders, it should be noted that political and legal issues may pose the most significant challenges. They can be resolved only with the consent of all stakeholders who may have opposing interests, which makes the task much more difficult.

Ulnicane (2022) notes that in the competition for AI development, the EU seeks to differentiate itself from countries such as China and the United States by emphasising an ethical, human-centred approach. The Union’s AI strategy states that the EU can lead in developing and applying AI “for the good and for all”. However, Gstrein (2022) suggests that the EU’s approach to AI regulation is mainly focused on standardisation and falls short of promoting human rights and human dignity.
This question was prompted by the study of the European Commission’s proposal for the AI Act. Van Kolfschooten and Shachar (2023) argue that the AI Act is intended to optimise AI products. The regulation of ethical aspects is mainly covered in the Draft AI Convention.
At the same time, this document is not without its drawbacks. In particular, the problems relate to the document’s ambitious goals of achieving global consensus, as some countries (the United States, Canada, the United Kingdom, and Israel) seek to limit the convention’s scope to the public sector. This article reflects the problems identified in the researchers’ works. However, in the author’s opinion, the main problem is the insufficient regulation of AI in the private sector and national security.
Paltieli (2022) reveals the reasons for Israel’s adoption of an AI programme instead of an AI strategy. The researcher found that the programme is a more flexible tool promoting AI innovation. At the same time, the researcher emphasises that the choice in favour of the programme may not provide a proper regulatory and ethical framework. The author draws on Paltieli’s work in this research. Still, given that that study dates back to 2022, it does not address more recent regulations, particularly Israel’s Policy on Artificial Intelligence Regulation and Ethics 2023 (Ministry of Innovation, Science & Technology, 2023). In the author’s opinion, this document emphasises ethics, but its principles are also not legally binding.

Rizk (2020) concludes that an integrated set of freedoms for citizens and small companies is needed to ensure a favourable environment for the use of AI. The researcher states that investing in technology alone does not promote inclusion and can exacerbate divisions. These comments are also relevant to this study, given the conclusions drawn for Israel.
The existing contradictions in the discussions on AI regulation make developing recommendations for improving the situation important. For example, Latonero (2018) believes that human rights in the context of the growing role of AI can be ensured through cooperation between technology companies and the public and academics, risk assessment throughout the software product life cycle, governments fulfilling their human rights obligations, collaboration between different stakeholders, etc. This is a valid point, especially regarding the government’s exclusive role in ensuring human rights.
- Conclusions
The introduction of proper AI regulation is an objective necessity and should be carried out with the state’s participation to exclude the possibility of offences. The analysis of the EU and Israeli legislative initiatives is a valuable contribution to current research on the problem of AI regulation. The legislative initiatives of both the EU and Israel have significant shortcomings.
The primary deficiencies identified revolve around the limited regulation of AI in certain areas. These areas include the private sector and national security. We can also note the use of AI-based automated decision-making systems by judges, which may affect the transparency of decisions in criminal law. Insufficient regulation in these areas creates loopholes that can harm human rights.
The ethical principles of AI regulation in Israel and the EU are broadly consistent, but some peculiarities exist. For instance, Israel prioritises the pursuit of leadership in innovation. The EU positions itself as a union for which human values are paramount in the technology race. However, as the analysis has shown, current initiatives do not sufficiently meet this statement.
Relevant adjustments should be made while the process of forming a legislative framework for AI regulation is still in its infancy; otherwise, human rights will be at significant risk. Further research should determine how the shortcomings identified in this paper may affect human rights and what protective mechanisms can be proposed.
Below are the recommendations that should be taken into account in the process of improving the legislative framework regarding AI:
- review the requirements for registration of developers of AI systems in the private sector and the field of national security;
- ensure mandatory involvement of interested parties;
- clarify the definitions of key concepts in regulatory documents and revise inaccurate wording;
- eliminate the possibility of allowing double standards.

References
Allegri, M. R. (2022). The right to be forgotten in the digital age. What People Leave Behind, 237-251. https://doi.org/10.1007/978-3-031-11756-5_15
Almeida, F., Santos, J. D., & Monteiro, J. A. (2020). The challenges and opportunities in the digitalisation of companies in a post-COVID-19 World. IEEE Engineering Management Review, 48(3), 97-103. https://doi.org/10.1109/EMR.2020.3013206
Article19. (2024). EU: AI Act Fails to Set Gold Standard for Human Rights. Retrieved from https://www.article19.org/resources/eu-ai-act-fails-to-set-gold-standard-for-human-rights/
Ben-Israel, I., Matania, E., & Friedman, L. (2020). Harnessing innovation: Israeli perspectives on AI ethics and governance (Report for CAHAI). Retrieved from https://sectech.tau.ac.il/sites/sectech.tau.ac.il/files/CAHAI%20-%20Israeli%20Chapter.pdf
Council of the European Union. (2024). Artificial Intelligence Act, No. 2021/0106(COD). Retrieved from https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf
Dror-Shpoliansky, D., & Shany, Y. (2021). It’s the end of the (offline) world as we know it: From human rights to digital human rights – A proposed typology. European Journal of International Law, 32(4), 1249-1282. https://doi.org/10.1093/ejil/chab087
ENNHRI. (2024). Draft Convention on AI, human rights, democracy and rule of law finalised: ENNHRI raises concerns. Retrieved from https://ennhri.org/news-and-blog/draft-convention-on-ai-human-rights-democracy-and-rule-of-law-finalised-ennhri-raises-concerns/
Council of Europe. (1950). European Convention on Human Rights. Retrieved from https://www.echr.coe.int/documents/d/echr/convention_ENG
Geist, M. A. (2021). AI and international regulation. Artificial Intelligence and the Law in Canada (Toronto: LexisNexis Canada, 2021). Retrieved from https://ssrn.com/abstract=3734671
Gilbert, N. (2024). 70 vital artificial intelligence statistics: 2024 data analysis & market share. Finances Online. Retrieved from https://financesonline.com/artificial-intelligence-statistics/
Gstrein, O. J. (2022). European AI regulation: Brussels effect versus human dignity? Zeitschrift für Europarechtliche Studien (ZEuS), 4. https://dx.doi.org/10.2139/ssrn.4214358
Habibi, F., & Zabardast, M. A. (2020). Digitalisation, education and economic growth: A comparative analysis of Middle East and OECD countries. Technology in Society, 63, 101370. https://doi.org/10.1016/j.techsoc.2020.101370
Imran, M., Murtiza, G., & Akbar, M. S. (2023). The impact of artificial intelligence on human rights, democracy and the rule of law. Journal of Politics and International Studies, 9(01), 15-29. Retrieved from https://plantsghar.com/index.php/45/article/view/127/127
Kolt, N. (2024). Governing AI agents. SSRN. https://dx.doi.org/10.2139/ssrn.4772956
Kwilinski, A., Hnatyshyn, L., Prokopyshyn, O., & Trushkina, N. (2022). Managing the logistic activities of agricultural enterprises under conditions of digital economy. Virtual Economics, 5(2), 43–70. https://doi.org/10.34021/ve.2022.05.02(3)
Latonero, M. (2018). Governing artificial intelligence: Upholding human rights & dignity. Data & Society. Retrieved from https://apo.org.au/sites/default/files/resource-files/2018-10/apo-nid196716.pdf
Leslie, D., Burr, C., Aitken, M., Cowls, J., Katell, M., & Briggs, M. (2021). Artificial intelligence, human rights, democracy, and the rule of law: A primer. arXiv. https://doi.org/10.48550/arXiv.2104.04147
Ministry of Innovation, Science & Technology. (2023). Israel’s Policy on Artificial Intelligence Regulation and Ethics 2023. Retrieved from https://www.gov.il/BlobFolder/policy/ai_2023/en/Israels%20AI%20Policy%202023.pdf
Muller, C. (2020). The impact of artificial intelligence on human rights, democracy and the rule of law. Strasbourg, Council of Europe. Retrieved from https://allai.nl/wp-content/uploads/2020/06/The-Impact-of-AI-on-Human-Rights-Democracy-and-the-Rule-of-Law-draft.pdf
Myovella, G., Karacuka, M., & Haucap, J. (2020). Digitalisation and economic growth: A comparative analysis of Sub-Saharan Africa and OECD economies. Telecommunications Policy, 44(2), 101856. https://doi.org/10.1016/j.telpol.2019.101856
Nagy, N. (2023). “Humanity’s new frontier”: Human rights implications of artificial intelligence and new technologies. Hungarian Journal of Legal Studies. https://doi.org/10.1556/2052.2023.00481
Paltieli, G. (2022). Visions of innovation and politics: Israel’s AI initiatives. Discover Artificial Intelligence, 2(1), 8. https://doi.org/10.1007/s44163-022-00024-6
Ravia, H., & Hammer, D. (2023). Israel publishes AI regulatory policy document. Pearl Cohen. Retrieved from https://www.pearlcohen.com/israel-publishes-ai-regulatory-policy-document/
Rizk, N. (2020). Artificial intelligence and inequality in the Middle East: The political economy of inclusion. In M. D. Dubber, & F. Pasquale (Eds.), The Oxford Handbook of Ethics of AI (pp. 625-649). Oxford Academic. https://doi.org/10.1093/oxfordhb/9780190067397.013.40
Rodrigues, R. (2020). Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology, 4, 100005. https://doi.org/10.1016/j.jrt.2020.100005
Salgado-Criado, J., & Fernández-Aller, C. (2021). A wide human-rights approach to artificial intelligence regulation in Europe. IEEE Technology and Society Magazine, 40(2), 55-65. https://doi.org/10.1109/MTS.2021.3056284
Shaelou, S. L., & Razmetaeva, Y. (2024). Challenges to fundamental human rights in the age of artificial intelligence systems: Shaping the digital legal order while upholding rule of law principles and European values. ERA Forum. https://doi.org/10.1007/s12027-023-00777-2
Shneiderman, B. (2020). Human-centered artificial intelligence: Three fresh ideas. AIS Transactions on Human-Computer Interaction, 12(3), 109-124. https://doi.org/10.17705/1thci.00131
Smuha, N. A. (2020). Beyond a human rights-based approach to AI Governance: Promise, pitfalls, plea. Philosophy & Technology, 34(1), 1-14. https://doi.org/10.1007/s13347-020-00403-w
The EU Artificial Intelligence Act. (2024). Up-to-date developments and analyses of the EU AI Act. Retrieved from https://artificialintelligenceact.eu/
Ulnicane, I. (2022). Artificial intelligence in the European Union: Policy, ethics and regulation. In T. Hoerber, G. Weber, & I. Cabras (Eds.), The Routledge Handbook of European Integrations (pp. 1-16). London: Taylor & Francis. https://doi.org/10.4324/9780429262081-19
Van Kolfschooten, H., & Shachar, C. (2023). The Council of Europe’s AI Convention (2023–2024): Promises and pitfalls for health protection. Health Policy, 138, 104935. https://doi.org/10.1016/j.healthpol.2023.104935
Završnik, A. (2020). Criminal justice, artificial intelligence systems, and human rights. ERA Forum, 20(4), 567-583. https://doi.org/10.1007/s12027-020-00602-0

[1] The author holds a PhD and is an Associate Professor at the Department of Civil Procedure, National University «Odesa Law Academy», Odesa, Ukraine. She can be reached at Suleimanovasr@gmail.com