Monday, March 24, 2025

ARTIFICIAL INTELLIGENCE IS NOW PART OF OUR DAILY LIVES

Filenews 24 March 2025 - by Dora Christodoulou



Artificial Intelligence is rapidly expanding into ever more aspects of our lives, creating new, multi-layered challenges: it has the potential to bring enormous benefits to human development, but at the same time it presents significant risks.

Speaking to "F" about this issue, which occupies a growing part of citizens' daily lives, Dr. Eleni Gabriel, lecturer in International Economic Law and Human Rights at Neapolis University Paphos, emphasizes that the most important risks are social division, discrimination, the violation of freedoms through mass surveillance, and the violation of the right to a fair trial.



"Governments, institutions, scientists and now civil society are focusing ever more intensely on the need to regulate Artificial Intelligence because of the risks its use entails," observes Dr. Gabriel, noting that "at the international level, it is now understood that legislative guardrails are required for the development and use of Artificial Intelligence technology. In light of this urgent need, on 14 June 2023 the European Parliament voted in favour of its negotiating position on a regulatory proposal that will be the first comprehensive law on the use of Artificial Intelligence. The proposal received 499 votes in favour, 28 against and 93 abstentions. 'We are making history!', the President of the European Parliament, Roberta Metsola, posted on Twitter."

Referring to the detailed provisions of the European Artificial Intelligence Act, Dr. Eleni Gabriel first explains the European Parliament's purpose in establishing this regulatory framework. Parliament aims to shield EU citizens from the threats of Artificial Intelligence: "The relevant laws will promote human-centric and trustworthy Artificial Intelligence that protects health, safety and fundamental rights. In this regard, the Artificial Intelligence systems developed and used in the member states will be in line with the principles of the European Union."

At the same time, she notes that "the bill will also set rules for the protection of democracy, the prohibition of discrimination, the promotion of environmental responsibility and, more generally, the building of trust in Artificial Intelligence. It also seeks to create a favourable environment by supporting research and business. The ultimate goal is to strike the right balance between technological development and the protection of fundamental rights. Through these safeguards, all citizens will be ensured the benefits of Artificial Intelligence."

Four levels of risk from the use of AI systems

Referring to what the new regulatory framework provides, the lecturer in International Economic Law and Human Rights at Neapolis University Paphos points out that the cornerstone of the new rules governing Artificial Intelligence (AI) is the establishment of obligations based on the level of risk posed by its use. "In other words, AI systems are classified according to the risk their use can cause," she points out. "Depending on the risk, the obligations of states and companies vary. In this context, four risk levels are defined: unacceptable risk, high risk, limited risk and minimal/zero risk.

"The first category includes tools that pose a threat to people and will therefore not be allowed. These include social scoring systems, systems for the cognitive-behavioural manipulation of vulnerable individuals or groups, and real-time remote biometric identification systems."

The second category, according to the academic of Neapolis Paphos, includes, among others, systems used in critical sectors such as law enforcement, migration and asylum management, education and professional life: "For example, the misuse of Artificial Intelligence can lead to biased decision-making and discrimination based on gender or nationality when hiring or firing employees. According to research by Amnesty International, biometric recognition tools reinforce racist and biased law enforcement, disproportionately affecting specific racial groups. Furthermore, these systems can identify and analyse profiles of individuals, thus violating freedom of expression and assembly."

The third category, she says, includes Artificial Intelligence systems of limited risk. Under the bill, tools such as ChatGPT will have to comply with minimum transparency requirements by informing users that they are interacting with AI. As Dr. Gabriel says, "companies' compliance with the new rules will be enforced through penalties in the form of heavy fines. It is noted that fines can reach up to €30 million or up to 6% of a company's annual global turnover. The General Data Protection Regulation (GDPR) is cited here as a model, having been an essential step in strengthening fundamental rights in the digital age."

Finally, Eleni Gabriel particularly emphasizes the importance of this legislative framework, stressing that the European Union wants to be the first to establish comprehensive rules governing Artificial Intelligence tools and thereby to lead current developments. "By contrast, countries such as the United States and the United Kingdom have not proceeded to regulation but have confined themselves to studying the issue and publishing non-binding texts. This European bill can be a milestone in the effort to regulate Artificial Intelligence and mitigate the risks of its use. In this way, the EU is acting as a pioneer in the legislative regulation of Artificial Intelligence," concludes the lecturer in International Economic Law and Human Rights at Neapolis University Paphos.