European Union Approves Landmark AI Law
KAKALI DAS
On March 13, 2024, the European Parliament approved the EU AI Act, introducing a comprehensive framework to address the risks associated with Artificial Intelligence. This landmark legislation not only sets a precedent for the EU but also stands as a pioneering effort globally.
Europe asserts that the EU AI Act will enable innovation while safeguarding fundamental rights, with a focus on human-centric technology development.
What are the key provisions of the EU AI Act? How does it address the regulation of AI tools such as ChatGPT? Does it include measures to combat deep fakes? What penalties does it impose? And are there expectations for other nations to adopt similar regulations?
Let’s delve into the EU AI Act, a comprehensive legal framework governing Artificial Intelligence and the first of its kind from a major regulator.
The EU Parliament has given its endorsement to the EU AI Act, awaiting formal approval from EU member states.
The core principle of the act is to prioritize human-centric technology by regulating AI according to its potential societal harm. The higher the risk, the stricter the rules.
How does the act define Artificial Intelligence?
While a basic understanding of AI characterizes it as computers performing tasks akin to humans, the EU AI Act offers a more intricate definition, describing it as a machine-based system engineered to function with varying degrees of autonomy. This encompassing definition includes technologies such as chatbots like ChatGPT and Gemini.
How does the bill address the risks associated with AI systems?
The act categorizes AI systems into low-, medium-, and high-risk tiers, with high-risk systems including those used in banking, educational institutions, or critical infrastructure. These systems are required to maintain a high level of accuracy and human oversight, with their usage closely monitored. Additionally, citizens have the right to seek explanations for decisions made by such systems that directly affect them.
The legislation also identifies AI systems that are prohibited due to their potential to cause harm and significantly impact individuals’ lives, such as social scoring systems that categorize people based on social behaviour or personality traits. While China has implemented such a system, it is strictly prohibited in the European Union. Additionally, the legislation outlines AI systems that are exempted from certain regulations.
Not all AI tools fall under the purview of this act. AI tools intended for military, defence, or national security purposes, as well as those designed for scientific research, are exempted. Facial recognition tools, meanwhile, are permitted for law enforcement applications, albeit under strict regulation.
What about deep fakes? The act also addresses the issue of deep fakes, aiming to combat their proliferation. Content that is artificially generated or altered must be appropriately labeled, with individuals, companies, and organizations required to flag such instances.
Generative AI, such as ChatGPT and DALL-E, possesses the capability to create various forms of content, including text, images, and audio. The EU stipulates that these systems must adhere to specific requirements, including compliance with copyright laws and the transparent publication of the data utilized to train them.
What are the consequences if the act is not followed?
Penalties for non-compliance with the act include fines ranging from €7.5 million to €35 million, or a percentage of a company’s global annual turnover, whichever is higher, depending on the severity of the violation. These fines apply to instances such as providing incorrect information to regulators, violating provisions of the act, and developing or deploying prohibited AI tools.
How have tech companies responded to the legislation?
The response from tech companies has been mixed. While they have generally welcomed the legislation, there is lingering wariness regarding the specifics. The act is expected to be enacted around May, with implementation slated to begin in 2025, providing companies with approximately two years to prepare.
The EU sets a precedent with its AI Act, while the US requires AI developers to share safety test data with the government. China has implemented a variety of AI laws. As awareness of the risks associated with AI grows, other nations may follow suit in regulating the technology.
Mahabahu.com is an online magazine with a collection of premium Assamese and English articles and posts with a cultural base and modern thinking. You can send your articles to editor@mahabahu.com / editor@mahabahoo.com (for Assamese articles, Unicode font is necessary).