The European Union's recent approval of the Artificial Intelligence (AI) Act marks a significant milestone in the development and regulation of this technology on a global scale. Far more than a mere set of rules, the law carries profound implications for ethics, technological innovation, and the protection of human rights.
One of the most prominent features of the legislation is its focus on safety and fundamental rights. By categorizing AI systems according to the risk they pose, from “minimal risk” through “limited” and “high” risk up to “unacceptable risk,” the law seeks to balance the need to promote innovation with the protection of citizens. AI systems used in the European Union must therefore be safe and respect human rights, ruling out applications that could cause harm or discrimination.
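To make the tiered structure concrete, here is a minimal Python sketch that models the Act's risk tiers as a simple classification. The tier names follow the Act, but the example obligations attached to each tier are illustrative assumptions rather than the legal text:

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's risk tiers, ordered from least to most restrictive."""
    MINIMAL = "minimal risk"            # e.g. spam filters
    LIMITED = "limited risk"            # e.g. chatbots with disclosure duties
    HIGH = "high risk"                  # e.g. critical-infrastructure systems
    UNACCEPTABLE = "unacceptable risk"  # e.g. social scoring: banned outright

# Illustrative mapping of tiers to obligations; a simplification, not the legal text.
OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["inform users they are interacting with AI"],
    RiskTier.HIGH: ["risk management", "conformity assessment", "human oversight"],
    RiskTier.UNACCEPTABLE: ["prohibited on the EU market"],
}

for tier in RiskTier:
    duties = OBLIGATIONS[tier] or ["no additional obligations"]
    print(f"{tier.value}: {', '.join(duties)}")
```

The core idea the sketch captures is that obligations scale with risk: the lowest tier adds nothing, the middle tiers add transparency and compliance duties, and the top tier is an outright prohibition.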
The law also specifically addresses certain particularly dangerous uses of AI, such as cognitive behavioural manipulation, real-time remote biometric identification in public spaces, and biometric categorization based on sensitive characteristics like sexual orientation or race. By banning these applications outright and imposing strict requirements on others, such as systems used in critical infrastructure, the legislation aims to protect the privacy and dignity of individuals.
Another crucial aspect of the law is its emphasis on transparency and accountability. AI models used in the European Union must comply with transparency obligations before being placed on the market, meaning that citizens have the right to know how decisions affecting them are made. Additionally, “high-impact” models must undergo risk assessments, report serious incidents, and ensure cybersecurity, contributing to greater accountability in the development and use of AI.
Beyond its direct implications for technology and ethics, the AI legislation also has economic and geopolitical repercussions. By setting clear, mandatory standards for the development and use of AI on the European market, the European Union positions itself as a global leader in promoting responsible and ethical practices. This can influence not only the development of international standards but also the competitiveness and attractiveness of the European market for AI investment and innovation.
Implications for ChatGPT
The approval of the EU's new AI legislation will have significant implications for AI systems like ChatGPT and for their developers. As part of the category of “general-purpose AI models,” ChatGPT will be subject to the specific transparency and accountability obligations outlined in the law.
One of the main implications for ChatGPT will be the need to comply with transparency obligations before being placed on the European market. Its developers will have to provide detailed information on how the model works, what data it was trained on, and how it reaches its outputs. They will also need to disclose any known biases or limitations of the system, as well as its potential impact on users.
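As a rough illustration of what such disclosures could look like if captured in a structured, machine-readable form, consider the sketch below, loosely inspired by the “model card” practice. The schema and its field values are hypothetical assumptions, not a format prescribed by the law:

```python
from dataclasses import dataclass

@dataclass
class TransparencyRecord:
    """Hypothetical disclosure record for a general-purpose AI model.

    The fields mirror the kinds of information the transparency
    obligations call for; the schema itself is illustrative, not official.
    """
    model_name: str
    provider: str
    training_data_summary: str   # how the training data was sourced
    intended_uses: list[str]
    known_limitations: list[str]
    known_biases: list[str]
    potential_user_impact: str

# Example values are invented for illustration only.
record = TransparencyRecord(
    model_name="ExampleGPT",
    provider="ExampleCorp",
    training_data_summary="Publicly available web text and licensed corpora",
    intended_uses=["drafting text", "answering questions"],
    known_limitations=["may produce plausible but incorrect output"],
    known_biases=["may reflect biases present in web-scale training data"],
    potential_user_impact="Generated text may be mistaken for expert advice",
)

print(f"{record.model_name} ({record.provider}): {record.training_data_summary}")
```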
Furthermore, as a general-purpose AI model, ChatGPT could be deemed “high-impact” under the law, especially if it is used in critical or sensitive applications. That designation would entail increased oversight and risk assessment, as well as obligations to report serious incidents and to ensure the cybersecurity of the system.
Implementation of the Law
Full implementation of the new AI law in the European Union is expected by the end of 2026. Although the legislative agreement has been reached, additional time is needed to draft specific provisions, establish enforcement mechanisms, and ensure that Member States are prepared to comply with the new requirements.
During this transition period, adjustments and revisions are likely as challenges and concerns arise. Companies and organizations that develop or use AI systems will also need to adapt their practices to comply with the obligations established in the law.