EU AI Act: Regulations Impacting ChatGPT, Google Gemini, And Deepfakes: All You Need To Know
It took a whopping five years for the EU AI Act to pass through the European Parliament, indicating the thoroughness and significance of the regulations.
New Delhi: The European Parliament has just given its green light to the world's first comprehensive AI law, the EU AI Act, which will govern how artificial intelligence (AI) is used across the continent. These rules are designed to ensure that humans remain in control of this powerful technology and that it serves the best interests of humanity.
Scope of the Regulations: Which AI Systems Are Covered?
Under the EU AI Act, these regulations will have a broad impact, affecting AI systems such as OpenAI’s ChatGPT and Google’s Gemini, among others. Essentially, any machine-based system operating with some level of autonomy and producing output based on data and inputs, whether from machines or humans, will fall under the purview of these rules. Moreover, companies developing AI for general use, like Google and OpenAI, will need to adhere to EU copyright law during the training of their systems.
Risk-Based Approach: Categories and Scrutiny Levels
A key feature of the EU's AI Act is its risk-based approach. It categorizes AI systems into four risk categories: unacceptable risk, high risk, general purpose and generative AI, and limited risk. The level of scrutiny and requirements placed on AI systems will vary depending on their risk category.
For instance, higher-risk AI models, such as OpenAI's GPT-4 and Google's Gemini, will face additional scrutiny due to their potential to cause significant accidents or be misused for cyber attacks. Companies developing such AI systems will be required to provide clear information to users and maintain high-quality data on their products.
Banned Uses of AI
The Act also prohibits certain applications of AI deemed to pose unacceptable risk, including the use of AI-powered biometric identification by law enforcement, except in very serious cases. Predictive AI systems aimed at forecasting future crimes are also banned, as are systems designed to track the emotions of students or employees.
Deepfake Labelling and Transparency
Another important provision of the Act mandates the labelling of deepfakes—manipulated images, videos, or audio—to prevent the spread of disinformation. Moreover, companies developing AI, such as OpenAI and Meta, will be required to disclose details about their products that have not previously been made public.
In light of recent events, Google has taken steps to restrict its Gemini chatbot from discussing elections in countries holding elections this year, aiming to mitigate the risk of spreading misinformation.
Implications and Timeline
These regulations mark a significant milestone in ensuring the responsible development and use of AI technology within the European Union. Set to come into force starting in May 2025, they herald a new era of AI governance aimed at safeguarding both individuals and society as a whole.