Artificial Intelligence Act (AIA)
EU regulation for AI that aims to ensure safety, transparency, and accountability. It categorizes AI systems by risk level, mandates compliance for high-risk systems, establishes a governance framework, and promotes innovation while safeguarding fundamental rights.
The Digital Markets Act (DMA) regulates big tech “gatekeeper” platforms, promotes competition, prevents monopolistic behavior, mandates transparency, and enhances user choice in digital services across the EU.
The EU AI Act introduces a regulatory framework for AI in Europe, emphasizing both safety and innovation. Approved in 2024, it categorizes AI systems into risk levels and bans those posing unacceptable risk. Compliance is required for high-risk AI by August 2026, with severe penalties for violations. Organizations must assess their AI use, train staff, and adhere to standards to align with the new rules.
EU aims to lead in AI with the “AI Continent Action Plan,” emphasizing swift, ambitious policies for economic growth and cultural protection through trustworthy, human-centric AI.
https://digital-strategy.ec.europa.eu/en/library/ai-continent-action-plan
The EU plans to amend the GDPR to simplify compliance for SMEs while maintaining data-privacy principles. Changes include streamlined documentation and reduced complexity for businesses with fewer than 500 employees. Intended to boost European digital competitiveness amid AI concerns, the reforms will clarify rules on algorithmic processing and address cross-border data transfer issues. Balancing privacy with law enforcement needs is also under discussion. Legislative proposals are expected by June 2025, aiming to harmonize enforcement standards across EU member states.
https://idtechwire.com/eu-plans-major-gdpr-overhaul-to-ease-business-compliance-rules/
EU plans to ease AI compliance for startups amid regulatory burdens. The Commission seeks feedback to simplify AI Act application following business complaints about red tape.
https://www.reuters.com/world/europe/europe-wants-lighten-ai-compliance-burden-startups-2025-04-08/
The EU AI Office's third draft of the General-Purpose AI Code of Practice outlines commitments for GPAI model providers on copyright compliance under the AI Act, whose GPAI obligations take effect in August 2025. Key obligations include complying with copyright law when using training data, respecting opt-out requests from content creators, and limiting web crawling practices. The streamlined draft emphasizes transparency and governance measures, with a focus on mitigating copyright-infringement risks. It also notes differences between US and EU copyright practice, such as the absence of a US-style “fair use” doctrine in EU law, highlighting the complexity of navigating AI copyright law in Europe. Finalization is expected in May 2025.
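As a purely illustrative sketch (not part of the Code of Practice itself), the opt-out obligation is often operationalized in practice by honoring machine-readable reservations such as robots.txt before crawling content for training data. The snippet below uses Python's standard urllib.robotparser to show what such a check could look like; the crawler name and URLs are assumptions, not terms drawn from the draft.

```python
# Minimal, hypothetical sketch of honoring a machine-readable crawl opt-out
# (robots.txt) before fetching a page for training-data collection.
# "ExampleGPAIBot" and the example.com URLs are illustrative assumptions.
from urllib import robotparser

CRAWLER_USER_AGENT = "ExampleGPAIBot"  # hypothetical GPAI crawler identifier

def may_fetch(url: str, robots_url: str) -> bool:
    """Return True only if the site's robots.txt permits this crawler to fetch url."""
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # downloads and parses robots.txt
    return parser.can_fetch(CRAWLER_USER_AGENT, url)

if __name__ == "__main__":
    # Skip any page the publisher has reserved against crawling.
    target = "https://example.com/articles/some-article"
    robots = "https://example.com/robots.txt"
    if may_fetch(target, robots):
        print("robots.txt permits fetching; proceed with collection.")
    else:
        print("Opt-out detected; do not use this page for training data.")
```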
EU's Community of Practice on AI has released updated non-binding Model Contractual Clauses (MCC-AI) for public procurement of AI systems. Two templates address “high-risk” and “non-high-risk” AI systems, aligned with the EU AI Act. This guidance aims to assist public organizations but may also benefit private companies. Stakeholders are encouraged to report their use of MCC-AI, enhancing AI procurement practices in the public sector.
AILD (AI Liability Directive): proposed EU directive on AI accountability; it would have established liability rules for damages caused by AI systems, aiming to ensure safety and trust in AI technologies while fostering innovation.
The EU's push for AI competitiveness led to the withdrawal of the AI Liability Directive (AILD), raising concerns about accountability for AI-related harms. Big Tech benefits from this retreat by avoiding liability for potential damages. Effective oversight becomes harder given AI's “black box” nature, putting consumer protection at risk. Critics call for a reassessment of AI regulation, rather than deregulation, to safeguard citizens against harmful practices.