Regulation

Prohibited Practices Under the AI Act: Answered and Unanswered Questions in the Commission’s Guidelines

The EU AI Act prohibits certain harmful AI practices and backs the bans with hefty fines for non-compliance. Key prohibitions include manipulation, exploitation of vulnerabilities, social scoring, and emotion recognition in workplaces and education. The Commission's guidelines clarify ambiguous areas, such as how the prohibitions apply to 'providers' and 'deployers', the definition of an AI system, and risks in targeted advertising. Violations can incur significant penalties, and there is no grandfathering for existing practices. Compliance requires careful assessment and integration into governance processes to avoid breaches. Enforcement begins once the market surveillance authorities are designated, by August 2025.

https://www.insidetechlaw.com/blog/2025/03/prohibited-practices-under-the-ai-act-answered-and-unanswered-questions

Navigating The EU AI Act: Critical Insights For CTOs And CIOs

The EU AI Act mandates compliance for AI use in the EU, with the first obligations applying from February 2025. Non-compliance risks fines of up to €35 million, affecting all businesses that use AI. The Act categorizes AI systems by risk and prohibits harmful practices such as deceptive AI, social scoring, and predictive policing. CTOs and CIOs must prioritize risk assessments and governance protocols to align with the regulation while continuing to innovate. Key steps: comprehensive audits, governance implementation, legal engagement, and vendor compliance checks.

https://www.forbes.com/councils/forbestechcouncil/2025/03/05/navigating-the-eu-ai-act-critical-insights-for-ctos-and-cios/

Law Under Tech? On Standardization and the Hidden Rule Makers Under the EU AI Act

The EU AI Act combines fundamental rights protections with technical standards for certifying AI systems, raising due-process concerns as standardization bodies assume quasi-legislative roles. The Act allows self-certification or third-party assessment against harmonized standards, yet those standards are delayed, risking compliance gaps. This empowers conformity assessment bodies (CABs) to fill the void, acting like “activist judges” on human rights issues despite lacking expertise in that area. While CABs must maintain objectivity and transparency, they may struggle to align their assessments with the legal framework.

https://www.law.kuleuven.be/citip/blog/law-under-tech-on-standardization-and-the-hidden-rule-makers-under-the-eu-ai-act/

From Meta to Airbnb, Companies Flag Risks Dealing With EU AI Act

Over 70 U.S. companies, including Meta and Airbnb, are highlighting potential risks from the EU's AI Act in their financial disclosures. This regulation imposes compliance costs and could force changes in product offerings. Firms express concerns about civil claims, fines for breaches, and ambiguity in the law's requirements. The Act's enforcement could apply differently across EU member states, adding to uncertainty. Companies emphasize the importance of understanding these regulations for operating in or entering the EU market.

https://news.bloomberglaw.com/financial-accounting/from-meta-to-airbnb-companies-flag-risks-dealing-with-eu-ai-act

AI Companies Battle Over Europe’s AI Act as Creatives Push Back

AI companies, led by OpenAI, are challenging transparency requirements in Europe's AI Act, particularly around informing content creators when their works are used as training data. As the August 2, 2025 deadline approaches, creatives demand compensation, citing copyright infringement and the use of their works without consent. European rightsholders, including journalist groups, feel inadequately protected and are exercising opt-outs to prevent unauthorized use of their works, while AI firms argue the rules hinder innovation. France, a key player in both AI development and cultural protection, is navigating this landscape as it balances technological advancement with artists' rights.

https://variety.com/2025/digital/global/ai-companies-battle-europe-ai-act-creatives-push-back-1236302611/

Navigating AI Regulation on Both Sides of the Atlantic

The EU and the US are on diverging AI legislation paths: the US is easing regulation to spur innovation, while the EU prioritizes societal risks through the AI Act. Companies face challenges navigating these regimes, which can slow development. Experts suggest embracing self-regulation for low-risk AI applications and seeking external guidance to manage compliance effectively.

https://www.tietoevry.com/en/blog/2025/02/navigating-ai-regulation-on-both-sides-of-the-atlantic/

EU AI Act Unpacked #22: Key Considerations for Employers as Deployers Vs. Providers Under the EU AI Act

The EU AI Act assigns employers roles as either deployers or providers of AI systems, which determines their obligations. Deployers use AI systems under their authority, while providers develop systems or place them on the market; a deployer that substantially modifies a system or markets it under its own name can become a provider. Employers must understand the compliance requirements, especially for high-risk AI applications, including monitoring, transparency, and data protection. They must also ensure AI literacy among users, an obligation that applies from February 2025. Because the deployer-versus-provider classification can change based on what an employer does with an AI system, careful assessment is needed.

https://www.lexology.com/library/detail.aspx?g=11f71f6b-e110-4e8c-bcc4-183c38ec9746

Tech Giants Push Back at a Crucial Time for the EU AI Act

Tech giants are pushing back on the EU AI Act, which sets out general principles but leaves implementation details to later instruments. Key compliance requirements are spelled out in a forthcoming Code of Practice that is facing delays, which some attribute to industry pressure. Major companies such as Meta and Google are challenging the rules, arguing they hurt competitiveness and lobbying for changes. Concerns center on copyright in AI training data and independent risk assessments. The fight over the AI Act highlights the balance between innovation and safety as regulatory action intensifies worldwide.

https://www.pymnts.com/artificial-intelligence-2/2025/tech-giants-push-back-at-a-crucial-time-for-the-eu-ai-act/
