Regulation

European Commission Releases Analysis of Stakeholder Feedback on AI Definitions and Prohibited Practices Public Consultations

European Commission analyzes stakeholder feedback on AI definitions and prohibited practices gathered through public consultations, informing the application of the AI Act. The report notes that the majority of responses came from industry, alongside calls for clearer definitions and concerns over prohibited practices such as emotion recognition and social scoring. Guidelines have been issued to assist stakeholders with compliance and will evolve based on feedback and new use cases.

https://digital-strategy.ec.europa.eu/en/library/european-commission-releases-analysis-stakeholder-feedback-ai-definitions-and-prohibited-practices

AI Act Deadline Missed as EU GPAI Code Delayed Until August, Richard Barker

The EU's General-Purpose AI Code of Practice missed its May 2 release deadline and is now expected by August, delaying related AI Act provisions. Reasons for the delay include allowing time for feedback and gauging support from AI providers. Political solutions may be necessary if the code is not finalized by August, while tech developers face additional regulatory challenges in the meantime.

https://thelens.slaughterandmay.com/post/102karg/ai-act-deadline-missed-as-eu-gpai-code-delayed-until-august

NIS2 Directive: New Rules on Cybersecurity of Network and Information Systems

The NIS2 Directive enhances EU cybersecurity rules across 18 sectors, requiring member states to develop national strategies, manage risks, report incidents, and establish accountability. It expands coverage beyond energy and healthcare to include public services and digital platforms, and fosters cooperation and information sharing among member states through CSIRTs and networks like EU-CyCLONe. The legislation, in force since January 2023, supersedes NIS1 and aims for heightened security amid rising cyber threats. Member states must comply by October 2024.

https://digital-strategy.ec.europa.eu/en/policies/nis2-directive

EU Clarifies AI Act’s Prohibited Practices With New Guidelines

EU issues guidelines clarifying prohibited AI practices under the AI Act. Key prohibitions include manipulative techniques, social scoring, risk assessments predicting criminal behavior based solely on profiling, untargeted facial image scraping, emotion recognition in workplaces and educational settings, biometric categorization of sensitive traits, and real-time biometric identification for law enforcement. The guidelines establish legal certainty, refine definitions, and highlight the interplay with existing EU laws. Safeguards for exemptions will require impact assessments on fundamental rights.

https://natlawreview.com/article/european-commissions-guidance-prohibited-ai-practices-unraveling-ai-act

States Are Passing AI Laws; What Do They Have in Common?

States are enacting AI laws influenced by the EU AI Act. Common features include disclosure of AI-generated content, use-case transparency, regulations for high-risk applications, and anti-discrimination measures. States like California, Colorado, and Utah lead in these regulations, emphasizing transparency and stakeholder compliance, with potential sanctions for non-compliance. Companies must align with these laws through governance programs, risk assessments, and ethical practices.

https://www.corporatecomplianceinsights.com/states-passing-ai-laws-what-do-they-have-common/

EU Sails Past Deadline to Tame AI Models Amid Vocal US Opposition

The EU missed its deadline to regulate AI models amid US lobbying, with concerns over new rules following the surge in AI use after ChatGPT. Efforts to establish a "code of practice" for AI models face criticism from US tech firms and concern from European lawmakers that the rules are being diluted. The US government has echoed these criticisms, complicating the EU's regulatory ambitions. The outcome hinges on cooperation from major AI companies as the August 2 compliance deadline approaches.

https://www.politico.eu/article/eu-deadline-artificial-intelligence-models-lobbying/

Corporate Compliance Under the EU Artificial Intelligence Act: Legal Framework and Strategic Implications

EU's Artificial Intelligence Act establishes a comprehensive legal framework for AI, imposing obligations on companies within and outside the EU. It adopts a risk-based approach requiring compliance assessments, internal policies on generative AI, and ongoing monitoring after deployment. The Act categorizes AI systems by risk level, outlines compliance procedures, and mandates transparency and incident reporting. Non-compliance can result in significant penalties. The Act aims to unify the internal market, mitigate risks, and foster trustworthy AI development. Companies must proactively embrace compliance for strategic advantage.

https://www.leadersleague.com/en/news/corporate-compliance-eu-artificial-intelligence-act

Beyond Safe Models: Why AI Governance Must Tackle Unsafe Ecosystems

AI governance must shift its focus from ensuring model safety alone to addressing the risks of unsafe deployment ecosystems, which are shaped by institutional contexts, conflicting incentives, and inadequate oversight. While initiatives like the EU AI Act emphasize technical compliance, they often ignore the broader environment in which AI is used, leading to harmful outcomes such as discrimination and misinformation. Effective governance requires assessing deployment contexts, aligning institutional incentives, ensuring accountability, and establishing adaptive oversight to manage emerging risks, recognizing that AI's dangers stem both from how systems operate and from the settings they inhabit.

https://www.techpolicy.press/beyond-safe-models-why-ai-governance-must-tackle-unsafe-ecosystems/

What if the EU Was Really Serious About AI?

The EU's AI strategy lags behind the US and China. To become competitive, it should:

  1. Infrastructure: Increase investment in cloud computing and partner with US tech firms.
  2. Data: Simplify data regulations, enhance open data access, and incentivize data sharing.
  3. AI Adoption: Set ambitious AI targets and focus on outcomes in public contracts.
  4. Skills and Talent: Fund AI academic positions and pivot education programs toward AI skills.
  5. (De)Regulation: Streamline regulations for ease of use while ensuring safety.

Addressing defense AI and promoting global leadership in open-source AI are also vital. Europe has the resources; bold action is required to catch up.

https://cepa.org/article/what-if-the-eu-was-really-serious-about-ai/
