Regulation

Beyond Safe Models: Why AI Governance Must Tackle Unsafe Ecosystems

AI governance must shift focus from just ensuring model safety to addressing the risks of unsafe deployment ecosystems, which are influenced by institutional contexts, conflicting incentives, and inadequate oversight. While initiatives like the EU AI Act emphasize technical compliance, they often ignore the broader environment affecting AI use, leading to harmful outcomes like discrimination and misinformation. Effective governance requires assessing deployment contexts, aligning institutional incentives, ensuring accountability, and establishing adaptive oversight to manage emerging risks, ultimately recognizing that AI's dangers stem from both its operation and the settings it inhabits.

https://www.techpolicy.press/beyond-safe-models-why-ai-governance-must-tackle-unsafe-ecosystems/

What if the EU Was Really Serious About AI?

The EU's AI strategy lags behind those of the US and China. To become competitive, the EU should:

  1. Infrastructure: Increase investment in cloud computing and partner with US tech firms.
  2. Data: Simplify data regulations, enhance open data access, and incentivize data sharing.
  3. AI Adoption: Set ambitious AI targets and focus on outcomes in public contracts.
  4. Skills and Talent: Fund AI academic positions and pivot education programs toward AI skills.
  5. (De)Regulation: Streamline regulations to ease compliance while ensuring safety.

Addressing defense AI and promoting global leadership in open-source AI are also vital. Europe has the resources; bold action is required to catch up.

https://cepa.org/article/what-if-the-eu-was-really-serious-about-ai/

EU Moves to Clarify AI Act Scope for gen-AI

The EU has proposed compute thresholds to clarify which general-purpose AI (GPAI) models fall under the AI Act, whose GPAI rules take effect in August 2025. The draft guidelines, open to industry feedback via a survey, aim to establish when AI models become subject to regulatory requirements. Key points include defining GPAI models by training compute (≥ 10^22 FLOP), record-keeping obligations, copyright policies, and potential compliance benefits for signatories to a forthcoming code of practice. Critics argue that relying on FLOP is flawed because it may not adequately reflect model capabilities and risks; moreover, modifications exceeding certain compute thresholds may trigger additional compliance obligations.

https://www.pinsentmasons.com/out-law/news/eu-clarify-ai-act-scope-gen-ai

EU AI Office Clarifies Key Obligations for AI Models Becoming Applicable in August

The EU AI Office has issued draft guidelines on the obligations for general-purpose AI (GPAI) models that apply from August 2025; stakeholders can provide feedback until May 22, 2025. The guidelines clarify the AI Act's GPAI provisions, defining GPAI as models capable of performing a wide range of tasks and subject to technical documentation and copyright-compliance requirements. Models trained with more than 10^25 FLOP qualify as GPAI with systemic risk (GPAI-SR) and face stricter requirements, and fine-tuning such models may create new compliance obligations. Companies should establish AI governance, map their AI applications, and prepare for the upcoming rules. Models already on the market before August 2025 must comply by August 2027.
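
Taken together with the ≥ 10^22 FLOP figure in the previous item, the tiering described here reduces to a simple threshold comparison. A minimal Python sketch of that logic, assuming the thresholds quoted above (the function name and tier labels are illustrative, not taken from the guidelines themselves):

    def classify_model(training_compute_flop: float) -> str:
        """Rough tiering based on the draft thresholds summarized above.

        Assumption: >= 1e22 FLOP of training compute is the indicative
        criterion for a general-purpose AI (GPAI) model, and >= 1e25 FLOP
        triggers the presumption of systemic risk (GPAI-SR).
        """
        if training_compute_flop >= 1e25:
            return "GPAI with systemic risk (GPAI-SR): stricter obligations"
        if training_compute_flop >= 1e22:
            return "GPAI: documentation and copyright obligations apply"
        return "Below the indicative GPAI compute threshold"

    # Example: a model trained with ~3e24 FLOP lands in the GPAI tier,
    # well below the systemic-risk presumption.
    print(classify_model(3e24))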

https://www.wsgr.com/en/insights/eu-ai-office-clarifies-key-obligations-for-ai-models-becoming-applicable-in-august.html

The Rise of Responsible AI: Regulation, Ethics & Transparency in 2025

In 2025, responsible AI centers on ethics, regulation, and transparency in AI development. Businesses and governments collaborate on frameworks to enhance accountability and prevent misuse. Key issues include bias, data ethics, and AI explainability. Organizations adopt governance measures and prioritize ongoing monitoring. Ethical AI practices provide competitive advantages and foster trust. Collaboration across sectors is essential for establishing best practices in AI governance.

https://www.techiexpert.com/the-rise-of-responsible-ai-regulation-ethics-transparency-in-2025/

The EU AI Act: How Businesses Using AI Can Avoid New Fees

The EU AI Act, most of whose obligations apply from August 2026, requires organizations using AI in the EU to classify AI systems by risk, implement governance frameworks, ensure data quality, and maintain ongoing compliance. Non-compliance can bring fines of up to €35 million or 7% of global revenue. Businesses should assess their AI systems, work with compliance partners, and establish monitoring tools.
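
As a rough illustration of that penalty ceiling, a minimal Python sketch, assuming the higher of the two quoted figures applies (the function name and sample revenue figure are illustrative):

    def max_ai_act_fine(global_annual_revenue_eur: float) -> float:
        """Upper bound of the fine quoted above: EUR 35 million or 7% of
        global annual revenue, assuming the higher of the two applies."""
        return max(35_000_000, 0.07 * global_annual_revenue_eur)

    # Example: EUR 2 billion in global revenue gives a ceiling of
    # EUR 140 million (7% of 2 billion), which exceeds EUR 35 million.
    print(max_ai_act_fine(2_000_000_000))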

https://www.forbes.com/sites/jessicamendoza1/2025/04/25/the-eu-ai-act-how-businesses-using-ai-can-avoid-new-fees/

EU Commission Publishes Guidelines on the Prohibited AI Practices Under the AI Act

The EU Commission has published guidelines on the prohibited AI practices under the AI Act, which apply as of February 2025. Prohibitions include harmful manipulation, exploitation of vulnerabilities, social scoring, predictive criminal assessments, untargeted scraping of facial images, emotion recognition in workplaces and schools, biometric categorization based on sensitive attributes, and real-time remote biometric identification in publicly accessible spaces. The guidelines aim to clarify compliance and foster uniform application of the Act across the EU, though they are non-binding. Providers and deployers remain responsible for ensuring their AI systems comply.

https://www.orrick.com/en/Insights/2025/04/EU-Commission-Publishes-Guidelines-on-the-Prohibited-AI-Practices-under-the-AI-Act

What’s Behind Europe’s Push to “Simplify” Tech Regulation?

The EU's push to “simplify” tech regulation aims to streamline its complex laws, but it raises concerns about diluting hard-won protections such as the GDPR and the AI Act. Amid geopolitical competition with the US and China, 13 member states advocate deregulation, arguing that the current regulatory burden hampers innovation. Experts warn this may benefit dominant tech firms rather than smaller businesses, and they stress the need for a coherent strategy rather than unfocused deregulation. Fragmentation and ineffective regulation are what hinder innovation in Europe, suggesting that reform should focus on coordination and support for startups, not on dismantling existing protections.

https://www.techpolicy.press/whats-behind-europes-push-to-simplify-tech-regulation/

EU Commission Clarifies Definition of AI Systems

The Commission has published guidelines detailing the definition of AI systems under the AI Act, outlining seven components: machine-based systems, autonomy, adaptability, objective-driven outputs, inference capability, environmental interaction, and influence over environments. The guidelines help companies assess whether the AI Act applies to them. However, they are non-binding and not yet formally adopted.

https://www.orrick.com/en/Insights/2025/04/EU-Commission-Clarifies-Definition-of-AI-Systems
