EU

Age Verification in the European Union: The Commission’s Age Verification App

The EU's Age Verification App aims to verify users' ages via digital identity wallets but raises privacy and accessibility concerns. It relies on methods such as eIDs and biometric data, yet many marginalized groups may be excluded, putting their access to online services at risk. The app's privacy protections are not mandatory, and its reliance on zero-knowledge proofs and verification rules may not adequately protect user data. In attempting to safeguard children online, the initiative could end up restricting democratic access. More robust regulation and equitable access solutions are needed.

https://www.eff.org/deeplinks/2025/04/age-verification-european-union-mini-id-wallet

Digital Identities and the Future of Age Verification in Europe

EU age-verification trends push digital identities in the name of user safety but raise privacy concerns. Proposals may mandate age checks, threatening free expression and contradicting children's rights. Current law encourages age assessments without explicitly requiring them. Digital identity wallets planned for 2026 could be used for age verification and may expand beyond their intended scope, creating further privacy issues. EFF critiques this approach and urges regulators to prioritize user rights.

https://www.eff.org/deeplinks/2025/04/digital-identities-and-future-age-verification-europe

What if the EU Was Really Serious About AI?

The EU's AI strategy lags behind those of the US and China. To become competitive, the EU should:

  1. Infrastructure: Increase investment in cloud computing and partner with US tech firms.
  2. Data: Simplify data regulations, enhance open data access, and incentivize data sharing.
  3. AI Adoption: Set ambitious AI targets and focus on outcomes in public contracts.
  4. Skills and Talent: Fund AI academic positions and pivot education programs toward AI skills.
  5. (De)Regulation: Streamline regulations for ease of use while ensuring safety.

Addressing defense AI and promoting global leadership in open-source AI are also vital. Europe has the resources; bold action is required to catch up.

https://cepa.org/article/what-if-the-eu-was-really-serious-about-ai/

R&G Tech Studio: Navigating AI Literacy—Understanding the EU AI Act

The R&G Tech Studio podcast discusses the EU AI Act, focusing on its AI literacy requirements for organizations. Hosts Rohan Massey and Edward Machin explain the Act's broad definition of AI systems and emphasize that AI literacy matters for both providers and deployers, irrespective of risk category. Organizations must tailor AI literacy training to their context, employee roles, and resources. The hosts suggest that companies begin developing AI literacy strategies now, despite limited guidance, to ensure compliance and manage AI-related risks effectively.

https://www.ropesgray.com/en/insights/podcasts/2025/04/rg-tech-studio-navigating-ai-literacy-understanding-the-eu-ai-act

EU Moves to Clarify AI Act Scope for gen-AI

The EU has proposed computational-resource thresholds to clarify when general-purpose AI (GPAI) models fall under the AI Act's requirements, which take effect in August 2025. The draft guidelines, open to industry feedback via a survey, aim to establish when AI models become subject to regulatory obligations. Key points include defining GPAI models by training compute (≥ 10^22 FLOP), record-keeping obligations, copyright policies, and potential compliance benefits for signatories to a forthcoming code of practice. Critics argue that relying on FLOP is flawed because it may not adequately reflect a model's capabilities and risks. Moreover, modifications exceeding certain compute thresholds may increase compliance burdens.
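The compute thresholds can be sketched numerically. The snippet below is an illustrative estimate only: it uses the common ≈6 × parameters × tokens heuristic for training FLOP (an assumption, not the AI Act's official measurement methodology) and checks a hypothetical model against the draft 10^22 FLOP GPAI indicator and the 10^25 FLOP systemic-risk threshold. The function names and the example model are invented for illustration.

```python
# Heuristic sketch: estimate training compute with the common
# ~6 FLOP per parameter per training token approximation, then
# compare against the AI Act draft thresholds. Not an official method.

GPAI_THRESHOLD = 1e22           # indicative GPAI threshold (draft guidelines)
SYSTEMIC_RISK_THRESHOLD = 1e25  # GPAI with systemic risk (GPAI-SR)

def training_flop(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOP (6 * N * D heuristic)."""
    return 6 * params * tokens

def classify(flop: float) -> str:
    """Map an estimated compute budget to the draft AI Act category."""
    if flop >= SYSTEMIC_RISK_THRESHOLD:
        return "GPAI with systemic risk (GPAI-SR)"
    if flop >= GPAI_THRESHOLD:
        return "GPAI"
    return "below GPAI threshold"

# Hypothetical example: a 7B-parameter model trained on 2T tokens
flop = training_flop(7e9, 2e12)  # about 8.4e22 FLOP
print(classify(flop))            # falls in the GPAI band, well below GPAI-SR
```

Under this heuristic, even a mid-sized open model comfortably clears the 10^22 FLOP GPAI indicator, which is one reason critics argue the threshold sweeps in models whose actual capabilities and risks vary widely.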

https://www.pinsentmasons.com/out-law/news/eu-clarify-ai-act-scope-gen-ai

EU AI Office Clarifies Key Obligations for AI Models Becoming Applicable in August

The EU AI Office has issued draft guidelines on the obligations for general-purpose AI (GPAI) models that apply from August 2025; stakeholders can provide feedback until May 22, 2025. The guidelines clarify the AI Act's GPAI provisions, defining GPAI as models capable of performing a wide range of tasks and requiring technical documentation and copyright compliance. Models trained with more than 10^25 FLOP qualify as GPAI with systemic risk (GPAI-SR) and face stricter requirements. Fine-tuning such models may create new compliance obligations. Companies should establish AI governance, map their AI applications, and prepare for the upcoming rules. Models placed on the market earlier must comply by August 2027.

https://www.wsgr.com/en/insights/eu-ai-office-clarifies-key-obligations-for-ai-models-becoming-applicable-in-august.html

The EU AI Act: How Businesses Using AI Can Avoid New Fees

The EU AI Act, most of whose provisions take effect in August 2026, requires organizations using AI in the EU to classify their AI systems by risk, implement governance frameworks, ensure data quality, and maintain ongoing compliance to avoid fines of up to €35 million or 7% of global revenue. Businesses should assess their AI systems, work with compliance partners, and establish monitoring tools.

https://www.forbes.com/sites/jessicamendoza1/2025/04/25/the-eu-ai-act-how-businesses-using-ai-can-avoid-new-fees/
