Digital Markets Act (DMA)
EU legislation aiming to ensure fair competition in digital markets, preventing monopolistic practices by major tech firms, promoting consumer choice, and enhancing innovation.
EU financial services face significant regulatory transformation as laws such as the AI Act, GDPR, Data Act, and Cyber Resilience Act reshape compliance and risk management. Institutions must integrate these into their governance strategies, balancing innovation with data privacy and cybersecurity requirements. Effective governance and cyber resilience can transform compliance from a burden into a competitive advantage, driving innovation and customer trust.
https://www.timesofmalta.com/article/financial-services-crossroads.1106212
Europe's Digital Markets Act (DMA) marked its first anniversary amid efforts to enhance competition and fairness in digital markets. The DMA targets major digital “gatekeepers,” providing a regulatory framework intended to empower consumers and small businesses through increased market contestability. A recent conference highlighted ongoing assessments of the DMA's effectiveness, where experts pointed to emerging competition signals such as new app stores and user choice in browsers. However, the act faces geopolitical challenges, particularly pushback from U.S. industry, raising concerns that its enforcement could be weakened. Comparatively, countries such as South Korea are grappling with regulatory frameworks shaped by local contexts and pressure from dominant U.S. tech firms. The conversation underscores the need for global collaboration in crafting equitable digital regulations amid differing national interests.
https://www.techpolicy.press/assessing-europes-digital-markets-act-one-year-in/
AI: Friend or Foe?
Experts discuss the future of AI legislation, shifting the focus from AI's capabilities to its regulation. Bunnings’ facial recognition case highlights privacy concerns and the need for risk-based regulatory frameworks such as the EU's 2024 AI Act. A global consensus on AI's societal benefits is needed, one that emphasizes ethical principles over technology-specific laws. Trust in AI is crucial, particularly regarding open-source models. Calls for regulation aim to promote safe AI deployment while preserving innovation, with Australian law lagging behind global standards.
https://www.monash.edu/alumni/monash-life/articles-2025/ai-friend-or-foe
EU aims to balance AI regulation with global competitiveness amid pressure from the U.S. and China. The EU's regulatory-first approach prioritizes ethical values but risks slowing economic growth. Recent initiatives, such as the AI Act and substantial investments in AI, aim to enhance competitiveness while contending with challenges like resource dependency and complex legislation. To become a leader in ethical AI, attract investment, and sustain its geopolitical influence, the EU must simplify regulations without compromising human rights. However, achieving consensus among member states and securing funding remain critical for successful implementation.
EU AI Act mandates AI literacy in organizations, requiring tailored training for technical teams, non-technical staff, and leaders. Effective programs should not only ensure compliance but also foster a security culture and address AI risks. Comprehensive training enhances resilience and prepares the workforce for an AI-driven future.
EU Commission provides AI literacy guidance under the AI Act; companies must ensure staff are trained on AI. Literacy obligations took effect February 2, 2025; national enforcement begins in August 2025. AI literacy is defined as understanding AI's risks and benefits. Companies should customize training and document their efforts, but formal certifications aren't mandatory. An FAQ guidance document is anticipated.
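Because the guidance asks companies to document training rather than obtain certifications, even a minimal machine-readable training log could help evidence compliance. The Python sketch below shows one hypothetical way to record and export such efforts; the TrainingRecord fields, role labels, and file name are assumptions for illustration, not anything prescribed by the Act or the Commission.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class TrainingRecord:
    """One AI literacy training event for one staff member.
    Field names are illustrative; the AI Act prescribes no schema."""
    employee_id: str
    role: str               # e.g. "technical", "non-technical", "leadership"
    module: str             # content covered by the session
    completed_on: date
    covers_risks: bool      # did the module address AI risks?
    covers_benefits: bool   # ...and AI benefits, per the literacy definition

def export_records(records: list[TrainingRecord], path: str) -> None:
    """Write records to JSON so training efforts can be evidenced to a
    national authority if requested (enforcement from August 2025)."""
    serializable = [{**asdict(r), "completed_on": r.completed_on.isoformat()}
                    for r in records]
    with open(path, "w") as f:
        json.dump(serializable, f, indent=2)

export_records(
    [TrainingRecord("e-1042", "non-technical", "AI basics: risks and benefits",
                    date(2025, 2, 10), covers_risks=True, covers_benefits=True)],
    "ai_literacy_log.json",
)
```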
EU's AI Code of Practice faces criticism for potential regulatory clashes with existing laws, raising compliance concerns. The third draft may overreach beyond the EU AI Act, complicating legal clarity and contradicting the EU's goal to simplify regulations.
https://www.mofo.com/resources/insights/250306-ai-compliance-or-bureaucratic-burden
EU AI Act prohibits harmful practices in AI systems with hefty fines for non-compliance. Key prohibitions include manipulation, exploitation of vulnerabilities, social scoring, and emotion recognition. Guidelines clarify ambiguous areas, such as applicability to ‘providers’ and ‘deployers’, AI definitions, and risks in targeted advertising. Violations can incur significant penalties, and there is no grandfathering for existing practices. Compliance requires careful assessment and governance integration to avoid breaches. Enforcement begins after the market surveillance authorities are designated by August 2025.
EU AI Act mandates compliance for AI use in the EU, with the first obligations applying from February 2025. Noncompliance risks fines of up to €35 million, affecting all businesses that use AI. The Act categorizes AI systems by risk and prohibits harmful practices such as deceptive AI, social scoring, and predictive policing. CTOs/CIOs must prioritize risk assessments and governance protocols to align with the regulation and sustain innovation. Key steps: comprehensive audits, governance implementation, legal engagement, and vendor compliance checks.
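As a hypothetical illustration of the audit step, a team could triage an internal AI inventory against the Act's risk tiers before seeking legal review. The use-case keys and tier assignments in the sketch below are simplified assumptions; real classification must follow the Act's text and legal advice, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # e.g. social scoring, manipulative AI
    HIGH = "high"               # e.g. hiring, credit decisions
    LIMITED = "limited"         # transparency duties, e.g. chatbots
    MINIMAL = "minimal"         # everything else

# Illustrative mapping only -- real classification needs legal review
# against the Act's text, not keyword matching.
TIER_BY_USE_CASE = {
    "social_scoring": RiskTier.PROHIBITED,
    "predictive_policing": RiskTier.PROHIBITED,
    "cv_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(inventory: list[str]) -> dict[str, RiskTier]:
    """Map each system's declared use case to a provisional risk tier;
    unknown use cases default to HIGH so they get reviewed, not ignored."""
    return {uc: TIER_BY_USE_CASE.get(uc, RiskTier.HIGH) for uc in inventory}

audit = triage(["customer_chatbot", "cv_screening", "social_scoring"])
for use_case, tier in audit.items():
    print(f"{use_case}: {tier.value}")
```

Defaulting unknown systems to the high-risk tier is a deliberately conservative choice: it forces every unclassified system into the review queue rather than silently passing it.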