AI

DIA Europe: Expert Says Risk Pyramid Can Determine Whether a Device Requires AI Act Conformity Assessment

Expert suggests using a risk pyramid to classify medical devices under the EU AI Act for conformity assessments. High-risk devices need assessments; low-risk items face transparency obligations, while minimal-risk devices are unregulated. Compliance involves documentation, labeling, and risk management. AI tools will advance diagnostics, exemplified by non-invasive tests like LiverMultiScan.

https://www.raps.org/news-and-articles/news-articles/2025/3/dia-europe-expert-says-risk-pyramid-can-determine

Navigating Global AI Regulation and Innovation

AI regulation needs clarity as laws develop globally, affecting innovation. A framework is essential for organizations to manage AI risks effectively amidst uncertainty. Current EU guidelines address specific AI systems, yet many low-risk applications remain unregulated, creating risks that require self-regulation. Companies should develop unique AI risk profiles and integrate governance standards within enterprise frameworks. Technological solutions can aid compliance and support responsible AI usage, emphasizing the need for clear governance strategies at enterprise, product, and operational levels. As regulations evolve, a proactive and balanced approach to AI governance is crucial for leveraging innovation while minimizing risks.

https://www.fticonsulting.com/uk/insights/articles/navigating-global-ai-regulation-innovation

Gap in the EU’s Rules for AI Requires a Well-Documented Approach

The EU AI Act and GDPR create compliance challenges for organizations using sensitive personal data in AI, particularly around bias detection. A regulatory gap exists: the AI Act permits processing special-category data for bias correction, but this conflicts with the GDPR's prohibitions. Organizations must assess risks, ensure dual compliance, and document their processes thoroughly until clearer regulatory guidance emerges.

https://news.bloomberglaw.com/us-law-week/gap-in-the-eus-rules-for-ai-requires-a-well-documented-approach

Why Goldman Sachs’ CIO Is Taking a Measured Approach to Rolling Out AI Across the Business

Goldman Sachs' CIO Marco Argenti is implementing a cautious AI rollout for the firm's 46,000 employees, emphasizing change management and responsible use of AI technology. Half of the workforce currently has access, with tools like the GS AI Assistant facilitating productivity. The approach includes a focus on controlled experimentation with AI, ensuring safety and compliance. Argenti is exploring diverse AI models and gathering employee feedback to enhance usability, aiming for broader adoption by the end of 2025. The strategy prioritizes quality data and user engagement to optimize AI effectiveness across the organization.

https://fortune.com/2025/03/19/goldman-sachs-cio-ai/

EU AI Act Provides GCs Innovation Guideposts, Not Barriers

The EU AI Act offers guidance for general counsel (GCs) on managing generative AI risks, fostering innovation rather than hindering it. Despite their hesitance, legal leaders can use regulatory frameworks like the EU AI Act as a roadmap for responsible AI implementation. The Act delineates categories for AI risk, with high-risk applications facing stringent requirements to ensure safety and compliance. By understanding these regulations, organizations can mitigate risks like bias and ethical concerns while promoting technological advancements.

https://news.bloomberglaw.com/us-law-week/eu-ai-act-provides-gcs-innovation-guideposts-not-barriers

The EU AI Act Is Coming Into Force. Here’s What It…

The EU AI Act, the world's first comprehensive AI regulation, is set to impact financial markets with risk classifications for AI systems: Unacceptable (banned), High (restricted), Limited/Transparency (requires user awareness), and Minimal (no additional requirements). It applies across all EU member states and extraterritorially to entities affecting the EU market. Investors see challenges for tech firms, particularly smaller ones, but potential growth for those specializing in compliance and trustworthy AI. Compliance costs are high, and penalties for non-compliance are severe. The Act entered into force in August 2024, with its provisions rolling out in phases.

https://www.morningstar.co.uk/uk/news/262229/the-eu-ai-act-is-coming-into-force-heres-what-it-means-for-you.aspx

The Future of Transatlantic Digital Collaboration With EU Commissioner Michael McGrath

EU Commissioner Michael McGrath discusses transatlantic digital collaboration and data protection strategies at CSIS event. Key topics included: the role of the EU in lawmaking, GDPR modifications, the importance of the Data Privacy Framework for transatlantic trade, withdrawal of the AI Liability Directive, AI's impact on elections, and the new 28th company regime for ease of business across the EU. McGrath emphasized the need for dialogue amid tariff tensions with the U.S., and the potential for enhanced cooperation on consumer protection and digital regulation.

https://www.csis.org/analysis/future-transatlantic-digital-collaboration-eu-commissioner-michael-mcgrath

AI Project Failure Rates Are on the Rise: Report

AI project failure rates are rising: 42% of businesses abandoned AI initiatives in 2025, up from 17% in 2024, according to S&P Global. The main obstacles are cost, data privacy, and security. Enterprises struggle to move pilots into production, underscoring the need for selective AI adoption to reduce failures. Treating failed projects as learning opportunities fosters a culture of experimentation and better outcomes in the future.

https://www.ciodive.com/news/AI-project-fail-data-SPGlobal/742590/
