Week 18
EU AI Act: Less Than 100 Days to Get Ready
On August 2, 2026, obligations for high-risk AI systems come into force across Europe. With less than three months to go, most SMEs and mid-market companies I (and AI Partner) work with still haven’t clarified their regulatory exposure. And that’s understandable — the text is dense, the categories are technical, and the ongoing Omnibus political saga doesn’t help.
What happened on April 28
The EU trilogue negotiations on the Omnibus — the text that was supposed to push back the high-risk deadline from August 2026 to December 2027 — collapsed after 12 hours of discussions. The sticking point: how to reconcile the AI Act with existing sectoral regulations (industrial machinery, medical devices, automotive). A new trilogue is scheduled for May 13, but the probability of a deal being adopted and published before August 2 is now very low.
Bottom line up front: plan for August 2, 2026. If the Omnibus passes later, you’ll have extra time. If it doesn’t, you’ll be ready.
Are you in scope?
This is THE question 90% of business leaders are asking. Here’s the simplified decision framework:
You are in scope if your company uses (as a “deployer”) or develops an AI system in one of these categories: CV screening or candidate scoring, creditworthiness assessment, biometric identification, critical infrastructure management (water, energy, transport), or education grading systems.
Important: even a 15-person company using an off-the-shelf SaaS tool for candidate screening potentially falls within the “high-risk” scope. It’s not about company size; it’s about the use case.
Conversely, if you use Copilot to summarize emails or an internal chatbot to answer HR questions, you’re likely not in the high-risk category. But you still need to comply with transparency obligations (disclosing AI-generated content) and AI literacy requirements (training your teams) — both already in force since February 2025.
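The decision framework above can be sketched as a first-pass screening helper. This is purely illustrative — the category names are my own shorthand for the Annex III examples mentioned above, and a real classification requires legal review, not a lookup table:

```python
# Illustrative only — a hypothetical first-pass screen, not legal advice.
# Category names are shorthand for the Annex III examples cited above.

HIGH_RISK_USE_CASES = {
    "cv_screening",
    "candidate_scoring",
    "creditworthiness_assessment",
    "biometric_identification",
    "critical_infrastructure_management",
    "education_grading",
}

# Uses that still carry transparency and AI-literacy duties (in force
# since February 2025) but are likely outside the high-risk category.
LIMITED_RISK_USE_CASES = {
    "email_summarization",
    "internal_hr_chatbot",
}

def classify_use_case(use_case: str) -> str:
    """Return a first-pass risk bucket for an AI use case."""
    if use_case in HIGH_RISK_USE_CASES:
        return "high-risk: deployer obligations apply from August 2, 2026"
    if use_case in LIMITED_RISK_USE_CASES:
        return "limited-risk: transparency + AI literacy obligations apply"
    return "unclassified: review against Annex III with counsel"

print(classify_use_case("cv_screening"))
```

The point of the sketch: the trigger is the use case, never the company size — a 15-person firm and a 5,000-person firm running the same screening tool land in the same bucket.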
Four actions to take before August
1. Map your AI usage. List every tool incorporating AI in your organization — including SaaS you didn’t build. For each one, determine whether it falls under an Annex III category. Our webinar walks through a full methodology for this.
2. Clarify your role in the value chain. Are you a provider (you build the system), a deployer (you use it in production), or an importer? Obligations vary significantly.
3. Document. If you deploy a high-risk system, you must ensure human oversight, conduct a fundamental rights impact assessment, and monitor the system on an ongoing basis.
4. Train your teams. AI literacy isn’t optional — it has been a legal obligation since February 2025. Anyone in your organization who interacts with AI systems must have a sufficient level of understanding.
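Steps 1 through 3 boil down to a structured inventory. Here is a minimal sketch of what one record in that inventory might look like — the field names, tool names, and vendors are hypothetical examples, not a prescribed schema:

```python
# A minimal sketch of an AI-usage inventory (step 1) that also captures
# your value-chain role (step 2) and documentation status (step 3).
# Tool names, vendors, and fields below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str                          # tool name, including SaaS you didn't build
    vendor: str
    use_case: str
    role: str                          # "provider", "deployer", or "importer"
    annex_iii: bool                    # does the use case fall under Annex III?
    human_oversight_documented: bool   # step-3 evidence exists?

inventory = [
    AIToolRecord("CandidateScreen Pro", "ExampleVendor", "CV screening",
                 role="deployer", annex_iii=True,
                 human_oversight_documented=False),
    AIToolRecord("Copilot", "Microsoft", "email summarization",
                 role="deployer", annex_iii=False,
                 human_oversight_documented=True),
]

# Gaps to close before August 2: high-risk tools lacking documented oversight.
gaps = [t.name for t in inventory
        if t.annex_iii and not t.human_oversight_documented]
print(gaps)  # ['CandidateScreen Pro']
```

Even a spreadsheet with these six columns gets you most of the way: the value is in forcing the Annex III question for every tool, not in the tooling itself.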
The cost of inaction
Penalties reach up to €35 million or 7% of global annual turnover, whichever is higher. But beyond fines, it’s reputational risk and exclusion from contracts that weigh heaviest for mid-market companies.
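To make the “whichever is higher” mechanics concrete, here is the arithmetic for the AI Act’s top penalty tier — a back-of-the-envelope sketch, not a legal computation:

```python
# Back-of-the-envelope upper bound for the AI Act's top penalty tier:
# EUR 35 million or 7% of global annual turnover, whichever is higher.
def max_penalty_eur(global_annual_turnover_eur: int) -> float:
    return max(35_000_000, global_annual_turnover_eur * 7 / 100)

# A mid-market company with EUR 200M turnover: 7% is EUR 14M,
# so the EUR 35M floor applies instead.
print(max_penalty_eur(200_000_000))  # 35000000
```

For most SMEs and mid-market companies, the €35 million floor is the binding number — 7% of turnover only takes over above €500 million in revenue.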
My take
The AI Act isn’t an innovation blocker — it’s a trust framework. Companies that comply early will have a competitive edge in procurement and client relationships. And for SMEs, compliance costs remain reasonable: between €5,000 and €10,000 for an initial audit and action plan.
Share this information with your CIO (and feel free to contact me if you have any questions).
AI & Sovereignty News — April 28 – May 3, 2026
AI Act: Trilogue collapses, August 2 deadline stands
Parliament and Council failed to agree on the Omnibus after 12 hours of negotiations on April 28. The sticking point: exemptions for AI embedded in products already covered by sectoral regulations (machinery, medical devices). Next round on May 13. If nothing changes, high-risk obligations apply as-is on August 2. The message for businesses: don’t count on a delay.
Sources: The Next Web · IT Boltwise
Atos launches integrated digital sovereignty offering
Announced April 28: Atos Group is deploying a comprehensive digital sovereignty offering for regulated environments and AI projects. The concept: a unified framework covering cloud, infrastructure, cybersecurity, data platforms, AI and applications — with sovereignty by design (identity control, auditability, cryptographic key ownership). Guardrails extend to AI agents. Target sectors: government, defense, finance and healthcare. A strong signal that the European sovereign market is maturing.
Source: Generation NT
Mistral AI launches Workflows: sovereign AI agent orchestration
Mistral moves beyond pure inference with Workflows in public preview — an AI agent orchestration engine built on Temporal (the same infrastructure powering Netflix and Stripe). Organizations like France Travail, La Banque Postale, CMA-CGM and ASML are already using it to automate critical processes. Key sovereignty angle: this is a credible European alternative to US-based orchestrators (LangChain, CrewAI), hosted on French infrastructure. Millions of daily executions from day one.
Sources: VentureBeat · Mistral AI
ISO 42001: The certification that reassures your clients now
In 2026, ISO 42001 (AI management system) is no longer a nice-to-have — it’s a commercial lever. Recognized by the European Commission as a standard facilitating AI Act compliance, it structures how you manage AI risk, security, and ethics. The concrete benefit: certified companies win procurement deals and shorten sales cycles, especially in regulated sectors. Think of it as what ISO 27001 was for cybersecurity ten years ago.
Source: Journal du Net
Oliver Wyman: AI drives Q1 2026 revenue growth
The strategy consultancy starts 2026 with +6% growth, driven by its AI platform “Quotient” — now its fastest-growing unit. A telling signal: large clients have moved past “experimenting” with AI and want to deploy AI strategies at scale. Demand for AI transformation advisory is accelerating rapidly.
Source: Consultor
Pentagon signs 7 tech giants for classified AI — Anthropic absent
The DoD signed agreements with OpenAI, Google, Microsoft, Nvidia, Amazon, SpaceX and Reflection to deploy AI on classified networks. Anthropic is notably absent — in open conflict with the Trump administration. A US sovereignty story, but one that raises the question in Europe too: who controls the AI models we use, and whose interests do they serve?
Source: CNN
Study of the Week — Malt Tech Trends 2026: Cybersecurity
Source: Malt Tech Trends 2026 — Cybersecurity Section
Based on 2.5 million searches, 250,000 tech freelancers, and 90,000+ companies
The finding: GRC cybersecurity under extreme tension
Malt’s study reveals a growing imbalance between supply and demand for cybersecurity skills, particularly in GRC (Governance, Risk, Compliance) — precisely the scope impacted by the AI Act.
Key figures
• +31% demand for ISO 27001 certification skills
• +229% demand for ISO 7816 (hardware security / smart cards)
• 30% of top-demanded skills (cloud, cyber, data) have insufficient supply
• €635/day: average daily rate for senior cybersecurity experts
• x60: demand for “AI agent” skills vs. 2024
• +1,390%: growth in projects involving n8n (orchestration/automation)
AI’s impact on cybersecurity
AI isn’t just creating new threats — it’s transforming the discipline itself. The rise of autonomous AI agents (x60 demand growth) creates an entirely new risk perimeter that traditional GRC frameworks don’t yet cover. The profiles organizations need are evolving: they’re no longer just looking for pentesting experts, but architects who can think about AI system security (agent governance, LLM access control, RAG pipeline auditing).
What this means for mid-market companies
The market faces a structural shortage. Hiring GRC/cyber expertise in-house will be slow and expensive. The alternatives: upskill internally through training, or outsource to specialized freelancers — but at €635/day, you need to prioritize carefully.
The convergence between the AI Act (AI compliance) and ISO 27001 (information security) is driving demand for rare, expensive hybrid profiles. Companies that get ahead — by pursuing ISO 42001 certification and training their teams now — will have a structural advantage.