Week 11
Ethics was the promise. Pragmatism is the operating system.
The timeline, sourced from the excellent Brief.me newsletter, is damning.
2015: The AI research community made a collective promise. An open letter signed by hundreds of researchers and industry leaders demanded that AI systems be designed to “maximize societal benefit.” The language was aspirational. Ethics wasn’t a compliance checkbox—it was supposed to be foundational. The signatories included OpenAI founders, Google Brain researchers, and Anthropic’s future leaders. The commitment felt binding.
2018: That commitment fractured publicly. When Google employees protested Project Maven—a Pentagon contract to apply AI to military drone footage—the company faced a choice: uphold the 2015 ethics pledge, or pursue the lucrative defense contract. Google chose pragmatism. The email revolt lasted months. The contract proceeded.
2021: The pressure mounted differently. Dario Amodei left OpenAI, publicly citing disagreements over the company’s direction. As founder of Anthropic, he has since positioned his new company as more ethically rigorous than OpenAI, with constitutional AI and formal safety commitments. But some AI ethics researchers argue that even Anthropic’s positioning is, above all, a communications posture. The competition became: who can appear most ethical while remaining most profitable?
2024–2025: Google completed the reversal. In February 2025, the company removed its pledge not to use AI for military surveillance and weapons applications. The move was announced quietly, buried in a blog post update. By then, the message was clear: the real business—military contracts, surveillance infrastructure, government partnerships—demanded pragmatism.
What emerges from this timeline is not conspiracy but inevitability: ethics only survives when it’s profitable to maintain it.
The AI industry spent a decade claiming that safety, fairness, and societal benefit were non-negotiable.
But the moment regulatory pressure eased, competitive pressure spiked, or a lucrative contract appeared, the frameworks evaporated.
European enterprises now face a genuine governance crisis: they’re required by the AI Act to ensure responsible, ethical deployment—but they’re building on infrastructure controlled by companies that no longer believe in those principles.
On August 2, 2026, the EU AI Act will reach full applicability.
Four Critical Dates That Will Define AI Sovereignty in 2026
From military tensions to regulatory showdowns, 2026 is the year when AI ethics meets geopolitical reality.
The intersection between artificial intelligence and military applications has emerged as the defining battleground of 2026. Since Google's abandonment of Project Maven in 2018 (following internal employee pressure), technology companies have oscillated between ethical commitments and commercial interests. Current AI models can execute sensitive military tasks—from generating target lists to autonomous decision-making—raising the specter of lost human control.
This tension has metastasized across the industry. OpenAI recently signed agreements with the U.S. Department of Defense while maintaining public positioning around safety and responsibility. The contradiction is stark: how can a company simultaneously pledge commitment to human oversight while contractually enabling military autonomy?
The regulatory response has sharpened the divide. The EU adopted the AI Act in 2024 as the first major legal framework prohibiting certain use cases and regulating "high-risk" tools. Conversely, the United States moved in the opposite direction when President Trump signed an executive order in late 2025 mandating regulation "as minimally constraining as possible"—treating regulatory frameworks as competitive disadvantages in the race against China.
The underlying challenge remains unresolved: human control over AI systems. As autonomous agents become more capable, the question of who decides, when, and under what conditions becomes foundational—not just for ethics, but for governance.
Source: Brief.me - AI Ethics Debates
Why the Next Wave of AI Could Actually Favor Europe
The first wave of AI belonged to American hyperscalers. The second wave—robotics, industrial applications, healthcare—plays to Europe's structural strengths.
The first generation of artificial intelligence—large language models, chatbots, foundation models—was dominated by American technology giants. But the emerging second wave operates in the physical world: robotics, manufacturing, healthcare, energy, chemistry. This is precisely where Europe's industrial base concentrates.
The European Union represents 22% of global AI research citations and graduates 2.2 million STEM-credentialed professionals annually. More significantly, the EU commands a massive industrial foundation—Siemens, Airbus, ASML, Bayer, Roche—that generates high-quality real-world data essential for industrial AI applications. This resource remains largely unexploited.
The primary constraint is capital deployment at scale. Despite scientific excellence, Europe struggles to transform innovations into global enterprises: American AI startups captured approximately $68 billion in investment in 2023, compared to just $8 billion for the EU. The European Commission's InvestAI initiative, committing €200 billion, is positioned as a corrective signal.
The paradox highlighted by industry observers: being behind on the first wave may be an advantage, as Europe can rebuild on clean foundations. This "second chance" can only be seized if the continent transcends its strategic, financial, and technological fragmentation to act as a unified bloc.
Source: Fortune - Europe's AI Next Wave
SAP's €20 Billion Sovereign Cloud Play: Europe's Real Move
While US hyperscalers localize infrastructure in Europe, SAP commits to building sovereign solutions from the ground up—Europe's only genuine technological independence play.
SAP has committed €20 billion in investment toward development of sovereign cloud and AI solutions engineered explicitly for sectors where data sovereignty, operational responsibility, and regulatory compliance are non-negotiable: defense, aeronautics, public administration, and critical infrastructure.
This strategy diverges fundamentally from the American playbook. SAP isn't localizing U.S. infrastructure in Europe; it's constructing an EU-controlled infrastructure designed for European requirements from inception. SAP Sovereign Cloud stores all data within EU borders, operates on open-source principles, and offers on-premises deployment for organizations rejecting even regional cloud models.
The commitment is materially distinct from Microsoft's €80 billion datacenters or AWS's "sovereign" cloud operations—both maintain architectural control in the United States while offering European processing facilities. SAP, by contrast, places architectural authority in European hands.
The timeline imperative remains brutal. SAP's production-grade solutions won't reach operational scale until 2027-2028. By then, Microsoft will have deployed €80 billion in infrastructure across 15 European countries. AWS's German region will be operational. Cisco's air-gapped infrastructure will be deployed. European enterprises requiring functional solutions immediately will adopt U.S. companies' "sovereign" offerings while awaiting SAP's genuine alternative.
Sources: RRHHDigital - SAP Sovereign Cloud Alliance, SAP Digital Transformation Commitments
Cisco's Sovereign Critical Infrastructure: Control of the Control Plane
Cisco launched its Sovereign Critical Infrastructure Portfolio this month, offering European organizations genuine operational sovereignty over their digital foundations.
Cisco's Sovereign Critical Infrastructure portfolio addresses a structural European aspiration: infrastructure that cannot be remotely disabled, is controlled entirely by the customer, and is certified toward EUCC standards (the EU's Common Criteria-based cybersecurity certification scheme). The approach centers on air-gapped architecture, no remote management access, and full customer operational control.
This represents the closest U.S. technology vendor approach to real sovereignty concession. Hardware is geographically located in Europe. Remote management is prohibited. Updates require explicit customer approval. For critical infrastructure sectors—government, banking, healthcare—this matters.
The constraint: Cisco remains an American company. Firmware, underlying architecture, foundational design—all remain American-controlled. Sovereignty here means European control of deployment, not design. It's a genuine step forward compared to cloud solutions, but operates within architectural boundaries set by a U.S. corporation.
Source: NetworkWorld - Digital Sovereignty Options
WellStrategic Launches AI Agent Safety Stack Ahead of Regulatory Reckoning
An open-source blueprint for autonomous AI control appears exactly when 2026's regulatory deadlines demand proof of human oversight.
WellStrategic launched the AI Agent Safety Stack, comprising twelve open-source specifications in Markdown format, designed to enable developers and organizations to define emergency shutdown protocols, safety boundaries, and accountability standards for autonomous AI agents. These specifications—available freely under MIT license—span four domains: operational control, data security, output quality, and accountability.
The timing is deliberate. The 2026 regulatory deadlines—particularly the EU AI Act and Colorado AI Act—impose requirements on high-risk AI systems for documented human oversight and certified security controls. An open-source standard addressing exactly these requirements, released before compliance deadlines, positions itself as infrastructure rather than marketing.
The strategic significance: if autonomous AI agents become the commercial standard, standardized safety specifications become as foundational as API documentation. Early movers in this space establish category definition.
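To make the idea of an "emergency shutdown protocol" and "safety boundary" concrete, here is a minimal, hypothetical Python sketch of the pattern such a specification might require implementers to support. This is illustrative only; the class and method names are invented and do not come from WellStrategic's actual specifications.

```python
import threading


class AgentKillSwitch:
    """Illustrative emergency-stop control for an autonomous agent loop.

    Hypothetical sketch, not WellStrategic's specification: an operator
    kill switch plus a hard action budget as a safety boundary.
    """

    def __init__(self, max_actions: int):
        self._stop = threading.Event()   # set by a human operator or monitor
        self._max_actions = max_actions  # hard safety boundary on autonomy
        self._actions_taken = 0
        self.last_reason = None

    def trigger(self, reason: str) -> None:
        # Emergency shutdown: halts the agent before its next action.
        self.last_reason = reason
        self._stop.set()

    def permit(self) -> bool:
        # Every agent action must pass this gate before executing.
        if self._stop.is_set():
            return False
        if self._actions_taken >= self._max_actions:
            self.trigger("action budget exhausted")
            return False
        self._actions_taken += 1
        return True


switch = AgentKillSwitch(max_actions=3)
executed = []
for step in range(10):
    if not switch.permit():
        break
    executed.append(step)  # stand-in for an agent action

print(len(executed))  # prints 3: the boundary caps the run
```

The design point the specifications emphasize is accountability: the gate sits between the agent and every action, so a shutdown is guaranteed to take effect at the next action boundary and the stop reason is recorded for audit.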
Source: APNews - AI Agent Safety Specifications
AMI Labs Raises $1.03 Billion: Yann LeCun's Bet Against LLMs
Meta's former AI chief raised the largest seed round in European AI history, betting that foundation models are the wrong approach—and that world models trained on video are the path forward.
AMI Labs, Yann LeCun's new laboratory, secured $1.03 billion in seed funding from investors including Nvidia, Bezos Expeditions, Cathay Innovation, Temasek (Singapore), and SBVA. This represents the largest seed-stage capitalization for a European AI startup to date, signaling private sector commitment to foundational research in the region.
LeCun's central thesis directly contradicts the industry consensus: large language models represent the wrong architectural direction for machine intelligence. His bet centers on "world models" trained on video and spatial data, capable of reasoning, planning, and interacting with physical environments—robotics, transportation, manufacturing.
This funding volume sends a market signal: foundational research in AI, conducted in Europe, with backing from global capital, has become viable. The significance extends beyond LeCun's specific architecture bet—it validates European research excellence as investable, differentiating European AI development from pure infrastructure play or regulatory compliance.
Source: Fortune - AMI Labs Funding

