What do Trump, Starmer and Anthropic have to do with your emails? (Hint: it's about data)
Everyone covered the Anthropic blacklisting as an AI safety story. They covered Trump and Starmer as a geopolitics story. But really? They're part of the same story, one that risks all our data.
In the first week of March 2026, two things happened that most people treated as separate news stories.
The first: the United States and Israel launched a wave of airstrikes on Iran. Within days, US President Donald Trump was on television publicly rebuking the UK Prime Minister for not immediately granting the US military access to British bases. He called Keir Starmer “not Winston Churchill.” He told The Sun that Starmer had “not been helpful” and that the relationship between the two countries was “obviously not what it was.” The UK - a country that had, until that week, operated under the assumption of a durable special relationship with the US - watched in real time as a long-standing ally made veiled threats in front of the world.
Channel 4 footage of Trump discussing UK PM Keir Starmer. Full video available at: https://youtube.com/shorts/AEkyfDFQCtQ?si=_ayIlzLVxwa4d1MF
The second: Anthropic - the US-based AI lab whose tools hundreds of thousands of businesses worldwide currently use (including this humble writer) - was designated a supply chain risk to national security by the US ‘Department of War’ (I mean, seriously?!) and Trump ordered federal agencies to immediately cease all use of its technology.¹
The reason Anthropic was labeled a national security risk (a designation previously reserved for companies like Huawei)? It refused to let the US military use its models without restrictions on domestic mass surveillance and autonomous weapons. The Pentagon said it wanted access for “any lawful use.” Anthropic said no - and has stood firm despite the Pentagon’s aggressive moves.
Don’t worry though - the poor folks at the Pentagon haven’t had their dreams of using AI to monitor US citizens en masse, while AI deploys drones to defeat overseas enemies, completely dashed… Just hours after Anthropic issued its final refusal, Sam Altman, CEO of OpenAI, announced he’d struck a deal for the Pentagon to use OpenAI’s models. If you’re sitting there scratching your head, yes, this is a problem - Anthropic’s CEO called Altman’s claims that OpenAI models wouldn’t be used for the same purposes “safety theatre.”²
And somewhere in the noise of both these stories colliding, it was also reported that even as Anthropic was being blacklisted, its Claude models were being used to help plan US military strikes on Iran.³
Read that again. The same AI model that you or I might use to draft a report or client proposal on a Tuesday morning was, simultaneously, helping plan airstrikes on a sovereign nation while being designated a national security risk for refusing to hand over unrestricted access.
Now, the so what.
The US president has now demonstrated that he is willing to designate a US AI company a threat to national security for refusing to comply with military demands. That is a precedent. If this administration will blacklist Anthropic over model access, it is no longer theoretical to imagine it pressuring a US company to hand over customer data under the same national security justification. The legal mechanism for this already exists - it is called the CLOUD Act, and I will come back to it. What the Anthropic episode demonstrates is the political willingness to use that kind of leverage.
AI and data tools do not sit in a neutral server somewhere, separate from politics. They operate at the pleasure of whoever is in the White House. The terms can change overnight.
Most of the coverage of these two stories treated them as an AI governance story on one hand and a geopolitics story on the other. I think they are the same story. And the implications for anyone running a business on US-controlled infrastructure are serious.
The rabbit hole
I have been thinking about data ownership and AI for a while. It is something that comes up again and again in my work - from the headlines about AI companies breaching copyright when they scrape the internet, to the repeated conversations with clients about GDPR-compliant AI tools. The question of who controls our data when we use AI tools (and the more worrisome question of what they might do with it) has surfaced in almost every consulting engagement I have done over the last two years.
But recent events have changed my appreciation for the scale of the problem.
Watching the US President publicly threaten the UK Prime Minister on live television suggests that any assumptions we have about the status quo might need adjusting. Watching the US President legally vilify a successful AI company over safety guardrails makes the ‘safe technology choice’ feel significantly less certain. Which got me thinking…
In my day-to-day advising businesses on AI strategy and AI governance…
In my days-and-nights building my own AI startup…
And in most of the UK business community…
We make decisions based on some widely-held assumptions about data protection, regulations, and international agreements. Certain software providers are considered best-in-class for data security and privacy (hello, Microsoft 365, I’m looking at you). But if the US president will dress down the UK’s leader over a military base in front of the world’s media, how durable is any data protection agreement that could be overturned on the whim of the White House?
While down the rabbit hole, I came across a blog post by Helix, an ML company, that highlighted some lesser-known US legislation which means this threat to our data isn’t just hypothetical.
The CLOUD Act I mentioned earlier (AKA the Clarifying Lawful Overseas Use of Data Act) was signed in 2018 with bipartisan support. It was pretty unremarkable at the time, and it gave US law enforcement authority to demand data from US-headquartered companies regardless of where that data is physically held. The name tells you everything: it was passed to amend a previous Act, to confirm that U.S. service providers (e.g. cloud storage, social media, telecoms companies) must produce data under their control, even if it is stored on foreign servers.
So, your Azure instance in Frankfurt. Your Anthropic contract specifying “EU data residency.” Your AWS bucket in Ireland. Your Google Workspace tenant with the UK flag next to it in the settings… None of it matters, if the US decides it is justified in accessing your data.⁴
Retaliatory legal action
The EU has already taken action in response to the CLOUD Act being passed. In 2020, Europe’s highest court struck down the Privacy Shield - the agreement that was supposed to make it legal for European personal data to flow to US companies - precisely because US surveillance law is incompatible with European fundamental rights.⁵ While there is a replacement framework in place (the EU-US Data Privacy Framework) that allows data to be transferred to US companies, it exists at the pleasure of the current administration. And this month demonstrates how quickly that administration’s priorities can shift.
I had assumed - and I suspect most small business owners in the UK assume - that choosing a GDPR-compliant provider and selecting a European data residency option (choosing to have your data physically stored on servers in the UK or EU) meant my data was secure, and safe from overseas exploitation. It doesn’t. Not really.
What is most baffling here is that we are not talking about the Chinese government forcing Chinese AI companies to hand over UK customer data - a practice that is both unsurprising and widely criticised. We are talking about the United States. A country the UK had - until very recently - a “special relationship” with. While pretty much every country (the UK included) has similar laws in place that allow it to compel private companies to hand over customer data, at least as a UK citizen, living in the UK, I have access to the legal mechanisms to query, challenge, or vote against UK government actions. I have no such democratic power in the US.
The risk rises with AI and Agentic systems
While these risks have, to some extent, always existed, the current technological shift has dramatically increased their scale. With increasing AI usage and the introduction of agents and agentic systems into businesses, the volume of data a business processes or stores on US-owned infrastructure increases massively.
No longer are we talking about the files you deliberately save, or the emails you personally send. AI agents run continuously. They process your company’s data autonomously - internal documents, client records, communications, strategy. They access your systems, make decisions, take actions you might not review for hours. Every prompt sent and every response received flows through infrastructure that, if it is US-controlled, is in legal scope for government access.
Whether you’re a small business owner, a consultant, the leader of a professional services firm, whatever: think about what actually goes through your AI tools. Client briefs? Commercially sensitive proposals? Proprietary research? Internal communications? Much of this is the intellectual work that makes your business what it is. And with agents, it goes beyond one-off queries. It is everything, continuously.
Recent research found that 38% of UK employees share confidential data with AI platforms without employer approval.⁷ Sixty percent of organisations feel unable to even identify shadow AI use within their own teams.⁸ Introducing approved AI tools is meant to prevent confidential data from flowing to AI companies. But even those approved tools (given most are US-based) are fair game, should the US administration decide so.
Building sovereign AI - the solution?
In March 2026, the European Commission and a consortium of over 70 organisations launched EURO-3C - a €75 million Horizon Europe-funded project to build pan-European sovereign AI infrastructure integrating telecoms, edge computing, cloud, and AI, deploying across 70+ nodes in 13+ European countries.⁶ That’s a lot of jargon to say the EU is investing in EU-owned and operated AI infrastructure.
The consortium includes Telefónica, Vodafone, BT, Deutsche Telekom, Nokia, Ericsson, Orange. These companies are building this for one reason: the organisations they serve - governments, financial institutions, critical infrastructure operators, and defence contractors - understand the importance of protecting their data from extraterritorial interference.
This is great. But as usual, my question immediately becomes - but what about smaller businesses? If big business is now trying to build its way out of this US-dependency, why are so many small businesses (and I include myself here too) still overwhelmingly building into it?
I tried it, so you don’t have to
So, I’ve been down the rabbit hole, and have come out the other side. The next logical step, therefore, was to look for alternatives.
I was thinking about this from two angles. As a small business owner who currently relies heavily on US-based technology. And as someone building a product that needs to be GDPR-compliant by default (which now means something much more complicated than I had previously assumed!).
My question: What would it actually take for me to replace my current tech stack with UK-owned alternatives?
Here’s what I came up with:
File storage (to replace the likes of Google Drive)
Nextcloud is an open-source storage platform that lets you access, share and protect your files, calendars, contacts, and communications. You can host it on UK-based servers relatively easily. But the UX is a step down from Google Drive or SharePoint, and you’d lose the ease of integration that comes with the latter options.

Knowledge management (to replace Notion)

Basically, there is no UK-incorporated Notion equivalent at comparable quality. Outline and AppFlowy exist as open-source options. But the gap is real and it is wide.

Email (to replace Google Workspace + Superhuman)

Proton Mail and Tuta are functional, privacy-focused, and European. But you lose the ecosystem integration that makes Google Workspace or Microsoft 365 feel seamless. Plus Superhuman is quite literally my can’t-live-without tech tool and there’s just no competition here.

AI tools (to replace Claude/ChatGPT/Gemini/Perplexity and/or Copilot)

Mistral is French-incorporated and offers a capable alternative to OpenAI or Anthropic. I’m already a fan. It is also possible to download AI models and run them entirely on your own servers, so no data ever leaves your building. But “possible” and “practical for a small consultancy” are different things.

Collaborative editing (to replace Google Docs / Microsoft Word, etc.)

Collabora or OnlyOffice via Nextcloud both work - they allow you to create, edit and share documents with your team, with features like tracked changes. The polish gap compared to Google Docs is noticeable. Not unworkable, but noticeable.
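For the curious, the “run models entirely on your own servers” option above is less exotic than it sounds. As a minimal sketch - assuming a locally hosted Ollama instance serving a Mistral model on its default port (the endpoint and model name here are illustrative assumptions, not a recommendation) - querying a model that never sends data off your network looks something like this:

```python
import json
import urllib.request

# Default endpoint for a locally hosted Ollama instance (assumed setup).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "mistral") -> urllib.request.Request:
    """Build a POST request to the local model server - the prompt stays on your machine."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

def ask_local(prompt: str) -> str:
    """Send the prompt to the local model and return its text response."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["response"]
```

The point is not this specific stack - it’s that the entire round trip happens inside your own network boundary, so there is no US-controlled intermediary in legal scope.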
I spent a fair while researching these options, and my honest assessment is that while full sovereignty is technically possible, it’s practically punishing for small businesses. Every sovereign alternative involves a step down in UX polish. But the integration loss is where it really hurts - the value of Google or Microsoft lies in how the products connect, and sovereign alternatives do not replicate that ecosystem. Even “managed” alternatives require more technical engagement than most small teams can absorb (i.e. while I’m a nerd and might quite enjoy setting up hosting and integrations myself, most people probably wouldn’t say the same). And perhaps most significantly, the switching cost is as much cognitive as it is technical. You are asking people to rethink workflows they have used for a decade.
This difficulty shows how deeply dependent we have become on US infrastructure without most of us noticing. That dependency isn’t accidental; it is the product of more than two decades of platform strategy that made convenience inseparable from control (companies want their products to be ‘sticky’).
It should not be this hard to find UK-based companies that provide best-in-class business software. I want to refuse to believe that. But in the process of *actually* looking for *actual* alternatives to the tools I *actually* use, I’m struggling to make that case.
So why is this? The obvious answer is that Silicon Valley provides the funding and talent density to build software that just works - well-funded means well-designed, no startup-grade bugs or clunky UX to tolerate. That funding gap has compounded over decades and the result is that the US basically owns the infrastructure layer of global business productivity.
If the UK government is serious about data sovereignty (which it seems to be saying it is, at least in the context of AI models and the National Data Library) then model development alone will not be enough. Sovereign AI running on sovereign compute still needs to connect to something. If the productivity software it connects to is all US-incorporated, you have moved the dependency one layer sideways, not removed it. What is needed is investment in sovereign business software across every category where the US currently dominates: file storage, collaboration, email, knowledge management, project management. The full stack. Otherwise “sovereign AI” is a headline, and the data still flows through Washington.
What sovereignty actually means
If you ask cloud providers what ‘data sovereignty’ means, they’ll talk about “data residency” - basically, choose a UK or EU server location and the problem goes away. But real sovereignty is about jurisdiction, not geography.
It means your AI infrastructure runs in your jurisdiction, on infrastructure you control (or at minimum on infrastructure controlled by a company incorporated in your jurisdiction).
It means no prompts leave your network without your knowledge and consent.
It means you can choose which AI models to use - and swap them out when something better comes along - without being locked into a single provider whose servers sit under someone else’s legal system.
It means nobody can change the terms on you, revoke your access, or make your tools subject to someone else’s government overnight.
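In practice, that swappability is an architecture decision as much as a procurement one. Here is a minimal sketch of what “not locked in” can look like in code - the provider names, URLs, and jurisdiction tags are illustrative assumptions, not endorsements:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelProvider:
    name: str
    base_url: str
    jurisdiction: str  # where the operating company is incorporated (or "on-premises")

# Hypothetical registry: swap entries here without touching any calling code.
PROVIDERS = {
    "local": ModelProvider("local-model", "http://localhost:11434", "on-premises"),
    "mistral": ModelProvider("mistral", "https://api.mistral.ai", "FR"),
}

def pick_provider(require_sovereign: bool) -> ModelProvider:
    """Prefer infrastructure under your own control when sovereignty is required."""
    return PROVIDERS["local"] if require_sovereign else PROVIDERS["mistral"]
```

The pattern is deliberately boring: keep the provider behind one interface, keep jurisdiction as explicit metadata, and changing models becomes a one-line config change rather than a migration project.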
This month clarified something important: “choose the right American company” might have been fine up until now. But long-term, this presents a huge risk for all UK businesses.
"This time next year, we’ll be millionaires!"
There’s a huge gap in provision here - and a major opportunity for whoever fills it. While enterprises might be able to move to their own dedicated servers (Helix’s ‘sovereign server’ costs £175,000, for example, and requires a data centre), or benefit from EURO-3C’s national infrastructure, what about smaller organisations? Those without the budget or in-house capabilities to run their own bespoke infrastructure? Don’t we deserve data sovereignty too?
This is something I have been chipping away at for a while - in developing BrandScribe, and through work I am not yet ready to talk about publicly. The argument I keep arriving at is that data sovereignty affects everyone, whether or not they have an enterprise budget. So it needs an everyone solution.
That means architecture that works at the individual and small business level - where your data does not require a government contract negotiation or a seven-figure-a-year server to keep private, and where you can run AI on your data, with your models, in your jurisdiction, without a US company (or anyone else, really) as an intermediary. Architecture where ownership follows you, portability is real, the terms under which you operate are yours. Where no one can pull the rug out based on what a US administration decided this month.
I said earlier that the sovereign productivity suite the UK needs does not exist yet. I think building one is one of the biggest untapped opportunities in UK tech right now. A genuinely competitive, UK-incorporated alternative - built with sovereignty as the default architecture, not bolted on as a premium feature - would address a problem that at some point will become impossible to ignore. The institutional world is already spending €75 million on sovereign infrastructure at the enterprise level. The demand at the small business level is there. Nobody is meeting it.
I did not arrive at this by reading some web3 thread on X. I arrived at it by trying to solve a problem myself, failing, and realising the tools I needed did not exist. That is usually a good sign that something worth building is missing.
Data sovereignty has been part of the web3 space for years now. But it has always been an infrastructure argument - and the events of this month have made it one we need to hear, now.
Footnotes
¹ CNBC (2026). Anthropic officially told by DOD that it’s a supply chain risk. (https://www.cnbc.com/2026/03/05/anthropic-pentagon-ai-claude-iran.html)
² TechCrunch (2026). Anthropic CEO Dario Amodei calls OpenAI’s messaging around military deal ‘straight up lies.’ (https://techcrunch.com/2026/03/04/anthropic-ceo-dario-amodei-calls-openais-messaging-around-military-deal-straight-up-lies-report-says/)
³ Center for American Progress (2026). The Department of Defense’s Conflict With Anthropic and Deal With OpenAI. (https://www.americanprogress.org/article/the-department-of-defenses-conflict-with-anthropic-and-deal-with-openai-are-a-call-for-congress-to-act/)
⁴ CLOUD Act, 18 U.S.C. § 2713 (2018). (https://en.wikipedia.org/wiki/CLOUD_Act)
⁵ Court of Justice of the European Union, Data Protection Commissioner v Facebook Ireland Limited and Maximillian Schrems (C-311/18), July 2020.
⁶ Telefónica (2026). Europe takes a decisive step towards digital sovereignty with the launch of EURO-3C. (https://www.telefonica.com/en/communication-room/europe-takes-a-decisive-step-towards-digital-sovereignty-with-the-launch-of-euro-3c/)
⁷ CybSafe/National Crime Agency (2024). Survey of 7,000 respondents on AI data sharing practices.
⁸ Cisco (2025). Shadow AI identification and organisational AI governance survey.