Towards An Open, Resilient, Non-Aligned AI

16/02/2026
The original version of this article was published in the Grand Continent journal and is available at this link.

Starting from the common ground: the case for a global stack

As President Macron said recently, Europe is at an existential moment. In fact, the whole world faces an existential moment. How to direct – or rein in where needed – AI development is one of the most important questions we face today. 

In the run-up to the AI Impact Summit in Delhi, states, industry, civil society and other stakeholders have an opportunity to build a new, non-aligned movement around the core tenets of open, resilient and collaborative AI. Drawing inspiration from the Non-Aligned Movement of countries seeking to remain neutral in the Cold War through the 1950s and 60s, this initiative seeks to provide an answer to the core question of AI sovereignty – who controls and has agency over AI? 

Before asking where, when and how countries can exercise control and agency to build and deliver AI that truly furthers human and public interests, we must first understand what the AI stack is.

The AI stack is the hierarchy of components required to build, run and scale AI applications.

It is composed of:

— the infrastructure layer: the hardware (e.g. GPUs) and the cloud platforms that provide access to it (e.g. services like AWS),

— the model layer: the models themselves, such as GPT-4 or Claude, and the tools to serve them,

— the data layer: the information used to train and run the models, and finally

— the application layer: where the model meets its audience, whether public or professional (a minimal code sketch of these layers follows below).
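To make the hierarchy concrete, here is a minimal sketch: a toy data model of the four layers, not any real system. The layer names come from the list above; the fields and example values are our own assumptions.

```python
# Illustrative toy model of the four-layer AI stack described above.
# The layer names come from the article; everything else is assumed.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str          # "infrastructure", "model", "data" or "application"
    is_open: bool      # freely available to use, study, modify and share?
    is_domestic: bool  # built or governed at home?

# A hypothetical mid-sized country's stack: rented foreign compute, an
# open-weight foreign model, locally governed data, a home-built app.
STACK = [
    Layer("infrastructure", is_open=False, is_domestic=False),
    Layer("model",          is_open=True,  is_domestic=False),
    Layer("data",           is_open=True,  is_domestic=True),
    Layer("application",    is_open=False, is_domestic=True),
]
```

The point of the sketch is that each layer can be sourced differently; the question the rest of this article asks is which combinations leave a country resilient rather than dependent.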

Controlling all these layers is, for almost all countries, impossible: it is prohibitively expensive, and access to technology, expertise and energy remains unevenly distributed. A country might want its own chips, data centers, models and applications, but in practice costs and economies of scale make this neither practical nor desirable.

Yet countries and their people need and want AI that is highly specific to their history and culture, and finely tuned to public interest needs and outcomes, which are inherently contextual.

How can we achieve this? 

First, we must recognize that open stacks are smarter, more resilient stacks. Openness in artificial intelligence (here defined as AI models that are freely available to use, study, modify and share, combining elements of open science, open innovation, open data and open source) enables this third way. Open stacks allow middle powers, indeed any interested power, to capture the social, economic and political value of AI.

Second, it is clear that no country apart from China and the US can aspire to a fully sovereign stack, where sovereignty is defined as agency and control over all aspects of the stack, that is, over every stage of producing key AI applications.

In fact, nor should they want one. Why fight this reality? Embrace it. The aim should not be a national, sovereign stack, but a global, open, resilient, non-aligned stack, adaptable to the unique needs of each country.

When it comes to AI, France, Germany, Nigeria, India and Morocco have more in common than what divides them. The middle powers have more in common with one another than any of them has with the US or China.

And that is the common ground we must start from.

The other definition of sovereignty

Most discussions on AI sovereignty get it wrong.

Think of sovereignty as an axis running from fragility to resilience. The more open the stack, the more resilient it is. The less open it is, the more it depends on the choices of a few actors, and the more fragile any country's position is vis-à-vis this crucial technology. Sovereignty does not mean full control or ownership but resilience: sharing the pieces that can be shared, and owning the pieces that can be owned.
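To make the axis concrete, here is a toy scoring rule, entirely our own construction rather than a metric proposed in the article: treat a layer as resilient if it is either open (and hence substitutable) or domestically owned, and fragile otherwise.

```python
# Toy fragility-to-resilience score (our own construction, not a real
# metric): the share of stack layers that are open or domestically owned.
def resilience_score(layers: list[tuple[str, bool, bool]]) -> float:
    resilient = sum(1 for _, is_open, is_domestic in layers
                    if is_open or is_domestic)
    return resilient / len(layers)

# (name, is_open, is_domestic) for a hypothetical stack
stack = [
    ("infrastructure", False, False),   # the one fragile dependency
    ("model",          True,  False),
    ("data",           True,  True),
    ("application",    False, True),
]
print(resilience_score(stack))  # 0.75; opening any layer raises the score
```

On this reading, opening a layer moves a country along the axis without requiring it to own that layer outright.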

These discussions get it wrong because they overfocus on the 'bricks and mortar' side of AI infrastructure, such as building data centers. Data centers are a necessary but not sufficient condition for public interest AI to thrive. What matters is how they are used, who uses them, and the availability and accessibility of high-quality data to power them. For example, if we are ever to find a cure for breast cancer (beyond increasing the likelihood of detection and prevention, where progress has been made), models will need privacy-preserving access to patient treatment outcome data and likely also to genetic data. For patients to be comfortable providing that information, they will need to trust that the system will respect their rights and not use this data against them in their daily lives, be it in insurance rates, mortgage applications or their workplace security. We are nowhere near this stage, where data centers are powered by locally relevant, trusted data.
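One concrete family of techniques behind the kind of privacy-preserving access imagined above is differential privacy; the article does not name a specific method, so the sketch below is purely illustrative, with hypothetical data: an aggregate statistic is published with calibrated noise so that no single patient's record can be inferred from it.

```python
# Minimal differential-privacy sketch with hypothetical data: publish a
# noisy aggregate so that no individual patient's record can be inferred.
import numpy as np

rng = np.random.default_rng(0)

def dp_count(outcomes: list[bool], epsilon: float) -> float:
    """Noisy count of positive treatment outcomes.

    A counting query has sensitivity 1 (adding or removing one patient
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon yields epsilon-differential privacy for this single query.
    """
    return sum(outcomes) + rng.laplace(loc=0.0, scale=1.0 / epsilon)

outcomes = [True, False, True, True, False]  # hypothetical records
print(dp_count(outcomes, epsilon=1.0))       # noisy value near 3
```

The technical piece is the easier half; as the paragraph argues, the governance that makes patients willing to contribute data at all is the harder one.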

How does this relate to sovereignty? The impulse behind the quest for sovereignty in AI is the right one. The push is driven by governments' and the private sector's shared realization that dependence on major foreign technology actors is a critical vulnerability, but it is not limited to that. The public is asking for it as well, increasingly demanding choice and agency when it comes to the technologies shaping everyday life and work. To respond to these needs, AI sovereignty must go beyond national security and competitiveness, prioritizing the public's demand for open, privacy-respecting technologies that do not lock them in but rather empower them as users. This would give everyday users the same controls and agency over data about them that technology companies regularly provide to enterprise users.

Critics will say that sovereignty is a barrier to innovation, equivalent to erecting walls that make technology harder to use. Defining sovereignty as the opposite of innovation and flexibility is an attractive sleight of hand. Buying proprietary products and getting locked into vendor agreements you cannot control is not innovative. It is monopolistic. And monopolies kill innovation.

Making the right choices

AI models are commodities 

For comparable capabilities, AI models are well on the way to becoming commoditised. Regardless of who produces them, whether OpenAI, Anthropic or any of the other players in the field, a certain uniform quality has emerged over the past 12 months. There are important differences at the frontier, but for most specific, contextual public interest applications, the model matters less than the application layer, where the products that businesses and the public use are developed. The value lies in those products and services. To succeed, to deliver demonstrable improvements in people's lives, all countries will need access both to locally relevant, accessible and high-quality national and international datasets, and to off-the-shelf solutions across a shared, global, open stack.
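Read in engineering terms, commoditisation means the model becomes a swappable component: if the application talks to it through a thin, provider-agnostic interface, changing vendors, or moving to a locally served open-weight model, is a configuration change rather than a rewrite. A minimal sketch with entirely hypothetical adapters (no real vendor SDK is shown):

```python
# Sketch of a provider-agnostic model interface. Both adapters are
# hypothetical stand-ins; in practice each would wrap a vendor SDK or a
# locally served open-weight model behind the same method.
from typing import Protocol

class ModelClient(Protocol):
    def complete(self, prompt: str) -> str: ...

class HostedModel:
    """Hypothetical adapter for a proprietary hosted model."""
    def complete(self, prompt: str) -> str:
        return f"[hosted model response to: {prompt}]"

class LocalOpenModel:
    """Hypothetical adapter for an open-weight model served locally."""
    def complete(self, prompt: str) -> str:
        return f"[local model response to: {prompt}]"

def triage_request(model: ModelClient, message: str) -> str:
    # The application layer, where the value lies, is unchanged
    # whichever model is plugged in.
    return model.complete(f"Classify the urgency of this report: {message}")

print(triage_request(HostedModel(), "water outage in district 4"))
print(triage_request(LocalOpenModel(), "water outage in district 4"))
```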

Beyond computing power: data, open source, and smaller models

Compute is a national and international obsession due to so-called scaling laws, which posit that the bigger the model, the more compute and the more data, the more powerful the result will be. A debate rages as to whether scaling laws still apply [1]. Regardless, these laws mostly matter for frontier applications, at the bleeding edge of AI development. For most contextual applications, what matters is a certain amount of compute, of course, but crucially investment in data and in open source [2].
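For reference, one widely cited formulation of these scaling laws (the compute-optimal "Chinchilla" law of Hoffmann et al., 2022, which the article does not itself cite) models pre-training loss $L$ as a function of parameter count $N$ and training tokens $D$: $L(N, D) = E + A/N^{\alpha} + B/D^{\beta}$, where $E$ is an irreducible loss and $A$, $B$, $\alpha$, $\beta$ are fitted constants. Loss falls as either $N$ or $D$ grows, which is the "bigger is better" intuition; note that the formula says nothing about the contextual, application-layer qualities this article emphasizes.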

Innovation in data 

Innovation in compute has been staggering: GPU price-performance has doubled roughly every two years over the past decade. In contrast, innovation in data (how to access, make available and use data, including personal data, in ways that people find safe and trustworthy) has largely plateaued and needs investment. Experts are debating collaborative approaches to data sharing, be they data trusts or other forms of data stewardship [3], but few if any have reached the scale needed to make an impact. Investment is critically needed, both on the technical side (how to separate the data from the model) and on the governance side (how to ensure group data sharing is trusted by those sharing it).
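A doubling every two years compounds quickly: over ten years that is five doublings, $2^{10/2} = 2^{5} = 32$, a roughly thirty-two-fold improvement in price-performance over the decade. That is the scale of progress the data side has lacked.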

Investment in open source 

The same is true of open source. For all the talk of how critical open source is to the AI stack and ecosystem, the sector is massively under-invested. As with open source software, the top tier of open source AI development will be funded by its largest corporate users, leaving a bottom tier of critical dependencies supported almost entirely by volunteers. A few groups stand out, such as ROOST [4], a non-profit providing open trust and safety tooling, but these remain the exception to the rule. Investment here is key.

Going through the stack

For middle powers, sovereignty is not about recreating the whole AI value chain at home. It is about working together and identifying which of the building blocks that make up the AI stack (the hardware, the infrastructure, the model, the applications) should and can be open [5], which need international cooperation and how, which actually need to be developed at home, and what is needed for that domestic development to succeed.

States should embrace their market-shaping role, proactively identifying the parts of the AI stack that have to be and realistically can be sovereign, those where it is acceptable or perhaps temporarily inevitable that they are not (e.g. where the risk of relying on foreign suppliers for digital services is smaller than the cost of reducing the dependence), and those parts of the stack that should be open-sourced. In some cases, the state can and should exercise sovereignty by opening the market, as the UK did with open data and open banking [6], not by closing it. The move should be to commodify the sovereign stack so that anyone can access the tools and data to build new models.

This is the task at hand at the forthcoming AI Impact Summit.

The geopolitics of AI: towards a new non-alignment

For this to happen, the AI field needs three things. The good news is we already have two of these.

First, we need platforms to foster those international exchanges, platforms such as Current AI [7], which was launched a year ago at the AI Action Summit in Paris and brings together the developer community, private sector, government and philanthropy to build a collective, collaborative and independent vision for AI. These platforms need resources and talent at a commensurate scale in order to build technology that serves the public interest.

Second, we need the tools and investment in open source and in data innovation to power this realignment. Again, the good news is that many organizations originally focused on open source software already exist and can be supported.

Third, countries will need to identify new forms of international cooperation that, rather than fueling rivalry and competition, help medium-sized powers unite to build a resilient, open, and non-aligned stack.

Our geopolitical use of the term alignment differs from "AI alignment" as used by the AI safety movement, where an "aligned" AI is technology that does what its designers intend it to do. We believe that truly contextual AI, producing tangible improvements in people's lives, can only be achieved by breaking out of the monopolistic orbit of the technology giants and by abandoning the myth of a completely national sovereign stack.

When it comes to AI, collaboration and geopolitical non-alignment are necessary to produce AI that serves the public interest, with results and purposes that benefit humanity.

This is about showing, not telling, that AI can be a new tool to bring countries together rather than pull them apart. This is where the AI Summit series has a crucial role to play: passing the baton from one year to the next, each summit building on the last and contributing to the coherence, rather than the fragmentation, of global AI governance and infrastructure.

For AI, this is the answer to Mark Carney’s acclaimed call to middle powers in Davos. 

This is not about refusing to choose sides, but about creating choice and agency so that many mutually beneficial partnerships can flourish. Today's third way in AI takes traditional non-alignment further, building multiple avenues and connections toward an open, resilient, and non-aligned AI.

Notes

  1. Sara Hooker, "On The Slow Death Of Scaling", December 2025.
  2. "Open Source: How Middle Powers Can Build Influence in the Age of AI".
  3. Data Stewards Network.
  4. See the ROOST website.
  5. "Towards a Framework for Openness in Foundation Models: Proceedings from the Columbia Convening on Openness in Artificial Intelligence", arXiv:2405.15802.
  6. "Open banking: setting a standard and enabling innovation", The Open Data Institute, 26 October 2016.
  7. See the Current AI website.
Cite this article (APA)

Martin Tisné, Towards An Open, Resilient, Non-Aligned AI, Feb 2026.
