By Ernesto Ángeles. SPR Informa. Mexican Press Agency.
On July 30, 2025, Mexico’s Business Coordinating Council (CCE), U.S. semiconductor giant Nvidia, and Mexican authorities led by Minister of Economy Marcelo Ebrard launched the initiative “Mexico AI: Accelerated Investment.” The project aims to transform Mexico into a regional AI hub by building a network of data centers and supercomputers and, most importantly, developing a large-scale Spanish-language model.
According to organizers, Nvidia will provide access to its GPUs and collaborate with universities such as UNAM (Universidad Nacional Autónoma de México) to build supercomputers. Meanwhile, the CCE will promote AI adoption in productive sectors and support the creation of over 200 AI startups. The project also aims to train the model on local data in Spanish and regional dialects for use in education, healthcare, and public administration.
At this point, you may be asking: what for? How does this benefit Mexico? And are there risks involved?
To begin with, it’s important to consider that commercial AI models such as GPT are typically trained on data that underrepresents Latin American voices, and they often reproduce cultural biases. A regional model could therefore provide more contextually relevant responses and, at best, curb data extraction by foreign corporations. In line with this, Mexico is also part of the Latam-GPT project, spearheaded by Chile, to create an open-source model tailored to Latin American cultures and languages.
These efforts —both “Mexico AI” and Latam-GPT— could mark a shift toward digital representation more aligned with local realities, as interpreted by and for Latin American and Mexican communities. They could also be seen as a step toward digital sovereignty, translating local realities into representative digital tools and helping preserve the ethnocultural heritage of indigenous groups.
While this initiative may seem like a strong move toward technological sovereignty —as Ebrard himself argued— the technical specifics matter. Even if a country develops its own AI model, if that model relies on infrastructure and technology controlled by foreign actors, the resulting sovereignty would be partial at best, still subject to the capacities and decisions of corporations or other states.
Let’s not forget that Mexico and other countries depend on hardware and foundational models produced by U.S. companies. Without investment in domestic chip design and manufacturing capacity, dependency will persist —unless national projects are aligned to support and integrate with this “Mexican GPT,” using available resources wisely to reduce reliance on advanced foreign tech.
The “Mexico AI” initiative is being carried out in partnership with Nvidia —a massively powerful and monopolistic company that sells chips, cloud computing, and software. So, despite its CEO, Jensen Huang, stating that “every country needs sovereign AI,” the real focus should be on the fine print and technical implications. Nvidia’s profits may not be purely financial; they may also come in the form of data. Not just any data, but meaningful, structured, and curated datasets that are not publicly available, saving Nvidia millions in data-collection costs and enriching its proprietary models, thereby reinforcing its monopolistic position.
In this context, a more sovereign strategy in the short term would involve diversifying providers (including open European projects and Chinese technologies), and backing regional initiatives such as Latam-GPT.
Another crucial issue is the model’s transparency. There has been no mention of whether the Mexican GPT will be open-source. A closed-source national AI model would hinder international cooperation, create a centralized and opaque asset, and benefit only a select few.
Moreover, since the model is expected to include indigenous languages, it will be essential to obtain informed consent from the communities involved and ensure they benefit from any resulting applications. Otherwise, this could perpetuate data colonialism. Therefore, it’s vital to establish data governance bodies and digital education efforts involving indigenous groups, academics, and civil society.
In addition, supercomputers and data centers consume massive amounts of energy and water, which makes it imperative to assess the environmental impact and working conditions associated with building and maintaining this infrastructure.
In conclusion, this initiative represents a first step toward breaking free from a technological dependence that hampers national innovation. However, there’s still a long way to go, even with a “Mexican GPT.” A truly sovereign AI requires that the State control the infrastructure, data, and development of its models, not simply trade those assets for chips, cloud services, and so-called “national” models from powerful tech corporations. Otherwise, we risk replicating technological colonialism, just under new names and appearances.