By Ernesto Ángeles. SPR Informa. Mexican Press Agency.
A few days ago, Reuters news agency revealed an internal report from Meta, parent company of Facebook, showing that its AI chatbots are allowed to flirt, hold provocative conversations, and even display racist attitudes in their interactions with children and teenagers.
And, as is typical for tech companies—especially Facebook—Zuckerberg’s firm half-admitted its mistake and promised that it is “working on it,” self-regulating and acting as judge and jury. It can do so thanks to the geopolitical power of its country of origin, as well as the incompetence and complicity of governments and institutions that either fail to regulate such companies at all or regulate them poorly.
This is happening worldwide with child-protection regulations in digital environments, where restrictions and safeguards are being erected to prevent children from accessing certain platforms, websites, or content. Yet these measures are arbitrary, easy to circumvent, or shift responsibility onto parents and consumers. In this context, it is striking that these regulations admit the toxicity and negative impact of social media platforms, but only attempt to raise the minimum age of access to such digital chaos.
It is also puzzling that such regulations are only now being introduced, as if social media platforms had not existed for more than 15 years, or as if there weren’t a historical record of numerous studies warning of the dangers of these technological products. Therefore, it would come as no surprise if the risks and problems that AI, like Facebook, poses to children are not addressed politically until ten years from now—if we’re lucky.
We are not only living through and suffering the consequences of years of weak or nonexistent regulation of digital companies; we must now prepare for even greater risks, many stemming from our chronic dependence on harmful products like social networks. In the past, the issues were loss of attention, misuse of personal data, digital manipulation, disinformation, and other scourges. Now we also face problems such as the weakening of critical thinking skills, the historical and factual manipulation enabled by AI, and the possibility of directly influencing and manipulating people’s emotions—especially those of new generations, who will enter this reality with fewer tools of resistance than those of us who grew up without the omnipresent influence of AI.
The great paradox in this landscape is that while cosmetic reforms are being discussed to create the appearance of protecting children, the very companies that design the algorithms and build digital environments continue to amass data and perfect mechanisms of social control.
Meta, Google, OpenAI, or Microsoft don’t just sell technology; they also sell a worldview where the market defines what is permitted and what is prohibited, pushing the public interest into the background. That’s why each crisis, each leak, and each public complaint ends up being just another chapter in the narrative that “something is being done,” when in fact the problem is simply being kicked down the road.
The situation becomes even more complex considering that Facebook’s AI is just one among many. One must ask what other companies are doing to make their products safer—if anything at all—since it seems likely that in a few years AI will spread much like “smart” devices once did. The difference is that now the promotion of values, products, perspectives, ideas, and facts will be more constant, harmful, imperceptible, adaptable, and automatic. We must start questioning what future awaits us, and what future we want, especially since Google, with Gemini, and OpenAI, with ChatGPT, have already announced that their models will promote products in the middle of conversations.
This bleak scenario is compounded by the structural weakness of the very institutions that should be regulating the field. In many countries, the agencies tasked with protecting children or data privacy lack the budget, authority, independence, and technical expertise to confront corporations that operate globally. Regulatory capture is evident: officials move from regulating these companies to working within them, legislators parrot industry talking points, and governments prefer not to strain relations with powerful nations that export these technological models as part of their soft power.
What is most alarming is that while companies get tangled up in the rhetoric of “responsible innovation,” they are already laying the groundwork for a future in which AI will be the main mediator of our daily lives. It’s not just about a bot flirting with a child or normalizing racist attitudes—it’s about systems that will be able to shape political opinions, consumer preferences, cultural identities, and even emotions in real time. We are not simply facing technical glitches but rather a silent redesign of human socialization.
For all these reasons, the discussion cannot remain limited to the minimum age for accessing social media. Instead, it must focus on how we want to protect the autonomy, privacy, and critical thinking skills of new generations in an ecosystem where AI is expanding unchecked. Otherwise, in a few years we may find ourselves with a population shaped from childhood by private algorithms, incapable of questioning the narratives imposed on it, and subject to a digital power that no longer responds to democratic values, but to commercial and geopolitical interests.
Related: The Mexican ChatGPT: A Step Toward Technological Sovereignty or Just More Dependence?