The vertiginous evolution of artificial intelligence continues to pose dilemmas that transform our understanding of privacy and autonomy. In light of the most recent technological research, the humanist legacy of Santiago Ramón y Cajal acquires renewed value. To grasp the true magnitude of this scenario, it is useful to turn to the concept of “argocapitalism,” coined in 2020 by the academic and distinguished Cajalian Don Enrique López González in his induction speech to the Royal Academy of Economic and Financial Sciences. This term engages directly with the established theory of “surveillance capitalism” by sociologist Shoshana Zuboff.

Argocapitalism and the erosion of “practical obscurity”

López González defines argocapitalism as the new economic order of digitalisation, grounded in a mythological duality: on one hand, Argos Panoptes (the all-seeing giant), representing continuous surveillance and data extraction; on the other, the ship Argo (whose prow possessed the gift of prophecy), symbolising the predictive capacity of algorithms. In this new regime, data stands as an essential factor of production, under the maxim: D=C (Data equals Capital).

This panoptic facet has materialised today with great sophistication. Until recently, the immensity of the internet offered a refuge of privacy. However, as analyst Enrique Dans expounds in his reflection on when anonymity was “enough”, we have been living within the convenient fiction that using a pseudonym was equivalent to being safe.

The rigorous study published in February 2026 by Simon Lermen and his team (Large-scale online deanonymization with LLMs) demonstrates that Large Language Models (LLMs) have drastically reduced the costs and barriers to deanonymising users. By analysing unstructured text comments on forums such as Reddit or Hacker News, an autonomous AI agent can infer identity signals and reveal who we are, achieving up to 68% identity recovery with 90% precision. While this method is neither infallible nor universal—profiles with low activity resist better—the attack is highly scalable in both known databases (“closed-world”) and the open internet (“open-world”). AI has severely eroded the “practical obscurity” that once protected our intimacy on the web.
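The closed-world variant of this attack can be illustrated with a toy sketch. Everything below is invented for illustration and is not taken from Lermen et al.: in the real study, an LLM agent first infers identity signals (location, occupation, age range, and so on) from a user's public comments; here those inferred attributes are hard-coded, and the matching step simply ranks candidate profiles in a known database by attribute overlap.

```python
# Toy sketch of "closed-world" deanonymization matching.
# All names, attributes, and data are hypothetical.

def match_candidates(inferred, candidates):
    """Rank candidate profiles by overlap with LLM-inferred attributes."""
    scores = []
    for name, profile in candidates.items():
        overlap = sum(1 for k, v in inferred.items() if profile.get(k) == v)
        scores.append((name, overlap / len(inferred)))  # fraction of attributes matched
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Attributes an LLM agent might infer from a user's forum comments:
inferred = {"city": "Valencia", "occupation": "nurse", "age_range": "30-39"}

# A small "closed-world" database of candidate identities:
candidates = {
    "user_a": {"city": "Valencia", "occupation": "nurse", "age_range": "30-39"},
    "user_b": {"city": "Valencia", "occupation": "teacher", "age_range": "30-39"},
    "user_c": {"city": "Sevilla", "occupation": "nurse", "age_range": "50-59"},
}

ranking = match_candidates(inferred, candidates)
print(ranking[0])  # best-scoring candidate: ('user_a', 1.0)
```

The point of the sketch is scale: once an LLM automates the attribute-inference step, this matching loop can be run cheaply over millions of profiles, which is precisely why "practical obscurity" erodes.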

From stochastic parrots to World Models: the prophetic facet

This profiling capability intersects with a paradigm shift in AI architecture. Current commercial systems operate predominantly as “stochastic parrots,” limited to predicting the next word based on statistical patterns without genuine comprehension. However, as Dans warns in his analysis on the world from within, we are witnessing an emerging trend toward World Models.

This new architecture reintroduces context, causality, and time into the machine, aiming to predict consequences in dynamic environments. As this technology matures, the prophetic facet (the ship Argo) described by López González will reach a new level.

Conceptual diagram of the evolution of Artificial Intelligence: from predictive text models ("Stochastic Parrots") toward causal and physical simulation architectures of the environment ("World Models").

It is essential to balance the analysis: World Models are not inherently harmful. They offer extraordinary promises and tangible benefits for humanity in fields such as advanced robotics, medical simulation for complex surgeries, and safe autonomous driving. The risk arises not from the architecture itself but from its unregulated application in the realm of commercial persuasion and surveillance.

To provide clarity, the following table summarises this technical evolution:

Feature | Stochastic Parrots (current LLMs) | World Models
Core operation | Predicting the next word from statistical patterns | Simulating consequences in dynamic environments, with context and causality
Understanding of time | Static; lacking deep temporal anchoring | Dynamic; modelling how environments change over time
Primary application | Text generation, translation, summaries, code | Robotics, autonomous driving, physical simulations, action prediction
Associated risk | Cheap disinformation, hallucinations, reproduced biases | Highly precise predictive simulations of human behaviour
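The contrast in the table can be made concrete with a deliberately tiny sketch (entirely hypothetical, not from any of the cited works): a "stochastic parrot" predicts the most frequent next word from bigram counts, with no notion of state, while a minimal "world model" tracks an explicit environment state and advances it with causal transition rules.

```python
from collections import Counter, defaultdict

def parrot_next(corpus, word):
    """Predict the next word purely from bigram frequencies (no state, no causality)."""
    follows = defaultdict(Counter)
    tokens = corpus.split()
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1
    return follows[word].most_common(1)[0][0]

def world_model_step(state, action):
    """Advance an explicit environment state by one causal transition rule."""
    rules = {
        ("door closed", "push"): "door open",
        ("door open", "push"): "door open",
    }
    return rules.get((state, action), state)  # unknown actions leave the state unchanged

corpus = "the door is closed the door is open the door is closed"
print(parrot_next(corpus, "is"))                # most frequent continuation: 'closed'
print(world_model_step("door closed", "push"))  # predicted consequence: 'door open'
```

The parrot answers from frequency alone, regardless of what is actually true of the door right now; the world model, however crude, answers by simulating what an action does to a state over time. That is the architectural shift, in miniature, that gives the prophetic facet its new power.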

The business of “behaviourmatics” and regulatory responses

To protect our minds from commercial intrusions, regulation must attack the structural root. Recently, the European Commission, relying on the Digital Services Act (DSA), has intervened to demand that platforms such as TikTok mitigate addictive designs like the infinite scroll, treating it as a systemic risk. This is a concrete and valuable institutional step. Nevertheless, Dans argues that banning the infinite scroll will not save anyone on its own: the central problem is not just the interface—it is the business model.

The real problem is the argocapitalist model centred on behavioural advertising. López González (2020) calls this mechanism “behaviourmatics” (conductimática), a discipline that uses hyper-nudges (highly personalised algorithmic micro-pushes) to reconfigure our digital choice environments and capitalise on our impulses.

For regulatory measures to progress from a “local anaesthetic” to a real cure, viable proposals are needed. The legislative debate must advance toward taxes on mass data extraction, strict limits on hyper-targeted advertising in favour of subscription or contextual advertising models, and the promotion of open-source neurotechnologies. In this regard, the full entry into force of the EU Artificial Intelligence Act (EU AI Act) in August 2026, together with the Spanish Senate’s recent approval of guidelines for the use of artificial intelligence in the Upper House, represent fundamental governance frameworks for redirecting the impact of algorithms.

Cajal, neuroplasticity, and the advance of neurorights

Don Santiago Ramón y Cajal taught us that the human being has the capacity to be the sculptor of their own brain. This assertion is not a simple metaphor but a biological reality grounded in neuroplasticity. Every time we yield to the intermittent reward of an application, our neuronal connections are reinforced in that direction. Cajal warned that sculpting the brain demands a rigorous “mental hygiene” through study, attention, and sustained effort. Resisting “behaviourmatics” and “hyper-nudges” requires this same ascesis: a conscious effort to forge neural pathways that are not dictated by corporate design.

If algorithms can already deanonymise us on a massive scale using only our textual traces, the protection of our direct brain data is an urgent priority. Fortunately, the defence of neurorights is gaining legislative traction. In Spain, Cantabria is promoting the first European law to protect neurorights and brain data: its preliminary draft, which reached the regional parliament in September 2025 and is currently under review, seeks to protect brain data and require explicit consent. In parallel, in the United States the Management of Individuals’ Neural Data Act (the “MIND Act”) was introduced in September 2025, an early-stage bill urging the Federal Trade Commission (FTC) to study a regulatory framework against the exploitation of neural data. Together with UNESCO’s ethical framework (November 2025) and the recent forums of the Supreme Court in Mexico, it is clear that mental privacy is consolidating its place on the geopolitical agenda.

Conclusion

The current digital ecosystem—governed by the logics of argocapitalism and empowered by the transition toward “World Models”—presents formidable challenges. However, the future is not predetermined. The advance of rights-protective legislation and the juridical consolidation of neurorights grant us the necessary tools to govern technology.

The true defence of our cognitive sovereignty requires combining the individual effort of Cajalian “mental hygiene” with collective civic action. We must demand that technical progress always be subordinated to human dignity.

Selected bibliography


López González, E. (2020). Induction Speech to the Royal Academy of Economic and Financial Sciences — Docs.Santiagoramonycajal