Chronos Disrupted: How Artificial Intelligence Is Rewriting Human Temporality

by Elisabetta Pepe - 19 September 2025

Rome, Italy

DOI: 10.48256/TDM2025_00007

Artificial Intelligence is not merely automating tasks or amplifying intelligence — it is transforming our experience of time. From real-time predictive policing to anticipatory marketing algorithms, AI collapses the distinction between present and future, creating a regime of computational preemption. In this essay, I explore how AI reshapes the grammar of temporality: accelerating decision loops, eroding deliberative slowness, and colonizing the future as a zone of control. Drawing on philosophy of time, legal theory, and critical AI studies, I argue that our primary challenge is not to align AI with human values — but to preserve the human condition from being overwritten by machinic chronopolitics.

From Linearity to Anticipation: The Temporal Revolution of AI

Artificial Intelligence is not merely altering the world; it is reconfiguring the temporal coordinates within which the world becomes legible. We have crossed a threshold. AI systems no longer passively await inputs to react to; they preempt, forecast, simulate. In doing so, they inaugurate a new ontological regime — a future that arrives already computed, already constrained, already acted upon. Recommendation engines no longer follow desire; they precipitate it. Predictive policing doesn’t deter crime; it patrols statistical shadows. Credit scoring anticipates delinquency not on the basis of action but of proxy identity markers. The logic of AI is not historical but preemptive — it generates futures not by unfolding time, but by closing it. This is not merely a technological shift. It is a chronopolitical rupture: a transformation in who controls the future, how it is structured, and whether it remains a domain of human sovereignty or becomes a computational artifact.

Kairos Deleted: The Erasure of Human Temporal Agency

For centuries, human action has revolved around a non-chronometric form of time: kairos. Unlike chronos, which marks duration, kairos designates timeliness, the opportune moment when action becomes ethically or existentially charged. Kairos is the time of decision, emergence, and becoming. AI systems are structurally kairos-blind. They operate on microtemporal scales — nanosecond feedback loops — that preclude deliberation. In this machinic temporality, there is no pause, no latency, no hesitation. The temporality of machine learning is one of ceaseless inference, where the interval necessary for reflection collapses. In this sense, AI does not simply outpace human cognition — it deconditions it. The future becomes a zone not of potential, but of probabilistic foreclosure. Where kairos once invited action, AI now replaces it with optimization.

Algorithmic Preemption and the Juridical Seizure of the Future

Law has long been one of the most sophisticated cultural mechanisms for mediating the future. Legal norms construct horizons of responsibility, risk, and restraint. But predictive AI reframes legality as anticipatory administration: the norm becomes a function of forecasted behavior. Consider recidivism prediction tools. Ostensibly neutral, they encode histories of racialized policing, socioeconomic precarity, and institutional bias. Yet once formalized as predictive outputs, these asymmetries are transmuted into actionable truths. Judges defer to the “neutrality” of the machine — even as it replicates epistemic violence. The result is a temporal inversion: individuals are assessed not on what they have done, but on what the system expects they might do. Law becomes preemptive governance, and justice collapses into statistical determinism. We are witnessing the juridical colonization of the not-yet.

Temporal Expropriation: AI and the Loss of Self-Futurity

To speak of AI as “predictive” is to understate the existential implications. What is being automated is not just decision-making — but the very capacity to project oneself into the future. The anticipatory logics of AI enact a form of temporal capture. When algorithms nudge, sort, recommend, or score, they do more than optimize; they narrow the horizon of possible selves. The user becomes a probabilistic echo — not a subject in formation, but a statistical residue. This is the paradox: the more AI knows about us, the less future we possess. We become temporally saturated, pre-modeled, pre-classified. The future is no longer a space of contingency, but a mirror of the past rendered actionable. Temporal expropriation is the new logic of control: it is not your data that is extracted — it is your capacity to become.

Delay as Resistance: Reclaiming Temporal Sovereignty

What does resistance look like in an era of anticipatory computation? Perhaps the most radical act is not acceleration but delay. To delay is to puncture the machinic rhythm. To reclaim the interval. To insist on a tempo irreducible to prediction. This is not nostalgia for analog slowness — it is a political assertion of kairotic space, the right to remain unformed, undecided, uncomputed. In a system that renders all becoming legible, delay becomes defiance. Slowness becomes sovereignty. To refuse the automated decision is not just to demand better outputs — it is to defend the temporality of the human.

Conclusion: Time as the Final Battleground

The ethical debates around AI — fairness, transparency, explainability — miss the deeper substrate: temporality. Who gets to shape the future? Who retains access to the interval between stimulus and response? Whose becoming is preempted, and whose remains indeterminate? If AI is to become a partner in human flourishing, it must not simply be accountable to law. It must be accountable to time. We must resist the flattening of the future into forecast. We must protect the intervals where meaning arises — in hesitation, in contradiction, in risk. Because intelligence without temporal freedom is not intelligence at all. It is merely acceleration without agency. And agency, in the end, is a matter of when as much as of what.

***

About the author: Elisabetta Pepe, Undergraduate Student in Law at LUISS Guido Carli University (Rome). Guided by a polymathic vocation, she pursues research interests in law, philosophy of language, and artificial intelligence.

***


Editor’s Note – Think Tank Trinità dei Monti

As always, we publish our articles to stimulate further reflection, to encourage debate, and to share original and alternative points of view.

* The contents and opinions expressed in this article are the sole responsibility of the author(s).
