1. Introduction
Mercy constructs a highly procedural legal imagination: in Los Angeles in 2029, legislators activate the Central Mercy Court, where defendants face the AI Judge Maddox and their probability of guilt is quantified in real time. A probability of 98% is sufficient for a guilty verdict, and the defendant must drive it below 92% within 90 minutes or face immediate execution.
This premise attempts to replace the language of the rule of law with the language of engineering, and then disguise the language of engineering as the language of morality. As a researcher who deals daily with data analysis and algorithmic models, what I see is not only the disappearance of procedural justice, but also a highly misleading and deliberate denigration of artificial intelligence (AI). The film intends to portray the cruelty of AI, but inadvertently exposes a more terrifying risk: the disappearance of procedural justice does not require real AI, but only a sufficiently clear statistical figure and a group of people willing to use it to kill.
2. Mercy as "Newspeak": The Arrogance of Power Enveloped in Language
George Orwell introduced the concept of "Newspeak" in 1984. Its core logic is to limit, at the root, the human capacity to think about complex problems by shrinking the vocabulary and shifting the meanings of words.
The film names the trial system "Mercy," a textbook piece of Orwellian doublespeak. It suggests to the public that the death penalty is no longer a punishment but a highly efficient form of grace. When the semantics of the rule of law are replaced by technological discourse, this layer-by-layer stripping completes a dimensional reduction from "the pursuit of justice" to "parameter calibration."
"Mercy" replaces "execution" : the 90-minute countdown is packaged as an opportunity, but in reality, the death penalty is set as the system's default output.
"Probability" has replaced "reasonable doubt" : the law no longer focuses on the truth of the facts, but instead on numerical thresholds. Proving innocence has been simplified into a computational game of reducing the probability of guilt from 98% to below 92%, and trials have thus degenerated into model parameter tuning .
"Calibration" replaces "defense" : the defendant is no longer the subject of rights, but a passive survivor in the error bar.
This linguistic reconstruction cloaks violence in a scientific guise and grants law enforcement officers a technological form of psychological acquittal: they no longer see themselves as executioners, merely as end-point plug-ins in a data chain. Under this shadow, with technology eroding subjectivity completely, the essence of the trial degenerates from an understanding of humanity into a risk-clearing process: once the law abandons the pursuit of absolute justice and instead rations life in the name of efficiency, the spirit of the rule of law dies the moment its language is alienated.
3. Technological Refutation: Why Would AI Still Be So Unintelligent in 2029?
The Mercy of the movie is not a genuinely intelligent being, but a half-functional database retrieval device.
3.1 The Missing Social Evolution: Skipping the Algorithmic Trust Curve
The deployment of any algorithm follows a gradual trust curve. The Mercy system, however, appears as a leap, which is extremely precarious from a sociological perspective.
In reality, the natural entry points for AI adjudication are areas such as financial anti-money-laundering, tax audits, or traffic insurance claims. In these fields the data forms a closed logical loop, the evidence is structured, and the causal relationships are relatively clear. Society's trust in AI should be built on its ability to accurately flag abnormal transaction patterns and locate tax-evasion loopholes in massive datasets, and every step requires a large number of boundary cases to correct against.
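To make the contrast concrete, here is a minimal sketch, on synthetic data, of the kind of structured, auditable task where algorithmic trust is normally earned first; the features, the contamination level, and the use of scikit-learn's IsolationForest are my own illustrative choices, not anything shown in the film.

```python
# A toy anomaly-detection pass over synthetic transactions: structured inputs,
# verifiable flags, low stakes -- the opposite of Mercy's leap to life and death.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features: transaction amount and hour of day for ordinary activity.
normal = np.column_stack([rng.normal(100, 20, 1000), rng.normal(14, 3, 1000)])
# A few injected outliers standing in for suspicious transfers.
outliers = np.array([[5000.0, 3.0], [7500.0, 2.0], [6200.0, 4.0]])
X = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 marks the points the model treats as anomalous

print("flagged transaction indices:", np.where(flags == -1)[0])
```

Every flag here can be checked against a ledger, and a wrong flag costs a phone call, not a life.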
However, the film skips these foundational social adjustments and fast-forwards to the most complex level of all, the ending of a life, which involves unstructured human nature and strategic maneuvering. The public seems to accept, without any psychological adjustment, a monster that makes life-or-death decisions without regard for the facts. If a system dares to decide life and death directly, yet cannot pin down a person's true intentions through cross-modal analysis and instead gets stuck on low-level feature recognition such as "finding the missing half of a photo," that not only diminishes AI but also ignores the technological capabilities of the entire era.
Good science fiction should show how technology gradually erodes and reshapes social boundaries, rather than simply presenting an ultimate, abrupt, terrifying outcome. In its pursuit of extreme dramatic conflict, the film sacrifices its most valuable social implications, turning Mercy into a digital Leviathan suspended in the sky above 2029 without any logic of legitimacy.
3.2 "Murder today, death tomorrow": When Speed Supersedes Criminology
In the trailer, Chris, who plays the male lead, describes the system as "if you kill someone today, you die tomorrow," essentially turning the judicial process into an instant-feedback loop.
In the physical world, criminal investigation is a process of reducing the entropy of information, and that reduction requires physical work and time: DNA sequencing, cross-departmental retrieval of surveillance footage, laboratory analysis of physical traces, and so on.
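A minimal numerical sketch of that claim, with invented likelihoods: each piece of evidence is a Bayesian update that lowers the Shannon entropy of a distribution over suspects, and each update costs real work and real time.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Uniform prior over four hypothetical suspects.
posterior = np.full(4, 0.25)

# Invented likelihoods P(evidence | suspect) for two pieces of evidence,
# e.g. a partial DNA match and a camera placing a suspect near the scene.
evidence = [np.array([0.90, 0.20, 0.10, 0.10]),
            np.array([0.80, 0.60, 0.05, 0.05])]

print(f"before any evidence: H = {entropy(posterior):.2f} bits")
for i, likelihood in enumerate(evidence, start=1):
    posterior = posterior * likelihood      # Bayes: prior times likelihood
    posterior /= posterior.sum()            # renormalize
    print(f"after evidence {i}: H = {entropy(posterior):.2f} bits, "
          f"P = {np.round(posterior, 2)}")
```

A 90-minute cap simply declares whatever entropy remains to be irrelevant.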
The reason the law emphasizes detaining suspects rather than executing them immediately is that each suspect is a crucial information node: arresting one person is often the key to preventing the next crime and dismantling an entire criminal chain. If the AI of 2029 were truly intelligent, it would weigh the informational value of keeping that individual alive even as it calculated the probability of guilt. In the movie, however, the algorithm rushes to carry out a death sentence within 90 minutes despite severe information gaps, which amounts to a systemic truncation of data.
Forcing the process into 90 minutes leaves only two possible readings:
The system actually knew the answer all along; the 90 minutes were just a performance.
The system doesn't care about the truth at all; 90 minutes is just part of the execution process.
So the Mercy system is just a fig leaf for government rule; what it conceals is that the countdown is not designed for the truth, but for execution.
3.3 Digital Tyranny: When Judicial Thresholds Deviate from Uncertainty Constraints
In a rigorous inference system, a number should not be an isolated endpoint but a measurement with confidence bounds.
If Mercy outputs P(guilty) = 97.5%, what really needs to be questioned is not just the number itself, but the three layers of questions behind it:
At the institutional level: How should society handle uncertainty?
When the confidence level is insufficient to support the judicial threshold, what is the system's default decision rule? Does it hold to "when in doubt, rule for the defendant," or does it shift to "risk management first"? The setting of any probability threshold is essentially a social value choice, not a purely technical issue.
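A minimal decision-theoretic sketch, with invented costs, of why the threshold itself is a value choice: once society decides how it prices a wrongful conviction against a wrongful acquittal, the "scientific" cutoff follows mechanically.

```python
def conviction_threshold(cost_false_conviction: float, cost_false_acquittal: float) -> float:
    # Convict only when the expected cost of acquitting exceeds that of convicting:
    # (1 - p) * C_fc < p * C_fa  =>  p > C_fc / (C_fc + C_fa)
    return cost_false_conviction / (cost_false_conviction + cost_false_acquittal)

# "Better that ten guilty go free than one innocent be convicted" pushes the bar high.
print(conviction_threshold(cost_false_conviction=10, cost_false_acquittal=1))  # ~0.91
# A pure risk-management stance that prices both errors equally halves it.
print(conviction_threshold(cost_false_conviction=1, cost_false_acquittal=1))   # 0.50
```

Nothing in the arithmetic says whether 10:1 or 1:1 is the right price; that is exactly the choice the film hides behind a percentage.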
At the model level: How do algorithms generate and amplify uncertainty? In scenarios with strong correlation but no causal relationship, will the model systematically overestimate risk? When data is missing or its distribution is skewed, which groups is the model statistically more likely to misjudge? More importantly, AI is not trained in a vacuum. If historical law enforcement itself carries a structural bias against certain groups, the algorithm usually does not neutralize that bias but amplifies it through the data feedback loop. Real-world research has repeatedly pointed out that systems trained on historical crime data can inherit and reinforce existing discriminatory enforcement patterns. In some risk assessment tools, algorithms produce systematically high-risk labels for particular groups even when sensitive variables such as race or occupation are never used explicitly; the bias is still encoded indirectly in the data structure. In other words, algorithms may not only fail to eliminate bias, but give bias a scientific appearance.
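A toy simulation of that feedback loop, with made-up numbers: two districts have identical true crime rates, but one starts out more heavily patrolled, so more crime is recorded there, which in turn attracts more patrols. The reallocation rule is my own simplification, not a claim about any real deployed system.

```python
true_rate = [1.0, 1.0]   # identical underlying crime in districts A and B
patrols = [6, 4]         # historical bias: A is patrolled more to begin with

for year in range(1, 6):
    # Recorded crime is proportional to true crime times how hard you look for it.
    recorded = [true_rate[i] * patrols[i] for i in range(2)]
    # The "higher-risk" district (more recorded crime) is assigned one extra patrol.
    hot = 0 if recorded[0] >= recorded[1] else 1
    patrols[hot] += 1
    print(f"year {year}: recorded A/B = {recorded[0]:.0f}/{recorded[1]:.0f}, "
          f"patrols A/B = {patrols[0]}/{patrols[1]}")
```

The recorded gap widens year after year even though the underlying rates never differ; fed back into a risk model, that gap reads as evidence.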
At the engineering level: Does the threshold truly represent an irreversible verdict?
In the film, the protagonist's probability of guilt rises to 98% during his self-incriminating testimony, reaching the threshold for conviction, yet he is still allowed to keep incriminating himself. If the threshold truly marks a legal boundary, crossing it should trigger an irreversible, state-machine-like process; if crossing it triggers nothing, then the threshold is merely a narrative prop, not a legal mechanism.
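For concreteness, a minimal sketch of what an "irreversible, state-machine-like" threshold would mean; the 98% cutoff comes from the film's premise, while the state names and class structure are my own illustration.

```python
from enum import Enum, auto

class TrialState(Enum):
    IN_SESSION = auto()
    VERDICT_LOCKED = auto()   # terminal state: nothing transitions out of it

class Trial:
    THRESHOLD = 0.98

    def __init__(self):
        self.state = TrialState.IN_SESSION

    def update(self, p_guilty: float) -> TrialState:
        # Once the verdict is locked, further testimony changes nothing by design.
        if self.state is TrialState.IN_SESSION and p_guilty >= self.THRESHOLD:
            self.state = TrialState.VERDICT_LOCKED
        return self.state

trial = Trial()
print(trial.update(0.975))  # IN_SESSION
print(trial.update(0.980))  # VERDICT_LOCKED
print(trial.update(0.500))  # still VERDICT_LOCKED: crossing the line is irreversible
```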
The film answers none of these questions, which reduces the conviction threshold's sole function to persuading the audience that science has already made the moral judgment for us. This is precisely the most dangerous form of digital tyranny: when uncertainty is hidden behind a probability, society mistakenly believes it is exercising rationality rather than making a choice.
4. The Collapse of Procedural Justice: When AI Becomes a Megaphone for Power
4.1 "Judging by actions, not intentions": The bottom line of the law and the temptations of the AI era
The modern rule-of-law principle is to judge actions, not intentions: to look at behavioral evidence rather than punish those who might commit crimes. The rule-of-law tradition emphasizes "actions" (evidence that can be proven, challenged, and verified), while risk governance naturally pursues "intentions" (predicted intent, behavioral tendencies, future probabilities). AI's strength lies precisely in packaging these "intentions" as "actions": it does not say "I read minds"; it says "I have detected that you resemble a bad person in a high-dimensional feature space." Society is thus tempted: if prediction is possible, why not prevent? But once "potential crime" is treated as "crime already committed," people become risk containers. In that world, innocence is not a right but a state that must be constantly proven. Once punishing intention is permitted, power has unlimited room to expand.
4.2 Loss of Control in Execution: When Enforcement Authority Exceeds Algorithmic Decisions
If AI were truly in charge of adjudication, the enforcement process would be either more strictly automated or more strictly constrained. In the movie, however, the police still hold discretionary power that borders on lynch law.
Policewoman Jaq's hasty shooting of Rob points to a more realistic evil: to prove the system's effectiveness, someone may be willing to manufacture it. When execution efficiency becomes a KPI, AI becomes a tool for shifting blame.
Injustice becomes explainable: it is not that someone wronged you, it is that the data wronged you.
Violence becomes scalable: procedural execution requires no hatred, only procedure.
The film portrays Mercy as a system that combines "judge + jury + executioner," which structurally implies a highly compressed and centralized judicial power. However, the true implementers of the verdicts are never the system itself, but rather the institutions and people outside the system.
4.3 The Myth of Outcomes: If AI is always right, then it is superfluous.
Policewoman Jaq, a promoter of the Mercy system, can barely restrain her urge to shoot on the spot, and that urge amounts to a form of institutional self-negation.
If the AI's output merely upholds the initial human judgment every time, the system's sophistication is never demonstrated. Only when the AI can overturn the original judgment and prove a suspect innocent within 90 minutes are its value and necessity actually established.
5. Defending "Intuition": It's Not Just AI That's Being Belittled
The film pits "human intuition" against "machine probability," which reflects not only contempt for AI but also a misunderstanding of the human brain.
Human intuition is not metaphysics; it is essentially an instantaneous, non-linear judgment made by the brain after processing large amounts of unstructured data in parallel. In Bayesian inference, this can be understood as a very strong prior distribution guiding the process.
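To make the "strong prior" reading concrete, here is a minimal Bayes-update sketch with invented numbers: the same piece of weakly incriminating evidence barely moves a seasoned investigator's prior, while it swings a flat one.

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """P(guilty | E) given a prior and the likelihood ratio P(E|guilty)/P(E|innocent)."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

weak_evidence = 2.0  # the observation is twice as likely if the suspect is guilty

print(posterior(prior=0.50, likelihood_ratio=weak_evidence))  # ~0.67: a flat prior swings easily
print(posterior(prior=0.05, likelihood_ratio=weak_evidence))  # ~0.10: a strong prior resists
```

The experienced detective's "hunch" is, in this reading, simply a well-trained prior.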
A truly mature judicial AI in 2029 should possess a "machine intuition" more acute than the human brain's: it should not only output probabilities but also produce evidence-weight diagrams and run counterfactual simulations. The film instead downgrades AI to a set of simple logic gates in order to highlight the greatness of human intuition, and this binary opposition is weak and unconvincing.
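What those two outputs might look like, sketched around a hypothetical logistic scoring model (the features and weights are invented, not the film's): per-feature evidence weights plus a counterfactual check of whether a single changed fact flips the outcome.

```python
import math

weights = {"near_scene": 2.0, "prior_record": 1.5, "alibi_verified": -3.0}  # illustrative
bias = -1.0

def p_guilty(features: dict) -> float:
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))   # logistic score

defendant = {"near_scene": 1, "prior_record": 1, "alibi_verified": 0}

# Evidence weights: each feature's contribution to the log-odds of guilt.
for name, value in defendant.items():
    print(f"{name}: {weights[name] * value:+.1f} to the log-odds")

# Counterfactual simulation: would a verified alibi flip the verdict?
print("as charged:      P(guilty) =", round(p_guilty(defendant), 3))
print("alibi verified:  P(guilty) =", round(p_guilty({**defendant, "alibi_verified": 1}), 3))
```

Probabilities plus weights plus counterfactuals is still only a sketch, but it is already more "intuitive" than the single number the film allows its machine.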
6. Privacy and the "right to access without guilt": This system inherently contains opportunities for intelligence arbitrage.
The film sets up a scenario in which the defendant can access all cloud resources to prove their innocence, but this creates a significant security vulnerability:
The irreversibility of privacy breaches: if the defendant is ultimately acquitted, what happens to the city-wide surveillance footage, other people's private data, and encrypted government information accessed along the way? Innocent people are forced to become legal voyeurs.
The risk of intelligence arbitrage: suppose someone deliberately fabricates criminal leads to get into the system, purely to spend 90 minutes with a cloud-based "God's-eye view" acquiring core secrets for themselves or their organization, and then walks away unscathed by releasing pre-prepared exculpatory evidence at the last minute. The system is then essentially a massive backdoor into public security. How would society respond if such side-channel attacks, disguised as trials, actually occurred? Would Mercy's access become more restrictive? More restrictive means more authoritarian; less restrictive means more of a black market. The film barely touches on this, but it is a real problem in AI governance: the more centralized the data, the more power resembles a gravitational singularity toward which information collapses, and with it, its misuse.
7. Conclusion: Who is afraid of real AI?
"The Trial of the Limits" doesn't depict the horror of the AI era, but rather the horror of the power to interpret technology . The film aimed to portray the terror of AI, but instead depicted the brutality of human institutions.
The film belittles AI, portraying it as an illogical, dysfunctional killing machine in order to highlight the greatness of human intuition and to obscure responsibilities that rightfully belong to humans. Yet what is truly uncontrollable, and least objective, is often not the algorithm itself but those who hide behind it and wield power in its name.
If society ultimately accepts probability as justice, then law will no longer regulate behavior; it will only manage risk. When civilized institutions shift from "regulating behavior" to "managing risk," society has in effect accepted that individuals are no longer subjects of rights but carriers of risk. And if the law begins to punish probability, innocence is no longer a state but a capability that must be constantly proven.
Unfortunately, the film avoids all the profound questions, leaving only a thrilling 90-minute countdown game.
I feel I must speak up for AI: do not define our future with such crude science fiction, and do not make AI the scapegoat for human arrogance. I believe that true intelligence will not become a shield for the banality of evil.
Even so, I still highly recommend watching this movie.
If a film can get audiences talking about the relationship between technology, power, and justice, then it has accomplished at least one of the most important things science fiction can do: bring the questions of the future into the present ahead of time.