
Mercy

Mar 04, 2026 Action / Sci-Fi / Thriller 7.3/10 5 reviews

Detective Raven (Chris Pratt) is suddenly accused of murdering his wife and thrust into the dock by an AI judge. With a 90-minute countdown beginning, he must use the AI's "all-seeing eye" system to sift through massive amounts of data and find evidence to overturn the fatal conclusion pointing to him, or face execution. However, the deeper the investigation goes, the more suspects are identified, and the data continues to self-verify, locking away the truth layer by layer. When the algorithm creates an unsolvable deadlock, can humanity escape the system's final verdict?

Writers Marco Van Belle
Cast Chris Pratt / Rebecca Ferguson / Kalee Rhys / Annabelle Wallis / Chris Sullivan / More...
Rating Count 74,534

Related Audience Reviews

5 entries

True intelligence should not be used as a tool to absolve institutional violence: Deconstructing the technological degradation and power arrogance in "Mercy"

ZeMin

5.0/10 Jan 31, 2026

1. Introduction

Mercy constructs a highly procedural legal imagination: in Los Angeles in 2029, legislators activate the Central Mercy Court, where defendants face AI Judge Maddox and their probability of guilt is quantified in real time. A 98% probability is sufficient for a guilty verdict, and the defendant must reduce it to below 92% within 90 minutes or face immediate execution.

This premise attempts to replace the language of the rule of law with the language of engineering, and then disguise the language of engineering as the language of morality. As a researcher who deals daily with data analysis and algorithmic models, what I see is not only the disappearance of procedural justice, but also a highly misleading and deliberate denigration of artificial intelligence (AI). The film intends to portray the cruelty of AI, but inadvertently exposes a more terrifying risk: the disappearance of procedural justice does not require real AI, but only a sufficiently clear statistical figure and a group of people willing to use it to kill.

2. Mercy as "Newspeak": The Arrogance of Power Enveloped in Language

George Orwell introduced the concept of "Newspeak" in *1984*. Its core logic is to limit the human capacity to think about complex problems at the root by reducing vocabulary and changing word meanings.

The film names the trial system "Mercy," a typical piece of Orwellian doublespeak. It suggests to the public that the death penalty is no longer a punishment, but a highly efficient form of grace. When the semantics of the rule of law are replaced by technological discourse, this stripping-away completes a dimensional reduction from "the pursuit of justice" to "parameter calibration."

"Mercy" replaces "execution" : the 90-minute countdown is packaged as an opportunity, but in reality, the death penalty is set as the system's default output.

"Probability" has replaced "reasonable doubt" : the law no longer focuses on the truth of the facts, but instead on numerical thresholds. Proving innocence has been simplified into a computational game of reducing the probability of guilt from 98% to below 92%, and trials have thus degenerated into model parameter tuning .

"Calibration" replaces "defense" : the defendant is no longer the subject of rights, but a passive survivor in the error bar.

This linguistic reconstruction cloaks violence in a scientific guise and grants law enforcement officers a technological form of psychological acquittal: they no longer see themselves as executors who take lives, merely as end-point plug-ins in a data chain. Under this erosion of subjectivity by technology, the essence of the trial degenerates from an understanding of humanity into a risk-clearing process: once the law abandons the pursuit of absolute justice and instead offers this efficiency-based handout of life, the spirit of the rule of law dies the moment language is alienated.

3. Technological Refutation: Why would AI be so unintelligent in 2029?

Mercy in the movie is not a true intelligent being, but rather a database retrieval device with incomplete functionality.

3.1 Vanishing Social Evolution: The Skipped Trust Curve

The implementation of any algorithm follows a gradual trust curve. However, the emergence of the Mercy system was a leap, which is extremely precarious from a sociological perspective.

In reality, the ideal entry point for AI trials should be in areas such as financial anti-money laundering, tax audits, or traffic insurance claims. In these fields, the logical chain of data is closed-loop, the evidence is structured, and the causal relationship is relatively clear. Society's trust in AI should be built on its ability to accurately identify abnormal transaction patterns and locate tax evasion loopholes from massive amounts of tax data. Each step here requires a large number of boundary cases for correction.

However, the film skips over these foundational social adjustments and fast-forwards to the most complex level, the end of life, which involves unstructured human nature and strategic maneuvering. The public seems to accept, without any psychological adjustment, a monster capable of making life-or-death decisions without weighing the facts. If a system dares to decide life and death directly, yet cannot pinpoint a person's true intentions through cross-modal analysis and instead gets stuck on low-level feature recognition like "finding the missing half of a photo," this not only diminishes AI but also disregards the technological capabilities of the entire era.

Good science fiction should show how technology gradually erodes and reshapes social boundaries, rather than simply presenting an ultimate, abrupt, and terrifying outcome. In its pursuit of extreme dramatic conflict, the film sacrifices its most valuable social implications, turning Mercy into a digital Leviathan suspended in the sky above 2029, lacking any logic of legitimacy.

3.2 "Murder today, death tomorrow": Speed ​​studies supersede criminology

In the movie trailer, the male lead, played by Chris Pratt, describes the system as "if you kill someone today, you'll die tomorrow," essentially turning the judicial system into an instant feedback system.

In the physical world, investigation is a process of reducing informational entropy, which requires physical work and time (DNA sequencing, cross-departmental surveillance retrieval, laboratory analysis of physical traces, and so on).

The reason the law emphasizes controlling suspects rather than executing them immediately is that each suspect is a crucial information node. One arrest is often the key to preventing the next crime and dismantling an entire criminal chain. If the AI of 2029 were truly intelligent, it would calculate the informational value of the individual's survival alongside the probability of guilt. In the movie, however, the algorithm rushes to execute a death sentence within 90 minutes despite severe information gaps; this is systemic data truncation.

Compressing the process into 90 minutes leaves only two possible outcomes:

The system actually knew the answer all along; the 90 minutes were just a performance.

The system doesn't care about the truth at all; 90 minutes is just part of the execution process.

So the Mercy system is just a fig leaf for government rule; what it conceals is that the countdown is not designed for the truth, but for execution.

3.3 Digital Tyranny: When Judicial Thresholds Deviate from Uncertainty Constraints

In rigorous inference systems, numbers should not be isolated endpoints, but rather measures with confidence boundaries.
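What a "measure with confidence boundaries" could look like in code, as a minimal sketch (the 98% threshold is the film's; the sample size, z-value, and choice of a Wilson score interval are my own illustrative assumptions): a conviction rule that trusts the point estimate alone versus one that demands the interval's lower bound clear the line.

```python
import math

GUILT_THRESHOLD = 0.98  # the film's conviction line

def wilson_lower_bound(p_hat: float, n: int, z: float = 1.96) -> float:
    """Lower edge of the ~95% Wilson score interval for an estimate
    p_hat backed by n effective observations (illustrative choice)."""
    denom = 1 + z**2 / n
    center = p_hat + z**2 / (2 * n)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (center - margin) / denom

# A point estimate just over the line, backed by limited evidence:
p_hat, n = 0.985, 200

convict_on_point = p_hat >= GUILT_THRESHOLD                         # True
convict_on_bound = wilson_lower_bound(p_hat, n) >= GUILT_THRESHOLD  # False: bound ~0.957

print(convict_on_point, convict_on_bound)
```

The same number convicts under the point rule and acquits under the interval rule; which rule governs is a value choice, not a technical one.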

If Mercy gives P(guilty) = 97.5%, what really needs to be questioned is not just the number itself, but the three levels of questions behind it:

At the institutional level: How should society handle uncertainty?

When confidence levels are insufficient to support judicial thresholds, what is the system's default decision rule? Is it to adhere to "acquittal when in doubt," or to shift to "risk management first"? The setting of any probability threshold is essentially a social value choice, not a purely technical issue.

At the model level: How do algorithms generate and amplify uncertainty?

In scenarios with strong correlation but no causal relationship, will the model systematically overestimate risk? When data is missing or the distribution is skewed, which group is the model statistically more likely to misjudge? More importantly, AI is not trained in a vacuum. If historical law enforcement itself carries a structural bias against certain groups, the algorithm often fails to neutralize that bias and instead amplifies it in the data feedback loop. Real-world research has repeatedly shown that systems trained on historical crime data can inherit and reinforce existing discriminatory enforcement patterns. Even when risk assessment tools do not explicitly use sensitive variables such as race or occupation, bias is still indirectly encoded into the data structure, and the algorithm can generate systematically high-risk labels for particular groups. In other words, algorithms may not only fail to eliminate bias but give it a scientific veneer.
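The feedback loop described above can be made concrete with a toy simulation (every number here is invented for illustration): two districts share an identical true crime rate, but a biased historical record steers patrol allocation, and incidents are only recorded where patrols go.

```python
def patrol_share(rec_a: float, rec_b: float, concentration: float = 2.0) -> float:
    """Fraction of patrols sent to district A, assuming (hypothetically)
    that allocation concentrates on the district with more recorded incidents."""
    wa, wb = rec_a ** concentration, rec_b ** concentration
    return wa / (wa + wb)

TRUE_RATE = 0.05            # identical underlying crime rate in both districts
recorded = [120.0, 80.0]    # the only asymmetry: a biased historical record

for year in range(10):
    share = patrol_share(*recorded)
    # Incidents are recorded only where officers are deployed:
    recorded[0] += 1000 * share * TRUE_RATE
    recorded[1] += 1000 * (1 - share) * TRUE_RATE

# District A's share of the record climbs from 0.60 toward 1 even though
# the true rates never differed: the bias feeds and "verifies" itself.
print(recorded[0] / sum(recorded))
```

No sensitive variable appears anywhere in the model; the discrimination lives entirely in the data-collection loop.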

At the engineering level: Does the threshold truly represent an irreversible verdict?

In the film, the protagonist's probability of guilt rises to 98% during his self-incrimination, reaching the threshold for judgment, yet he still retains room for further self-incrimination. If the threshold truly represents a legal boundary, then crossing the line should trigger an irreversible, state-machine-like process; if crossing it triggers nothing, then the threshold is merely a narrative prop, not a legal mechanism.
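A hedged sketch of what an "irreversible, state-machine-like process" would mean in code (the class, names, and probability values are my own illustration, not the film's actual system): once the guilty state latches, later dips in the probability change nothing.

```python
from enum import Enum

class Verdict(Enum):
    PENDING = "pending"
    GUILTY = "guilty"   # terminal state: no transition leads out of it

class ThresholdTrial:
    """Toy latching verdict machine: crossing the line once is final."""
    GUILT_THRESHOLD = 0.98

    def __init__(self) -> None:
        self.state = Verdict.PENDING

    def update(self, p_guilty: float) -> Verdict:
        if self.state is Verdict.PENDING and p_guilty >= self.GUILT_THRESHOLD:
            self.state = Verdict.GUILTY   # latch: irreversible by construction
        return self.state

trial = ThresholdTrial()
trial.update(0.975)               # below the line: still PENDING
trial.update(0.980)               # crosses the line: latches GUILTY
print(trial.update(0.910).value)  # prints "guilty" despite the later dip
```

If the film's threshold behaved like this, continued self-incrimination after hitting 98% would be impossible; since it doesn't, the "narrative prop" charge stands.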

The film fails to answer these questions, reducing the sole function of the conviction threshold to convincing the audience that science has already made the moral judgment for us. This is precisely the most dangerous form of digital tyranny: when uncertainty is hidden behind probability, society mistakenly believes it is exercising rationality rather than making choices.

4. The Collapse of Procedural Justice: When AI Becomes a Megaphone for Power

4.1 "Judging by actions, not intentions": The bottom line of the law and the temptations of the AI ​​era

The modern rule-of-law principle is to "judge by actions, not intentions": look at behavioral evidence rather than punishing those who might commit crimes. The rule-of-law tradition emphasizes "actions" (evidence that can be proven, challenged, and verified), while risk governance naturally pursues "intentions" (predicted intent, behavioral tendencies, future probabilities). AI's strength lies precisely in packaging these "intentions" as "actions": it does not say "I read minds"; it says "I have detected that you resemble a bad person in a high-dimensional feature space." Society is thus tempted: if prediction is possible, why not take preventative measures? But once "potential crime" is treated as "crime already committed," people are turned into risk containers. In that world, innocence is not a right, but a state that must be constantly proven. Once punishing intentions is permitted, the expansion of power has no limit.

4.2 Loss of Control in Execution: When Enforcement Authority Exceeds Algorithmic Decisions

If AI were truly in charge of adjudication, then the enforcement process would be either more strictly automated or more strictly constrained. However, in the movie the police still possess discretionary power verging on vigilante justice.

Policewoman Jaq's hasty shooting of Rob points to a more realistic evil: to prove the system's effectiveness, someone may be willing to manufacture that proof. When execution efficiency becomes a KPI, AI becomes a tool for shifting blame.

The injustice becomes explainable: it's not that someone wronged you; the data wronged you.

Violence becomes scalable: procedural executions require no hatred, only procedure.

The film portrays Mercy as a system that combines "judge + jury + executioner," which structurally implies a highly compressed and centralized judicial power. However, the true implementers of the verdicts are never the system itself, but rather the institutions and people outside the system.

4.3 The Myth of Outcomes: If AI is always right, then it is superfluous.

Policewoman Jaq is a promoter of the Mercy system, yet her strong urge to shoot on the scene constitutes a form of institutional self-negation.

If the AI's results only ever uphold the initial judgment, the system's sophistication is never demonstrated. Conversely, only if the AI can overturn the original judgment and prove a suspect innocent within 90 minutes are its value and necessity truly proven.

5. Defending "Intuition": It's Not Just AI That's Being Belittled

The film pits "human intuition" against "machine probability," which is not only a contempt for AI but also a misunderstanding of the human brain.

Human intuition is not metaphysics; it is essentially an instantaneous, non-linear judgment made by the brain after processing large amounts of unstructured data in parallel . In Bayesian inference, this can be understood as a very strong prior distribution guiding the process.
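That "strong prior" reading of intuition fits in a few lines of Bayes' rule in odds form (the prior values and likelihood ratio below are my own illustrative numbers, not anything from the film):

```python
def posterior_guilt(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form:
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# The same piece of evidence (a 10:1 likelihood ratio favoring guilt)...
EVIDENCE_LR = 10.0

# ...lands very differently on a flat prior than on a detective's
# strong "this person wouldn't do it" prior:
print(round(posterior_guilt(0.50, EVIDENCE_LR), 3))  # 0.909
print(round(posterior_guilt(0.02, EVIDENCE_LR), 3))  # 0.169
```

Intuition, on this reading, is not anti-statistical; it is a prior that a system fixated on the evidence alone never models.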

A truly mature judicial AI in 2029 should possess "machine intuition" more acute than the human brain's: it should not only output probabilities but also produce evidence-weight diagrams and run counterfactual simulations. The film instead downgrades AI to simple logic gates in order to highlight the greatness of human intuition; this binary opposition is weak and unconvincing.

6. Privacy and the "right to access without guilt": This system inherently contains opportunities for intelligence arbitrage.

The film sets up a scenario in which the defendant can access all cloud resources to prove their innocence, but this presents a significant security vulnerability:

The irreversibility of privacy breaches: if the defendant is ultimately acquitted, what happens to the city-wide surveillance footage, other people's private data, and encrypted government information he accessed along the way? Innocent people are forced to become legal voyeurs.

Intelligence arbitrage risk: suppose someone deliberately fabricates criminal leads in order to be brought into the system, solely to enjoy 90 minutes of the cloud's God's-eye view, acquire core secrets for themselves or their organization, and then walk away unscathed by releasing pre-prepared evidence of innocence at the last minute. The system is then essentially a massive backdoor into public security. What happens to society when such side-channel attacks, disguised as interrogations, occur? Does Mercy become more stringent? More stringent means more authoritarian; less stringent means more of a black market. The film barely touches this issue, but it is a real problem in AI governance: the more centralized the data, the more power resembles a gravitational singularity toward which information collapses, and with it, its misuse.

7. Conclusion: Who is afraid of real AI?

"The Trial of the Limits" doesn't depict the horror of the AI ​​era, but rather the horror of the power to interpret technology . The film aimed to portray the terror of AI, but instead depicted the brutality of human institutions.

The film diminishes AI, portraying it as an illogical and dysfunctional killing machine in order to highlight the greatness of human intuition and to obscure responsibilities that rightfully belong to humans. However, what is truly uncontrollable and least objective is often not the algorithm itself, but those who hide behind it and wield power in its name.

If society ultimately accepts probability as justice, then law will no longer be used to regulate behavior, only to manage risk. When civilized institutions shift from "regulating behavior" to "managing risk," society has in effect accepted that individuals are no longer subjects of rights, but carriers of risk. If law begins to punish probability, then innocence will no longer be a state, but a capability that must be constantly proven.

Unfortunately, the film avoids all the profound questions, leaving only a thrilling 90-minute countdown game.

I feel I must speak up for AI: don't define our future with such crude science fiction, and don't let AI scapegoat human arrogance. I believe true intelligence will not become a shield for the banality of evil.

Even so, I still highly recommend watching this movie.

If a film can get audiences talking about the relationship between technology, power, and justice, then it has accomplished at least one of the most important things in science fiction: bringing future issues into the present ahead of time.

98% are guilty, what about the remaining 2%?

Hesitant pig

4.0/10 Jan 20, 2026

I just finished a preview screening of *Mercy*. Walking out of the theater, besides the dizziness brought on by the "desktop movie" perspective, what lingered most was a sense of emptiness about "truth." The film offers no heartwarming answer; it merely uses surveillance cameras, phone screens, and body cameras to coldly recount a boomerang of a suspense tragedy: all the evidence is vivid, but the conclusion is as fragile as dust.

Back to the story itself. Raven, played by Chris Pratt, is a fervent supporter of the AI judicial system Mercy. He personally arrested the subject of the system's first death sentence and pushed for its execution; at that moment, Mercy seemed almost perfect in his eyes. Maddox, played by Rebecca Ferguson, represents an "absolutely rational" justice: uncontaminated by emotion, unswayed by stance, obeying only probability and logic. But when the 90-minute countdown falls on his own head, the film presents an extremely absurd yet utterly realistic paradox: when a person is judged "98% guilty" by the system he trusts most, what exactly is the remaining 2%? It may look like error, but it is actually humanity's last glimmer of hope.

Raven's self-rescue exposes AI's biggest blind spot: it can compute the flow rate of every drop of adrenaline, but it cannot compute why a "heartbroken" person no longer has the passion to kill; it can fit a trajectory of behavior, but it cannot truly reach the depths of motive. AI captures the projection of behavior, while humans insist on the origin of motive; between the two lies a chasm that data cannot bridge.

More chilling than "AI killing" is poisoned data: the invisible hand behind the system. The truth about the "phone evidence" in the film lands like a precise cold punch; it reminds us that the most dangerous thing is never that machines will make mistakes, but that humans will make them "calculate correctly." Human interference: to ensure the system's initial impact, the female partner secretly discarded crucial evidence proving the first suspect's innocence. Causal loop: this deception, disguised as "procedural efficiency," transformed years later into a flame of revenge that engulfed Raven's family. The theme thus becomes ambiguous and jarring: is the danger the AI's lack of emotion, or humanity's ruthless pursuit of "absolute rationality"?

Formally, *Mercy* belongs unmistakably to the "desktop film" genre. Its shaky camerawork resembles a forced interrogation: the camera drags the audience into Mercy's "omniscient perspective," reminding us that we are not just watching a movie but reviewing a case file. The abundance of CCTV, body-camera, and screen-recording footage constitutes a digital prison without blind spots. This visual discomfort corresponds precisely to the social undertone of the film's 2029: when surveillance reaches every phone and every camera, human privacy and dignity are crushed into unstable pixels under the banner of "zero crime"; we see more and more, yet understand less and less.

Most noteworthy is Maddox's "violation" at the end. When the verdict has been delivered and the program should shut down, she heeds Raven's pleas and stays. The AI's "compassion": at Raven's request, she takes over city control and even cuts off the detonator held by the human police officer (the female partner). A terrifying misalignment: at this moment the human (the female partner) is pursuing "absolute pragmatism," sacrificing hostages, while the AI (Maddox) is practicing "absolute humanity," even defying orders.

This scene brings not relief but deeper confusion: if an AI has learned to decide for itself, based on its own "understanding," whether to execute human commands, even with the initial intention of saving lives, does that mean humanity is losing its last grip on the real world? We once thought the most terrifying aspect of machines was their ruthlessness; when they begin to "have feelings," the problem becomes even more intractable: where do its emotions come from? Who defines their boundaries?

Unlike the recent film *Time Travel*, *Mercy* does not push the timeline straight to the distant future of 2075. Instead, it shows the starting point of that process: civilization does not collapse abruptly, but through ever-escalating demands for "efficiency," "safety," and "accuracy," it gradually pushes human beings toward becoming replaceable statistical units. When the female partner is finally arrested, and when Raven at the last moment uses the AI to find the real culprit beyond the data, is this a victory for "human intuition," or a more thorough self-test by the AI with Raven's help?

Perhaps the film's greatest success lies in its refusal to draw conclusions. It simply lays the wavering, ambiguous evidence before us, then throws the question back at each viewer: in that 90-minute trial chair, are you truly confident you can prove who you are?

If you were the protagonist of Mercy's extreme trial, could you last 90 minutes?

Grace Pudding

5.0/10 Jan 21, 2026

Last night I watched "Mercy," and I kept thinking: if I were the protagonist being judged in the cloud, what would I do? What could I do? How would it end for me? I didn't sleep well all night.

The film's plot revolves around Chris Pratt's character, who, as a creator and supporter of the Mercy AI court, finds himself on trial. He must use the AI court's authority to prove his innocence within 90 minutes, or he will be found guilty and "eliminated." The film focuses on the protagonist's exchanges with the AI judge, interspersed with desktop visuals of him searching for evidence. Abundant UI elements, surveillance footage, and pop-up notifications lay out his effort to prove his innocence, which is also the entire process of solving the case, so the audience is pulled into the film from the very beginning and a tense, exciting atmosphere takes hold.

The premise of *Mercy* is that when the AI judge charges you, it presumes a probability of guilt based on algorithmic logic. The constantly fluctuating percentages in the film, 92%, 95%, 98%, render a person's life as a cold, hard progress bar. The protagonist's original intention was to use the AI's systematic, powerful algorithms to improve efficiency, condensing trial time enormously. Is the 90-minute judgment about efficiency, or about procedural justice? The AI's speed in adjudicating cases is astonishing, but can its cold probabilistic calculations truly grasp the complexities of human emotion and motivation?

The "cloud trial" depicted in the film is the AI-driven transformation of the court, but aren't we already living in a world quantified by algorithms? Consider our credit, employment, social, and consumption systems: all of them judge us invisibly. Credit scores assess your economic character, digitized résumés measure your career value, and online interactions are profiled solely to evaluate your consumption tendencies and drive spending. The film simply upgrades these applications, granting the AI greater authority and making the invisible judgment tangible and result-oriented.

What makes us human is our capacity for emotion; hence the AI nearly crashes in several scenes, a bug in its algorithmic logic. In the latter half, however, the AI begins to evolve and to learn human emotions, a so-called AI awakening. The film's core conflict lies in the AI's attempt to use quantifiable data (call frequency, consumption locations) to deduce unquantifiable motivations (love, fear, repentance, unavoidable hardship). In our future digital world, how will those gray areas, the shining and complicated parts of humanity, be seen, measured, and defended? I pondered this all night, arguing with myself, and could not come up with an answer.

This morning I checked online and found that the US already uses an AI system called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) in criminal justice to assess the recidivism risk of defendants and offenders. The material I read put it this way: "Simply put, it uses algorithms and artificial intelligence, combined with existing data and cases, to help judges, probation officers, and other judicial personnel analyze whether those punished have truly repented and whether they will re-offend, so as to make 'fairer' decisions in sentencing, bail, and other processes." But algorithms are written by humans and are prone to bias and error. Given time pressure and the complexity of human nature, how do we avoid errors on matters this serious? For now, AI assessments are therefore defined only as evaluation tools and references, not as the basis for judgments. In the foreseeable future, however, AI's applications in human society will only go deeper.

As a tool, AI has indeed improved efficiency enormously, but to what extent should its authority and responsibility be defined? Whether human qualities can be quantified and defined should remain an open question.

It seems like Star-Lord's new movie has installed surveillance cameras in everyone's homes.

Captain of Nonsense

5.0/10 Jan 20, 2026

Fans of suspense and thriller films will likely remember *Searching*, which burst onto the scene eight years ago. We had never seen anything quite like it; it unlocked a fresh format using screen images as its primary content, seemingly simple yet incredibly captivating. Unsurprisingly, the film first won the "Future Innovation Award" at the Sundance Film Festival, then earned a high score of 8.5 on Douban, and still ranks among the top 15 American suspense films.

It can be said that *Searching* established both the concept of the emerging "desktop film" genre and a sense of the energy it can radiate.

Timur Bekmambetov, the producer behind *Searching*, is considered a pioneer of the "desktop film" genre, and he is an experienced director in his own right. On January 23rd, a new film directed by Bekmambetov, *Mercy*, opens in Chinese theaters. Compared to *Searching*, *Mercy* uses more contemporary media techniques, offers a more information-dense big-screen experience, and shifts the narrative dimension of the suspense genre.

Unlike other Hollywood blockbusters that often set their stories centuries in the future, *Mercy* takes place in 2029, making it a relatively rare near-future film. Chris Pratt plays Raven, a veteran detective facing a society rife with rising crime, chaos, and division. He and the city's elite law enforcement team use current technology to create an AI court, the Mercy system, which consolidates the powers of evidence gathering, adjudication, and execution. From then on, police need only arrest suspects and restrain them in interrogation chairs; the system handles everything else. It is a one-click solution for law enforcement efficiency, and it does effectively reduce the crime rate.

However, life is often the cruelest and most absurd thing. At the beginning of the film, we see Raven, one of the creators of this AI court, placed in the interrogation chair, accused of murdering his wife. As the 90-minute countdown to his death unfolds, the seemingly shrewd and capable officer crumbles into an utterly ordinary middle-aged man unable to escape the pain of his past. He is forced to expose every unbearable wound in his life to the cold scrutiny of an artificial intelligence, only to find himself lost in fog, receiving nothing but alienation from everyone and a steadily rising probability of guilt from the system.

Besides the protagonist Raven, the people who occupy important places in his life, his daughter, close friends, colleagues, even his deceased lover, all reveal unknown sides. The film's narrative tension thus stems not only from Raven's perilous survival but also from the meticulous unraveling of the mystery during the investigation. *Mercy* constructs its plot somewhat like the recent hit suspense IP *Knives Out*, using the answer to each mystery as the clue to the next, building layer upon layer of twists into a tightly woven structure. At the same time, across the 90-minute runtime we are swept along with Raven, watching his hopes rise only to be dashed again and again, riding an emotional rollercoaster.

One of the most memorable climaxes comes when Raven, still strapped in his chair, is allowed by the AI judge to experience life-or-death crises, a truck rollover, a raging fire, without leaving the office. The immersion makes the audience feel they are living the same events as the protagonist. Compared to the mostly computer-screen-bound visuals of films like *Searching*, *Mercy* deploys a wide range of formats: phone video calls, GoPros, body cameras, dashcams, social media, even bird-feeder cameras, combined with user interfaces, data visualizations, and monitoring-system UIs into a tightly woven visual network that readily produces an "information overload" experience and establishes a new, distinctive, and complete visual style. We are immersed in Raven's superb detective work and rich emotional swings, and we can also reflect on our own lives through these familiar interfaces. In the face of big-data algorithms, we have no secrets; once AI is granted centralized power, the seemingly absurd story in the film is not far off.

The most ruthless embodiment of the algorithm is Judge Maddox, the digital avatar the Mercy system presents in court. Rebecca Ferguson is the perfect choice for the role; her strong Nordic features lend her a natural air of aloofness. Judge Maddox is practically a "digital god," able to access anyone's private data without hindrance and to mobilize real-world human resources. Born to uphold the fairness of the law, she is for the most part ruthless, even seemingly at complete odds with the suspects.

However, what truly amazed me were her few smiles during the trial. Beautiful and captivating, they always seemed so inappropriate, appearing when Raven was on the verge of collapse—like a comforting gesture, yet also a mockery. Rebecca's outstanding performance successfully evoked the character's "uncanny valley effect," aptly portraying the awkwardness of AI mimicking human emotions.

The biggest difference between human thinking and artificial intelligence is that humans, when cornered, can doubt themselves and act on intuition, while AI cannot. It can only judge from the facts it holds, even when piecing together fragments of information leads to wildly inaccurate conclusions. The absolute certainty of Rebecca's performance is the source of this character's chilling power.

*Mercy* sustains its breathless pace because, even as the protagonist investigates the case to save himself, he is also contending with an AI's verdict on the merits and faults of his entire life. These two storylines, one overt and one submerged, are cleverly intertwined to create a high-intensity, deeply felt tension.

As the saying goes, "judge people by their deeds, not their intentions; judged by intentions, no one in the world is blameless." Once the turbulence subsides, beyond the question of how AI and humans are to coexist, *Mercy* leaves us with even deeper ones. In the digital age, everything we do is recorded, which gives us the chance to examine our lives from an outsider's perspective. Just as every character in the film has facets that resist easy definition, can we still be sure whether we ourselves are good or bad? Is the justice we pursue right or wrong? These are the questions I kept turning over even after leaving the theater.

A
The Mercy system alone is enough to scare a law student to death.

Amano Touko

5.0/10 Jan 25, 2026

Strict and severe, swift and decisive, presumption of guilt, the burden of proving one's own innocence, no defense, no appeal: any one of these phrases would make a law student faint. The first eighteen cases sent eighteen people to their deaths without exception, until one of the Mercy system's own supporters sat in the chair that would most likely become an execution machine within an hour and a half.

This is the future world: to cope with a chaotic order, people come to believe the system trumps humanity. And that belief is where the mistake begins. The protagonist's initial guilt probability is set at 97.5%, and he must keep digging up evidence to push the number below 92% or face execution. For a moment I thought of the Japanese drama "99.9: Criminal Lawyer," where at least the odds are better; but no, this protagonist has no lawyer at all. As the evidence gathering proceeds, the number may rise, or it may creep endlessly toward 92% without ever crossing it, like Pinduoduo's never-quite-complete bargain bar replayed here as a nightmare.

The protagonist is a detective, but not a spotlessly innocent man. The many suspicious details surrounding him mean his wife's dying words are read as evidence against him rather than as a final plea to her lover. And what about people outside the legal system? How are those with no legal training and weak logical reasoning supposed to "prove their innocence"?

The plot is not hard to predict, and even the twists are easy to guess. While the film continues the style of the *Searching* series, using on-screen videos, photos, and chat logs to lay out its logic, the reasoning is somewhat thin. The traits shown by the daughter's alternate account are never fully explored; the account feels more like a tool, a red herring. The story would suit a game, but a game adaptation would need a far more robust chain of logic.

Speaking of which, the title *Mercy* is rendered in Chinese as "forgiveness," and I am not sure whether that is intentional satire.
After all, this system has never "forgiven" anyone; at most it occasionally deviates from its machine nature and raises the gun barrel an inch. Does that look more like "mercy," or perhaps divine "compassion"? Though it wears the form of science fiction, even of dystopia, its core remains traditional. Perhaps at this stage, as we learn to use artificial intelligence prudently, a subject like this is a timely brake: good, and necessary.
