I Mapping the Field
Legal procedure determines how legal problems are processed. Many areas of procedure also raise issues of rights, which are established by substantive law and overarching principles, and allocated in the process of dispute resolution. More broadly, legal procedure reflects how authorities can impose a conflict settlement when the individuals involved are unable to do so.
Criminal procedure is an example of legal processing that has evolved over time and developed special characteristics. The state asks the alleged victim to stand back and allow the people to prosecute an individual’s wrongdoing. The state also grants the defendant rights when accused by the people. However, new developments are demanding that criminal procedure adapt in order to maintain its unique characteristics. Adjustments may have to be made as artificial intelligence (AI)Footnote 1 robots enter criminal investigations and courtrooms.
In Chapter 6, Sara Sun Beale and Hayley Lawrence describe these developments and, drawing on previous research into human–robot interaction,Footnote 2 explain why the manner in which these developments are framed is crucial. For example, human responses to AI-generated evidence will present unique challenges to the accuracy of litigation. The authors argue that traditional trial techniques must be adapted and new approaches developed, such as new testimonial safeguards, a finding that also appears in other chapters (see Section II.B). Beale and Lawrence suggest that forums beyond criminal courts could be designed as sandboxes to learn more about the basics of AI-enhanced fact-finding.
If we define criminal procedural law broadly to include all rules that regulate an inquiry into whether a violation of criminal law has occurred, then the relevance of new developments such as a “Robo-Judge” becomes even clearer (see Section II.D). Our broad definition of criminal procedure includes, e.g., surveillance techniques enabled by human–robot interaction, as well as the use of data generated by AI systems for criminal investigation and prosecution or fact-finding in court. This Introduction to Part II of the volume will not address the details of other areas such as sentencing, risk assessment, or punishment, which form part of the sanction regime after a verdict is rendered, but relevant discussions will be referred to briefly (Section II.D).
II The Spectrum of Procedural Issues
AI systems play a role in several areas of criminal procedure. The use of AI tools in forensics or predictive analysis reflects a policy decision to utilize new technology. Other areas are affected simply by human–robot cooperation in everyday life, because law enforcement and criminal investigations today make use of data recorded in everyday activities. This accessible data pool is growing quickly as more robots constantly monitor humans. For example, a modern car records manifold data on its user, including infotainment and braking characteristics.Footnote 3 During automated driving, driving assistants such as lane-keeping assistants or drowsiness-detection systems monitor drivers to ensure they are ready to respond to a take-over request if required.Footnote 4 If an accident occurs, the alerts generated by such systems could be used in legal proceedings in various ways.
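As a purely illustrative aside, the following minimal sketch (in Python, with entirely hypothetical field names; real in-vehicle data formats are proprietary) shows the kind of event record a driver-monitoring system might store and that could later surface in an investigation:

```python
# A minimal sketch of a driver-monitoring event record. All field names
# and values are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DriverMonitoringEvent:
    timestamp: datetime               # when the event was recorded
    event_type: str                   # e.g., "drowsiness_alert" or "takeover_request"
    driver_responded: bool            # whether the driver reacted to the alert
    response_time_s: Optional[float]  # seconds until reaction, if any

# A non-response to a take-over request, as it might later be read out
# by investigators after an accident:
event = DriverMonitoringEvent(
    timestamp=datetime(2024, 5, 1, 22, 14, 9, tzinfo=timezone.utc),
    event_type="takeover_request",
    driver_responded=False,
    response_time_s=None,
)
print(event)
```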
II.A Using AI to Detect Crime and Predictive Policing
In classic criminal procedural codes, criminal proceedings start with the suspicion that a crime has occurred, and possibly that a specific person is culpable of committing it. From a legal point of view, this suspicion is crucial. Only if such a supposition exists may the government use the intrusive measures characteristic of criminal investigations, which in turn entitle the defendant to make use of special defense rights.
AI systems and human–robot interactions have created new challenges to this traditional understanding of suspicion. AI-driven analysis of data can be used to generate suspicion via predictive policing,Footnote 5 natural language-based analysis of tax documents,Footnote 6 retrospective analysis of GPS locations stored in smartphones,Footnote 7 or even vaguer data profiling of certain groups.Footnote 8 In all of these cases, AI systems create a suspicion that allows the authorities to investigate and possibly prosecute a crime that would not otherwise have come to the government’s attention.Footnote 9
Today, surveillance systems and predictive policing tools are the most prominently debated examples of human–robot interaction in criminal proceedings. These tools aim to protect public safety and fight crime, but they raise issues of privacy, over-policing, and, potentially, discrimination.
Broader criminal justice issues connected to these AI systems arise from the fact that these tools are normally trained via machine learning methods. Human bias, already present in the criminal justice system, can be reinforced by biased training data, insufficiently calibrated machine learning, or both. This can result in ineffective predictive tools which either do not identify “true positives,” i.e., the people at risk of committing a crime, or which burden the public or a specific minority with unfair and expensive over-policing.Footnote 10 In any case, a risk assessment is a prognosis, and as such it always carries its own risks because it cannot be checked entirely; such risk assessments therefore raise ethical and legal issues when used as the basis for action.Footnote 11
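The “true positive” problem can be made tangible with a short sketch. Assuming entirely invented predictions and outcomes, the code below computes a predictive tool’s false positive rate separately for two groups; a disparity of the kind shown is one common indicator of the bias described above:

```python
# A minimal sketch of a per-group error audit for a predictive tool.
# All predictions (1 = flagged as "at risk") and outcomes (1 = offense
# occurred) are invented for illustration.
def false_positive_rate(flagged, offended):
    """Share of non-offenders who were nonetheless flagged."""
    non_offenders = [f for f, o in zip(flagged, offended) if o == 0]
    return sum(non_offenders) / len(non_offenders) if non_offenders else 0.0

groups = {
    "group_a": ([1, 0, 1, 0, 0, 1], [1, 0, 0, 0, 0, 1]),
    "group_b": ([1, 1, 1, 0, 1, 0], [1, 0, 0, 0, 0, 0]),
}
for name, (flagged, offended) in groups.items():
    print(f"{name}: false positive rate {false_positive_rate(flagged, offended):.2f}")
# group_b's non-offenders are flagged far more often than group_a's,
# i.e., one group bears the cost of over-policing.
```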
II.B Criminal Investigation and Fact-Finding in Criminal Proceedings
When a criminal case is opened regarding a particular matter, the suspicion that a crime actually occurred must be investigated. The authorities seek to substantiate this suspicion by collecting material to serve as evidence. The search for all relevant leads is an important feature of criminal proceedings, which are shaped by the ideal of finding the truth before a verdict is entered. Currently, the material collected as evidence increasingly includes digital evidence.Footnote 12
Human–robot interactions in daily life can also lead to a targeted criminal investigation in a specific case. For example, a modern car programmed to monitor both driving and driver could record data that suggests a crime has been committed.Footnote 13 Furthermore, a driver’s failure to react to take-over requests could factor into a prediction of the driving standards likely to be exhibited by an individual in the future.Footnote 14
For a while now, new technology has also played an important role in enhancing forensic techniques. DNA sample testing is one area that has benefited, but it has also faced new challenges.Footnote 15 Digitized DNA sample testing is less expensive, but it is based on an opaque data-generating process, which raises questions regarding its acceptability as criminal evidence.Footnote 16
Beyond the forensic technological issues of fact-finding, new technology facilitates the remote testimony of witnesses who cannot come to trial as well as reconstructions of relevant situations through virtual reality.Footnote 17 When courts shut their doors during the COVID-19 pandemic, they underwent a seismic shift, adopting virtual hearings to replace physical courtrooms. It is unclear whether this transformation will permanently alter the justice landscape by offering new perspectives on court design, framing, and “ritual elements” of virtual trials in enhanced courtrooms.Footnote 18
II.B.1 Taming the “Function Creeps”
Human–robot interaction prompts an even broader discussion regarding criminal investigation, as the field of inquiry includes not only AI tools designated as investigative tools, but also devices whose functions reach beyond their original intended purpose, termed “function creep.”Footnote 19 An example would be drowsiness detection alerts: the driving assistants generating such alerts were designed only to warn the driver about their performance during automated driving, not to generate evidence for a criminal court.
In Chapter 9, Erin Murphy addresses the issue that while breathalyzers or DNA sample testing kits were designed as forensic tools, cars and smartphones were designed to meet consumer needs. When the data generated by consumer devices is used in criminal investigations, the technology is employed for a purpose for which it has not been fully evaluated. For example, the recording of a drowsiness alert, like other data stored by the vehicle,Footnote 20 could be a valuable source of evidence for fact-finding in criminal proceedings, in particular where a driver failed to respond to alerts issued by a lane-keeping assistant or drowsiness-detection system.Footnote 21 However, an unresolved issue is how a defendant would defend against such incriminating evidence. Murphy argues for a new empowerment of defendants facing “digital proof,” by providing the defense with the procedural tools to attack incriminating evidence or introduce their own “digital proof.”
A vivid illustration of the need to take Murphy’s plea seriously is the Danish data scandal.Footnote 22 Denmark uses historical call data records as circumstantial evidence to prove that someone has phoned a particular person or has been in a certain location. In 2019, it became clear that the data used was flawed because, among other things, the data processing method employed by certain telephone providers had changed without the police authorities’ awareness. The judicial authorities eventually ordered a review of more than 10,000 cases, and consequently several individuals were released from prison. It also emerged that the majority of errors in the Danish data scandal stemmed from human error rather than machine error.
II.B.2 Need for a New Taxonomy
One lesson that can be learned from the Danish data scandal is that human–robot interaction might not always require new and complex models, but rather common sense, litigation experience, and forensic understanding. Telephone providers, though obliged to record data for criminal justice systems, have the primary task of providing a customer service, not preparing forensic evidence. However, when AI-generated data, produced as a result of a robot assessing human performance, are proffered as evidence, traditional know-how has its limits. If robot testimony is presented at a criminal trial for fact-finding, a new taxonomy and a common language shared by the trier of fact and experts are required. Rules have been established for proving that a driver was speeding or intoxicated, but not for explaining the process by which an alert comes to indicate the drowsiness of a human driver. These issues highlight the challenges and possibilities accompanying digital evidence, which must now be dealt with in all legal proceedings, because most information is stored electronically, not in analog form.Footnote 23 It is welcome that supranational initiatives, such as the Council of Europe’s Electronic Evidence Guide,Footnote 24 provide standards for digital evidence, although they do not take up the specific problems of evidence generated through human–robot interactions. To support the meaningful vetting of AI-generated evidence, Chapter 8 by Emily Silverman, Jörg Arnold, and Sabine Gless proposes a new taxonomy that distinguishes raw, processed, and evaluative data. This taxonomy can help courts find new ways to access and test robot testimony in a reliable and fair way.Footnote 25
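As a rough illustration of how the raw/processed/evaluative distinction might be operationalized, consider the following sketch; the three labels follow Chapter 8, while the data structure and examples are hypothetical:

```python
# A minimal sketch applying the raw/processed/evaluative taxonomy to a
# hypothetical chain of vehicle-generated evidence.
from dataclasses import dataclass
from enum import Enum

class DataCategory(Enum):
    RAW = "raw"                # unaltered sensor output
    PROCESSED = "processed"    # filtered, aggregated, or converted data
    EVALUATIVE = "evaluative"  # the system's own assessment of human conduct

@dataclass
class EvidenceItem:
    description: str
    category: DataCategory

chain = [
    EvidenceItem("steering-angle sensor readings", DataCategory.RAW),
    EvidenceItem("smoothed steering profile over 60 seconds", DataCategory.PROCESSED),
    EvidenceItem("drowsiness alert issued by the assistant", DataCategory.EVALUATIVE),
]
for item in chain:
    print(f"{item.category.value:>10}: {item.description}")
```

The further along this chain an item sits, the more of the system’s own modeling it embeds, and the harder it becomes for a trier of fact to test it by traditional means.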
Part of the challenge in vetting such evidenceFootnote 26 is to support the effective use of defense rights to challenge evidence.Footnote 27 It is very difficult for any fact-finder or defendant to pierce the veil of data, given that robots or other AI systems may not be able to explain their reasoningFootnote 28 and may be protected by trade secrets.Footnote 29
II.C New Agenda on Institutional Safeguards and Defense Rights
The use of AI systems in law enforcement and criminal investigations, and the omnipresence of AI devices that monitor the daily life of humans, impact the criminal trial in significant ways.Footnote 30 One shift is from the traditional investigative-enforcement perspective of criminal investigations to a predictive-preventive approach. This shift could erode the theoretically strong individual rights of defendants in criminal investigations.Footnote 31 A scholarly debate has asked: What government action should qualify as the basis for a criminal proceeding, as opposed to mere policing? What individual rights must be given to those singled out by AI systems? What new institutional safeguards are needed? And, given the ubiquity of smartphone cameras and the quality of their recordings, as well as the willingness of many to record what they see, what role can or should commercial technology play in criminal investigations?
In Chapter 7, Andrea Roth argues that the use of AI-generated evidence must be reconciled with the basic goals shared by both adversarial and inquisitorial criminal proceedings: accuracy, fairness, dignity, and public legitimacy. She develops a compilation of principles for every stage of investigation and fact-finding to ensure a reliable and fair process, one that meets the needs of human defendants without losing the benefits of new technology. Her chapter points to the notion that the use of AI devices in criminal proceedings jeopardizes the modern achievement of conceptualizing the defendant not as an object, but as a subject of the proceedings.
It remains to be seen whether future courts and legal scholarship will be able to provide a new understanding of basic principles in criminal proceedings, such as the presumption of innocence. A new understanding is needed in view of the possibility that investigative powers will be exercised on individuals who are the subjects not of criminal investigations, but of predictive policing,Footnote 32 as these individuals would not be offered traditional procedural protections. This is a complex issue doctrinally, because in Europe the presumption of innocence applies only after the charge. If there is no charge, there is, in principle, no protection. However, once a charge is leveled, the protection applies retroactively.
II.D Robo-Judges
After criminal investigation and fact-finding, a decision must be rendered. Could robots hand down a verdict without a human in the loop? Ideas relating to so-called robo-judges have been discussed for a while now.Footnote 33 In practice, “legal tech” and robot-assisted alternative dispute resolution have made progress,Footnote 34 as has robot-assisted human decision-making in domains where reaching a decision through the identification, sorting, and calibration of numerous variables is crucial. Instances of robots assisting in early release decisions or the bail system in overburdened US jurisdictions, or in sentencing in China, have been criticized for various reasons.Footnote 35 However, some decision-making systems stand a good chance of being adopted in certain areas, because human–robot cooperation in making judicial decisions can facilitate faster and more affordable access to justice, which is a human right.Footnote 36 Countries increasingly provide online dispute resolution that relies almost entirely on AI,Footnote 37 and some may take the use of new technologies beyond that.Footnote 38
When legal punishment entails the curtailment of liberty and property, and in some countries even death, things are different.Footnote 39 The current rejection of robo-judges in criminal matters is, however, not set in stone. Research on the feasibility of developing algorithms to assist in handing down decisions exists in jurisdictions as different as the United States,Footnote 40 Australia,Footnote 41 China,Footnote 42 and Germany.Footnote 43 If human–robot cooperation brings about more efficient and fair sentencing in a petty crime area, this will have wide-ranging implications for other human–robot interactions in legal proceedings, as well as other types of computer-assisted decision-making.
Obviously, this path is not without risk. Defendants today often invoke their defense rights only when they go to trial.Footnote 44 And as has been argued above, their confrontation right, which is necessary for reliable and fair fact-finding, is particularly at risk in the context of some robot evidence. A robot-assisted trial would have to grant an effective set of defense rights. Even the use of a robo-judge in a preliminary judgment could push defendants into accepting a plea bargain without making proper use of their trial rights. Some fear an inversion of the burden of proof, based on risk profiles and possibly even exotic clues such as findings from brain research.Footnote 45
As things stand today, using robo-judges to entirely replace humans is a distant possibility.Footnote 46 However, the risks of semi-automated justice pose a more urgent concern.Footnote 47 When an AI-driven frame of reference is admitted into the judging process, humans have difficulty making a case against the robot’s finding, and it is therefore likely that an AI system would set the tone. We may come to see a robot judge as “fairer” if bias is easier to address in a machine than in a person. Technological advancement could reduce and perhaps eliminate a feared “fairness gap” by enhancing the interpretability of AI-rendered decisions and strengthening beliefs regarding the thoroughness of consideration and the accuracy of the outcome.Footnote 48 But until then, straightforward communication and genuine human connection seem too precious to sacrifice for the possibility of a procedurally more just outcome. As of now, it seems that machine-adjudicated proceedings are considered less fair than those adjudicated by humans.Footnote 49
II.E Robo-Defense
Criminal defendants have a right to counsel, but this right may be difficult to exercise when defense lawyers are too expensive or hard to secure for other reasons. If it is possible for robots to assist judges, so too could they assist defendants. In routine cases with recurring issues, a standard defense could help. This is the business model of the start-up “DoNotPay.”Footnote 50 Self-styled as the “world’s first robot lawyer,”Footnote 51 it aims to help fight traffic tickets in a cheap and efficient way.Footnote 52 When DoNotPay’s creator announced that his AI system could advise defendants in the courtroom using smart glasses that record court proceedings and dictate responses into their ear via AI text generators, he was threatened with criminal prosecution for the unauthorized practice of law.Footnote 53 Yet, the fact that well-funded, seemingly unregulated providers demonstrated a willingness to enter the market for low-cost legal representation might foreshadow a change in criminal defense.
Human–robot interaction might not only lower representation costs, but potentially also assist defendants in carrying out laborious tasks more efficiently. For example, if a large number of texts need to be screened for defense leads, the use of an AI system could speed up the process considerably. Furthermore, if a defendant has been incriminated by AI-generated evidence, it only makes sense to employ technology in response.Footnote 54
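A minimal sketch of such a screening task, with invented file names, contents, and search terms, might look as follows; a real defense tool would rely on semantic search rather than simple keyword matching:

```python
# A minimal sketch of screening case documents for possible defense leads.
# File names, contents, and search terms are invented for illustration.
documents = {
    "witness_statement_1.txt": "The car braked hard before the light turned red.",
    "service_log.txt": "A sensor fault was reported two weeks before the incident.",
    "invoice_443.txt": "Payment received for the vehicle inspection.",
}
lead_terms = ["fault", "malfunction", "braked", "defect"]

for name, text in documents.items():
    hits = [term for term in lead_terms if term in text.lower()]
    if hits:
        print(f"{name}: possible lead(s): {', '.join(hits)}")
```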
II.F Robots as Defendants
Although dismissed as science fiction in the past, the case for punishing robots that cause harm has begun to receive scholarly attention over the last decade.Footnote 55 As Tatjana Hörnle rightly points out in her introduction to Part I of the volume, theorizing about attributing guilt to robots and actually prosecuting them in court are two different things. But if the issue is considered, it appears that similar problems arise in substantive and procedural law. Prominent among the challenges is the fact that both imputing guilt and bringing charges require the defendant to have a legal personality. It only makes sense to pursue robots in a legal proceeding if they can be the subject of a legal obligation.
In 2017, the EU Parliament took a functional approach to conferring partial legal capacity on robots via its “Resolution on Civil Law Rules on Robotics,” which proposed the creation of a specific legal status for robots.Footnote 56 The idea of conferring legal personality on robots draws on the notion of the “legal personality” of companies or corporations. An “electronic personality” would be applied to cases where robots make autonomous decisions or otherwise interact with third parties autonomously.Footnote 57
In principle, the idea of granting robots personhood dates back a few decades. A prominent early proposal was submitted by Lawrence Solum in 1992.Footnote 58 He posited the idea of a legal personality, although more as a thought experiment than a concrete proposal.Footnote 59 He highlighted the crucial question of incentivizing “robots”: “what is the point of making a thing – which can neither understand the law nor act on it – the subject of a legal duty?”Footnote 60 More recently, some legal scholars have claimed that “there is no compelling reason to restrict the attribution of action exclusively to humans and to social systems.”Footnote 61 Yet the EU proposal remains controversial for torts, and the proposal for legal personhood has not been taken up in the debate regarding AI systems in criminal justice.
II.G Risk Assessment Recommendation Systems (Bail, Early Release, Probation)
New technology not only changes how we investigate crime and search for evidence. Human–robot cooperation in criminal matters also has the potential to transform the risk assessment of individuals in the justice system and the assignment of adequate responsive measures. A robot’s capacity to analyze vast data pools and to make recommendations based on this analysis promises potentially better risk assessment than humans can provide.Footnote 62 Robots assist in decision-making during criminal proceedings in particular cases, as when they make recommendations regarding bail, advise on an appropriate sentence, or make suggestions regarding early release. Such systems have been used in state criminal justice systems in the United States, but this has triggered controversial case lawFootnote 63 and a vigorous debate around the world.Footnote 64 What some see as more transparent and rational, i.e., “evidence-based” decision-making,Footnote 65 others denounce as deeply flawed decision-making.Footnote 66 It is important to note that in these cases, the final decision is always taken by a judge. However, the question is whether the human judge will remain the actual decision-maker or will become more and more a figurehead for a system that crunches pools of data.Footnote 67
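At their core, such tools reduce to a score and a threshold. The sketch below, with invented features and weights, indicates the shape of the computation; real systems are far more complex, and their proprietary weightings are precisely what the transparency debate concerns:

```python
# A minimal sketch of a score-based recommendation of the kind discussed
# above. Features, weights, and the threshold are all invented; as noted
# in the text, the final decision remains with a human judge.
def risk_score(features: dict, weights: dict) -> float:
    return sum(weights[name] * value for name, value in features.items())

weights = {"prior_arrests": 0.4, "failure_to_appear": 0.5, "age_factor": -0.2}
defendant = {"prior_arrests": 3.0, "failure_to_appear": 1.0, "age_factor": 1.5}

score = risk_score(defendant, weights)
recommendation = "detain pending review" if score > 1.0 else "release on bail"
print(f"score {score:.2f} -> recommended: {recommendation}")
```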
III Privacy and Fairness Concerns
The use of human–robot interaction in criminal matters raises manifold privacy and fairness concerns, only some of which can be highlighted here.
III.A Enhancing Safety or Paving the Way to a “Surveillance State”?
In a future where human–robot interactions are commonplace, one major concern is the potential for a “surveillance state” in which governments and private entities share tasks, thereby allowing both sides to avoid the regulatory net. David Gray takes on this issue when he asks whether our legal systems have the right tools to preserve autonomy, intimacy, and democracy in a future of ubiquitous human–robot interaction. He argues that the US Constitution’s Fourth Amendment could provide safeguards, but it falls short due to current judicial interpretations of individual standing and the state agency requirement. Gray argues that the language of the Fourth Amendment, as well as its historical and philosophical roots, support a new interpretation, one that could acknowledge collective interests and guard privacy as a public good against threats posed by both state and private agents.
In Europe, the fear of a surveillance state has prompted manifold domestic and European laws. The European Convention on Human Rights (ECHR), adopted in 1950 in the forum of the Council of Europe, grants the right to privacy as a fundamental human right. The EU Member States first agreed on a Data Protection Directive (95/46/EC) in 1995, then proclaimed a right to protection of personal data in the Charter of Fundamental Rights of the EU in 2000, and most recently put into effect the General Data Protection Regulation (GDPR) in 2018. The courts, in particular the Court of Justice of the European Union (CJEU), have also shaped data protection law through interpretations and rulings.
Data processing in criminal justice, however, has always been an exception. It is not covered by the GDPR as such, but by Directive (EU) 2016/680, which addresses the protection of natural persons regarding the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection, or prosecution of criminal offenses or the execution of criminal penalties.Footnote 68 New proposals, such as the regulation laying down harmonized rules on artificial intelligence (AI Act),Footnote 69 have the potential to undo current understandings regarding the dividing line between the general regulation of data collection and police matters.
One major issue, concerning policing as well as criminal justice, pertains to facial recognition, conducted either by a fully responsible human via photo matching or by a robot using real-time facial recognition. When scanning masses of visual material, robots outperform humans in detecting matches via superior pattern recognition. This strength, however, comes with drawbacks, among them the reinforcement of inherent bias through the use of biased training materials in the machine learning process.
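The matching step at the core of such systems can be sketched in a few lines: two face images are reduced to numerical feature vectors (“embeddings”) and compared against a threshold. The vectors and threshold below are invented; in a real system, both depend heavily on the training data, which is where the bias just described enters:

```python
# A minimal sketch of threshold-based face matching on invented embeddings.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

probe = [0.8, 0.1, 0.5]      # embedding of the probe image (invented)
candidate = [0.7, 0.2, 0.6]  # embedding of a database image (invented)
THRESHOLD = 0.9              # a lower threshold yields more matches, true and false

score = cosine_similarity(probe, candidate)
print(f"similarity {score:.3f} -> {'match' if score >= THRESHOLD else 'no match'}")
```

The choice of threshold embodies a policy decision: it trades missed identifications against wrongful ones, and nothing in the mathematics dictates where that line should be drawn.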
The use of facial recognition in criminal matters raises a number of issues, including public–private partnerships. Facial recognition systems need huge data pools to function, which can be provided by the authorities in the form of mug shots. Creating such data pools can, however, reinforce bias already existent in policing. Visual material could also be provided by private companies, but this raises privacy concerns if the respective individuals have not consented to be in the data pool. Data quality may also be problematic if the material lacks adequate diversity, which could affect the robot’s capability to correctly match two pictures. In the past, authorities bought pictures and services from companies that later came under scrutiny for their lack of transparency and other security flaws.Footnote 70 If such companies scrape photos from social media and other internet sources without the individuals’ consent, the material cannot lawfully be used for matching; yet without an adequate volume of photographs, there may be serious consequences, such as wrongful identification. Similar arguments are raised regarding the use of genealogy databases for DNA-sample testing by investigation authorities.Footnote 71 The use of facial recognition for criminal justice matters may have even more profound effects. People might feel safer overall if criminals are identified, but also less inclined to exercise legal rights that put them under the gaze of the authorities, such as taking part in demonstrations.Footnote 72
The worldwide awareness of the use of robots in facial recognition has given rise to an international discussion about the need for universal normative frameworks, grounded in existing international human rights norms, for the use of facial recognition technology and related AI. In June 2020, the UN High Commissioner for Human Rights published a report concerning the impact of new technologies,Footnote 73 including facial recognition technology, focusing on their effect on human rights.Footnote 74 The report highlighted the need to develop standards for privacy and data protection, as well as to address accuracy and discriminatory impacts. The following year, the Council of Europe published Guidelines on Facial Recognition, suggesting that states should adopt a robust legal framework applicable to the different uses of facial recognition technology and implement a set of safeguards.Footnote 75 At the beginning of 2024, the EU Member States approved a proposal on an AI ActFootnote 76 that aims to ban certain facial recognition techniques in public spaces, but permits their use for specific law enforcement purposes subject to prior judicial authorization.Footnote 77
III.B Fairness and Taking All Interests into Consideration
Notwithstanding the many risks attached to the deployment of certain surveillance technology, it is clear that AI systems and robots can be put to use to support overburdened criminal justice systems and the individuals who face them. For example, advanced monitoring systems might allow for finely adjusted bail or probation measures in many more situations than is possible with current levels of human oversight.Footnote 78 Crowdsourced evidence from private cameras might provide exonerating evidence needed by the defense.Footnote 79 However, such systems raise fairness questions in many ways and require the balancing of interests in manifold respects, both within and beyond the criminal trial. Problems arising within criminal proceedings include the possible infringement of defense rights, as well as the need to correct bias and prevent discrimination (see Sections II.A and II.B.2).
A different sort of balancing of interests is required when addressing risks regarding the invasion of privacy.Footnote 80 Chapter 10 by Bart Custers and Lonneke Stevens outlines the increasing discrepancy between legal frameworks of data protection and criminal procedure, and the actual practices of using data as evidence in criminal courts. The structural ambiguity they detect has many features. They find that the existing laws in the Netherlands do not obstruct data collection but that the analysis of such evidence is basically unregulated, and data rights cannot yet be meaningfully enforced in criminal courts.
As indicated above, this state of affairs could change. In Europe, new EU initiatives and legislation are being introduced.Footnote 81 If the right to transparency of AI systemsFootnote 82 and the right to accountabilityFootnote 83 can be enforced in criminal proceedings and are not modified by a specialized criminal justice regulation,Footnote 84 courts that want to make use of data gained through such systems might find that data protection regulation actually promises to assist in safeguarding the reliability of fact-finding. As always, the question is whether we can meaningfully identify, understand, and address the possibilities and risks posed by human–robot interaction. If not, we cannot make use of the technology.
The controversial debate on how the criminal justice system can adequately address privacy concernsFootnote 85 and the development of data protection law potentially point the way to a different solution. This solution lies not in law, but in technology, via privacy by design.Footnote 86 This approach can be taken to an extreme, until we arrive at what has been called “impossibility structures,” i.e., design structures that prohibit human use in certain circumstances.Footnote 87 Using the example of driving automation, we find that intervention systems exist on a spectrum. At the low-intervention end of the spectrum are nudging structures, such as intelligent speed assistance and drowsiness warning systems. At the high-intervention end are impossibility structures; rather than simply monitor or enhance human driving performance, they prevent human driving entirely. For example, alcohol interlock devices immobilize the vehicle if a potential driver’s breath alcohol concentration exceeds a certain predetermined level. These structures prevent drunken humans from driving at all, creating “facts on the ground” that replace law enforcement and criminal trials. It is very difficult to say whether it would be good to bypass human agency with such structures; the risk is that such legality-by-design undermines not only the human entitlement to act out of necessity, but perhaps also the privacy that constitutes one of the foundations of liberal society, and could thereby undermine democracy as a whole.Footnote 88
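The two ends of this spectrum can be contrasted in a few lines of illustrative code; the threshold value is hypothetical, as actual limits are set by national law:

```python
# A minimal sketch contrasting a nudging structure with an impossibility
# structure. The limit value is hypothetical.
BAC_LIMIT = 0.05  # illustrative breath alcohol concentration limit

def nudging_structure(bac: float) -> str:
    # Low intervention: the system warns, but the human may still drive.
    return "warning issued" if bac >= BAC_LIMIT else "no action"

def impossibility_structure(bac: float) -> str:
    # High intervention: an alcohol interlock immobilizes the vehicle,
    # creating "facts on the ground" instead of an ex post prosecution.
    return "vehicle immobilized" if bac >= BAC_LIMIT else "ignition enabled"

for bac in (0.02, 0.08):
    print(f"BAC {bac}: {nudging_structure(bac)} | {impossibility_structure(bac)}")
```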
IV The Larger Perspective
It seems inevitable that human–robot interaction will impact criminal proceedings, just as it has other areas of the law. However, the exact nature of this impact is unclear. It may help to prevent crime before it happens or it might lead to a merciless application of the law.
Legal scholars primarily point to the risks of AI systems in criminal justice and the need to have adequate safeguards in place. However, many agree that certain robots have the potential to make criminal proceedings faster, and possibly even fairer. One big, not yet fully scrutinized issue will be whether we can and will trust systems that generate information through decision-making processes that are opaque to humans, even when it comes to criminal verdicts.Footnote 89
Future lawmakers drafting criminal procedure must keep in mind what Tatjana Hörnle pointed out in her introduction to Part I of the volume, that humans tend to blame other humans rather than machines.Footnote 90 The same is true for bringing charges against humans as opposed to machines, as explained by Jeanne Gaakeer.Footnote 91 Part of the explanation for this view lies in the inherent perspectives of substantive and procedural law.Footnote 92 Criminal justice is tailored to humans, and it is much easier, for reasons rooted in human understanding and ingrained in the legal framework, to prosecute a human.Footnote 93 This appears to be the case when a prosecution can be directed against either a human or a human–robot cooperation,Footnote 94 and it would most probably also be the case if one had to choose between prosecuting a visible human driver or a robot that guided automated driving.
With human–robot interaction now becoming a reality of daily life and criminal justice, it is time for the legal community to reconcile itself to these challenges and to engage in a new conversation with computer scientists, behavioral scholars, forensic experts, and members of other disciplines who can provide relevant knowledge. The digital shift in criminal justice will be manifold and less than predictable. Human–robot interaction might direct more blame in the direction of humans, but it might also open up various new ways to reconstruct the past and possibly assist in exonerating falsely accused humans. A basic condition for benefiting from these developments is to understand the different aspects of human–robot interaction and their ramifications for legal proceedings.