
5 - Introduction to Human–Robot Interaction and Procedural Issues in Criminal Justice

from Part II - Human–Robot Interactions and Procedural Law

Published online by Cambridge University Press:  03 October 2024

Sabine Gless
Affiliation:
Universität Basel, Switzerland
Helena Whalen-Bridge
Affiliation:
National University of Singapore

Summary

Robots have not only become part of our everyday life – they have assumed functions in our criminal justice systems. The following chapters from Sara Beale and Hayley Lawrence, Andrea Roth, Erin Murphy, Emily Silverman, Jörg Arnold, and Sabine Gless focus on evidentiary issues arising from human–robot interaction, while Bart Custers and Lonneke Stevens, as well as David Gray, look at the emerging impact of data protection on criminal justice. This introduction places the chapters in the context of the broader issues surrounding the deployment of AI systems in criminal investigations and trials, not all of which are dealt with in the chapters.

Information

Human–Robot Interaction in Law and Its Narratives: Legal Blame, Procedure, and Criminal Law, pp. 89–110
Publisher: Cambridge University Press
Print publication year: 2024
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC 4.0 https://creativecommons.org/cclicenses/

I Mapping the Field

Legal procedure determines how legal problems are processed. Many areas of procedure also raise issues of rights, which are established by substantive law and overarching principles, and allocated in the process of dispute resolution. More broadly, legal procedure reflects how authorities can impose a conflict settlement when the individuals involved are unable to do so.

Criminal procedure is an example of legal processing that has evolved over time and developed special characteristics. The state asks the alleged victim to stand back and allow the people to prosecute an individual’s wrongdoing. The state also grants the defendant rights when accused by the people. However, new developments are demanding that criminal procedure adapt in order to maintain its unique characteristics. Adjustments may have to be made as artificial intelligence (AI)Footnote 1 robots enter criminal investigations and courtrooms.

In Chapter 6, Sara Sun Beale and Hayley Lawrence describe these developments, and using previous research into human–robot interaction,Footnote 2 they explain how the manner in which these developments are framed is crucial. For example, human responses to AI-generated evidence will present unique challenges to the accuracy of litigation. The authors argue that traditional trial techniques must be adapted and new approaches developed, such as new testimonial safeguards, a finding that also appears in other chapters (see Section II.B). Beale and Lawrence suggest that forums beyond criminal courts could be designed as sandboxes to learn more about the basics of AI-enhanced fact-finding.

If we define criminal procedural law broadly to include all rules that regulate an inquiry into whether a violation of criminal law has occurred, then the relevance of new developments such as a “Robo-Judge” becomes even clearer (see Section II.D). Our broad definition of criminal procedure includes, e.g., surveillance techniques enabled by human–robot interaction, as well as the use of data generated by AI systems for criminal investigation and prosecution or fact-finding in court. This Introduction to Part II of the volume will not address the details of other areas such as sentencing, risk assessment, or punishment, which form part of the sanction regime after a verdict is rendered, but relevant discussions will be referred to briefly (Section II.D).

II The Spectrum of Procedural Issues

AI systems play a role in several areas of criminal procedure. The use of AI tools in forensics or predictive analysis reflects a policy decision to utilize new technology. Other areas are affected simply by human–robot cooperation in everyday life, because law enforcement or criminal investigations today make use of data recorded in everyday activities. This accessible data pool is growing quickly as more robots constantly monitor humans. For example, a modern car records manifold data on its user, including infotainment and braking characteristics.Footnote 3 During automated driving, driving assistants such as lane-keeping assistants or drowsiness-detection systems monitor drivers to ensure they are ready to respond to a take-over request if required.Footnote 4 If an accident occurs, this kind of alert could be used in legal proceedings in various ways.

II.A Using AI to Detect Crime and Predictive Policing

In classic criminal procedural codes, criminal proceedings start with the suspicion that a crime has occurred, and possibly that a specific person is culpable of committing it. From a legal point of view, this suspicion is crucial. Only if such a supposition exists may the government use the intrusive measures characteristic of criminal investigations, which in turn entitle the defendant to make use of special defense rights.

The use of AI systems and human–robot interactions has created new challenges to this traditional understanding of suspicion. AI-driven analysis of data can be used to generate suspicion via predictive policing,Footnote 5 natural language-based analysis of tax documents,Footnote 6 retrospective analysis of GPS locations stored in smartphones,Footnote 7 or even vaguer data profiling of certain groups.Footnote 8 In all of these cases, AI systems create a suspicion which allows the authorities to investigate and possibly prosecute a crime that would not otherwise have come to the government’s attention.Footnote 9

Today, surveillance systems and predictive policing tools are the most prominently debated examples of human–robot interaction in criminal proceedings. These tools aim to protect public safety and fight crime, but there are issues of privacy, over-policing, and potentially discrimination.

Broader criminal justice issues connected to these AI systems arise from the fact that these tools are normally trained via machine learning methods. Human bias, already present in the criminal justice system, can be reinforced by biased training data, insufficiently calibrated machine learning, or both. This can result in ineffective predictive tools which either do not identify “true positives,” i.e., the people at risk of committing a crime, or which burden the public or a specific minority with unfair and expensive over-policing.Footnote 10 In any case, a risk assessment is a prognosis, and as such it always carries its own risks because it cannot be checked entirely; such risk assessments therefore raise ethical and legal issues when used as the basis for action.Footnote 11

II.B Criminal Investigation and Fact-Finding in Criminal Proceedings

When a criminal case is opened regarding a particular matter, the suspicion that a crime actually occurred must be investigated. The authorities seek to substantiate this suspicion by collecting material to serve as evidence. The search for all relevant leads is an important feature of criminal proceedings, which are shaped by the ideal of finding the truth before a verdict is entered. Currently, the material collected as evidence increasingly includes digital evidence.Footnote 12

Human–robot interactions in daily life can also lead to a targeted criminal investigation in a specific case. For example, a modern car programmed to monitor both driving and driver could record data that suggests a crime has been committed.Footnote 13 Furthermore, a driver’s failure to react to take-over requests could factor into a prediction of the driving standards likely to be exhibited by an individual in the future.Footnote 14

For a while now, new technology has also played an important role in enhancing forensic techniques. DNA sample testing is one area that has benefited, but it has also faced new challenges.Footnote 15 Digitized DNA sample testing is less expensive, but it is based on an opaque data-generating process, which raises questions regarding its acceptability as criminal evidence.Footnote 16

Beyond the forensic technological issues of fact-finding, new technology facilitates the remote testimony of witnesses who cannot come to trial as well as reconstructions of relevant situations through virtual reality.Footnote 17 When courts shut their doors during the COVID-19 pandemic, they underwent a seismic shift, adopting virtual hearings to replace physical courtrooms. It is unclear whether this transformation will permanently alter the justice landscape by offering new perspectives on court design, framing, and “ritual elements” of virtual trials in enhanced courtrooms.Footnote 18

II.B.1 Taming the “Function Creeps”

Human–robot interaction prompts an even broader discussion regarding criminal investigation, as the field of inquiry includes not only AI tools designated as investigative tools, but also devices whose functions reach beyond their original intended purpose, termed “function creep.”Footnote 19 An example would be drowsiness detection alerts, as the driving assistants generating such alerts were designed only to warn drivers about their performance during automated driving, not to produce evidence for a criminal court.

In her Chapter 9, Erin Murphy addresses the issue that while breathalyzers or DNA sample testing kits were designed as forensic tools, cars and smartphones were designed to meet consumer needs. When the data generated by consumer devices is used in criminal investigations, the technology is employed for a purpose which has not been fully evaluated. For example, the recording of a drowsiness alert, like other data stored by the vehicle,Footnote 20 could be a valuable source of evidence for fact-finding in criminal proceedings, in particular, a driver’s non-response to alerts issued by a lane-keeping assistant or drowsiness detection system.Footnote 21 However, an unresolved issue is how a defendant would defend against such incriminating evidence. Murphy argues for a new empowerment of defendants facing “digital proof,” by providing the defense with the procedural tools to attack incriminating evidence or introduce their own “digital proof.”

A lively illustration of the need to take Murphy’s plea seriously is the Danish data scandal.Footnote 22 Denmark uses historical call data records as circumstantial evidence to prove that someone has phoned a particular person or has been in a certain location. In 2019, it became clear that the data used was flawed because, among other things, the data processing method employed by certain telephone providers had changed without the police authorities’ awareness. The judicial authorities eventually ordered a review of more than 10,000 cases, and consequently several individuals were released from prison. It has also been revealed that the majority of errors in the Danish data scandal resulted from human error rather than machine error.

II.B.2 Need for a New Taxonomy

One lesson that can be learned from the Danish data scandal is that human–robot interaction might not always require new and complex models, but rather common sense, litigation experience, and forensic understanding. Telephone providers, though obliged to record data for criminal justice systems, have the primary task of providing a customer service, not preparing forensic evidence. However, when AI-generated data, produced as a result of a robot assessing human performance, are proffered as evidence, traditional know-how has its limits. If robot testimony is presented at a criminal trial for fact-finding, a new taxonomy and a common language shared by the trier of facts and experts are required. Rules have been established for proving that a driver was speeding or intoxicated, but not for explaining the process that leads an alert to indicate the drowsiness of a human driver. These issues highlight the challenges and possibilities accompanying digital evidence, which must now be dealt with in all legal proceedings, because most information is stored electronically, not in analog form.Footnote 23 It is welcome that supranational initiatives, such as the Council of Europe’s Electronic Evidence Guide,Footnote 24 provide standards for digital evidence, although they do not take up the specific problems of evidence generated through human–robot interactions. To support the meaningful vetting of AI-generated evidence, Chapter 8 by Emily Silverman, Jörg Arnold, and Sabine Gless proposes a new taxonomy that distinguishes raw, processed, and evaluative data. This taxonomy can help courts find new ways to access and test robot testimony in a reliable and fair way.Footnote 25

Part of the challenge in vetting such evidenceFootnote 26 is to support the effective use of defense rights to challenge evidence.Footnote 27 It is very difficult for any fact-finder or defendant to pierce the veil of data, given that robots or other AI systems may not be able to explain their reasoningFootnote 28 and may be protected by trade secrets.Footnote 29

II.C New Agenda on Institutional Safeguards and Defense Rights

The use of AI systems in law enforcement and criminal investigations, and the omnipresence of AI devices that monitor the daily life of humans, impact the criminal trial in significant ways.Footnote 30 One shift is from the traditional investigative-enforcement perspective of criminal investigations to a predictive-preventive approach. This shift could erode the theoretically strong individual rights of defendants in criminal investigations.Footnote 31 A scholarly debate has asked: What government action should qualify as the basis for a criminal proceeding as opposed to mere policing? What individual rights must be given to those singled out by AI systems? What new institutional safeguards are needed? And, given the ubiquity of smartphone cameras and the quality of their recordings, as well as the willingness of many to record what they see, what role can or should commercial technology play in criminal investigations?

In Chapter 7, Andrea Roth argues that the use of AI-generated evidence must be reconciled with the basic goals shared by both adversarial and inquisitorial criminal proceedings: accuracy, fairness, dignity, and public legitimacy. She develops a compilation of principles for every stage of investigation and fact-finding to ensure a reliable and fair process, one that meets the needs of human defendants without losing the benefits of new technology. Her chapter points to the notion that the use of AI devices in criminal proceedings jeopardizes the modern achievement of conceptualizing the defendant not as an object, but as a subject of the proceedings.

It remains to be seen whether future courts and legal scholarship will be able to provide a new understanding of basic principles in criminal proceedings, such as the presumption of innocence. A new understanding is needed in view of the possibility that investigative powers will be exercised on individuals who are not the subjects of criminal investigations but of predictive policing,Footnote 32 as these individuals would not be offered traditional procedural protections. This is a complex issue doctrinally, because in Europe the presumption of innocence only applies after the charge. If there is no charge, there is, in principle, no protection. However, once a charge is leveled, the protection applies retroactively.

II.D Robo-Judges

After criminal investigation and fact-finding, a decision must be rendered. Could robots hand down a verdict without a human in the loop? Ideas relating to so-called robo-judges have been discussed for a while now.Footnote 33 In practice, “legal tech” and robot-assisted alternative dispute resolution have made progress,Footnote 34 as has robot-assisted human decision-making in domains where reaching a decision through the identification, sorting, and calibration of numerous variables is crucial. Instances of robots assisting in early release or the bail system in overburdened US systems, or in sentencing in China, have been criticized for various reasons.Footnote 35 However, some decision-making systems stand a good chance of being adopted in certain areas, because human–robot cooperation in making judicial decisions can facilitate faster and more affordable access to justice, which is a human right.Footnote 36 Countries increasingly provide online dispute resolution that relies almost entirely on AI,Footnote 37 and some may take the use of new technologies beyond that.Footnote 38

When legal punishment entails the curtailment of liberty and property, and in some countries even death, things are different.Footnote 39 The current rejection of robo-judges in criminal matters is, however, not set in stone. Research on the feasibility of developing algorithms to assist in handing down decisions exists in jurisdictions as different as the United States,Footnote 40 Australia,Footnote 41 China,Footnote 42 and Germany.Footnote 43 If human–robot cooperation brings about more efficient and fairer sentencing in the area of petty crime, this will have wide-ranging implications for other human–robot interactions in legal proceedings, as well as other types of computer-assisted decision-making.

Obviously, this path is not without risk. Defendants today often only invoke their defense rights when they go to trial.Footnote 44 And as has been argued above, their confrontation right, which is necessary for reliable and fair fact-finding, is particularly at risk in the context of some robot evidence. A robot-assisted trial would have to grant an effective set of defense rights. Even the use of a robo-judge in a preliminary judgment could push defendants into accepting a plea bargain without making proper use of their trial rights. Some fear the inversion of the burden of proof, based on risk profiles and possibly even exotic clues like brain research.Footnote 45

As things stand today, using robo-judges to entirely replace humans is a distant possibility.Footnote 46 However, the risks of semi-automated justice pose a more urgent concern.Footnote 47 When an AI-driven frame of reference is admitted into the judging process, humans have difficulty making a case against the robot’s finding, and it is therefore likely that an AI system would set the tone. We may see a robot judge as “fairer” if bias is easier to address in a machine than in a person. Technological advancement could reduce and perhaps eliminate a feared “fairness gap” by enhancing the interpretability of AI-rendered decisions and strengthening beliefs regarding the thoroughness of consideration and the accuracy of the outcome.Footnote 48 But until then, straightforward communication and genuine human connection seem too precious to sacrifice for the possibility of a procedurally more just outcome. As of now, it seems that machine-adjudicated proceedings are considered less fair than those adjudicated by humans.Footnote 49

II.E Robo-Defense

Criminal defendants have a right to counsel, but this right may be difficult to exercise when defense lawyers are too expensive or hard to secure for other reasons. If it is possible for robots to assist judges, so too could they assist defendants. In routine cases with recurring issues, a standard defense could help. This is the business model of the start-up “DoNotPay.”Footnote 50 Self-styled as the “world’s first robot lawyer,”Footnote 51 it aims to help fight traffic tickets in a cheap and efficient way.Footnote 52 When DoNotPay’s creator announced that his AI system could advise defendants in the courtroom using smart glasses that record court proceedings and dictate responses into their ear via AI text generators, he was threatened with criminal prosecution for the unauthorized practice of law.Footnote 53 Yet, the fact that well-funded, seemingly unregulated providers demonstrated a willingness to enter the market for low-cost legal representation might foreshadow a change in criminal defense.

Human–robot interaction might not only lower representation costs, but potentially also assist defendants in carrying out laborious tasks more efficiently. For example, if a large number of texts need to be screened for defense leads, the use of an AI system could speed up the process considerably. Furthermore, if a defendant has been incriminated by AI-generated evidence, it only makes sense to employ technology in response.Footnote 54

II.F Robots as Defendants

Though the idea was dismissed as science fiction in the past, scholars in the last decade have begun to examine the case for punishing robots that cause harm.Footnote 55 As Tatjana Hörnle rightly points out in her introduction to Part I of the volume, theorizing about attributing guilt to robots and actually prosecuting them in court are two different things. But if the issue is considered, it appears that similar problems arise in substantive and procedural law. Prominent among the challenges is the fact that both imputing guilt and bringing charges require the defendant to have a legal personality. It only makes sense to pursue robots in a legal proceeding if they can be the subject of a legal obligation.

In 2017, the EU Parliament took a functional approach to conferring partial legal capacity on robots via its “Resolution on Civil Law Rules on Robotics,” which proposed the creation of a specific legal status for robots.Footnote 56 Conferring legal personality on robots draws on the notion of the “legal personality” of companies or corporations. “Electronic personality” would be applied to cases where robots make autonomous decisions or otherwise interact with third parties autonomously.Footnote 57

In principle, the idea of granting robots personhood dates back a few decades. A prominent early proposal was submitted by Lawrence Solum in 1992.Footnote 58 He posited the idea of a legal personality, although it was more akin to a thought experiment.Footnote 59 He highlighted the crucial question of incentivizing “robots”: “what is the point of making a thing – which can neither understand the law nor act on it – the subject of a legal duty?”Footnote 60 More recently, some legal scholars claim that “there is no compelling reason to restrict the attribution of action exclusively to humans and to social systems.”Footnote 61 Yet the EU proposal remains controversial for torts, and the proposal for legal personhood has not been taken up in the debate regarding AI systems in criminal justice.

II.G Risk Assessment Recommendation Systems (Bail, Early Release, Probation)

New technology not only changes how we investigate crime and search for evidence. Human–robot cooperation in criminal matters also has the potential to transform risk assessment connected to individuals in the justice system and the assignment of adequate responsive measures. A robot’s capacity to analyze vast data pools and make recommendations based on this analysis potentially promises better risk assessment than humans can provide.Footnote 62 Robots assist in decision-making during criminal proceedings in particular cases, as when they make recommendations regarding bail, advise on an appropriate sentence, or make suggestions regarding early release. Such systems have been used in state criminal justice branches in the United States, but this has triggered controversial case lawFootnote 63 and a vigorous debate around the world.Footnote 64 What some see as more transparent and rational, i.e., “evidence-based” decision-making,Footnote 65 others denounce as deeply flawed decision-making.Footnote 66 It is important to note that in these cases, the final decision is always taken by a judge. However, the question is whether the human judge will remain the actual decision-maker, or will become more and more of a figurehead for a system that crunches pools of data.Footnote 67

III Privacy and Fairness Concerns

The use of human–robot interaction in criminal matters raises manifold privacy and fairness concerns, only some of which can be highlighted here.

III.A Enhancing Safety or Paving the Way to a “Surveillance State”?

In a future where human–robot interactions are commonplace, one major concern is the potential for a “surveillance state” in which governments and private entities share tasks, thereby allowing both sides to avoid the regulatory net. David Gray takes on this issue when he asks whether our legal systems have the right tools to preserve autonomy, intimacy, and democracy in a future of ubiquitous human–robot interaction. He argues that the US Constitution’s Fourth Amendment could provide safeguards, but it falls short due to current judicial interpretations of individual standing and the state agency requirement. Gray argues that the language of the Fourth Amendment, as well as its historical and philosophical roots, support a new interpretation, one that could acknowledge collective interests and guard privacy as a public good against threats posed by both state and private agents.

In Europe, the fear of a surveillance state has prompted manifold domestic and European laws. The European Convention on Human Rights (ECHR), adopted in 1950 in the forum of the Council of Europe, grants the right to privacy as a fundamental human right. The EU Member States first agreed on a Data Protection Directive (95/46/EC) in 1995, then proclaimed a right to protection of personal data in the Charter of Fundamental Rights of the EU in 2000, and most recently put into effect the General Data Protection Regulation (GDPR) in 2018. The courts, in particular the Court of Justice of the European Union (CJEU), have also shaped data protection law through interpretations and rulings.

Data processing in criminal justice, however, has always been an exception. It is not covered by the GDPR as such, but by Directive (EU) 2016/680, which addresses the protection of natural persons regarding the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection, or prosecution of criminal offenses or the execution of criminal penalties.Footnote 68 New proposals, such as the regulation laying down harmonized rules on artificial intelligence (AI Act),Footnote 69 have the potential to undo current understandings regarding the dividing line between general regulation of data collection and police matters.

One major issue, concerning policing as well as criminal justice, pertains to facial recognition, conducted by either a fully responsible human via photo matching or by a robot using real-time facial recognition. When scanning masses of visual material, robots outperform humans in detecting matches via superior pattern recognition. This strength, however, comes with drawbacks, among them the reinforcement of inherent bias through the use of biased training materials in the machine learning process.

The use of facial recognition in criminal matters raises a number of issues, including public–private partnerships. Facial recognition systems need huge data pools to function, which can be provided by the authorities in the form of mug shots. Creating such data pools can, however, lead to the reinforcement of bias already existent in policing. Visual material could also be provided by private companies, but this raises privacy concerns if the respective individuals have not consented to be in the data pool. Data quality may also be problematic if the material lacks adequate diversity, which could affect the robot’s capability to correctly match two pictures. In the past, authorities bought pictures and services from companies that later came under scrutiny for their lack of transparency and other security flaws.Footnote 70 If such companies scrape photos from social media and other internet sources without consent from individuals, the material cannot be used for matching, but without an adequate volume of photographs, there may be serious consequences such as wrongful identification. Similar arguments are raised regarding the use of genealogy databases for DNA-sample testing by investigation authorities.Footnote 71 The use of facial recognition for criminal justice matters may have even more profound effects. People might feel safer overall if criminals are identified, but also less inclined to exercise legal rights that put them under the gaze of the authorities, such as taking part in demonstrations.Footnote 72

The worldwide awareness of the use of robots in facial recognition has given rise to an international discussion about the need for universal normative frameworks. These frameworks are based on existing international human rights norms for the use of facial recognition technology and related AI use. In June 2020, the UN High Commissioner for Human Rights published a report concerning the impact of new technologies,Footnote 73 including facial recognition technology, focusing on the effect on human rights.Footnote 74 The report highlighted the need to develop a standard for privacy and data protection, as well as to address accuracy and discriminatory impacts. The following year, the Council of Europe published Guidelines on Facial Recognition, suggesting that states should adopt a robust legal framework applicable to the different cases of facial recognition technology and implement a set of safeguards.Footnote 75 At the beginning of 2024, the EU Member States approved a proposal on an AI ActFootnote 76 that aims to ban certain facial recognition techniques in public spaces, but permits their use for specific law enforcement purposes if prior judicial authorization is provided.Footnote 77

III.B Fairness and Taking All Interests into Consideration

Notwithstanding the many risks attached to the deployment of certain surveillance technology, it is clear that AI systems and robots can be put to use to support overburdened criminal justice systems in which individuals face institutions under strain. For example, advanced monitoring systems might allow for finely adjusted bail or probation measures in many more situations than is possible with current levels of human oversight.Footnote 78 Crowdsourced evidence from private cameras might provide exonerating evidence needed by the defense.Footnote 79 However, such systems raise fairness questions in many ways and require the balancing of interests in manifold respects, both within and beyond the criminal trial. Problems arising within criminal proceedings include the possible infringement of defense rights, as well as the need to correct bias and prevent discrimination (see Sections II.A and II.B.2).

A different sort of balancing of interests is required when addressing risks regarding the invasion of privacy.Footnote 80 Chapter 10 by Bart Custers and Lonneke Stevens outlines the increasing discrepancy between legal frameworks of data protection and criminal procedure, and the actual practices of using data as evidence in criminal courts. The structural ambiguity they detect has many features. They find that the existing laws in the Netherlands do not obstruct data collection but that the analysis of such evidence is basically unregulated, and data rights cannot yet be meaningfully enforced in criminal courts.

As indicated above, this state of affairs could change. In Europe, new EU initiatives and legislation are being introduced.Footnote 81 If the right to transparency of AI systemsFootnote 82 and the right to accountabilityFootnote 83 can be enforced in criminal proceedings and are not modified by a specialized criminal justice regulation,Footnote 84 courts that want to make use of data gained through such systems might find that data protection regulation actually promises to assist in safeguarding the reliability of fact-finding. As always, the question is whether we can meaningfully identify, understand, and address the possibilities and risks posed by human–robot interaction. If not, we cannot make use of the technology.

The controversial debate on how the criminal justice system can adequately address privacy concernsFootnote 85 and the development of data protection law potentially point the way to a different solution. This solution lies not in law, but in technology, via privacy by design.Footnote 86 This approach can be taken to an extreme, until we arrive at what has been called “impossibility structures,” i.e., design structures that prohibit human use in certain circumstances.Footnote 87 Using the example of driving automation, we find that the intervention systems exist on a spectrum. On one end of the spectrum, there are low intervention systems known as nudging structures, such as intelligent speed assistance and drowsiness warning systems. At the high intervention end of the spectrum are impossibility structures; rather than simply monitor or enhance human driving performance, they prevent human driving entirely. For example, alcohol interlock devices immobilize the vehicle if a potential driver’s breath alcohol concentration is in excess of a certain predetermined level. These structures prevent drunken humans from driving at all, creating “facts on the ground” that replace law enforcement and criminal trials. It is very difficult to say whether it would be good to bypass human agency with such structures. The risk is that such legality-by-design undermines not only the human entitlement to act out of necessity, but perhaps also the privacy that comprises one of the foundations of liberal society, and could thereby undermine democracy as a whole.Footnote 88

IV The Larger Perspective

It seems inevitable that human–robot interaction will impact criminal proceedings, just as it has other areas of the law. However, the exact nature of this impact is unclear. It may help to prevent crime before it happens or it might lead to a merciless application of the law.

Legal scholars primarily point to the risks of AI systems in criminal justice and the need to have adequate safeguards in place. However, many agree that certain robots have the potential to make criminal proceedings faster, and possibly even fairer. One big, not yet fully scrutinized issue will be whether we can and will trust systems that generate information through decision-making processes that are opaque to humans, even when it comes to criminal verdicts.Footnote 89

Future lawmakers drafting criminal procedure must keep in mind what Tatjana Hörnle pointed out in her introduction to Part I of the volume, that humans tend to blame other humans rather than machines.Footnote 90 The same is true for bringing charges against humans as opposed to machines, as explained by Jeanne Gaakeer.Footnote 91 Part of the explanation for this view lies in the inherent perspectives of substantive and procedural law.Footnote 92 Criminal justice is tailored to humans, and it is much easier, for reasons rooted in human understanding and ingrained in the legal framework, to prosecute a human.Footnote 93 This appears to be the case when a prosecution can be directed against either a human or a human–robot cooperation,Footnote 94 and it would most probably also be the case if one had to choose between prosecuting a visible human driver or a robot that guided automated driving.

With human–robot interaction now becoming a reality of daily life and criminal justice, it is time for the legal community to reconcile itself to these challenges and engage in a new conversation with the computer scientists, behavioral scholars, forensic experts, and other disciplines that can provide relevant knowledge. The digital shift in criminal justice will be manifold and less than predictable. Human–robot interaction might direct more blame toward humans, but it might also open up various new ways to reconstruct the past and possibly assist in exonerating falsely accused humans. A basic condition for benefiting from these developments is to understand the different aspects of human–robot interaction and their ramifications for legal proceedings.

Footnotes

* I wish to thank Red Preston for the careful language editing and valuable advice.

1 For a definition of AI, see the EU AI Act, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts Brussels, 21.4.2021 COM(2021) 206 final 2021/0106 (COD), Art. 3(1) [Artificial Intelligence Act], “software that is developed with one or more of [certain] approaches and techniques … and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

2 Kate Darling, “‘Who’s Johnny?’: Anthropomorphic Framing in Human–Robot Interaction, Integration, and Policy” in Patrick Lin, Keith Abney, & Ryan Jenkins (eds.), Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence (New York, NY: Oxford University Press, 2017) 173.

3 See Nhien-An Le-Khac, Daniel Jacobs, John Nijhoff et al., “Smart Vehicle Forensics: Challenges and Case Study” (2020) 109 Future Generation Computer Systems 500 [“Smart Vehicle”].

4 Sabine Gless, Xuan Di, & Emily Silverman, “Ca(r)veat Emptor: Crowdsourcing Data to Challenge the Testimony of In-Car Technology” (2022) 62:3 Jurimetrics 285 [“Ca(r)veat Emptor”] at 286.

5 Athina Sachoulidou, “Going Beyond the ‘Common Suspects’: To Be Presumed Innocent in the Era of Algorithms, Big Data and Artificial Intelligence” (2023) Artificial Intelligence and Law [“Going Beyond”] at section 2.1.

6 Aaron Calafato, Christian Colombo, & Gordon J. Pace, “A Controlled Natural Language for Tax Fraud Detection,” paper delivered at the International Workshop on Controlled Natural Language (2016).

7 Jason Moore, Ibrahim Baggili, & Frank Breitinger, “Find Me If You Can: Mobile GPS Mapping Applications Forensic Analysis & SNAVP the Open Source, Modular, Extensible Parser” (2017) 12:1 Journal of Digital Forensics, Security and Law 15 at 25.

8 Karolina Kremens & Wojciech Jasinski, “Editorial of Dossier ‘Admissibility of Evidence in Criminal Process. Between the Establishment of the Truth, Human Rights and the Efficiency of Proceedings’” (2021) 7:1 Revista Brasileira de Direito Processual Penal 15 at 31.

9 Mireille Hildebrandt, Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology (Cheltenham, UK: Edward Elgar, 2015) at 159–185; Mathew Zaia, “Forecasting Crime? Algorithmic Prediction and the Doctrine of Police Entrapment” (2020) 18:2 Canadian Journal of Law and Technology 255 at 262; “Going Beyond”, note 5 above, at section 2.1.

10 For details, see Andrew G. Ferguson, “Policing Predictive Policing” (2016) 94:5 Washington University Law Review 1109; for possible remedies, see Sabine Gless, “Predictive Policing – In Defense of ‘True Positives’” in Emre Bayamlıoğlu, Irina Baraliuc, Liisa Albertha Wilhelmina Janssens et al. (eds.), Being Profiled: Cogitas Ergo Sum: 10 Years of Profiling the European Citizen (Amsterdam, Netherlands: Amsterdam University Press, 2018) 76.

11 Matthew Browning & Bruce A. Arrigo, “Stop and Risk: Policing, Data, and the Digital Age of Discrimination” (2021) 46:1 American Journal of Criminal Justice 298 at 310; Oskar J. Gstrein, Anno Bunnik, & Andrej Zwitter, “Ethical, Legal and Social Challenges of Predictive Policing” (2019) 3:3 Católica Law Review, Direito Penal 77 at 86–88.

12 For a discussion on issues of using such material, see Alex Biedermann & Joëlle Vuille, “Digital Evidence, ‘Absence’ of Data and Ambiguous Patterns of Reasoning” (2016) 16 Digital Investigation S86.

13 Andreas Winkelmann, “‘Einzelraser’ nach §315 d Abs. 1 Nr. 3 StGB und der Nachweis durch digitale Fahrzeugdaten” (‘Single Speeders’ According to §315 d para. 1 no. 3 StGB and Proof by Digital Vehicle Data) (2023) 19:1 Deutsches Autorecht (German Car Law) 2 at 4–6.

14 Naturalistic driving data have been used to predict mild cognitive impairment and (oncoming) dementia in longitudinal research on aging drivers: applying machine learning techniques to monthly driving data captured by in-vehicle recording devices, the scientists found that atypical changes in driving behavior can be early signals of cognitive impairment; see Xuan Di, Rongye Shi, Carolyn DiGuiseppe et al., “Using Naturalistic Driving Data to Predict Mild Cognitive Impairment and Dementia: Preliminary Findings from the Longitudinal Research on Aging Drivers (LongROAD) Study” (2021) 6:2 Geriatrics 45.

15 Steven P. Lund & Hariharan Iyer, “Likelihood Ratio as Weight of Forensic Evidence: A Closer Look” (2017) 122:27 Journal of Research of the National Institute of Standards and Technology 1 [“Likelihood Ratio”] at 1; Filipo Sharevski, “Rules of Professional Responsibility in Digital Forensics: A Comparative Analysis” (2015) 10:2 Journal of Digital Forensics, Security and Law 39 [“Digital Forensics”] at 39; Charles E.H. Berger & Klaas Slooten, “The LR Does Not Exist” (2016) 56:5 Science and Justice 388 [“The LR”]; Alex Biedermann & Joëlle Vuille, “Understanding the Logic of Forensic Identification Decisions (Without Numbers)” (2018) Sui Generis 397.

16 Erin Murphy, “The New Forensics: Criminal Justice, False Certainty, and the Second Generation of Scientific Evidence” (2007) 95:3 California Law Review 721 [“New Forensics”] at 723–724.

17 Frederic I. Lederer, “Technology-Augmented and Virtual Courts and Courtrooms” in Michael McGuire & Thomas Holt (eds.), The Routledge Handbook of Technology, Crime and Justice (London, UK: Routledge, 2017) 518 at 525–526.

18 Meredith Rossner, David Tait, & Martha McCurdy, “Justice Reimagined: Challenges and Opportunities with Implementing Virtual Courts” (2021) 33:1 Current Issues in Criminal Justice 94 at 94, 97; Deniz Ariturk, William E. Crozier, & Brandon L. Garrett, “Virtual Criminal Courts” (2020) 2020 University of Chicago Law Review Online 57 at 67–68.

19 Paul W. Grimm, Maura R. Grossman, & Gordon V. Cormack, “Artificial Intelligence as Evidence” (2021) 19:1 Northwestern Journal of Technology and Intellectual Property 9 at 51–52.

20 See “Smart Vehicle”, note 3 above, at 501.

21 “Ca(r)veat Emptor”, note 4 above, at 290; Sabine Gless, “AI in the Courtroom: A Comparative Analysis of Machine Evidence in Criminal Trials” (2020) 51:2 Georgetown Journal of International Law 195 [“AI in the Courtroom”] at 213.

22 Lene Wacher Lentz & Nina Sunde, “The Use of Historical Call Data Records as Evidence in the Criminal Justice System – Lessons Learned from the Danish Telecom Scandal” (2021) 18 Digital Evidence and Electronic Signature Law Review 1 at 1–4.

23 Paul W. Grimm, Daniel J. Capra, & Gregory P. Joseph, “Authenticating Digital Evidence” (2017) 69:1 Baylor Law Review 1.

24 Council of Europe, “iPROCEEDS-2: Launching of the Electronic Evidence Guide v.3.0,” www.coe.int/en/web/cybercrime/-/iproceeds-2-launching-of-the-electronic-evidence-guide-v-3-0#.

25 One can bring a computer hard drive or a mobile phone to court, but the information stored is not accessible to the judges in the same way as printed information. Thus, jurisdictions must find a way to access email or mobile phone files or GPS data, and build expertise with computer forensics.

26 For a similar discussion regarding DNA evidence, see: “Likelihood Ratio”, note 15 above, at 1; “Digital Forensics”, note 15 above, at 39; Nils Ommen, Markus Blut, Christof Backhaus et al., “Toward a Better Understanding of Stakeholder Participation in the Service Innovation Process: More than One Path to Success” (2016) 69:7 Journal of Business Research 2409 at 2409; “The LR”, note 15 above, at 388.

27 “AI in the Courtroom”, note 21 above, at 232–250; “New Forensics”, note 16 above, at 723–724.

28 Cynthia Rudin, “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead” (2019) 1:5 Nature Machine Intelligence 206 at 206.

29 Eli Siems, Katherine J. Strandburg, & Nicholas Vincent, “Trade Secrecy and Innovation in Forensic Technology” (2022) 73:3 UC Hastings Law Journal 773 at 794–799.

30 Mireille Hildebrandt & Bert-Jaap Koops, “The Challenges of Ambient Law and Legal Protection in the Profiling Era” (2010) 73:3 Modern Law Review 428 at 437–438.

31 Brandon L. Garrett, “Big Data and Due Process” (2014) 99 Cornell Law Review Online 207 at 211–212.

32 Lucia M. Sommerer, “The Presumption of Innocence’s Janus Head in Data-Driven Government” in Emre Bayamlıoğlu, Irina Baraliuc, Liisa Albertha Wilhelmina Janssens et al. (eds.), Being Profiled: Cogitas Ergo Sum: 10 Years of Profiling the European Citizen (Amsterdam, Netherlands: Amsterdam University Press, 2018) [“Janus”] at 58–61; “Going Beyond”, note 5 above.

33 Daniel L. Chen, “Machine Learning and the Rule of Law” (2019) 1 Revista Forumul Judecatorilor (Judiciary Forum Review) 19.

34 John Morison & Adam Harkins, “Re-engineering Justice? Robot Judges, Computerised Courts and (Semi) Automated Legal Decision Making” (2019) 39:4 Legal Studies 618 [“Re-engineering Justice”].

35 Ran Wang, “Legal Technology in Contemporary USA and China” (2020) 39 Computer Law & Security Review Article 105459, 11–14.

36 Jasper Ulenaers, “The Impact of Artificial Intelligence on the Right to a Fair Trial: Towards a Robot Judge?” (2020) 11:2 Asian Journal of Law and Economics Article 20200008.

37 For consumer disputes, see Feliksas Petrauskas & Eglė Kybartienė, “Online Dispute Resolution in Consumer Disputes” (2011) 18:3 Jurisprudencija 921 at 930; for family law, see Mavis Maclean & Bregje Dijksterhuis (eds.), Digital Family Justice: From Alternative Dispute Resolution to Online Dispute Resolution? (London, UK: Bloomsbury Publishing, 2019); in general, see “Re-engineering Justice”, note 34 above, at 620–624.

38 Regarding China, see Ray W. Campbell, “Artificial Intelligence in the Courtroom: The Delivery of Justice in the Age of Machine Learning” (2020) 18:2 Colorado Technology Law Journal 323.

39 “Re-engineering Justice”, note 34 above, at 625.

40 Loomis v. Wisconsin, 881 N.W.2d 749 (Wis. 2016), cert. denied, 137 S.Ct. 2290 (2017).

41 Nigel Stobbs, Daniel Hunter, & Mirko Bagaric, “Can Sentencing Be Enhanced by the Use of Artificial Intelligence?” (2017) 41:5 Criminal Law Journal 261 at 261–277.

42 Yadong Cui, Artificial Intelligence and Judicial Modernization (Shanghai, China: Shanghai People’s Publishing House and Springer, 2020).

43 Tamara Deichsel, Digitalisierung der Streitbeilegung (Digitization of Dispute Resolution) (Baden-Baden, Germany: Nomos, 2022).

44 William Ortman, “Confrontation in the Age of Plea Bargaining” (2021) 121:2 Columbia Law Review 451 at 451.

45 “Janus”, note 32 above, at 58–61.

46 “Re-engineering Justice”, note 34 above, at 632.

48 Benjamin M. Chen, Alexander Stremitzer, & Kevin Tobia, “Having Your Day in Robot Court” (2022) 36:1 Harvard Journal of Law & Technology 128 at 160–164.

50 DoNotPay, https://donotpay.com/ [DoNotPay].

51 See also Maura R. Grossman, Paul W. Grimm, Daniel G. Brown et al., “The GPTJudge: Justice in a Generative AI World” (2023) 23:1 Duke Law & Technology Review 1 at 21.

52 Success rate of DoNotPay, note 50 above.

53 For news coverage, see Bobby Allyn, “A Robot was Scheduled to Argue in Court, Then Came the Jail Threats,” NPR (January 25, 2023), www.npr.org/2023/01/25/1151435033/a-robot-was-scheduled-to-argue-in-court-then-came-the-jail-threats.

54 “Ca(r)veat Emptor”, note 4 above, at 294–295.

55 Gabriel Hallevy, “The Criminal Liability of Artificial Intelligence Entities – From Science Fiction to Legal Social Control” (2010) 4:2 Akron Intellectual Property Journal 171 at 179; Eric Hilgendorf, “Können Roboter schuldhaft handeln?” (Can Robots Act Culpably?) in Susanne Beck (ed.), Jenseits von Mensch und Maschine (Beyond Man and Machine) (Baden-Baden, Germany: Nomos, 2012) at 119; Susanne Beck, “Intelligent Agents and Criminal Law – Negligence, Diffusion of Liability and Electronic Personhood” (2016) 86:4 Robotics and Autonomous Systems 138 [“Intelligent Agents”] at 141–142; Sabine Gless, Emily Silverman, & Thomas Weigend, “If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal Liability” (2016) 19:3 New Criminal Law Review 412 at 412–424; Monika Simmler & Nora Markwalder, “Guilty Robots? Rethinking the Nature of Culpability and Legal Personhood in an Age of Artificial Intelligence” (2019) 30:1 Criminal Law Forum 1 [“Guilty Robots”] at 4; Ying Hu, “Robot Criminals” (2019) 52:2 University of Michigan Journal of Law Reform 487 at 497–498; Ryan Abbott & Alex Sarch, “Punishing Artificial Intelligence: Legal Fiction or Science Fiction” (2019) 53:1 University of California, Davis Law Review 323 at 351.

56 European Union, The European Parliament, Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), OJ 2015 C 252 (EU: Official Journal of the European Union, 2017) at para. 59.

57 “Guilty Robots”, note 55 above, at 9; “Intelligent Agents”, note 55 above, at 141 f.; Antonio Ianni & Michael W. Monterossi, “Artificial Autonomous Agents and the Question of Electronic Personhood: A Path between Subjectivity and Liability” (2017) 26:4 Griffith Law Review 563 at 570; see also Gunther Teubner, “Digital Personhood? The Status of Autonomous Software Agents in Private Law” (2018) Ancilla Juris 106 at 113.

58 Lawrence B. Solum, “Legal Personhood for Artificial Intelligences” (1992) 70:4 North Carolina Law Review 1231 [“Legal Personhood”] at 1231.

59 For a discussion of his arguments, see Bert-Jaap Koops, Mireille Hildebrandt, & David-Olivier Jaquet-Chiffelle, “Bridging the Accountability Gap: Rights for New Entities in the Information Society?” (2010) 11:2 Minnesota Journal of Law, Science & Technology 497 at 518–532.

60 “Legal Personhood”, note 58 above, at 1239.

61 Gunther Teubner, “Rights of Non-Humans? Electronic Agents and Animals as New Actors in Politics and Law” (2006) 33:4 Journal of Law and Society 497 at 502.

62 Vanessa Franssen & Alyson Berrendorf, “The Use of AI Tools in Criminal Courts: Justice Done and Seen to Be Done?” (2021) 92:1 Revue Internationale de Droit Pénal 199 at 206.

63 Katherine Freeman, “Algorithmic Injustice: How the Wisconsin Supreme Court Failed to Protect Due Process Rights in State v. Loomis” (2016) 18:5 North Carolina Journal of Law & Technology 75.

64 Arthur Rizer & Caleb Watney, “Artificial Intelligence Can Make Our Jail System More Efficient, Equitable, and Just” (2018) 23:1 Texas Review of Law & Politics 181; Han-Wei Liu, Ching-Fu Lin, & Yu-Jie Chen, “Beyond State v Loomis: Artificial Intelligence, Government Algorithmization and Accountability” (2019) 27:2 International Journal of Law and Information Technology 122 at 133–141; Hans Steege, “Algorithmenbasierte Diskriminierung durch Einsatz von Künstlicher Intelligenz” (Algorithm-Based Discrimination through the Use of Artificial Intelligence) (2019) 11 Multimedia und Recht 715. For a European view on such systems, see Serena Quattrocolo, Artificial Intelligence, Computational Modelling and Criminal Proceedings: A Framework for A European Legal Discussion, Legal Studies in International, European and Comparative Criminal Law, vol. 4 (Cham, Switzerland: Springer Nature, 2020); for a Canadian point of view, see Sara M. Smyth, “Can We Trust Artificial Intelligence in Criminal Law Enforcement?” (2019) 17:1 Canadian Journal of Law and Technology 99; for a comparison, see Simon Chesterman, “Through a Glass, Darkly: Artificial Intelligence and the Problem of Opacity” (2021) 69:2 American Journal of Comparative Law 271 at 287–294.

65 Robert Werth, “Risk and Punishment: The Recent History and Uncertain Future of Actuarial, Algorithmic, and ‘Evidence-Based’ Penal Techniques” (2019) 13:2 Sociology Compass 1 at 8–10.

66 John Lightbourne, “Damned Lies & Criminal Sentencing Using Evidence-Based Tools” (2016) 15:1 Duke Law and Technology Review 327 at 334–342.

67 Marie-Claire Aarts, “The Rise of Synthetic Judges: If We Dehumanize the Judiciary, Whose Hand Will Hold the Gavel?” (2021) 60:3 Washburn Law Journal 511.

68 European Union, The European Parliament, Official Journal of the European Union L 119 of 4 May 2016, OJ 2016 L 119 (EU: Official Journal of the European Union, 2016) [L 119] at 1.

69 Artificial Intelligence Act, note 1 above.

70 Cf. Isadora Neroni Rezende, “Facial Recognition in Police Hands: Assessing the ‘Clearview Case’ from a European Perspective” (2020) 11:3 New Journal of European Criminal Law 375 at 389; for civil society challenges against Clearview AI in Europe, see “Challenge against Clearview AI in Europe,” Privacy International, https://privacyinternational.org/legal-action/challenge-against-clearview-ai-europe.

71 See e.g., Shanni Davidowitz, “23andEveryone: Privacy Concerns with Law Enforcement’s Use of Genealogy Databases to Implicate Relatives in Criminal Investigations” (2019) 85:1 Brooklyn Law Review 185.

72 Kristine Hamann & Rachel Smith, Facial Recognition Technology: Where Will It Take Us? (Prosecutors’ Center for Excellence, 2019), Art. 3, at 11–13; Johnathan W. Penney, “Understanding Chilling Effects” (2022) 106:3 Minnesota Law Review 1451.

73 United Nations, Report of the United Nations High Commissioner for Human Rights, Impact of New Technologies on the Promotion and Protection of Human Rights in the Context of Assemblies, Including Peaceful Protests, UN Doc. A/HRC/44/24 (United Nations: Office of the High Commissioner for Human Rights, 2020).

75 Council of Europe, Guidelines on Facial Recognition, adopted by the Consultative Committee of the Convention for the protection of individuals with regard to automatic processing of personal data (Council of Europe: Consultative Committee of the Convention for the protection of individuals with regard to automatic processing of personal data 2021), https://edoc.coe.int/en/artificial-intelligence/9753-guidelines-on-facial-recognition.html.

76 Proposal for a Regulation laying down harmonized rules on Artificial Intelligence (AI Act) COM/2021/206 final.

77 Michael Veale & Frederik Zuiderveen Borgesius, “Demystifying the Draft EU Artificial Intelligence Act: Analysing the Good, the Bad, and the Unclear Elements of the Proposed Approach” (2021) 22:4 Computer Law Review International 97–112 at 98.

78 Mirko Bagaric, Jennifer Svilar, Melissa Bull et al., “The Solution to the Pervasive Bias and Discrimination in the Criminal Justice System: Transparent and Fair Artificial Intelligence” (2021) 59:1 American Criminal Law Review 95 at 116 and 124; Mike Nellis, “From Electronic Monitoring to Artificial Intelligence: Technopopulism and the Future of Probation Services” in Lol Burke, Nicola Carr, Emma Cluley et al. (eds.), Reimagining Probation Practice, 1st ed. (London, UK: Routledge, 2022) 207.

79 “Ca(r)veat Emptor”, note 4 above, at 300–301.

80 See, for a detailed discussion, Kate Weisburd, “Sentenced to Surveillance: Fourth Amendment Limits on Electronic Monitoring” (2019) 98:4 North Carolina Law Review 717 at 753–757.

81 See e.g. relevant provisions in the Artificial Intelligence Act, note 1 above; European Union, European Commission, Proposal for a directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI liability directive), COM/2022/496 final (Brussels: European Commission, 2022).

82 Heike Felzmann, Eduard Fosch-Villaronga, Christoph Lutz et al., “Towards Transparency by Design for Artificial Intelligence” (2020) 26:6 Science and Engineering Ethics 3333 [“Towards Transparency”] at 3335–3336.

83 Paul De Hert & Guillermo Lazcoz, “When GDPR-Principles Blind Each Other: Accountability, Not Transparency, at the Heart of Algorithmic Governance” (2022) 8:1 European Data Protection Law Review 31.

84 See e.g., L 119, note 68 above, at 1.

85 For a discussion on the protection offered by US Constitutional law regarding a rapidly developing technology, see Katherine J. Strandburg, “Home, Home on the Web and Other Fourth Amendment Implications of Technosocial Change” (2011) 70:3 Maryland Law Review 614.

86 “Towards Transparency”, note 82 above, at 3343–3344.

87 Sabine Gless & Emily Silverman, “Create Law or Facts? Smart Cars and Smart Compliance Systems,” Oxford Business Law Blog (March 17, 2023), https://blogs.law.ox.ac.uk/oblb/blog-post/2023/03/create-law-or-facts-smart-cars-and-smart-compliance-systems.

88 See Michael L. Rich, “Should We Make Crime Impossible?” (2013) 36:2 Harvard Journal of Law & Public Policy 795 at 802–804 for a definition of terms, and “Smart Vehicle”, note 3 above, at 500, for a reference to Professor Edward K. Cheng as the originator of the term “impossibility structures.” For other attempts to define the term, see Edward K. Cheng, “Structural Laws and the Puzzle of Regulating Behavior” (2006) 100:2 Northwestern University Law Review 655 at 664 (“type II structural controls”); Christina M. Mulligan, “Perfect Enforcement of Law: When to Limit and When to Use Technology” (2008) 14:4 Richmond Journal of Law & Technology 1 at 3 (“perfect prevention”); Timo Rademacher, “Of New Technologies and Old Laws: Do We Need a Right to Violate the Law?” (2020) 5:1 European Journal for Security Research 39 at 45.

89 See Chapter 6 in this volume.

90 See also Madeleine Clare Elish & Tim Hwang, “Praise the Machine! Punish the Human! The Contradictory History of Accountability in Automated Aviation,” Data and Society, Comparative Studies in Intelligent Systems – Working Paper 1 (2015) at 2–3.

91 See Chapter 15 in this volume.

92 In this volume, Frode Pederson’s Chapter 13 discusses how even narrative reflects a human orientation, which creates issues when dealing with robots.

93 Cf. Madeleine Elish, “Moral Crumple Zones: Cautionary Tales in Human–Robot Interaction (Pre-Print)” (2019) 5 Engaging Science, Technology, and Society 40.

94 Laurel Wamsley, “Uber Not Criminally Liable in Death of Woman Hit by Self-Driving Car, Prosecutor Says,” NPR (March 6, 2019), www.npr.org/2019/03/06/700801945/uber-not-criminally-liable-in-death-of-woman-hit-by-self-driving-car-says-prosec (in the death of Elaine Herzberg, unsolved evidentiary issues presumably hampered prosecution: “After a very thorough review of all the evidence presented, this Office has determined that there is no basis for criminal liability for the Uber corporation arising from this matter …”).
