The final chapter serves to draw the various strands of the book together, surveying what has been discovered, and expanding on the fundamental arguments of the book. It therefore begins with an analysis of Pinterest, which stands as an emblem of all that literacy means in postdigital times, whether that be sophisticated multimodal practices, durational time, or algorithmic logic. Looking back over the screen lives discussed in the book, including those of the crescent voices and of Samuel Sandor, this chapter crystallizes the personal take on screen lives that the book offers, reiterating the need to ‘undo the digital’ and find the human in, on, with, at, and against screens. It also presents some of the problems scholarship must meet, such as digital inequalities, whether that be in terms of time, awareness, or skill with technology. However, despite the considerable negative forces at work in screen lives which the book has taken care to unravel, this concluding chapter advocates ‘taking the higher ground’ and enacting wonder in interactions with screens.
Moving on to AI and algorithms, the penultimate chapter of the book focuses on the importance of vigilance and criticality when engaging with screens. The influence of AI and algorithms on day-to-day interactions, their inherent potential to steal content, and their tendencies to stir up racism and intolerance all mean that it is becoming increasingly vital for researchers, policymakers, and educators to understand these technologies. This chapter argues that being informed and armed with meta-awareness about AI and algorithmic processes is now key to critical digital literacy. In arguing towards this conclusion, it starts by presenting scholarly perspectives and research on AI and literacy, before turning to Ruha Benjamin and Safiya Umoja Noble’s research into racism in AI and algorithms, including Benjamin’s concept of the ‘New Jim Code’. Crescent voices are invoked to contextualize these ideas in real-world experiences with algorithmic culture, where encounters with blackboxed practices and struggles to articulate experiences of algorithmic patterns further demonstrate the importance of finding new constructs for critical literacy that encompass algorithmic logic.
The implementation of the General Data Protection Regulation (GDPR) in the EU, rather than the regulation itself, is holding back technological innovation. The EU’s data protection governance architecture is complex, leading to contradictory interpretations among Member States. This situation is prompting companies of all kinds to halt the deployment of transformative projects in the EU. The case of Meta is paradigmatic: both the UK and the EU broadly have the same regulation (GDPR), but the UK swiftly determined that Meta could train its generative AI model using first-party public data under the legal basis of legitimate interest, while in the EU, the European Data Protection Board (EDPB) took months to issue an Opinion that national authorities must still interpret and implement individually, leading to legal uncertainty. Similarly, the case of DeepSeek has demonstrated how some national data protection authorities, such as the Italian Garante, have moved to ban the AI model outright, while others have opted for investigations. This fragmented enforcement landscape exacerbates regulatory uncertainty and hampers the EU’s competitiveness, particularly for startups, which lack the resources to navigate an unpredictable compliance framework. For the EU to remain competitive in the global AI race, strengthening the EDPB’s role is essential.
Not a day goes by without a new story on the perils of technology. We hear of increasingly clever machines that surpass human capability and comprehension; of tech billionaires imploring each other to stop the ‘out-of-control race’ to produce the most powerful artificial intelligence, which poses ‘profound risks to society’; and of genetic technologies capable of altering the human genome in ways we cannot predict, and of a future two-tier humanity consisting of those of us who are genetically enhanced and those who are not. How can we respond to these stories? What should we do politically? By way of exploring these questions (using the UK as the primary example of context), I want to move beyond the usual arguments and legal devices that serve to identify tech developers and users as being at fault for individual acts of wrongdoing, recklessness, incompetence or negligence, and ask instead how we might address the broader structural dynamics intertwined with the increasing use of AI and Repro-tech. My argument will be that taking a much sharper structural perspective on these transformative technologies is a vital requirement of contemporary politics.
Humanity’s increasing reliance on AI and robotics is driven by compelling narratives of efficiency in which the human is a poor substitute for the extraordinary computational power of machine learning, the creative competences of generative AI, and the speed, accuracy and consistency of automation in so many spheres of human activity. Indeed, AI is increasingly becoming the core technological foundation of many contemporary societies. Most thinking on how to manage the downside risks to humanity of this seismic societal shift is framed in terms of a direct fault-based relationship, as in the innovative EU AI Act, which is by far the most comprehensive political attempt to locate (or deter) those directly responsible for AI-generated harm. I argue that while such approaches are vital for combating injustice exacerbated by AI and robotics, too little thought goes into political approaches to the structural dynamics of AI’s impact on society. By way of example, I examine the UK ‘pro-innovation’ approach to AI governance and explore how it fails to address the structural injustices inherent in increasing AI usage.
The third industrial revolution saw the creation of computers and an increased use of technology in industry and households. We are now in the fourth industrial revolution, the cyber revolution, with advances in artificial intelligence, automation and the internet of things. The third and fourth revolutions have had a large impact on health care, shaping how health and social care are planned, managed and delivered, as well as supporting wellness and the promotion of health. This growth has seen the advent of the discipline of health informatics, with several sub-specialty areas emerging over the past two decades. Informatics is used across primary care, allied health, community care and dentistry, with technology supporting the primary health care continuum. This chapter explores the development of health informatics as a discipline and how health care innovation, technology, governance and the workforce are supporting digital health transformation.
How exactly is technology transforming us and our worlds, and what (if anything) can and should we do about it? Heidegger already felt this philosophical question concerning technology pressing in on him in 1951, and his thought-full and deliberately provocative response is still worth pondering today. What light does his thinking cast not just on the nuclear technology of the atomic age but also on more contemporary technologies such as genome engineering, synthetic biology, and the latest advances in information technology, so-called “generative AIs” like ChatGPT? These are some of the questions this book addresses, situating the latest controversial technologies in the light of Heidegger's influential understanding of technology as an historical mode of ontological disclosure. In this way, we seek to take the measure of Heidegger's ontological understanding of technology as a constellation of intelligibility with an important philosophical heritage and a dangerous but still promising future.
The promise of artificial intelligence (AI) is increasingly invoked to ‘revolutionize’ practices of global security governance, including in the domain of border control. Legal scholarship tends to confront these changes by foregrounding the rule of law challenges associated with nascent forms of governance by data, and by imposing new regulatory standards. Yet, little is known about how these algorithmic systems are already reconfiguring legal norms and processes, while generating novel security techniques and practices for knowing and governing “risk” before the border. Exploring these questions, this article makes three important contributions to the literature. On an empirical level, it provides an original socio-legal study of the processes constructing and implementing Cerberus – an AI-based risk-analysis platform deployed by the UK Home Office. This analysis provides unique insight into the institutional frictions, legal mediations and emergent governance formations involved in the introduction of this algorithmic bordering system. On a methodological level, the article directly engages with the focus on ‘legal infrastructures’ in this special issue. It uses an original approach (infra-legalities) which follows how legal and infrastructural elements are relationally and materially tied together in practice. Rather than trying to conceptually settle the relation between law and infrastructure – or qualifying law as a sui generis infrastructure – the article traces incipient modes of governmentality and regulatory ordering in which both legal and infrastructural elements are metabolized. In its account of Cerberus, the article analyzes this emergent composition as a dispositif of speculative suspicion. Finally, on a normative and political level, the article signals the significant stakes involved in this algorithmic enactment of risk. It shows how prevailing regulatory tropes revolving around ‘debiasing’ and retention of a ‘human in the loop’ offer a limited register of remedy, and work to amplify the reach of Cerberus. We conclude with reflections on critiquing algorithmic systems like Cerberus through the emergent infrastructural relations they enact.
AI brings risks but also opportunities for consumers. When it comes to consumer law, which traditionally focuses on protecting consumers’ autonomy and self-determination, the increased use of AI also poses major challenges. This chapter discusses both the challenges and opportunities of AI in the consumer context (Sections 10.2 and 10.3) and provides a brief overview of some of the relevant consumer protection instruments in the EU legal order (Section 10.4). A case study on dark patterns illustrates the shortcomings of the current consumer protection framework more concretely (Section 10.5).
This chapter discusses how AI technologies permeate the media sector. It sketches opportunities and benefits of the use of AI in media content gathering and production, media content distribution, fact-checking, and content moderation. The chapter then zooms in on ethical and legal risks raised by AI-driven media applications: lack of data availability; poor data quality and bias in training datasets; lack of transparency; risks for the right to freedom of expression; threats to media freedom and pluralism online; and threats to media independence. Finally, the chapter introduces the relevant elements of the EU legal framework which aim to mitigate these risks, such as the Digital Services Act, the European Media Freedom Act, and the AI Act.
Despite their centrality within discussions on AI governance, fairness, justice, and equality remain elusive and essentially contested concepts: even when some shared understanding concerning their meaning can be found on an abstract level, people may still disagree on their relation and realization. In this chapter, we aim to clear up some uncertainties concerning these notions. Taking one particular interpretation of fairness as our point of departure (fairness as nonarbitrariness), we first investigate the distinction between procedural and substantive conceptions of fairness (Section 4.2). We then discuss the relationship between fairness, justice, and equality (Section 4.3). Starting with an exploration of Rawls’ conception of justice as fairness, we then position distributive approaches toward issues of justice and fairness against socio-relational ones. In a final step, we consider the limitations of techno-solutionism and attempts to formalize fairness by design (Section 4.4). Throughout this chapter, we illustrate how the design and regulation of fair AI systems is not an insular exercise: attention must not only be paid to the procedures by which these systems are governed and the outcomes they produce, but also to the social processes, structures, and relationships that inform, and are co-shaped by, their functioning.
The rules of war, formally known as international humanitarian law, have been developing for centuries, reflecting society’s moral compass, the evolution of its values, and technological progress. While humanitarian law has been successful in prohibiting the use of certain methods and means of warfare, it is nevertheless destined to remain in a constant catch-up cycle with the atrocities of war. Nowadays, the widespread development and adoption of digital technologies in warfare, including AI, are leading to some of the biggest changes in human history. Is international humanitarian law up to the task of addressing the threats those technologies can present in the context of armed conflicts? This chapter provides a basic understanding of the system, principles, and internal logic of this legal domain, which is necessary to evaluate the actual or potential role of AI systems in (non-)international armed conflicts. The chapter aims to contribute to the discussion of the ex-ante regulation of AI systems used for military purposes beyond the scope of lethal autonomous weapons, as well as to recognize the potential that AI carries for improving the applicability of the basic principles of international humanitarian law, if used in an accountable and responsible way.
The main goal of this chapter is to introduce one type of AI used for law enforcement, namely predictive policing, and to discuss the main legal, ethical, and social concerns this raises. In the last two decades, police forces in Europe and in North America have increasingly invested in predictive policing applications. Two types of predictive policing will be discussed: predictive mapping and predictive identification. After discussing these two practices and what is known about their effectiveness, I discuss the legal, ethical, and social issues they raise, covering aspects relating to their efficacy, governance, and organizational use, as well as the impact they have on citizens and society.
Imagine that you are given access to an AI chatbot that compellingly mimics the personality and speech of a deceased loved one. If you start having regular interactions with this “thanabot,” could this new relationship be a continuation of the relationship you had with your loved one? And could a relationship with a thanabot preserve or replicate the value of a close human relationship? To the first question, we argue that a relationship with a thanabot cannot be a true continuation of your relationship with a deceased loved one, though it might support one’s continuing bonds with the dead. To the second question, we argue that, in and of themselves, relationships with thanabots cannot benefit us as much as rewarding and healthy intimate relationships with other humans, though we explain why it is difficult to make reliable comparative generalizations about the instrumental value of these relationships.
Methods for mapping agricultural crops have so far been developed predominantly for a handful of the most important and widely grown crops. These methods are often based on remote sensing data and scarce information about the location and boundaries of fields of a particular crop, and they analyze phenological changes throughout the growing season by utilizing vegetation indices, e.g., the normalized difference vegetation index (NDVI). However, this approach encounters challenges when attempting to distinguish fields with different crops growing in the same area, or crops that share similar phenology, which complicates the reliable identification of the target crops based solely on vegetation index patterns. This research paper investigates the potential of advanced techniques for crop mapping using satellite data and qualitative information. These approaches involve interpreting features in satellite images in conjunction with cartographic, statistical, and climate data. The study focuses on data collection and mapping of three specific crops: lavender, almond, and barley. It relies on various sources of information for crop detection, including satellite image characteristics, regional statistical data detailing crop areas, and phenological information such as flowering dates and the end of the growing season in specific regions. As an example, the study attempts to visually identify lavender fields in Bulgaria and almond orchards in the USA. We test several state-of-the-art methods for semantic segmentation (U-Net, UNet++, ResUnet); the best result (96.4%) was achieved by a ResUnet model. Furthermore, the paper explores how vegetation indices can be leveraged to enhance the precision of crop identification.
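The phenology-based approach the abstract refers to can be made concrete with a minimal sketch (not taken from the paper; the band names, array shapes, and helper functions below are illustrative assumptions). It computes NDVI from near-infrared and red reflectance bands and derives a per-field seasonal NDVI profile of the kind such methods compare across crops.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red).

    `nir` and `red` are reflectance bands of identical shape (e.g. from a
    Sentinel-2 or Landsat scene); `eps` guards against division by zero.
    """
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)

def seasonal_profile(nir_stack: np.ndarray, red_stack: np.ndarray,
                     field_mask: np.ndarray) -> np.ndarray:
    """Mean NDVI of the pixels inside a field mask, one value per acquisition.

    nir_stack / red_stack: (T, H, W) time series over the growing season;
    field_mask: (H, W) boolean mask of the field's pixels.
    Returns an array of shape (T,): the field's NDVI phenology curve.
    """
    profile = [float(ndvi(n, r)[field_mask].mean())
               for n, r in zip(nir_stack, red_stack)]
    return np.asarray(profile)
```

Profiles of this kind are exactly where crops with similar phenology become hard to separate, which is why the paper supplements them with cartographic, statistical, and climate information.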
Language is the natural currency of most social communication. Until the emergence of more powerful computational methods, it simply was not feasible to measure its use in mainline social psychology. We now know that language can reveal behavioral evidence of mental states and personality traits, as well as clues to the future behavior of individuals and groups. In this chapter, we first review the history of language research in social personality psychology. We then survey the main methods for deriving psychological insights from language (ranging from data-driven to theory-driven, naturalistic to experimental, qualitative to quantitative, holistic to granular, and transparent to opaque) and describe illustrative examples of findings from each approach. Finally, we present our view of the new capabilities, real-world applications, and ethical and psychometric quagmires on the horizon as language research continues to evolve in the future.
Nigeria has a significant gender financial inclusion gap, with women disproportionately represented among the financially excluded. Artificial intelligence (AI) powered financial technologies (fintech) present distinctive advantages for enhancing women’s inclusion. These include efficiency gains, reduced transaction costs, and personalized services tailored to women’s needs. Nonetheless, AI harbours a paradox: while it promises to address financial exclusion, it can also inadvertently perpetuate and amplify gender bias. The critical question is thus: how can AI effectively address the challenges of women’s financial exclusion in Nigeria? Using publicly available data, this research undertakes a qualitative analysis of AI-powered fintech services in Nigeria. Its objective is to understand how innovations in financial services correspond to the needs of potential users like unbanked or underserved women. The research finds that introducing innovative financial services and technology is insufficient to ensure inclusion. Financial inclusion requires the availability, accessibility, affordability, appropriateness, sustainability, and alignment of services with the needs of potential users, as well as policy-driven strategies that aid inclusion.
In the literature, there are polarized views regarding the capabilities of technology to embed societal values. One side of the debate contends that technical artifacts are value-neutral, since values are not peculiar to inanimate objects. Scholars on the other side argue that technologies tend to be value-laden. With the call to embed ethical values in technology, this article explores how AI and other adjacent technologies are designed and developed to foster social justice. Drawing insights from prior studies, this paper identifies seven African moral values considered central to actualizing social justice; of these, two stand out: respect for diversity and ethnic neutrality. By introducing use case analysis along with the Discovery, Translation, and Verification (DTV) framework and validating via Focus Group Discussion, this study revealed three novel findings. First, ethical value analysis is best carried out alongside software system analysis. Second, embedding ethics in technology requires interdisciplinary expertise. Third, the DTV approach combined with the software engineering methodology provides a promising way to embed moral values in technology. Against this backdrop, the two highlighted ethical values, respect for diversity and ethnic neutrality, help ground the pursuit of social justice.
Discussions of the development and governance of data-driven systems have, of late, come to revolve around questions of trust and trustworthiness. However, the connections between the two remain relatively understudied, as do, more importantly, the conditions under which the latter quality of trustworthiness might reliably lead to the placing of ‘well-directed’ trust. In this paper, we argue that this challenge for the creation of ‘rich’ trustworthiness, which we term the Trustworthiness Recognition Problem (TRP), can usefully be approached as a problem of effective signalling, and suggest that its resolution can be informed by a multidisciplinary approach that relies on insights from economics and behavioural ecology. We suggest, overall, that the domain specificity inherent to the signalling theory paradigm offers an effective solution to the TRP, which we believe will be foundational to whether and how rapidly improving technologies are integrated in the healthcare space. We suggest that solving the TRP will not be possible without taking an interdisciplinary approach, and we point to further avenues of inquiry that we believe will be fruitful.