The final chapter draws the various strands of the book together, surveying what has been discovered and expanding on the book's fundamental arguments. It therefore begins with an analysis of Pinterest, which stands as an emblem of all that literacy means in postdigital times, whether that be sophisticated multimodal practices, durational time, or algorithmic logic. Looking back over the screen lives discussed in the book, including those of the crescent voices and of Samuel Sandor, this chapter crystallizes the personal take on screen lives that the book offers, reiterating the need to ‘undo the digital’ and find the human in, on, with, at, and against screens. It also presents some of the problems scholarship must confront, such as digital inequalities, whether of time, awareness, or skill with technology. However, despite the considerable negative forces at work in screen lives, which the book has taken care to unravel, this concluding chapter advocates ‘taking the higher ground’ and enacting wonder in interactions with screens.
Opening with an analysis of Instagram, Chapter 2 is concerned with how to think about postdigitality. Touching on multimodality, time-space-place, and responsive loops, this chapter highlights the contrast between digital life and postdigital life, unravelling the many dimensions of postdigitality. It concludes that postdigitality represents a world of symbiosis, whether that be of body and mind, physical life and screen life, representation and non-representation, immersion and connectivity, or interaction and convergence. These combinations are what lend digital media its unique power to move across time, space, and place. To explore these ideas, Chapter 2 analyses data processed through ATLAS.ti to produce a list of postdigital keywords used by crescent voices.
Chapter 4 delves deeper into screen life, adopting an even more human-centred focus in order to uncover the affective aspect of screen lives. Maintaining an embodied approach, this chapter explores how affective experiences with screens are intentionally elicited through media design, how affect on screens might differ from affect outside screens, and how digital affect can inform practices just as practices induce affect. The chapter begins by defining affect, then digital affect more specifically, before turning to interviewees for their perspectives on how they feel and sense on screens, touching on topics such as micro digital affect, algorithms, and the pandemic. Crescent voices in this chapter help illustrate how digital affect is vital to understanding digital literacy practices and screen lives, especially the double-edged aspects of our affective relationships to screens.
Moving on to AI and algorithms, the penultimate chapter of the book focuses on the importance of vigilance and criticality when engaging with screens. The influence of AI and algorithms on day-to-day interactions, their inherent potential to steal content, and their tendencies to stir up racism and intolerance all mean that it is becoming increasingly vital for researchers, policymakers, and educators to understand these technologies. This chapter argues that being informed and armed with meta-awareness about AI and algorithmic processes is now key to critical digital literacy. In arguing towards this conclusion, it starts by presenting scholarly perspectives and research on AI and literacy, before turning to Ruha Benjamin and Safiya Umoja Noble’s research into racism in AI and algorithms, including Benjamin’s concept of the ‘New Jim Code’. Crescent voices are invoked to contextualize these ideas in real-world experiences with algorithmic culture, where encounters with blackboxed practices and struggles to articulate experiences of algorithmic patterns further demonstrate the importance of finding new constructs for critical literacy that encompass algorithmic logic.
Background
There is currently no definitive method for identifying individuals with psychosis in secondary care at the population level using administrative healthcare data from England.
Aims
To develop various algorithms to identify individuals with psychosis in the Mental Health Services Data Set (MHSDS), guided by national estimates of the prevalence of psychosis.
Method
Using a combination of data elements in the MHSDS for financial years 2017–2018 and 2018–2019 (mental health cluster, a way to describe and classify a group of individuals with similar characteristics; Health of the Nation Outcome Scale (HoNOS) scores; reason for referral; primary diagnosis; first-episode psychosis flag; early intervention in psychosis team flag), we developed 12 unique algorithms to detect individuals with psychosis seen in secondary care. The resulting counts were then compared with national estimates of the prevalence of psychosis to ascertain whether they were plausible.
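A hedged sketch of how one such case-detection rule might be expressed over a patient-level extract (a minimal pandas illustration; the column names and cluster codes below are our assumptions, not the MHSDS field definitions): a record is flagged when any of the chosen data elements indicates psychosis.

```python
import pandas as pd

def flag_psychosis(df: pd.DataFrame) -> pd.Series:
    """Flag a record when any of the chosen data elements indicates
    psychosis; the paper's 12 algorithms combine 3-6 such elements."""
    psychosis_clusters = {10, 11, 12, 13, 14, 15, 16, 17}  # illustrative care-cluster codes
    return (
        df["cluster"].isin(psychosis_clusters)
        | df["primary_diagnosis"].str.startswith("F2", na=False)  # ICD-10 F20-F29
        | df["fep_flag"].eq(1)          # first-episode psychosis flag
        | df["eip_team_flag"].eq(1)     # early intervention in psychosis team flag
    )
```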
Results
The 12 algorithms produced 99 204–138 516 and 107 545–134 954 cases of psychosis for financial years 2017–2018 and 2018–2019, respectively, in line with national prevalence estimates. The numbers of cases of psychosis identified by the different algorithms differed according to the type and number (3–6) of data elements used. Most algorithms identified the same core of patients.
Conclusions
The MHSDS can be used to identify individuals with psychosis in secondary care in England. Users can employ several algorithms to do so, depending on the objective of their analysis and their preference regarding the data elements employed. These algorithms could be used for surveillance, research and/or policy purposes.
At a time when the European Union and its Member States are constantly adopting measures to combat serious crime and terrorism, particularly through the prism of data protection rules, the CJEU is acting as a bulwark by imposing compliance with strict conditions, thereby encroaching on national rules of criminal procedure, which are in principle the responsibility of the Member States. In this contribution, we examine how and on what basis the Ligue des Droits Humains was able to get the CJEU to rule on the Passenger Name Records Directive, and to what extent this action was indeed “strategic.”
This chapter examines ways in which longstanding features of the legal system serve to counteract the forces outlined in Chapters 2 and 3 and thereby minimize the influence of improper factors on judicial behavior. It considers the adversarial process, the doctrine of precedent (or stare decisis), and the practice of justifying decisions via written opinions, and examines the ways in which the nature of each – and thus its effectiveness in channeling judges – has decreased. It further explores changes in the practice of law and in the culture more generally, including automation, that have altered the manner and depth in which lawyers and judges engage with the law.
We address several issues that are raised by Bentler and Tanaka's [1983] discussion of Rubin and Thayer [1982]. Our conclusions are: standard methods do not completely monitor the possible existence of multiple local maxima; summarizing inferential precision by the standard output based on second derivatives of the log likelihood at a maximum can be inappropriate, even if there exists a unique local maximum; EM and LISREL can be viewed as complementary, albeit not entirely adequate, tools for factor analysis.
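The local-maxima point can be made concrete with random restarts. The sketch below is ours, not Rubin and Thayer's implementation: it runs EM for a simple factor-analysis model from several random starting points and compares the converged log-likelihoods; disagreement across runs signals multiple local maxima that a single run would not reveal.

```python
import numpy as np

def em_factor_loglik(X, k, n_iter=200, seed=0):
    """EM for the factor model x ~ N(0, L L' + diag(psi));
    returns the converged log-likelihood (X assumed centred)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    S = X.T @ X / n                          # sample covariance
    L = rng.normal(size=(p, k))              # random start for the loadings
    psi = np.full(p, S.diagonal().mean())    # start for the uniquenesses
    for _ in range(n_iter):
        # E-step: posterior moments of the latent factor z given x
        G = np.linalg.inv(np.eye(k) + (L.T / psi) @ L)   # Cov(z | x)
        B = G @ (L.T / psi)                  # E[z | x] = B x
        Cxz = S @ B.T                        # (1/n) sum of x E[z|x]'
        Czz = G + B @ S @ B.T                # (1/n) sum of E[z z'|x]
        # M-step
        L = Cxz @ np.linalg.inv(Czz)
        psi = np.maximum(np.diag(S - L @ Cxz.T), 1e-6)
    Sigma = L @ L.T + np.diag(psi)
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * n * (p * np.log(2 * np.pi) + logdet
                       + np.trace(np.linalg.solve(Sigma, S)))

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6)) @ rng.normal(size=(6, 6))  # correlated toy data
X -= X.mean(axis=0)
print(sorted(round(em_factor_loglik(X, k=2, seed=s), 2) for s in range(5)))
```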
A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. This approach consists of iteratively performing (steps of) existing algorithms for ordinary least squares (OLS) fitting of the same model. The approach is based on minimizing a function that majorizes the WLS loss function. The generality of the approach implies that, for every model for which an OLS fitting algorithm is available, the present approach yields a WLS fitting algorithm. In the special case where the WLS weight matrix is binary, the approach reduces to missing data imputation.
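As a concrete instance of this scheme, the sketch below is our illustration under stated assumptions: the "model" is a rank-k matrix approximation, the available OLS fitter is a truncated SVD, and the weights are scaled into [0, 1] as the majorization argument requires. With a binary weight matrix, the loop reduces to exactly the missing-data imputation mentioned above.

```python
import numpy as np

def wls_lowrank(X, W, k, n_iter=200):
    """WLS rank-k fit of X by iterated OLS steps (majorization)."""
    W = W / W.max()                      # scale weights into [0, 1]
    M = np.zeros_like(X, dtype=float)    # current model estimate
    for _ in range(n_iter):
        Z = W * X + (1.0 - W) * M        # impute fitted values where weight is low
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        M = (U[:, :k] * s[:k]) @ Vt[:k]  # plain (unweighted) OLS rank-k fit
    return M

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 6))
W = (rng.random((10, 6)) > 0.3).astype(float)   # binary weights: 1 = observed
M = wls_lowrank(X, W, k=2)                      # fills in the 'missing' cells
```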
As its name indicates, algorithmic regulation relies on the automation of regulatory processes through algorithms. Examining the impact of algorithmic regulation on the rule of law hence first requires an understanding of how algorithms work. In this chapter, I therefore start by focusing on the technical aspects of algorithmic systems (Section 2.1), and complement this discussion with an overview of their societal impact, emphasising their societal embeddedness and the consequences thereof (Section 2.2). Next, I examine how and why public authorities rely on algorithmic systems to inform and take administrative acts, with special attention to the historical adoption of such systems, and their impact on the role of discretion (Section 2.3). Finally, I draw some conclusions for subsequent chapters (Section 2.4).
This Element endeavors to enrich and broaden Southeast Asian research by exploring the intricate interplay between social media and politics. Employing an interdisciplinary approach and grounded in extensive longitudinal research, the study uncovers nuanced political implications, highlighting social media's dual role in both fostering grassroots activism and enabling autocratic practices of algorithmic politics, notably in electoral politics. It underscores social media's alignment with communicative capitalism, where algorithmic marketing culture overshadows public discourse and perpetuates affective binary mobilization that benefits both progressive and regressive grassroots activism. Social media can facilitate oppositional forces but is susceptible to authoritarian capture. The rise of algorithmic politics also exacerbates polarization through algorithmic enclaves and escalates disinformation, furthering autocratizing trends. Beyond Southeast Asia, the Element provides analytical and conceptual frameworks to comprehend the mutual algorithmic/political dynamics amidst the contestation between progressive forces and the autocratic shaping of technological platforms.
This last chapter summarizes most of the material in this book in a range of concluding statements. It provides a summary of the lessons learned. These lessons can be viewed as guidelines for research practice.
In 1997 Amazon started as a small online bookseller. It is now the largest bookseller in the US and one of the largest companies in the world, due, in part, to its implementation of algorithms and access to user data. This Element explains how these algorithms work, and specifically how they recommend books and make them visible to readers. It argues that framing algorithms as felicitous or infelicitous allows us to reconsider the imagined authority of an algorithm's recommendation as a culturally situated performance. It also explores the material effects of bookselling algorithms on the forms of labor of the bookstore. The Element ends by considering future directions for research, arguing that the bookselling industry would benefit from an investment in algorithmic literacy.
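For readers unfamiliar with how such recommendations are typically produced, here is a toy item-to-item co-purchase sketch (entirely our generic illustration, not Amazon's actual system; the titles are invented): books are scored by how often they co-occur with a given book in other purchase histories.

```python
from collections import Counter

# Toy purchase histories; each set is one customer's purchases.
histories = [
    {"Dune", "Hyperion"},
    {"Dune", "Hyperion", "Foundation"},
    {"Dune", "Foundation"},
    {"Dune", "Hyperion", "Middlemarch"},
    {"Middlemarch", "Foundation"},
]

def recommend(book, histories, n=2):
    """Rank books by co-occurrence with `book` across purchase histories."""
    counts = Counter()
    for basket in histories:
        if book in basket:
            counts.update(basket - {book})
    return [title for title, _ in counts.most_common(n)]

print(recommend("Dune", histories))   # ['Hyperion', 'Foundation']
```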
We study the problem of fitting a piecewise affine (PWA) function to input–output data. Our algorithm divides the input domain into finitely many regions whose shapes are specified by a user-provided template and such that the input–output data in each region are fit by an affine function within a user-provided error tolerance. We first prove that this problem is NP-hard. Then, we present a top-down algorithmic approach for solving the problem. The algorithm considers subsets of the data points in a systematic manner, trying to fit an affine function for each subset using linear regression. If regression fails on a subset, the algorithm extracts a minimal set of points from the subset (an unsatisfiable core) that is responsible for the failure. The identified core is then used to split the current subset into smaller ones. By combining this top-down scheme with a set-covering algorithm, we derive an overall approach that provides optimal PWA models for a given error tolerance, where optimality refers to minimizing the number of pieces of the PWA model. We demonstrate our approach on three numerical examples that include PWA approximations of a widely used nonlinear insulin–glucose regulation model and a double inverted pendulum with soft contacts.
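A minimal sketch of the regression-and-check step described above (our simplification; the template-based regions, unsatisfiable-core extraction, and set covering from the paper are not reproduced here): fit one affine piece by least squares and report whether it meets the infinity-norm error tolerance.

```python
import numpy as np

def fit_affine_piece(X, y, tol):
    """Least-squares affine fit f(x) = a @ x + b over a subset of data;
    returns (a, b) if every residual is within tol, else None (the
    caller would then split the subset, guided by a minimal core)."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])   # augmented column for the offset b
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    if np.abs(A @ coef - y).max() <= tol:
        return coef[:-1], coef[-1]
    return None

X = np.linspace(-1, 1, 50).reshape(-1, 1)
y = np.abs(X).ravel()                   # |x| is PWA with two pieces
print(fit_affine_piece(X, y, tol=0.1))  # None: one affine piece cannot fit |x|
```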
While governments have long discussed the promise of delegating important decisions to machines, actual use often lags. Consequently, we know little about the variation in the deployment of such delegations in large numbers of similar governmental organizations. Using data from crime laboratories in the United States, we examine the uneven distribution over time of a specific, well-known expert system for ballistics imaging for a large sample of local and regional public agencies; an expert system is an inference engine joined with a knowledge base. Our statistical model is informed by the push-pull-capability theory of innovation in the public sector. We test hypotheses about the probability of deployment and provide evidence that the use of this expert system varies with the pull of agency task environments and the enabling support of organizational resources—and that the impacts of those factors have changed over time. Within this context, we also present evidence that general knowledge of the use of expert systems has supported the use of this specific expert system in many agencies. This empirical case and this theory of innovation provide broad evidence about the historical utilization of expert systems as algorithms in public sector applications.
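To make the "inference engine joined with a knowledge base" definition concrete, here is a toy forward-chaining sketch (entirely our illustration; the actual ballistics-imaging system is far more sophisticated, and the rule names are invented):

```python
# Knowledge base: if-then rules as (set of conditions, conclusion) pairs.
knowledge_base = [
    ({"striation_match", "caliber_match"}, "same_firearm_candidate"),
    ({"same_firearm_candidate", "case_open"}, "flag_for_examiner"),
]

def infer(facts, rules):
    """Inference engine: fire rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"striation_match", "caliber_match", "case_open"}, knowledge_base))
```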
Within Holocaust studies, there has been an increasingly uncritical acceptance that by engaging with social media, Holocaust memory has shifted from the ‘era of the witness’ to the ‘era of the user’ (Hogervorst 2020). This paper starts by problematising this proposition. The claim to a paradigmatic shift implies that (1) the user somehow replaces the witness as an authority of memory, which neglects the wealth of digital recordings of witnesses now circulating in digital spaces, and (2) agency online is solely human-centric, a position that ignores the complex negotiations between corporations, individuals, and computational logics that shape our digital experiences. This article proposes instead that we take a posthumanist approach to understanding Holocaust memory on, and with, social media. Adapting Barad's (2007) work on entanglement to memory studies, we analyse two case studies on TikTok, the #WeRemember campaign and the docuseries How To: Never Forget, to demonstrate (1) the usefulness of reading Holocaust memory on social media through the lens of entanglement, which offers a methodology that accounts for the complex network of human and non-human actants involved in producing this phenomenon while simultaneously being shaped by it, and (2) that professional memory institutions and organisations are increasingly acknowledging the use of social media for the sake of Holocaust memory. Nevertheless, we observe that in practice the significance of technical actancy is still undervalued in this context.
Network science is a broadly interdisciplinary field, pulling from computer science, mathematics, statistics, and more. The data scientist working with networks thus needs a broad base of knowledge, as network data calls for—and is analyzed with—many computational and mathematical tools. One needs good working knowledge in programming, including data structures and algorithms, to analyze networks effectively. In addition to graph theory, probability theory is the foundation for any statistical modeling and data analysis. Linear algebra provides another foundation for network analysis and modeling because matrices are often the most natural way to represent graphs. Although this book assumes that readers are familiar with the basics of these topics, here we review the computational and mathematical concepts and notation that will be used throughout the book. You can use this chapter as a starting point for catching up on the basics, or as a reference while delving into the book.
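As a small illustration of the linear-algebra point (our example, not from the book): a graph stored as an adjacency matrix, with degrees and walk counts read off by matrix operations.

```python
import numpy as np

# Undirected triangle on nodes 0, 1, 2 plus a pendant node 3 attached to node 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

degrees = A.sum(axis=1)                  # row sums give node degrees
walks2 = np.linalg.matrix_power(A, 2)    # (A^2)[i, j] counts walks of length 2
print(degrees, walks2[0, 3])             # one 2-step walk from node 0 to node 3 (via 2)
```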
As governments increasingly adopt algorithms and artificial intelligence (AAI), we still know comparatively little about citizens’ support for algorithmic government. In this paper, we analyze how many, and what kinds of, reasons for government use of AAI citizens support. We use a sample of 17,000 respondents from 16 OECD countries and find that opinions on algorithmic government are divided. A narrow majority of people (55.6%) support a majority of reasons for using algorithmic government, and this is relatively consistent across countries. Results from multilevel models suggest that most of the cross-country variation is explained by individual-level characteristics, including age, education, gender, and income. Older and more educated respondents are more accepting of algorithmic government, while female and low-income respondents are less supportive. Finally, we classify the reasons for using algorithmic government into two types, “fairness” and “efficiency,” and find that support for them varies based on individuals’ political attitudes.
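A hedged sketch of the kind of multilevel specification described (simulated data; the variable names and effect sizes are our stand-ins, not the paper's): individual-level predictors with a random intercept per country, fit with statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "country": rng.integers(0, 16, n),       # 16 country groups
    "age": rng.integers(18, 80, n),
    "female": rng.integers(0, 2, n),
    "education": rng.integers(0, 4, n),
})
# Simulated outcome: support for algorithmic government.
df["support"] = (0.02 * df["age"] + 0.3 * df["education"]
                 - 0.2 * df["female"] + rng.normal(0, 1, n))

# Random intercept per country; fixed effects for individual traits.
result = smf.mixedlm("support ~ age + female + education",
                     df, groups=df["country"]).fit()
print(result.summary())
```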
Services offered by genealogy companies are increasingly underpinned by computational remediation and algorithmic power. Users are encouraged to employ a variety of mobile web and app plug-ins to create progressively more sophisticated forms of synthetic media featuring their (often deceased) ancestors. As the promotion of deepfake and voice-synthesizing technologies intensifies within genealogical contexts – aggrandised as mechanisms for ‘bringing people back to life’ – we argue it is crucial that we critically examine these processes and the socio-technical infrastructures that underpin them, as well as their mnemonic impacts. In this article, we present a study of two AI-enabled services released by the genealogy company MyHeritage: Deep Nostalgia (launched 2020), and DeepStory (2022). We carry out a close critical reading of these services and the outputs they produce, which we understand as examples of ‘remediated memory’ (Kidd and Nieto McAvoy 2023) shaped by corporate interests. We examine the distribution of agency, where the promotion by these platforms of unique and personalised experiences comes into tension with the propensity of algorithms to homogenise. The analysis intersects with nascent ethical debates about the exploitative and extractive qualities of machine learning. Our research unpacks the social and (techno-)material implications of these technologies, demonstrating an enduring individual and collective need to connect with our past(s), and to test and extend our memories and recollections through increasingly intense and proximate new media formats.
Chapter 3 expands on the diabolical aspects of the contemporary political soundscape and develops initial deliberative responses to its key problematic aspects. These aspects include an overload of expression that overwhelms the reflective capacities of listeners; a lack of argumentative complexity in political life; misinformation and lies; low journalistic standards in “soft news”; cultural cognition, which means that an individual’s commitment to a group determines what gets believed and denied; algorithms that condition what people get to hear (which turn out to fall short of creating filter bubbles in which they hear only from the like-minded); incivility; and extremist media. The responses feature reenergizing the public sphere through means such as the cultivation of spaces for reflection both online and offline, online platform regulation and design, restricting online anonymity, critical journalism, media literacy education, designed forums, social movement practices, and everyday conversations in diverse personal networks. Formal institutions (such as legislatures) and political leaders also matter.