Disappointment aversion has been suggested as an explanation for non-truthful rankings in strategy-proof school-choice matching mechanisms. We test this hypothesis using a novel experimental design that eliminates important alternative causes of non-truthful rankings. The design uses a simple contingent choice task with only two possible outcomes. Between two treatments, we manipulate the possibility for disappointment aversion to have an effect on ranking. We find a small and only marginally statistically significant treatment effect in the direction predicted by disappointment aversion. We therefore conclude that disappointment aversion is a minor contributor to non-truthful rankings in strategy-proof school-choice matching mechanisms.
We introduce two novel matching mechanisms, Reverse Top Trading Cycles (RTTC) and Reverse Deferred Acceptance (RDA), with the purpose of challenging the idea that the theoretical property of strategy-proofness induces high rates of truth-telling in economic experiments. RTTC and RDA are identical to the celebrated Top Trading Cycles (TTC) and Deferred Acceptance (DA) mechanisms, respectively, in all their theoretical properties except that their dominant-strategy equilibrium is to report one’s preferences in the order opposite to the way they were induced. With the focal truth-telling strategy being out of equilibrium, we are able to perform a clear measurement of how much of the truth-telling reported for strategy-proof mechanisms is compatible with rational behaviour and how much of it is caused by confused decision-makers following a default, focal strategy without understanding the structure of the game. In a school-allocation setting, we find that roughly half of the observed truth-telling under TTC and DA is the result of naïve (non-strategic) behaviour. Only 14–31% of the participants choose actions in RTTC and RDA that are compatible with rational behaviour. Furthermore, by looking at the responses of those seemingly rational participants in control tasks, it becomes clear that most lack a basic understanding of the incentives of the game. We argue that the use of a default option, confusion and other behavioural biases account for the vast majority of truthful play in both TTC and DA in laboratory experiments.
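The mechanisms discussed here build on the student-proposing Deferred Acceptance (DA) algorithm of Gale and Shapley. As a concrete point of reference, here is a minimal sketch of DA; the function name and data layout are our own illustration, not taken from any of the papers above.

```python
def deferred_acceptance(student_prefs, school_prefs, capacities):
    """Student-proposing Deferred Acceptance (Gale-Shapley).

    student_prefs: dict student -> list of schools, most-preferred first
    school_prefs:  dict school  -> list of students, most-preferred first
    capacities:    dict school  -> number of seats
    """
    # rank[s][student]: position of student in school s's list (lower = better)
    rank = {s: {st: i for i, st in enumerate(prefs)}
            for s, prefs in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}   # next school to propose to
    held = {s: [] for s in school_prefs}            # tentatively held students
    free = list(student_prefs)

    while free:
        st = free.pop()
        if next_choice[st] >= len(student_prefs[st]):
            continue                                # list exhausted: unmatched
        s = student_prefs[st][next_choice[st]]
        next_choice[st] += 1
        held[s].append(st)
        held[s].sort(key=lambda x: rank[s][x])      # best-ranked first
        if len(held[s]) > capacities[s]:
            free.append(held[s].pop())              # reject worst-ranked

    return {st: s for s, sts in held.items() for st in sts}
```

Under DA, reporting preferences truthfully is a dominant strategy for students, which is exactly the property the reversed mechanisms above exploit by relocating the equilibrium away from the focal truthful report.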
Following the advice of economists, school choice programs around the world have lately been adopting strategy-proof mechanisms. However, experimental evidence shows high variation in truth-telling rates under strategy-proof mechanisms. We crash-test the connection between the strategy-proofness of the mechanism and truth-telling. We employ a within-subjects design in which subjects make two simultaneous decisions: one with no strategic uncertainty and one with some uncertainty and partial information about the strategies of other players. We find that providing information about the out-of-equilibrium strategies played by others has a significant negative effect on truth-telling rates. That is, most participants in our within-subjects design try, and fail, to best-respond to changes in the environment. We also find that more sophisticated subjects are more likely to play the dominant strategy (truth-telling) across all treatments. These results have potentially important implications for the design of markets based on strategy-proof matching mechanisms.
A growing number of papers theoretically study the effects of introducing a preference signaling mechanism. However, the empirical literature has had difficulty proving a basic tenet, namely that an agent has more success when the agent uses a signal. This paper provides evidence based on a field experiment in an online dating market. Participants are randomly endowed with two or eight “virtual roses” that a participant can use for free to signal special interest when asking for a date. Our results show that, by sending a rose, a person can substantially increase the chance of the offer being accepted, and this positive effect is neither because the rose attracts attention from recipients nor because the rose is associated with unobserved quality. Furthermore, we find evidence that roses increase the total number of dates, instead of crowding out offers without roses attached. Despite the positive effect of sending roses, a substantial fraction of participants do not fully utilize their endowment of roses and even those who exhaust their endowment on average do not properly use their roses to maximize their dating success.
Functional form assumptions are central ingredients of a model specification. Just as there are many possible control variables, there is also an abundance of estimation commands and strategies one could invoke, including ordinary least squares (OLS), logit, matching, and many more. How much do empirical results depend on the choice of functional form? In this chapter we demonstrate the functional form multiverse with two empirical applications: the effect of job loss on well-being in panel data and the effect of education on voting for Trump. We find in our cases that OLS and logit produce very similar results, but that matching estimators can be surprisingly unstable. We also reconsider an important many-analysts study and find that human researchers produce a much wider range of results than does the multiverse algorithm.
Brokken has proposed a method for the orthogonal rotation of one matrix such that its columns have a maximal sum of congruences with the columns of a target matrix. That method employs an algorithm for which convergence from every starting point is not guaranteed. In the present paper, an iterative majorization algorithm is proposed that is guaranteed to converge from every starting point. Specifically, it is proven that the function value converges monotonically, and that the difference between subsequent iterates converges to zero. Besides its better convergence properties, another advantage of the present algorithm over Brokken's is that it is easier to program. The algorithms were compared on 80 simulated data sets: the new algorithm performed well in all cases, whereas Brokken's algorithm failed in almost half of them. The derivation of the algorithm is given in full detail because it involves a series of inequalities that can be of use in deriving similar algorithms in other contexts.
The Procrustes criterion is a common measure for the distance between two matrices X and Y, and can be interpreted as the sum of squares of the Euclidean distances between their respective column vectors. Often a weighted Procrustes criterion, using, for example, a weighted sum of the squared distances between the column vectors, is called for. This paper describes and analyzes the performance of an algorithm for rotating a matrix X such that the column-weighted Procrustes distance to Y is minimized. The problem of rotating X into Y such that an aggregate measure of Tucker's coefficient of congruence is maximized is also discussed.
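For the unweighted case, the orthogonal Procrustes problem has a classical closed-form solution via the singular value decomposition; the column-weighted variant analyzed in the paper requires the dedicated algorithm described there. A minimal sketch of the unweighted solution (our own illustration, using NumPy):

```python
import numpy as np

def orthogonal_procrustes(X, Y):
    """Orthogonal T minimizing ||X @ T - Y||_F.

    Classical result: if X.T @ Y = U S V^T (SVD), the minimizer is T = U V^T.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt
```

If Y was produced by rotating a full-column-rank X, this recovers the rotation exactly; in general it gives the orthogonal transformation bringing X closest to Y in the least-squares sense.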
This paper provides a generalization of the Procrustes problem in which the errors are weighted from the right, or the left, or both. The solution is achieved by having the orthogonality constraint on the transformation be in agreement with the norm of the least squares criterion. This general principle is discussed and illustrated by the mathematics of the weighted orthogonal Procrustes problem.
Static analysis of logic programs by abstract interpretation requires designing abstract operators that mimic the concrete ones, such as unification, renaming, and projection. In the case of goal-driven analysis, where goal-dependent semantics are used, we also need a backward-unification operator, typically implemented through matching. In this paper, we study the problem of deriving optimal abstract matching operators for sharing and linearity properties. We provide an optimal operator for matching in the domain $\mathtt{ShLin}^{\omega }$, which can be easily instantiated to derive optimal operators for the domain $\mathtt{ShLin}^2$ by Andy King and for the reduced product $\mathtt{Sharing} \times \mathtt{Lin}$.
From Part II of The Practice of Experimentation in Sociology, by Davide Barrera (Università degli Studi di Torino, Italy), Klarita Gërxhani (Vrije Universiteit Amsterdam), Bernhard Kittel (Universität Wien, Austria), Luis Miller (Institute of Public Goods and Policies, Spanish National Research Council) and Tobias Wolbring (School of Business, Economics and Society, Friedrich-Alexander-Universität Erlangen-Nürnberg)
Vignette experiments present survey respondents with vignettes: brief descriptions of social objects that include a list of varying characteristics, on the basis of which respondents state their evaluations or judgments. The respondents’ evaluations typically concern positive beliefs, normative judgments, or their own intentions or actions. Using a study on the gender pay gap and an analysis of trust problems in the purchase of used cars as examples, we discuss the design characteristics of vignettes. Core issues are the selection of the vignettes included from the universe of possible combinations, the type of dependent variable (such as rating scales or ranking tasks), the presentation style (text vignettes versus a tabular format), and issues related to sampling strategies.
Confounding refers to a mixing or muddling of effects that can occur when the relationship we are interested in is confused by the effect of something else. It arises when the groups we are comparing are not completely exchangeable and so differ with respect to factors other than their exposure status. If one (or more) of these other factors is a cause of both the exposure and the outcome, then some or all of an observed association between the exposure and outcome may be due to that factor.
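The mechanism can be seen in a small simulation (a hypothetical illustration, not drawn from the chapter): a confounder Z that causes both the exposure X and the outcome Y manufactures a crude association between X and Y even though X has no effect, and the association vanishes once we stratify on Z.

```python
import random

random.seed(0)

# Confounder Z raises the probability of both exposure X and outcome Y;
# X itself has no effect on Y.
n = 10_000
rows = []
for _ in range(n):
    z = random.random() < 0.5
    x = random.random() < (0.7 if z else 0.2)   # Z makes exposure more likely
    y = random.random() < (0.6 if z else 0.1)   # Z makes outcome more likely
    rows.append((z, x, y))

def risk(subset):
    """Proportion with the outcome in a subset of (z, x, y) rows."""
    return sum(y for _, _, y in subset) / len(subset)

exposed = [r for r in rows if r[1]]
unexposed = [r for r in rows if not r[1]]
print("crude risk difference:", round(risk(exposed) - risk(unexposed), 3))

# Stratifying on Z removes the spurious association:
for zval in (False, True):
    e = [r for r in rows if r[1] and r[0] == zval]
    u = [r for r in rows if not r[1] and r[0] == zval]
    print(f"Z={zval}: risk difference {risk(e) - risk(u):+.3f}")
```

The crude risk difference is large and positive, while within each stratum of Z it is close to zero: the groups being compared were not exchangeable, and Z was the reason.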
This chapter moves from regression to methods that focus on the pattern presented by multiple variables, albeit with applications in regression analysis. A strong focus is to find patterns that beg further investigation, and/or replace many variables by a much smaller number that capture important structure in the data. Methodologies discussed include principal components analysis and multidimensional scaling more generally, cluster analysis (the exploratory process that groups “alike” observations) and dendrogram construction, and discriminant analysis. Two sections discuss issues for the analysis of data, such as from high-throughput genomics, where the aim is to determine, from perhaps thousands or tens of thousands of variables, which are shifted in value between groups in the data. A treatment of the role of balance and matching in making inferences from observational data then follows. The chapter ends with a brief introduction to methods for multiple imputation, which aim to use multivariate relationships to fill in missing values in observations that are incomplete, allowing them to have at least some role in a regression or other further analysis.
Cinque (2020) presents a unified theory positing that various types of relative clauses (RCs) originate from a single, double-headed universal structure via raising or matching. The Frame Noun-Modifying Clause (FRC) as described and analyzed by Matsumoto et al. (2017a, 2017b) presents a significant challenge to Cinque's framework, as it does not conform to any of Cinque's identified RC types, which include amount RCs, kind(-defining) RCs, restrictive RCs and non-restrictive RCs. The FRC eludes derivation via the proposed matching or raising mechanisms. Determining the semantic link between the head noun and the FRC, as well as its external merger position, remains elusive. One might suggest that inserting additional material into the FRC, which incorporates a plausible internal head, could clarify their connection. This approach falls short of providing a systematic and coherent syntactic criterion, relying instead on semantic intuition that lacks operational reliability.
In this paper, we analyze two types of refutations for Unit Two Variable Per Inequality (UTVPI) constraints. A UTVPI constraint is a linear inequality of the form: $a_{i}\cdot x_{i}+a_{j} \cdot x_{j} \le b_{k}$, where $a_{i},a_{j}\in \{0,1,-1\}$ and $b_{k} \in \mathbb{Z}$. A conjunction of such constraints is called a UTVPI constraint system (UCS) and can be represented in matrix form as: ${\bf A \cdot x \le b}$. UTVPI constraints are used in many domains including operations research and program verification. We focus on two variants of read-once refutation (ROR). An ROR is a refutation in which each constraint is used at most once. A literal-once refutation (LOR), a more restrictive form of ROR, is a refutation in which each literal ($x_i$ or $-x_i$) is used at most once. First, we examine the constraint-required read-once refutation (CROR) problem and the constraint-required literal-once refutation (CLOR) problem. In both of these problems, we are given a set of constraints that must be used in the refutation. RORs and LORs are incomplete since not every system of linear constraints is guaranteed to have such a refutation. This is still true even when we restrict ourselves to UCSs. In this paper, we provide NC reductions between the CROR and CLOR problems in UCSs and the minimum weight perfect matching problem. The reductions used in this paper assume a CREW PRAM model of parallel computation. As a result, the reductions establish that, from the perspective of parallel algorithms, the CROR and CLOR problems in UCSs are equivalent to matching. In particular, if an NC algorithm exists for either of these problems, then there is an NC algorithm for matching.
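In its simplest linear form, a read-once refutation is a subset of constraints whose sum, with each constraint used exactly once, yields the contradiction $0 \le b$ with $b < 0$. The following toy checker illustrates that summation view; it is our own illustration, not the paper's parallel reduction to matching.

```python
from collections import defaultdict

def is_read_once_refutation(constraints):
    """Check whether summing each constraint exactly once gives 0 <= b, b < 0.

    Each constraint is (coeffs, b), encoding sum_i coeffs[x_i] * x_i <= b,
    with coefficients in {0, 1, -1} (UTVPI form).
    """
    total = defaultdict(int)
    bound = 0
    for coeffs, b in constraints:
        for var, a in coeffs.items():
            total[var] += a          # accumulate each variable's coefficient
        bound += b                   # accumulate the right-hand side
    all_cancel = all(v == 0 for v in total.values())
    return all_cancel and bound < 0

# x - y <= -1 and y - x <= 0 sum to the contradiction 0 <= -1:
assert is_read_once_refutation([({'x': 1, 'y': -1}, -1),
                                ({'y': 1, 'x': -1}, 0)])
```

As the abstract notes, not every unsatisfiable UTVPI system admits such a refutation, which is why the constraint-required variants studied in the paper are interesting.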
While the mechanisms that economists design are typically static, one-shot games, in the real world, mechanisms are used repeatedly by generations of agents who engage in them for a short period of time and then pass on advice to their successors. Hence, behavior evolves via social learning and may diverge dramatically from that envisioned by the designer. We demonstrate that this is true of school matching mechanisms – even those for which truth-telling is a dominant strategy. Our results indicate that experience with an incentive-compatible mechanism may not foster truthful revelation if that experience is achieved via social learning.
In recent years there has been a great deal of interest in designing matching mechanisms that can be used to match public school students to schools (the student matching problem). The premise of this chapter is that, when testing mechanisms, we must do so in the environment in which they are used in the real world rather than in the environment envisioned by theory. More precisely, in theory, the school matching problem is a static one-shot game played by parents of children seeking places in a finite number of schools and played non-cooperatively without any form of communication or commitment between parents. However, in the real world, the school choice program is played out in a different manner. Typically, parents choose their strategies after consulting with other parents in their social networks and exchanging advice on both the quality of schools and the proper way to play the “school matching game”. The question we ask here is whether chat between parents affects the strategies they choose, and if so, whether it does so in a welfare-increasing or welfare-decreasing manner. We find that advice received by chatting has a very powerful influence on decision makers: advice tends not only to be followed but typically also has welfare-increasing consequences.
Participants drank either regular root beer or sugar-free diet root beer before working on a probability-learning task in which they tried to predict which of two events would occur on each of 200 trials. One event (E1) randomly occurred on 140 trials, the other (E2) on 60. In each of the last two blocks of 50 trials, the regular group matched prediction and event frequencies. In contrast, the diet group predicted E1 more often in each of these blocks. After the task, participants were asked to write down rules they used for responding. Blind ratings of rule complexity were inversely related to E1 predictions in the final 50 trials. Participants also took longer to advance after incorrect predictions and before predicting E2, reflecting time for revising and consulting rules. These results support the hypothesis that an effortful controlled process of normative rule-generation produces matching in probability-learning experiments, and that this process is a function of glucose availability.
Crop insurance has been linked to changes in farm production decisions. In this study, we examine the effects of crop insurance participation and coverage on farm input use. Using a 1993–2016 panel of Kansas farms, we find evidence that insured farms apply more farm chemicals and seed per acre than uninsured farms. We use a fixed-effects instrumental variable estimator to obtain the effects of changes in crop insurance coverage on farm input use while accounting for farm-level heterogeneity. The empirical evidence suggests that changes in the level of crop insurance coverage do not significantly affect farm chemical use. Thus, moral hazard effects from purchasing crop insurance are not large on a per-acre basis but can lead to expenditures of $6,100 per farm.
This paper studies the structure and origin of prenominal and postnominal restrictive relative clauses in Pharasiot Greek. Though both patterns are finite and introduced by the invariant complementizer tu, they differ in two important respects. First, corpus data reveal that prenominal relatives are older than their postnominal counterparts. Second, in the present-day language only prenominal relatives involve a matching derivation, whereas postnominal ones behave like Head-raising structures. Turning to diachrony, we suggest that prenominal relatives came into being through morphological fusion of a determiner t- with an invariant complementizer u. This process entailed a reduction of functional structure in the left periphery of the relative clause, to the effect that the landing site for a raising Head was suppressed, leaving a matching derivation as the only option. Postnominal relatives are analyzed as borrowed from Standard Modern Greek. Our analysis corroborates the idea that both raising and matching derivations for relatives must be acknowledged, sometimes even within a single language.
Motivated by applications to a wide range of areas, including assemble-to-order systems, operations scheduling, healthcare systems, and the collaborative economy, we study a stochastic matching model on hypergraphs, extending the model of Mairesse and Moyal (J. Appl. Prob. 53, 2016) to the case of hypergraphical (rather than graphical) matching structures. We address a discrete-event system under a random input of single items, which enter the system to be matched in groups of two or more. We primarily study the stability of this model for various hypergraph geometries.