Fewer misunderstandings occur when conversing with someone face-to-face than when deciphering whether an all-caps text message is expressing excitement or anger. Nonverbal cues are vital to human communication (Argyle Reference Argyle1972). Yet they are ignored in most political science research. For example, a researcher using an online survey asks questions without any face-to-face interaction, leaving much information on the table as a result—whether participants are paying attention, whether they are confused by the questions, what specific information in the question they are responding to, whether they viewed all of the stimuli they were asked to evaluate, and their emotional reactions. Without the ability to see the nonverbal responses of participants, we are missing a crucial aspect of human communication.
Here, we introduce eye tracking, an innovative method that systematically and accurately measures the nonverbal cues given by respondents—their attention, salience, emotion, and understanding—by tracking eye movements and pupil dilation. While survey and experimental methods have made great strides (Druckman and Green Reference Druckman and Green2021; Jamieson et al. Reference Jamieson2023), they allow primarily for self-reported measurements of these constructs. Self-awareness of such complicated constructs is limited, and social desirability bias restricts the usefulness of self-reports in measuring them (Clifford and Jerit Reference Clifford and Jerit2015). Eye tracking offers insight, unavailable from any existing method in political science, into mental processes that are key to understanding some of the most important questions asked by political behaviorists and methodologists.
We detail best practices in conducting and reporting eye tracking studies and discuss new developments such as webcam eye tracking, which is now a reliable method for simple studies. Eye tracking can be conducted on online, nationally representative samples quickly and inexpensively. Hence, eye tracking is now an affordable, relatively easy-to-use method offering a range of valuable measures to political scientists.
1 Eye Tracking Measurements and Inferences
Eye tracking facilitates inference regarding mental and emotional states by measuring a respondent’s gaze. Gaze and associated thoughts typically co-occur (Just and Carpenter Reference Just and Carpenter1976),Footnote 1 allowing features of a respondent’s thought processes to be deduced from characteristics of gaze (e.g., the amount of time spent gazing at a stimulus and the dilation of the pupil). Four inferences can be made from eye tracking measures: information accrual, the importance of the information, the strength of a respondent’s emotional responses, and the cognitive load involved in processing the information. These measures will be of interest to political psychologists and to public opinion and voting behavior researchers more generally. Because participants may lack full access to, or awareness of, their current psychological state, or may be unwilling to report it, relatively inconspicuous measures can help gain insight into these difficult-to-assess phenomena.
1.1 Information Accrual
The most basic measurement that eye tracking offers is what respondents look at and what they ignore. Because most eye trackers measure the location of the gaze at least 60 times per second, or every 16.66 milliseconds (ms), eye trackers precisely measure what information has been attended to and for how long. When an individual views a scene, their eyes stop and rest on one area and then rapidly dart to another target. For the purposes of most researchers, these are the two key components of eye gaze: fixations, when the eyes rest on a target (typically for 180–330 ms per fixation) and collect information, and saccades, rapid movements from target to target. During saccades, the eyes move too quickly to process new information, meaning that we are effectively blind during these brief (30–50 ms) windows. It is a simple task for eye tracking software to classify gaze as either a fixation or a saccade, and it is typical to exclude saccades when analyzing eye gaze data. At least one fixation on an area of interest (AOI) indicates that the area, or information item, has been viewed and processed.Footnote 2
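Although the eye tracker’s software performs this classification, the underlying logic is straightforward. The following is a minimal sketch of one common approach, a dispersion-based (I-DT) classifier, written in R. The column names (x, y, t) and the numeric thresholds are illustrative assumptions rather than recommendations, and commercial software implements more refined versions of this step.

```r
# Minimal dispersion-based (I-DT) fixation detection sketch. Assumes a data
# frame of gaze samples with columns x, y (pixels) and t (ms), in time order.
detect_fixations <- function(samples, disp_px = 35, min_dur_ms = 100) {
  dispersion <- function(a, b) {
    diff(range(samples$x[a:b])) + diff(range(samples$y[a:b]))
  }
  fixations <- list()
  i <- 1
  n <- nrow(samples)
  while (i <= n) {
    # Grow a window from sample i until it spans the minimum fixation duration
    j <- i
    while (j < n && samples$t[j] - samples$t[i] < min_dur_ms) j <- j + 1
    if (dispersion(i, j) <= disp_px) {
      # Extend the window while its spatial dispersion stays under the threshold
      while (j < n && dispersion(i, j + 1) <= disp_px) j <- j + 1
      fixations[[length(fixations) + 1]] <- data.frame(
        x = mean(samples$x[i:j]), y = mean(samples$y[i:j]),
        onset_ms = samples$t[i], duration_ms = samples$t[j] - samples$t[i]
      )
      i <- j + 1  # samples between fixations are treated as saccade samples
    } else {
      i <- i + 1  # no fixation starts here; slide the window forward
    }
  }
  do.call(rbind, fixations)
}
```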
One potential substantive application of information accrual is identifying inattention in surveys. Much ink has been spilled over the effectiveness of attention check questions in identifying inattentive respondents (Aronow, Baron, and Pinson Reference Aronow, Baron and Pinson2019; Berinsky et al. Reference Berinsky, Margolis, Sances and Warshaw2021), and multiple alternative measures of and methods for dealing with inattentiveness have been suggested (Clayton et al. Reference Clayton2023; Kane and Barabas Reference Kane and Barabas2019). One way to identify inattentive respondents is by short response times (Alvarez et al. Reference Alvarez, Atkeson, Levin and Li2019); but recent work has pointed out that inattentive respondents may take the average amount of time or longer to answer a question due to distractions (Read, Wolters, and Berinsky Reference Read, Wolters and Berinsky2022). Eye tracking has been used to identify respondents’ comprehension and task-unrelated thought in surveys (Hutt et al. Reference Hutt, Wong, Papoutsaki, Baker, Gold and Mills2024), suggesting that it may allow researchers to robustly identify whether a respondent has looked at a certain text or image without making assumptions about response times or entering the attention check question fracas.
1.2 Importance of Information Items in a Decision
Eye tracking also allows researchers to measure an information item’s importance in a decision. People tend to look more at objects that are important and relevant to a decision; hence, the proportion of gaze to a stimulus indicates its relative influence in the decision process (Krajbich, Armel, and Rangel Reference Krajbich, Armel and Rangel2010). Eye tracking measures of gaze correlate with both self-reported importance measures (Du and MacDonald Reference Du and MacDonald2014) and implicit measures of importance from conjoint experiments (Bansak and Jenke Reference Bansak and Jenke2025; Jenke et al. Reference Jenke, Bansak, Hainmueller and Hangartner2021; Meißner and Decker Reference Meißner and Decker2010; Meißner, Musalem, and Huber Reference Meißner, Musalem and Huber2016). Additionally, in economic gambling tasks, respondents’ focus on the amounts versus the probabilities of the payoffs has been found to correlate with the weights given to these factors in the decision (Glöckner and Herbold Reference Glöckner and Herbold2011; Wedell and Senter Reference Wedell and Senter1997).Footnote 3 These correlations tend to be quite high. For instance, Du and MacDonald (Reference Du and MacDonald2014) found an average correlation of 0.67 between self-reported importance measures and an eye tracking measure over multiple stimuli.
Four eye tracking metrics can be used to estimate the importance of an information item in a decision:
• Fixation density/proportion: Also termed fixation frequency, fixation density is the number of fixations in an AOI. Researchers alternatively use the fixation proportion (the fixation density in an AOI divided by the total number of fixations in all AOIs).
• Dwell time: A dwell represents one visit in gaze (which will typically involve multiple fixations) in an AOI, from entry to exit. Whether fixations or dwell time is used as the measure of gaze amount is typically of little consequence—they tend to be highly correlated (Holmqvist et al. Reference Holmqvist, Nyström, Andersson, Dewhurst, Jarodzka and Van de Weijer2011)—though this should be checked in each particular data set.
• Number of returns: A transformation of the number of dwells is the number of returns to an AOI (the number of dwells minus one), which also offers a measure of importance (Poole and Ball Reference Poole and Ball2006).
• First fixation: The first-fixated option is the AOI to which attention is first directed upon stimulus presentation. Information considered first or earlier in the decision process has a significantly greater impact on choice (Sullivan and Huettel Reference Sullivan and Huettel2021; Sullivan et al. Reference Sullivan, Hutcherson, Harris and Rangel2015).
These measures can be calculated within a single trial or as an average over all trials in an experiment.
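To make these definitions concrete, the following minimal R sketch computes the four metrics for a single trial. It assumes a fixation-level data frame with columns aoi and duration (ms), which are illustrative names rather than the export format of any particular eye tracker, and it treats consecutive fixations within the same AOI as one dwell.

```r
library(dplyr)

aoi_metrics <- function(fix) {
  fix <- fix[!is.na(fix$aoi), ]               # drop fixations outside all AOIs
  # A "dwell" here is a run of consecutive fixations within the same AOI
  fix$dwell <- cumsum(c(TRUE, fix$aoi[-1] != fix$aoi[-nrow(fix)]))
  fix %>%
    group_by(aoi) %>%
    summarise(
      fixation_density    = n(),                    # number of fixations in the AOI
      fixation_proportion = n() / nrow(fix),        # share of all AOI fixations
      dwell_time_ms       = sum(duration),          # total dwell time
      returns             = n_distinct(dwell) - 1,  # number of dwells minus one
      first_fixated       = aoi[1] == fix$aoi[1],   # was this the first-fixated AOI?
      .groups = "drop"
    )
}
```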
Substantive areas in which these eye tracking measures can contribute abound. One area is voting behavior researchers’ study of policy issue importance in voting decisions. Contrary to logical expectations, people do not weight self-reported important issues more heavily than unimportant issues in candidate choices (Jenke and Munger Reference Jenke and Munger2022; Leeper and Robison Reference Leeper and Robison2020). Although this discrepancy could be because decisions are based on something other than issue positions (e.g., partisanship, the candidates’ physical attributes, or social group affinity), recent work has reconfirmed the relevance of policy issues to candidate choice (Orr and Huber Reference Orr and Huber2020; Simas Reference Simas2023). This suggests that self-reported measures fail to accurately capture issue importance, and a different measurement method may be necessary. Eye tracking can provide alternative, non-self-reported measures of issue importance by tracking which information voters attend to more when viewing information about candidates.
Second, eye tracking can help to ascertain the circumstances under which policy versus social group identity impact political phenomena such as candidate preference and affective polarization. A number of recent articles contrast the importance of these two inputs in the formation of political preferences, with different conclusions (Iyengar, Sood, and Lelkes Reference Iyengar, Sood and Lelkes2012; Orr and Huber Reference Orr and Huber2020; Simas Reference Simas2023). As stated by Orr, Fowler, and Huber (Reference Orr, Fowler and Huber2023), “Scholars will have to find new research designs if they want to convincingly estimate the effects of identity or loyalty [on affective polarization] independent of policy substance.” Gaze amount may be used to measure the importance of information about policy versus social group identity in a political decision.
Third, eye tracking measures of importance can be used to examine how candidates’ attributes influence public opinion. For example, candidates’ races may determine voters’ attention to different types of information about them. Citizens may attend to different types of information for a Black candidate than they do for a white candidate or for a female candidate than they do for a male candidate, and this differential attention may impact their candidate choices. There is some work in this vein (Coronel, Moore, and deBuys Reference Coronel, Moore and deBuys2021; Jenke Reference Jenke2024), but the surface has barely been scratched. This is one area, in particular, in which an unobtrusive measure of the decision process could make a large difference; some participants are unaware of their racial/gender bias, and others are likely hoping to conceal it.
1.3 Emotion
A third use of eye tracking is to measure emotion. Most eye trackers measure dilation of the pupil, the hole in the iris through which light passes to reach the retina. The pupil’s primary role is to adapt to lighting conditions to best facilitate vision, but the iris is wired to the body’s autonomic nervous system and contracts or expands depending on levels of emotional arousal. Therefore, if light levels are held constant,Footnote 4 pupil dilation provides a real-time measure of the strength of an emotion at a very fine temporal resolution. Importantly, this measure is free of the social desirability bias sometimes found in self-reported measures of emotion; people cannot control the dilation of their pupils.
Although pupillometry provides an excellent assessment of emotional arousal, it is unable to provide information about the valence of an emotion (positive or negative) or its category (happy, excited, etc.). That said, some eye tracking software, such as that provided by iMotions, provides emotion categorization algorithms. These algorithms use the computer’s webcam to classify emotions via facial recognition. Pairing pupillometry with such information—or even simple self-reports of emotional categories—can provide evidence for both the strength and type of emotion being felt by a participant.
Pupillometry measures of emotion are potentially very useful for political science work. Pupillometry has been used to measure disgust (Hubert and Järvikivi Reference Hubert and Järvikivi2019), an emotion that has been of recent interest to political scientists (Clifford and Simas Reference Clifford and Simas2024; Hatemi, Crabtree, and Smith Reference Hatemi, Crabtree and Smith2019). Researchers may want to examine emotional responses to certain types of political advertisements, politicians, or issue information. Affective polarization is another important concept for which emotion is central.
1.4 Cognitive Load
Pupil dilation also indicates the level of cognitive difficulty a participant is currently experiencing. Cognitive load refers to the amount of mental effort it takes to process information, and the peak dilation of the pupil during viewing quantifies this state (i.e., more dilation indicates more cognitive load). A respondent’s cognitive load can also be measured by pupil oscillation, or rapid changes in pupil dilation. The Index of Cognitive Activity tracks pupil oscillation as a proxy for the level of cognitive activity over time (Vogels, Demberg, and Kray Reference Vogels, Demberg and Kray2018). However, oscillation occurs at a high frequency, so an expensive eye tracker with a high sampling rate is required for its measurement. For example, many studies use a 500 Hz sampling rate, or one sample every 2 ms, to measure pupil oscillation. By contrast, if only mean or maximum dilation metrics are to be calculated rather than oscillation, a lower resolution and more affordable eye tracker can be used [e.g., Kret, Tomonaga, and Matsuzawa (Reference Kret, Tomonaga and Matsuzawa2014) used a 60 Hz eye tracker to great effect for pupil analyses].Footnote 5
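As an illustration, the minimal R sketch below computes trial-level pupil metrics. It assumes a data frame of samples with columns t (ms relative to stimulus onset) and size (pupil diameter), and a 200 ms pre-stimulus baseline window; these names and values are illustrative assumptions. Oscillation-based indices, which require high-frequency data, are not shown.

```r
pupil_metrics <- function(pupil, baseline_ms = 200) {
  # Baseline: average pupil size in the window just before stimulus onset (t = 0)
  baseline <- mean(pupil$size[pupil$t >= -baseline_ms & pupil$t < 0], na.rm = TRUE)
  stim <- pupil$size[pupil$t >= 0]
  data.frame(
    baseline_size = baseline,
    mean_dilation = mean(stim, na.rm = TRUE) - baseline,  # subtractive baseline correction
    peak_dilation = max(stim, na.rm = TRUE) - baseline    # peak dilation, used for cognitive load
  )
}
```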
Pupil dilation has potential as a technique for measuring political sophistication and knowledge. Despite the frequent usage of this concept, political scientists have struggled with its operationalization (Lupia Reference Lupia2016; Mondak Reference Mondak1999). One issue is that response options fail to accurately capture respondents who are uninformed versus partially informed, as well as respondents who are informed versus guessing.Footnote 6 By capturing the mental effort involved in processing information, eye tracking can help to delineate these types of respondents.
1.5 A Cautionary Note on Reverse Inference
There are concerns with using any decision process measure to learn about mental states. For example, one cannot conclude definitively that pupil dilation is the result of cognitive load in an observational study, as greater dilation could instead result from surprise, intense emotion, bright light, or a number of other causes. Similarly, in certain contexts, fixation density could be an indication of difficulty understanding the AOI rather than the importance of the AOI. This does not mean we must abandon the hope of using eye tracking to infer mental states, but rather that we must be careful in how we execute studies and draw conclusions from their results. If a researcher presents a series of choices to participants in a controlled setting with steady light conditions, the inference that pupil dilation reflects changes in cognitive load is much more reasonable.Footnote 7 Such “reverse inferences” have been discussed at length elsewhere, and we direct the reader to several helpful sources for guidance (Glymour and Hanson Reference Glymour and Hanson2016; Hutzler Reference Hutzler2014; Krueger Reference Krueger2017; Poldrack Reference Poldrack2006, Reference Poldrack2011).
1.6 Prior Eye Tracking Applications to Political Topics
Eye tracking has been utilized to answer both methodological and substantive political questions. Table 1 contains a sample of prior studies. Most of these studies are from psychology or communication; of the sample, only four of the papers are from the political science literature, and three of these four are methodological. Substantively, eye tracking has been used to investigate questions about campaigns (Gupta, Verma, and Kapoor Reference Gupta, Verma and Kapoor2024; Lindholm, Carlson, and Högväg Reference Lindholm, Carlson and Högväg2021), voters’ responses to candidate gender (Coronel et al. Reference Coronel, Moore and deBuys2021; Jenke Reference Jenke2024), motivated reasoning (Schmuck et al. Reference Schmuck, Tribastone, Matthes, Marquart and Bergel2020), and political news on social media (e.g., Kruikemeier, Lecheler, and Boyer Reference Kruikemeier, Lecheler and Boyer2018; Ohme, Maslowska, and Mothes Reference Ohme, Maslowska and Mothes2022).
Table 1 Eye tracking studies of politics.

Note: Papers are from the fields of political science, psychology, and communication.
Some of these papers focus on the effect of individual differences on gaze. Respondents’ political interest (Bode, Vraga, and Troller-Renfree Reference Bode, Vraga and Troller-Renfree2017) and ideology (Dodd et al. Reference Dodd, Balzer, Jacobs, Gruszczynski, Smith and Hibbing2012) have been explored as determinants of fixations on political information. Such studies tend to measure individual differences observationally, which—as is the case in all experimental studies using observationally measured variables as moderators—introduces additional complexity into the interpretation of the underlying causal mechanism. Gaze behavior has been found to differ by gender (Sammaknejad et al. Reference Sammaknejad, Pouretemad, Eslahchi, Salahirad and Alinejad2017), ethnicity (Amatya, Gong, and Knox Reference Amatya, Gong and Knox2011), and age (Açık et al. Reference Açık, Sarwary, Schultze-Kraft, Onat and König2010) for reasons other than semantic importance, emotion, or cognitive load. For instance, older adults make a greater number of returns to AOIs due to relative memory weakness (Veiel, Storandt, and Abrams Reference Veiel, Storandt and Abrams2006). Consequently, memory weakness would act as a confounder to claims that a greater number of returns among elderly respondents was a function of the stimulus’s semantic importance. Researchers examining individual differences should take care in their interpretation of the causal mechanism. Follow-up experimental research—that manipulates either the characteristic or the hypothesized mechanism using an instrument—can help to confirm or rule out potential causal pathways.
2 The Simplest Way to Run an Eye Tracking Study: Study Design, Implementation, and Data Analyses
Most universities have at least one eye tracker in their neuroscience, psychology, or marketing departments. While eye trackers typically belong to a lab or a consortium within the university, many scholars are willing to share this resource with others at their institution. The gold standard eye tracker is the EyeLink 1000 Plus due to its top-of-the-line sampling frequency of 2,000 hertz (Hz). But the sampling frequency required for a study depends on what one is trying to measure. Fixations and saccades can be cleanly measured by a 60 Hz eye tracker, such as the Tobii T60.
2.1 Study Design
The first step in the design process is the same across any experiment, eye tracking or behavioral: formulating the research question. Here, one must consider how visual attention or information search may play a role in the phenomenon under examination. Next, one must prepare the stimuli—this can be a set of static stimuli such as photographs, advertisements, and websites or dynamic stimuli such as videos.
A basic eye tracking study can be designed using the built-in software of an eye tracker for easy, drag-and-drop creation of simple experiments.Footnote 8 During the design stage, researchers must decide how participants will view and interact with stimuli, though most software provides default settings for each decision. For example, should the stimulus auto-advance after a specified duration, or should participants press a key to continue? See Section A.3 of the Supplementary Material for additional design decisions and Jenke and Sullivan (Reference Jenke and Sullivan2024) for a preregistration template for eye tracking studies.
One decision researchers must make concerns the number of repetitions of stimuli (the number of trials) and the number of participants. More trials provide a more accurate estimate of a phenomenon but may reduce ecological validity. Most commonly, researchers collect data on a large number of trials (e.g., 200–300) on a relatively small sample (e.g., 75–150) due to the large time costs of collecting data. However, one can also run an experiment using one or a few trials on a large number of individuals. This is now made practicable with online samples using webcam eye tracking, which we detail below. As with all experimental designs, sample size should be determined based on a power analysis.
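For a simple between-subjects design, such a power analysis can be run in a few lines of R; the effect size assumed below is illustrative, not a benchmark for eye tracking effects.

```r
# Two-sided t-test, alpha = 0.05, 80% power, assumed effect size d = 0.5
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80)
# Yields approximately 64 participants per group. With many trials per
# participant, a simulation-based or mixed-model power analysis may be
# more appropriate.
```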
If using dynamic stimuli such as videos or scrolling websites, AOIs must be changed throughout the course of the trial to align with the stimuli of interest. This can be done easily with the eye tracker’s software. For instance, iMotions provides a machine learning algorithm (the “automated AOI module”), which generates an AOI that moves with a particular stimulus.Footnote 9 Researchers should ensure that the AOIs for multiple moving stimuli do not overlap. Otherwise, the data analysis can proceed in the same fashion as when one uses static stimuli.
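If a researcher instead works from exported bounding boxes, the assignment of fixations to a moving AOI can be sketched as follows. The data frame aoi_box and its columns are hypothetical names for per-frame bounding-box exports; software such as iMotions performs this mapping automatically.

```r
# Minimal sketch: does one fixation (fix_x, fix_y, fix_t) fall inside a moving
# rectangular AOI whose bounding box is exported per frame in `aoi_box`, with
# columns t, xmin, xmax, ymin, ymax? Column names are illustrative assumptions.
in_moving_aoi <- function(fix_x, fix_y, fix_t, aoi_box) {
  # Use the bounding box from the exported frame closest in time to the fixation
  frame <- aoi_box[which.min(abs(aoi_box$t - fix_t)), ]
  fix_x >= frame$xmin && fix_x <= frame$xmax &&
    fix_y >= frame$ymin && fix_y <= frame$ymax
}
```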
2.2 Study Implementation
Next, one must prepare for data collection. The following choices (and additional choices; see Section B of the Supplementary Material) should be reported in papers to strengthen their interpretability and replicability. First, respondent setup is critical. The distance of the respondent from the screen affects data quality toward the top and bottom of the screen, and the ideal distance depends on the eye tracking system (see the operating manuals). Respondents should be either alone in the room or visually isolated.
Calibration is a necessary part of data collection. It allows the system to identify where a respondent is looking by having him or her look at a series of dots (typically five to nine) on the screen. Calibration measures the relationship between the pupil center and the corneal reflection at each dot. It takes place at the beginning of an experimental session, but additional calibrations should be included if an experiment lasts for more than 15 minutes. Most eye tracking software reports calibration accuracy (Figure 1). Holmqvist et al. (Reference Holmqvist, Nyström, Andersson, Dewhurst, Jarodzka and Van de Weijer2011) suggest a maximum average deviation of 0.5° for most studies. That said, the accuracy necessary for a study will depend on the size and location of the AOIs; gazes on larger AOIs with more space between them can be effectively identified with lower accuracy.

Figure 1 Eye tracking calibration results.
Note: The red dot indicates the location of the calibration dot, and the green and blue dots represent the estimated positions of the left and right eyes. In the top image, the estimated gaze and the red dot are in very close proximity to each other, indicating excellent calibration. The bottom image shows a poor calibration in which the estimated gaze is significantly displaced from the red target. Additionally, the eye tracker could not estimate gaze location at all for the bottom-left calibration point and could estimate it for only one eye at the top-left point.
Another choice that must be made is whether to use a chin rest to stabilize participant position. A chin rest is mounted to the table in front of the eye tracker, and the participant places his or her chin on the padded rest before eye calibration and for the remainder of the study. This prevents vertical and horizontal drift in participant head position, which causes the locations of fixations recorded by the eye tracker to become displaced. Chin rests were very common when eye trackers first came out but are less so today, as they can be awkward for the participant to use and calibration has improved.
In our own research, the requirements that calibration be accurate and that respondents remain relatively still have led to the exclusion of only a very small proportion of respondents. For instance, in the data collection for Jenke (Reference Jenke2024), only eight respondents out of 140 needed to be excluded due to data quality issues.
2.3 Data Analyses
Eye tracking data analysis can be made simple by utilizing the eye tracker’s software.Footnote 10 The most straightforward way to parse attention to a stimulus is to break the screen up into AOIs, for which summary statistics can be computed. Most software packages include point-and-click analysis tools to indicate a box or circle around an AOI. In Figure 2, AOIs have been drawn around key elements of an example stimulus using iMotions software. Researchers can then export the raw data. In these datasets, each row corresponds to a distinct fixation in the gaze sequence, and parameters such as fixation duration and position are recorded. We provide an R package that computes relevant metrics from raw data, EyeMetrics (Jenke and Sullivan Reference Jenke and Sullivan2025a). Output from this package is shown in Table 2 for a single simulated respondent. The output includes, for each AOI, the number of fixations, number of returns, total dwell time (ms), and total fixation percentage. Additionally, the first fixated AOI is shown in the rightmost column.Footnote 11 Alternatively, the software can compute statistics for each AOI using its default processing pipeline. For example, pre-calculated statistics such as the number of revisits to each AOI and the number of fixations on the AOI can be exported directly from the eye tracker. One can also export across-participant as well as across-trial averages.

Figure 2 AOIs drawn on an example stimulus.
Table 2 Statistics for a single respondent, calculated with EyeMetrics.

With these data, quantitative analyses can be carried out. For example, a researcher might examine the difference in average total dwell time in an AOI between treatment and control groups. The cognitive process pinpointed by each AOI metric can vary depending on the experimental design; for example, the time to first fixation (TTFF) may measure bottom-up visual salience of an AOI if the stimulus is a very complicated visual scene. However, with simple stimuli such as two images, TTFF is unlikely to measure visual salience because such stimuli lack visual complexity. See Section A of the Supplementary Material for more on data analyses.
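A minimal R sketch of this treatment-control comparison is shown below. The data frame d and its column names are illustrative assumptions about how trial-level metrics might be organized, not an export format of any particular software.

```r
library(dplyr)

# Assumes a trial-level data frame `d` with columns participant,
# condition ("treatment"/"control"), and dwell_ms (total dwell time in the AOI)
participant_means <- d %>%
  group_by(participant, condition) %>%
  summarise(dwell_ms = mean(dwell_ms), .groups = "drop")  # average over trials

# Between-subjects comparison of average dwell time
t.test(dwell_ms ~ condition, data = participant_means)

# Retaining trial-level data, a mixed model with a participant random effect:
# lme4::lmer(dwell_ms ~ condition + (1 | participant), data = d)
```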
The software applies automatic default settings to process results. Although it is often reasonable to accept or only slightly modify these based on one’s task, it is crucial for both understanding a study and replication that these choices are reported in the paper. For more about the automatic defaults that need to be reported, see Section B of the Supplementary Material.
Participants’ fixations can be visualized to facilitate qualitative examination of gaze patterns. One such visualization is a fixation plot, which plots each fixation’s location overlaid on the stimulus screen. Another way to visualize eye tracking data is to use a scanpath, which overlays the location and order of fixations on the stimulus. Each fixation is visualized with a circle and labeled with a fixation number inside of the circle. Larger circles represent longer fixations. This facilitates visual inspection of the order in which information in a scene was sampled and the areas of longer fixations. We have provided the R packages PlotEyeFix (Jenke and Sullivan Reference Jenke and Sullivan2025b) and PlotScanpath (Jenke and Sullivan Reference Jenke and Sullivan2025c) to create these plots. Figure 3 demonstrates the output from our sample participant using these packages.

Figure 3 Visualizing eye tracking data.
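For researchers who wish to build such plots directly rather than through the packages above, the following is a minimal ggplot2 sketch of a scanpath; it is not the code underlying PlotScanpath, and the column names are illustrative assumptions.

```r
library(ggplot2)

# Assumes a fixation data frame `fix` with columns x, y (pixels) and duration (ms)
fix$order <- seq_len(nrow(fix))                 # temporal order of fixations
ggplot(fix, aes(x = x, y = y)) +
  geom_path(alpha = 0.5) +                      # connects fixations in order
  geom_point(aes(size = duration), shape = 21, fill = "white") +
  geom_text(aes(label = order), size = 3) +     # fixation number inside each circle
  scale_y_reverse() +                           # screen y-coordinates increase downward
  coord_fixed() +
  theme_void()
# The stimulus image can be placed behind the points with ggplot2::annotation_raster().
```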
These qualitative investigations into visual attention can be used to motivate quantitative analyses (e.g., helping to position AOIs) or to provide visual demonstration of a respondent’s focus in a paper. They can also be used to gain intuition as to the quality of respondents’ calibration prior to data analysis. For instance, Figure A.1 in the Supplementary Material shows a fixation plot and a scanpath of a participant with poor calibration. It is apparent that the offset in calibration is to the lower right-hand side of the screen. Offsets like these can be corrected using drift correction algorithms (Carr et al. Reference Carr, Pescuma, Furlan, Ktori and Crepaldi2022).
2.4 Data Quality
Characteristics of the environment in which the eye tracker is located make a difference in data quality. It is crucial that no direct sunlight is in the room and that lighting conditions are consistent within and between respondents. Additionally, there must be no source of vibration in the surrounding environment; air conditioners, hard floors with people walking nearby, or closely located elevator shafts can all be a source of vibration that negatively affects the quality of recorded data.
Most researchers add steps to either calibration or data processing to minimize respondent exclusions due to data quality issues. Participants may, over the course of a long task, relax into a different position than the one in which they were calibrated, leading to decreased gaze accuracy over time. It is therefore important to emphasize to participants that they should get into the most comfortable position possible prior to calibration. Although some studies perform eye calibration only once at the start of an experiment, others pause data collection for multiple calibrations. For example, Sullivan et al. (Reference Sullivan, Dore, Breslav, Bachman and Huettel2024) paused to re-calibrate participants four times within the same experiment in one study (Study 2), which increased participant retention substantially compared to their first study in which calibration was done only once. There are, however, trade-offs to this approach; multiple calibrations can increase the length of an experiment and disrupt participant concentration. Alternatively, the researcher can apply a drift calibration correction algorithm to the data after collection which detects slow drift trends in gaze data and shifts the centers of mass of fixation clusters back to a more accurate position (e.g., Amasino et al. Reference Amasino, Sullivan, Kranton and Huettel2019).
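As a highly simplified illustration of the idea, the sketch below applies a constant offset correction estimated from gaze recorded during periodic central fixation checks. Published drift correction algorithms (e.g., Carr et al. 2022) model time-varying drift and fixation clusters and are considerably more sophisticated; all names and coordinates here are illustrative assumptions.

```r
# `gaze` and `drift_checks` are assumed to be data frames with columns x and y;
# (cx, cy) is the known screen location of the central fixation target
correct_drift <- function(gaze, drift_checks, cx = 960, cy = 540) {
  # Median gaze position during the fixation checks, relative to the known
  # target location, gives a constant offset estimate
  offset_x <- median(drift_checks$x, na.rm = TRUE) - cx
  offset_y <- median(drift_checks$y, na.rm = TRUE) - cy
  gaze$x <- gaze$x - offset_x
  gaze$y <- gaze$y - offset_y
  gaze
}
```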
Because eye tracking data involves some degree of offset in accuracy, AOI sizes are important. The average accuracy of eye trackers ranges from 0.4° to 2°. The average precision is 0.005° root-mean-square (RMS) error in the best eye trackers and 0.5° RMS in the poorest (Holmqvist et al. Reference Holmqvist, Zemblys, Cleveland, Mulvey, Borah and Pelz2015). It is consequently important that researchers know the average accuracy and precision of their specific eye tracker (found in the manufacturer information). The smallest recommended AOI size (on the best eye tracking systems) is 1–1.5° (Orquin and Holmqvist Reference Orquin and Holmqvist2018). In creating AOIs, margins should be added around the objects on the screen that account for gaze offsets. Stimuli must be spaced on the screen far enough apart to allow for these margins. AOIs should never overlap.Footnote 12
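Because accuracy figures are reported in degrees of visual angle while AOIs are drawn in pixels, it is useful to convert between the two. A minimal sketch follows; the viewing distance, monitor width, and resolution are illustrative assumptions and should be replaced with the values of the actual setup.

```r
# Convert a visual angle (degrees) into an on-screen size in pixels
deg_to_px <- function(deg, distance_cm = 65, screen_width_cm = 52.7,
                      screen_width_px = 1920) {
  size_cm <- 2 * distance_cm * tan((deg / 2) * pi / 180)  # on-screen size of the angle
  size_cm * screen_width_px / screen_width_cm             # convert cm to pixels
}
deg_to_px(1)    # ~41 px on this setup: roughly the smallest recommended AOI size
deg_to_px(0.5)  # ~21 px: a margin matching a 0.5 degree accuracy figure
```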
3 Webcam Eye Tracking
Webcam eye tracking turns a participant’s at-home webcam into an eye tracker with a set of algorithms that detect gaze location using a webcam’s video feed in real time. The online environment’s low temporal and financial costs along with access to diverse samples make it an exciting development.Footnote 13
The most popular algorithm for tracking gaze through a webcam is the open-source package WebGazer.js (Papoutsaki, Laskey, and Huang Reference Papoutsaki, Laskey and Huang2017). This package has been validated on a variety of simple tasks but has limitations in some areas. While it replicated the results of an infrared eye tracker, it did so with less spatial accuracy (Semmelmann and Weigelt Reference Semmelmann and Weigelt2018) and with significantly smaller effect sizes (Bogdan et al. Reference Bogdan, Dolcos, Buetti, Lleras and Dolcos2024). Webcam eye tracking also appears to exhibit centering bias, in which gaze points near the edges of the screen are recorded as closer to the center of the screen (Bogdan et al. Reference Bogdan, Dolcos, Buetti, Lleras and Dolcos2024).Footnote 14
Companies such as iMotions, Lucid Theorem, RealEye, Lumen Research, Gorilla, and GazeRecorder use WebGazer or their own proprietary systems integrated into an accessible plug-and-play wrapper, reducing the coding requirements of webcam eye tracking. Researchers should ensure that accuracy metrics are provided for a company’s system—ideally ones tested by external sources.Footnote 15 For example, Gorilla, Labvanced, and RealEye have had their systems’ efficacy examined by outside sources. Gorilla’s system embeds the WebGazer software and has replicated effects found on an infrared eye tracker (Prystauka, Altmann, and Rothman Reference Prystauka, Altmann and Rothman2024).Footnote 16 Labvanced’s system likewise has been found to produce results comparable to those produced on an infrared eye tracker (Kaduk et al. Reference Kaduk, Goeke, Finger and König2024). However, the small sample size (N = 23) and laboratory setting, which does not reproduce the lighting and distraction issues of online samples, limit the robustness of the comparison by Kaduk et al. (Reference Kaduk, Goeke, Finger and König2024). RealEye has similarly been validated in only a laboratory setting (Wisiecka et al. Reference Wisiecka2022), by a team that included several RealEye employees. Even so, it has been used successfully in academic publications (e.g., Jain, Nayakankuppam, and Gaeth Reference Jain, Nayakankuppam and Gaeth2021; Wielgopolan and Imbir Reference Wielgopolan and Imbir2024).
3.1 Designing Studies with Webcam Eye Tracking
Webcam eye tracking is not yet a direct substitute for infrared eye tracking. Yet we are confident that, for simple experiments with few AOIs (i.e., requiring lower spatial accuracy), results should approximate those of stationary in-lab infrared eye trackers. Yang and Krajbich (Reference Yang and Krajbich2021) conclude that six AOIs can be used without significant degradation in data quality. These AOIs must be evenly spaced on the screen to prevent fixations on one AOI from being counted as fixations on another. Figure 2 depicts an example that has few enough AOIs to qualify as a good design for webcam eye tracking.Footnote 17 As another example, Simonov, Valletti, and Veiga (Reference Simonov, Valletti and Veiga2024) investigate attention spillover effects of news articles onto gaze to advertisements and effectively use only two AOIs, the news article and the ads. Of note, they demonstrate the efficacy of webcam eye tracking not only for participants using desktop computers but also for those using mobile phones.
Researchers should be aware that the accuracy, precision, and data loss of webcam eye tracking depends not only on the study design but also on respondent factors such as the hardware of respondents’ computers, their lighting conditions, and their willingness to sit still. It is therefore important to check the quality of each participant’s eye tracking data. It is typical for one-third of respondents’ data to be unusable, but recently in our own work using Gorilla we have found that up to 90% of data is usable. Given that online studies are not usually conducted on probability samples, a high exclusion rate is not necessarily an issue.
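One simple per-participant check is the proportion of samples for which the algorithm returned a usable gaze estimate. A minimal sketch follows; the data frame gaze, its columns, and the exclusion threshold are illustrative assumptions rather than field standards.

```r
library(dplyr)

# `gaze` is assumed to hold sample-level data with columns participant and
# valid (TRUE when the webcam algorithm returned a usable gaze estimate)
quality <- gaze %>%
  group_by(participant) %>%
  summarise(prop_valid = mean(valid), .groups = "drop") %>%
  mutate(exclude = prop_valid < 0.5)   # illustrative threshold, not a standard
table(quality$exclude)                  # how many participants would be dropped
```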
Studies that use pupillometry should not be conducted using webcam eye trackers at this time, due to the low sampling rate (60 Hz at its maximum), lack of specialized hardware used to measure pupil size, and variable light conditions found in home environments (Mahanama et al. Reference Mahanama2022).
4 An Alternative Method for Measuring Attention: Mouse Tracking
Political scientists have used mouse tracking in order to examine information accrual (e.g., Andersen, Redlawsk, and Lau Reference Andersen, Redlawsk and Lau2019; Ditonto Reference Ditonto2017; Jenke and Munger Reference Jenke and Munger2022; Lau and Redlawsk Reference Lau and Redlawsk2006). The Dynamic Process Tracing Environment (DPTE) is a valuable tool that presents obscured information items on the computer screen, which respondents reveal by hovering the mouse over them.Footnote 18 Mouse tracking allows researchers to measure item importance via the information pieces that respondents choose to view and the amount of time they spend looking at them. However, respondents’ information accrual differs when seeing an item requires moving and clicking the mouse compared to when it only requires moving one’s eyes (Franco-Watkins and Johnson Reference Franco-Watkins and Johnson2011; Galesic et al. Reference Galesic, Tourangeau, Couper and Conrad2008; Lohse and Johnson Reference Lohse and Johnson1996). The former process takes time, during which respondents may reflect on how their choice of items appears. Consequently, mouse tracking may measure respondents’ conscious choices, which are potentially biased by self-monitoring. Additionally, when viewing elaborate stimuli, the connection between importance and item selection is more tenuous due to the effort involved in choosing between many information items. Eye tracking comes closer to capturing unconscious information acquisition. Although a participant is aware that their gaze is being tracked, controlling one’s gaze is unnatural and difficult.
That said, the DPTE allows information items to scroll down the screen and respondents to react to items using “share” or “like” buttons, making it an excellent contextual equivalent to Twitter or Facebook. This environment would be difficult to code and analyze with eye tracking, making mouse tracking the preferred method in some contexts. The researcher must weigh these pros and cons to determine which method is best suited to a study.
5 Discussion
Until now, individuals’ decision-making processes have mostly been a black box for political scientists. Eye tracking has been used across several fields to help scholars understand and predict preferences and choices and can also do so in political science. Eye tracking allows for a detailed, comprehensive, and real-time analysis of individuals’ engagement with stimuli. Researchers can examine the cognitive processes leading to choices: the viewing of stimuli, the importance of stimuli, the emotional relevance of stimuli, and the cognitive load involved in processing stimuli. Eye tracking can be used to support methodological inference as well as in the substantive examination of political topics.
One perceived barrier to using this method is that running eye tracking studies can require a good deal of technical expertise. We encourage political scientists to use eye trackers’ software in order to minimize the coding requirements involved. That said, it is important that researchers pay attention to the preprocessing settings and choose these settings to match the particularities of their research design. It is also crucial that best practices regarding study design are used; eye tracking studies should be experimental, account for competing explanations of eye tracking measures (i.e., semantic importance versus emotional relevance), and incorporate appropriate margins around AOIs.
The recent advent of webcam eye tracking is attractive, as it offers access to representative samples and quicker data collection and can be implemented with less technical expertise. While excited by these possibilities, we caution researchers to use this new resource within its limitations, specifically a maximum of six widely spaced AOIs. As technologies develop and improve, we expect a dramatic increase in the use of this form of eye tracking in academic work.
Separate from but related to eye tracking are psychophysiological measures such as skin conductance and electromyography (EMG), which are valuable tools for political scientists. These measures have been shown to correlate with political attitudes (Oxley et al. Reference Oxley2008; Smith et al. Reference Smith, Oxley, Hibbing, Alford and Hibbing2011) and framing effects (Coe et al. Reference Coe, Canelo, Vue, Hibbing and Nicholson2017). Settle et al. (Reference Settle2020) provide a detailed and useful overview of this method, which we hope will combine with eye tracking to gain valuable insight into psychological processes.
We hope to see political scientists take advantage of eye tracking. It will take a relatively small amount of effort for political scientists to use this tool effectively. The results are likely to be a large advance in our understanding of political decision making.
Data Availability Statement
Replication code for this article is available at Jenke and Sullivan (Reference Jenke and Sullivan2025d). A preservation copy of the same code and data can also be accessed via https://doi.org/10.7910/DVN/BQWZJF.
Supplementary Material
For supplementary material accompanying this paper, please visit https://doi.org/10.1017/pan.2025.8.
Ethical Standards
The authors affirm that this article adheres to the APSA’s Principles and Guidance on Human Subject Research.