Implicit Association Test (IAT)

The Implicit Association Test (IAT) measures the strength of automatic associations between mental categories — for example, between the concepts "insects" and "unpleasant", or "flowers" and "pleasant". Originally developed by Greenwald, McGhee and Schwartz (1998), the IAT is widely used in social, environmental and consumer psychology to capture attitudes that respondents cannot, or will not, report directly.

In tickStat, the IAT is a dedicated question type that runs entirely in the browser and records millisecond-resolution response latencies for each trial.

When to use it

Use an IAT when you want to measure implicit attitudes, biases or preferences — evaluative associations that operate below conscious deliberation. Typical applications include:

  • Implicit attitudes towards social groups, brands, food, environmental practices or conservation policies.
  • Validation of explicit measures (do stated preferences match implicit ones?).
  • Detection of social-desirability bias in self-report data.

How it works in tickStat

A complete IAT pairs two target categories (for example, coastal birds vs inland birds) with two attribute categories (for example, positive vs negative). The respondent classifies a stream of stimuli — words or images — into one of the two combined categories shown at the top-left and top-right of the screen, pressing E for the left category and I for the right.

You configure the test by linking four source questions that supply the category labels and stimulus items:

  • First left and second left categories — the two concepts that appear on the left side of the screen.
  • First right and second right categories — the two concepts that appear on the right side of the screen.

The stimuli (the items the respondent sees one by one) are taken from the answer options of the linked source questions, so the full stimulus set is managed from the standard question editor.

Additional configuration options:

  • Error validation — show a red cross when the respondent miscategorises a stimulus, requiring them to correct their response before the next trial appears.
  • Audio feedback (beep on error) — optional audible signal on incorrect classifications.
  • Number of items per block — limit how many stimuli are shown in each block to control test duration.
  • Introduction screen — include the recommended HTML snippet from Section 7.3.7 to instruct respondents on the keyboard mapping and the importance of responding quickly.

Captured data

For every trial, tickStat records:

  • The stimulus that was shown.
  • The category the respondent selected.
  • The response time in milliseconds (high-resolution timing in the browser).
  • An error flag (correct vs incorrect classification).
  • The presentation order within the block.
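
To make the structure concrete, the five captured fields can be pictured as one record per trial. The field names below are illustrative assumptions, not the actual tickStat export schema:

```python
# Illustrative shape of a single IAT trial record. Field names and values
# here are assumptions for illustration, not tickStat's real export columns.
trial = {
    "stimulus": "sanderling",      # the item shown (word or image reference)
    "category_selected": "left",   # which combined category was chosen
    "response_time_ms": 642,       # latency in milliseconds
    "error": False,                # True if the first classification was wrong
    "presentation_order": 7,       # position of the trial within its block
}
```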

These per-trial microdata are exported in the SPSS-format complete report and can be used to compute the standard D-score (Greenwald, Nosek and Banaji, 2003) and any of its variants in your statistical software of choice.
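
As a minimal sketch of what that computation looks like, the snippet below implements the core of the conventional D-score: the mean latency difference between the incompatible and compatible pairings, divided by the standard deviation over all trials in both blocks, after dropping trials above 10,000 ms. It assumes you have already extracted the latency lists from the export; the function and variable names are invented for illustration.

```python
# Hypothetical sketch of the core D-score computation
# (Greenwald, Nosek & Banaji, 2003), working on raw latency lists.
from statistics import mean, stdev

def d_score(compatible_rts, incompatible_rts):
    """D = (mean incompatible - mean compatible) / SD of both blocks combined.

    Latencies are in milliseconds; trials slower than 10,000 ms are
    discarded first, as the improved scoring algorithm recommends.
    """
    comp = [rt for rt in compatible_rts if rt <= 10_000]
    incomp = [rt for rt in incompatible_rts if rt <= 10_000]
    pooled_sd = stdev(comp + incomp)  # "inclusive" SD over both blocks
    return (mean(incomp) - mean(comp)) / pooled_sd

# Toy data: slower responses in the incompatible pairing give a positive D.
compatible = [620, 580, 650, 700, 610]
incompatible = [820, 790, 900, 760, 850]
print(round(d_score(compatible, incompatible), 2))  # → 1.72
```

The full algorithm additionally scores practice and test blocks separately and averages the two D values; this sketch shows only the shared core.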

Practical tips

  • Keep instructions short and emphasise speed — a slow test produces noisy data.
  • Use blocks of 20–40 trials; long blocks fatigue respondents.
  • Desktop responses are preferred for D-score studies because of touch-vs-keyboard latency differences on mobile devices.
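
One further cleaning step worth applying alongside these tips comes from the improved scoring algorithm (Greenwald, Nosek and Banaji, 2003): exclude respondents for whom more than 10% of trials are faster than 300 ms, since such latencies suggest random key-pressing rather than genuine classification. A minimal sketch, with invented function names:

```python
# Respondent-level exclusion rule from the improved D-score algorithm:
# drop anyone with more than 10% of latencies under 300 ms.
def too_fast_fraction(latencies_ms):
    """Fraction of trials faster than 300 ms."""
    return sum(rt < 300 for rt in latencies_ms) / len(latencies_ms)

def keep_respondent(latencies_ms, threshold=0.10):
    """True if the respondent's data should be retained."""
    return too_fast_fraction(latencies_ms) <= threshold

# One fast trial out of ten is exactly at the 10% threshold, so kept.
print(keep_respondent([650, 720, 280, 610, 590, 640, 700, 560, 630, 615]))  # → True
```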