Reporting
The reporting functionality is accessible through the Reports menu in the left sidebar. Clicking it opens the reports list directly with all available reports, including the Quota Report. The reporting page provides the following options:
Note: All interactive analyses now live under the Analysis menu (see Section 10): clicking it opens a hub with Contingent valuation, Discrete Choice Analysis and Interactive Dashboard. The Discrete Choice Simulator is reached from the Discrete Choice Analysis Report itself (see Section 10.4) since it consumes the model that was just estimated. The Careless Reports entry, previously in the Analysis menu, now lives in this Reports page since it produces a downloadable artefact.
9.1 Complete Report (SPSS Export)¶
Generates a comprehensive Excel file containing all survey response data. This report includes all respondent states: Complete (Done), Early Screen Out, Low Quality, and Quota Full. The file is formatted for easy import into SPSS or other statistical software. A progress bar is displayed during generation for large surveys.
Important: This report only includes respondents who advanced beyond the first step of the survey. If a respondent views the first screen but abandons without advancing, no response data is recorded and they will not appear in the report. This means the number of respondents in the report may be lower than the total number of views shown in the survey listing.
9.2 Quota Filters Report¶
Downloads an Excel report showing the current status of personalized quota filters, including how many slots have been filled and how many remain for each filter combination.
9.3 Translation Report¶
Generates an Excel file containing all configured texts in the survey with identification codes. This report is used for the manual translation workflow: download the report, translate Column 4 into Column 5, and upload the translated file to a duplicated survey. See Section 2.11 - How to Translate the Survey for the complete translation process.
9.4 Step Times Report¶
Downloads an Excel report with average response times per step. This is useful for identifying steps where respondents spend too much or too little time, which can indicate problematic or confusing questions.
9.5 Workflow Graph¶
Displays the survey's step-and-rule structure as an interactive visual graph. Features include:

- Zoom and pan navigation
- Node selection with outgoing branch highlighting
- Tooltips on branches showing the condition for each transition
- Contextual help button (?) explaining all available interactions
9.6 Choice Analysis (Multinomial Logit)¶
Provides statistical analysis of discrete choice experiment questions, including attribute importance calculations and part-worth utilities. This uses the R engine for computation.
Scope: This analysis is specifically designed and validated for choice experiments that include a Status Quo alternative (e.g. a "None of them" or "Keep current situation" option). The Status Quo position can be anywhere in the alternatives (first, middle or last — the engine reads it from the choice question's configuration). If you need to run a Choice analysis on an experiment without a Status Quo, please email tickStat support so we can enable the no-SQ variant for your survey.
Minimum sample: The multinomial logit model is only estimated when at least 5 distinct respondents have answered the choice questions in the group. Below that threshold you still see the descriptive panels (Choice Frequencies, Responses per Choice Question, Status Quo Analysis), and a warning banner replaces the model outputs. Five is a hard floor; estimates from very small samples are unstable and should be treated as a smoke test, not as final numbers.
Remembered configuration. The choices you make in the Variables for this analysis card (DIS/CONT per attribute and, in Mixed Logit, the Distribution column, Halton draws and seed) are remembered per choice experiment. Next time you open the report and pick the same group, your last setup is pre-filled automatically. The defaults from the parent choice question are never modified — this remembered configuration lives only at the analysis layer.
The configuration is saved only when you actually launch a run — clicking Run Analysis in MNL or Queue RPL Analysis in Mixed Logit. Editing the selectors and then closing the tab without running persists nothing. This way only configurations you have effectively used are remembered, and short experiments you abandon do not pollute the saved state.
Grouping choice questions with the Choice Report Group tag¶
A discrete choice experiment is often split across several blocks — for example one choice question per block with the same attributes and levels but different randomized cards. Statistically it is still one experiment and the responses must be pooled into a single conditional logit estimation.
The Choice Report Group field in each choice question's configuration controls how the Choice Analysis Report dropdown is built:
- Empty tag on all choice questions — the dropdown shows a single option labelled "Choice". Selecting it pools the responses from every choice question in the survey.
- Same non-empty tag on several questions (e.g. `choice1` on blocks 1–4) — those questions are grouped together and the dropdown shows one option per distinct tag (`choice1`, `choice2`, …). Each option pools only the questions that carry that tag.
- Mixed — tagged and untagged questions can coexist; untagged questions form the default "Choice" group while each tag value forms its own group.
Each dropdown option displays the group's question IDs for clarity — e.g. Choice (2 questions: P186, P212) or choice1 (4 questions: P10, P12, P14, P16).
All choice questions in the same group must share the same attribute definition (number of attributes, names, types and number of levels). The analysis refuses to run and returns an error if any question in the group mismatches the reference.
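The grouping rules above can be sketched as a small routine. This is an illustrative simplification, not the tickStat implementation; `build_choice_groups` and its input shape are invented for the example:

```python
from collections import defaultdict

def build_choice_groups(questions):
    """Group choice questions by their Choice Report Group tag.

    `questions` is a list of (question_id, tag) pairs; an empty tag
    puts the question in the default "Choice" group.
    """
    groups = defaultdict(list)
    for qid, tag in questions:
        groups[tag.strip() or "Choice"].append(qid)
    # Build the dropdown labels, e.g. "choice1 (4 questions: P10, P12, P14, P16)"
    return {
        name: f"{name} ({len(ids)} question{'s' if len(ids) != 1 else ''}: {', '.join(ids)})"
        for name, ids in groups.items()
    }

labels = build_choice_groups([
    ("P186", ""), ("P212", ""),                       # untagged -> default group
    ("P10", "choice1"), ("P12", "choice1"),
    ("P14", "choice1"), ("P16", "choice1"),
])
print(labels["Choice"])    # Choice (2 questions: P186, P212)
print(labels["choice1"])   # choice1 (4 questions: P10, P12, P14, P16)
```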
Loaded data panel and the Refresh button¶
Once a choice is selected in the dropdown, a Loaded data panel shows how many respondents the analysis will be run on and when the dataset was last refreshed:
- N respondents loaded — the distinct Done respondents already persisted to the choice-response cache for this group.
- Last refresh — timestamp of the most recent refresh for any of the group's questions.
- N new respondents available — Done respondents who have not yet been synced into the cache; clicking Refresh data loads only those (incremental).
The first time a choice group is refreshed, the process reads every user XML once and populates the cache — this may take several minutes for large surveys. Subsequent refreshes only pick up the respondents that completed the survey since the previous refresh, and subsequent Run Analysis clicks read directly from the cache (typically ~15 seconds for the R estimation).
Run Analysis is disabled until the cache has at least one respondent for the selected group. If no respondents are loaded, click Refresh data first.
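The arithmetic behind the panel is simple set difference. A minimal sketch, with invented names (`loaded_data_panel`, the id lists) — the real cache stores full responses, not just ids:

```python
def loaded_data_panel(done_respondents, cache):
    """Compute the numbers shown in the Loaded data panel.

    done_respondents: ids of respondents with state Done.
    cache: ids already persisted to the choice-response cache.
    """
    cached = set(cache)
    new = [r for r in done_respondents if r not in cached]
    return {"loaded": len(cached), "new_available": len(new), "to_sync": new}

panel = loaded_data_panel(["r1", "r2", "r3", "r4"], ["r1", "r2"])
# Refresh data would read only panel["to_sync"] — that is what makes it incremental
print(panel)  # {'loaded': 2, 'new_available': 2, 'to_sync': ['r3', 'r4']}
```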
Reviewing attribute types before running (DIS / CONT overrides)¶
Between the choice selector and the Loaded data panel the report shows a Variables for this analysis card that lists every attribute of the selected group's reference question together with its current variable type and its levels. Each row has a dropdown to choose the type used when estimating the model:
- DIS (discrete) — the attribute is treated as a categorical factor. The engine estimates one dummy coefficient per level (except the mid level, which is the reference with part-worth 0). Use this when the levels are categories (e.g. gender, colour, "high/medium/low" threat) or when the effect is not expected to be monotonic or linear.
- CONT (continuous) — the attribute is treated as a numeric variable. The engine estimates a single slope coefficient and the level values (as stored in the attribute definition) are used directly. Use this for genuinely numeric attributes (price in €, duration in years, counts) where you expect a monotonic linear effect — one parameter instead of k − 1 gives the model more statistical power.
Changing a dropdown highlights the row in amber and applies the chosen type only to this analysis run. The override is never saved to the survey definition: reloading the page restores each attribute to its persisted type, the XML of the mother question is not modified, and other reports or previews continue to use the original configuration. This is the safest way to experiment with DIS ↔ CONT on a survey that already has responses, since the previous restriction (no editing the mother question once responses exist) would otherwise force the researcher to edit the XML by hand.
Switching an attribute to CONT requires its levels to be numeric (e.g. 0, 30, 60). If they are not (e.g. "Low", "Medium", "High"), a small warning appears under the dropdown and the R estimation will fail with an "Analysis failed" message — no data is corrupted, just rerun with the correct type.
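Under these two settings, one attribute contributes either k − 1 dummy columns or a single numeric column to the design matrix. A sketch of that encoding (the `encode_attribute` helper is invented; the mid-level-as-reference rule follows the description above, using the middle list position for the example):

```python
def encode_attribute(levels, observed, kind):
    """Encode one attribute for one alternative in the design matrix.

    levels:   the attribute's level values, in order.
    observed: the level shown on this alternative.
    kind:     "DIS"  -> k-1 dummies, mid level as reference (part-worth 0);
              "CONT" -> the numeric level value itself (levels must be numeric).
    """
    if kind == "CONT":
        return [float(observed)]              # single column -> one slope coefficient
    mid = levels[len(levels) // 2]            # mid level is the reference
    return [1.0 if observed == lvl else 0.0 for lvl in levels if lvl != mid]

# 3-level discrete attribute: "Medium" is the reference
print(encode_attribute(["Low", "Medium", "High"], "High", "DIS"))  # [0.0, 1.0]
# Numeric attribute treated as continuous
print(encode_attribute(["0", "30", "60"], "60", "CONT"))           # [60.0]
```

This also makes the degrees-of-freedom trade-off concrete: the DIS version above costs two parameters, the CONT version one.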
Automatic loading of the last analysis¶
When you select a choice from the dropdown, the page automatically shows the most recent analysis that was run for that group (the result CSV produced by the last Run Analysis click is persisted on the server). A Last analysis: dd/mm/yyyy, hh:mm:ss indicator appears under the title. Because the model is deterministic once the response data is fixed, you only need to click Run Analysis again when new respondents have been loaded via Refresh data.
Output panels¶
After a successful analysis, the report shows the following panels:
- Choice Frequencies — percentage and absolute count of respondents that chose each alternative (A, B, C, …). Derived purely from the cached responses (does not require the R model).
- Responses per Choice Question — distinct respondents that answered each choice question in the group. Useful to verify block balance in designs split across multiple blocks (e.g. P186 and P212 of the same experiment).
- Attribute Importance — relative weight of each attribute expressed as % of the total part-worth range. The label includes the attribute's descriptive name in parentheses (e.g. `attr1 (Amenaza para especies de pescado)`).
- Part-Worths by Attribute — estimated utility contribution of each level of each attribute. For discrete attributes, the mid level is the reference (part-worth = 0) and the other levels are shown relative to it. For continuous attributes the part-worth is coefficient × level value.
- Model Coefficients — raw multinomial-logit estimates with their standard error, z-value, p-value and 95% confidence interval. Significance is marked with stars (*** p<0.01, ** p<0.05, * p<0.10). Variable names follow the convention `attrN (description) - levelLiteral` for dummies and `attrN (description)` for continuous attributes; the alternative-specific constant appears as `ASC`.
- Model Statistics — number of observations, number of alternatives, number of attributes, log-likelihood and McFadden's pseudo R-squared.
- Status Quo Analysis — shown when the choice question has a Status Quo alternative configured. Displays the percentage of respondents that picked the Status Quo in every single task and how many respondents that represents out of the group's total.
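The Attribute Importance arithmetic is standard conjoint practice: each attribute's part-worth range as a share of the summed ranges. A sketch with invented numbers (note how the continuous attribute's part-worths are coefficient × level value, per the panel description):

```python
def attribute_importance(part_worths):
    """Relative importance = each attribute's part-worth range
    as a percentage of the total range across attributes."""
    ranges = {a: max(pw.values()) - min(pw.values()) for a, pw in part_worths.items()}
    total = sum(ranges.values())
    return {a: round(100 * r / total, 1) for a, r in ranges.items()}

imp = attribute_importance({
    # discrete attribute, mid level "Medium" is the 0 reference
    "attr1": {"Low": -0.4, "Medium": 0.0, "High": 0.6},   # range 1.0
    # continuous cost: coefficient -0.03 x levels 0/30/60
    "cost":  {"0": 0.0, "30": -0.9, "60": -1.8},          # range 1.8
})
print(imp)  # {'attr1': 35.7, 'cost': 64.3}
```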
Simulate market shares¶
Below the Model Coefficients table the report shows a Simulate market shares button. Clicking it opens the Discrete Choice Simulator (see Section 10.4) pre-loaded with the coefficients of the choice group you are currently looking at. Use it to predict shares for any hypothetical product configuration without re-running the model.
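The prediction behind the simulator is the standard logit share formula, P_j = exp(V_j) / Σ_k exp(V_k), with each alternative's utility V_j built from the estimated coefficients. A sketch assuming linear-in-attributes utility, with invented coefficient values (`simulate_shares` is not a tickStat function):

```python
import math

def simulate_shares(alternatives, coefs, asc=0.0):
    """Predicted logit choice shares for a hypothetical scenario.

    V_j = sum over attributes of coef[a] * level value; the ASC is
    added to the Status Quo alternative's utility.
    """
    utils = []
    for alt in alternatives:
        v = sum(coefs[a] * x for a, x in alt["levels"].items())
        if alt.get("status_quo"):
            v += asc
        utils.append(v)
    denom = sum(math.exp(v) for v in utils)
    return [math.exp(v) / denom for v in utils]

coefs = {"quality": 0.8, "cost": -0.03}   # invented MNL estimates
shares = simulate_shares(
    [{"levels": {"quality": 1, "cost": 30}},                     # product A
     {"levels": {"quality": 0, "cost": 0}, "status_quo": True}], # Status Quo
    coefs, asc=0.5)
```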
Partial results when the sample is too small¶
If fewer than 5 respondents have been loaded, or the estimation is numerically degenerate (singular Hessian, infinite standard errors), the engine suppresses the model output and shows an amber warning: "Not enough data to estimate the model. Showing response counts and choice frequencies only." In that case you still see Choice Frequencies, Responses per Choice Question and Status Quo Analysis — just not the model-dependent panels (Importance, Part-Worths, Coefficients, Model Statistics).
9.7 Mixed Logit / Random Parameters Logit¶
The same Choice Analysis Report offers a second, more sophisticated estimator alongside the Multinomial Logit described above. A Model selector at the top of the report lets the researcher pick between the two:
- Multinomial Logit — the default. Assumes all respondents share the same preferences, estimates one coefficient per attribute (or per level for discrete attributes), and runs in seconds. Ideal for pilots and sanity checks.
- Mixed Logit / Random Parameters Logit (RPL) — the publication-grade estimator used in Ecological Economics, JEEM, Land Economics, Environmental & Resource Economics and similar journals. Each coefficient is estimated as a distribution over respondents rather than a single point: the report shows an estimated mean (μ) and standard deviation (σ) per parameter, together with the simulated WTP distribution and a density visual of each attribute's preference heterogeneity. The engine is the Apollo R package.
Status Quo and minimum sample. The same restrictions documented for the Multinomial Logit also apply here: the estimator is validated for Choice designs that include a Status Quo alternative, and it only runs when at least 5 distinct respondents have answered the choice questions in the group. For designs without a Status Quo, email tickStat support and we will enable the appropriate estimation for your survey.
When Mixed Logit is selected the Variables for this analysis card gains two extra columns: Distribution and Cost. The card is collapsed by default (with a "Click to expand and edit" button on the right); open it only if you want to change defaults:
- Normal — default for most attributes. The coefficient can take any real value across respondents; σ measures how much preferences vary.
- Lognormal — applied by default to the cost attribute. Forces the coefficient to be negative on every single respondent, which is economically sensible and avoids the classical Mixed Logit pathology where a respondent with a cost coefficient near zero produces an exploding WTP.
- Triangular — bounded symmetric distribution on [μ-σ, μ+σ]. The half-width σ is the parameter Apollo estimates (the marginal standard deviation is σ/√6). Useful when you need to keep a coefficient in a finite range — for example, an attribute whose effect should not flip sign across the population.
- Fixed — degenerate distribution (no σ estimated); the parameter behaves like an MNL coefficient.
The Cost column is a single-select radio: mark which attribute represents the monetary cost of each alternative. This drives the Willingness-to-Pay distribution panel — each non-cost parameter's draws are divided by the cost draws to derive per-respondent WTP. The radio is pre-checked using a heuristic that scans the attribute name and description for cost, coste, price, precio, wtp or pago (case-insensitive), so surveys whose technical names are placeholders like attr6 but whose description is "Coste anual adicional" still work out of the box. Move the radio if the heuristic picked the wrong attribute. If no attribute matches the heuristic and you do not pick one manually, the Willingness-to-Pay panel does not render and a warning row is logged in the run's results.
The card header summarises the current state — "7 attributes, defaults" normally, or "7 attributes · 2 overridden" in amber when you've changed something (type, distribution or cost selection) so it is obvious at a glance.
Three further inputs sit next to the Variables card and apply only to Mixed Logit runs:
- Halton draws per respondent — how many simulation draws Apollo uses to approximate the mixed-logit likelihood. See the table below.
- Seed — RNG seed used to generate the Halton sequence. Defaults to 13 so two consecutive runs return identical estimates. Change it (e.g. 42, 7, …) to confirm your model is robust to alternative random starts; share it with collaborators so they reproduce your numbers exactly.
- Allow correlation between random parameters — when checked, the NORMAL random parameters are estimated jointly with a full variance-covariance matrix via Cholesky decomposition (publication-grade Mixed Logit, the form used in JEEM, EERE, Land Economics and similar journals). Adds K(K-1)/2 extra parameters where K is the number of NORMAL random parameters, so convergence is slower — recommended draws are 500 or more when correlation is on. Lognormal cost and Triangular parameters stay independent regardless of this setting (they have different distribution families and would require a copula-style construction to be correlated, which is outside the scope of standard Mixed Logit). When correlation is on, a new Random Parameter Correlations panel appears below Preference Heterogeneity showing the marginal ρ for every pair of NORMAL parameters.
Like the rest of the Variables for this analysis card, draws, seed and the correlation toggle are remembered for next time you open this experiment.
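For intuition on what a "Halton draw" is: the base-b Halton sequence fills (0, 1) far more evenly than pseudo-random numbers, which is why a few hundred draws suffice. A minimal sketch of the raw recurrence (Apollo's actual generation additionally skips and scrambles initial points and maps draws through each distribution, none of which is shown here):

```python
def halton(index, base):
    """The index-th element of the base-b Halton sequence, in (0, 1)."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

# Base 2 systematically halves the remaining gaps: 1/2, 1/4, 3/4, 1/8, ...
print([halton(i, 2) for i in range(1, 5)])  # [0.5, 0.25, 0.75, 0.125]
```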
Expected runtimes on a typical survey with ~10 random parameters and a few hundred respondents:
| Draws | Typical runtime | Use |
|---|---|---|
| 50 | 2–3 minutes | Smoke test / quick sanity check |
| 200 | 8–12 minutes | Early iteration of model specification |
| 500 | 20–30 minutes | Pilot-quality estimates |
| 1000 | ~1 hour | Near-publication quality |
| 2000 | ~2 hours | Publication / journal submission |
More draws increase accuracy, and runtime grows roughly linearly with the draw count. Below 500 the σ standard errors can be noisy; above 2000 the improvement becomes marginal.
How the job runs
Clicking Queue RPL Analysis submits the job to a FIFO queue. Only one Mixed Logit estimation runs at a time across the whole server — others wait with their queue position shown on screen. The Rscript child process runs at low operating-system priority so respondents answering surveys in parallel do not notice any slowdown.
The "Mixed Logit jobs for this group" panel shows a live list of recent and in-flight jobs for the selected choice group. Each row carries:
- Job id (#NN), status badge (QUEUED / RUNNING / COMPLETED / FAILED / CANCELLED)
- Elapsed or queued-position info, plus a one-line summary of the configuration ("50 draws", "500 draws · Lognormal: attr6 · 2 type overrides") so different runs are easy to tell apart
- Cancel button for queued/running jobs, or View results for completed ones
You can close the browser and come back later — when you re-open the page, Mixed Logit mode is re-selected automatically if a job is in flight, the list reappears with the live elapsed-time counter, and any completed run is available via its View results button.
Output panels (in addition to the shared descriptive panels)
When you click View results on any COMPLETED job, the page shows the estimates for that specific run (not just the most recent one). Panels:
- Choice Frequencies, Responses per Choice Question, Status Quo Analysis — descriptive, identical to the MNL view.
- Model Coefficients (Mixed Logit — Mean & SD) — per-parameter μ and σ with standard errors, z, p and the significance stars.
- Preference Heterogeneity — for each attribute, a density sparkline of the estimated distribution plus the fraction of respondents with the economically "correct" sign (for Lognormal cost this is 100% by construction).
- Willingness-to-Pay Distribution — simulated per-respondent WTP (attribute coefficient divided by the respondent's cost coefficient), summarised as mean, median and 95% interval.
The MNL-only panels (Attribute Importance, Part-Worths by Attribute, Model Statistics in MNL format) are hidden when viewing a Mixed Logit result — those numbers belong to a different model and would be misleading alongside the RPL estimates.
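The WTP panel's arithmetic can be sketched with plain draws. Everything here is invented for illustration (the engine uses Apollo's estimated draws, not `random.gauss`); the sign flip in `wtp_summary` is the usual convention that makes WTP positive for a desirable attribute when the cost coefficient is negative:

```python
import math
import random
import statistics

random.seed(13)  # reproducible, in the spirit of the Seed input

def wtp_summary(attr_draws, cost_draws):
    """Per-respondent WTP = attribute draw / (-cost draw), summarised as
    mean, median and the central 95% interval, as in the report panel."""
    wtp = sorted(a / -c for a, c in zip(attr_draws, cost_draws))
    lo = wtp[int(0.025 * len(wtp))]
    hi = wtp[int(0.975 * len(wtp)) - 1]
    return {"mean": statistics.mean(wtp),
            "median": statistics.median(wtp),
            "ci95": (lo, hi)}

attr = [random.gauss(0.6, 0.3) for _ in range(10_000)]              # Normal attribute
cost = [-math.exp(random.gauss(-3.0, 0.4)) for _ in range(10_000)]  # Lognormal cost, always < 0
summary = wtp_summary(attr, cost)
```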
Robustness and reversibility
- A Mixed Logit job started before a server restart is automatically marked as failed when the server comes back up (with a clear "Server restarted during estimation" message), so the queue never stays blocked by ghost jobs.
- Every job's result is persisted on the server with its full configuration — you can run many specifications in a row and compare them side by side via the job list.
- Switching the model selector back to Multinomial Logit and re-running reproduces the original MNL numbers bit-for-bit; the RPL flow never modifies the survey definition, the cached response data, or the MNL results.
9.8 Interactive Dashboard¶
Displays survey results as interactive charts and graphs with cross-filtering capability. Click on any value in a chart to filter all other visualizations. Supports radio, checkbox, matrix (radio, checkbox, dropdown, numeric), dropdown multiple, and predefined-list (listaValores) question types — list-based demographics such as country, region, or province are now available both as charts and as cross-filter chips. Includes an Export PNG button to download the dashboard as an image.
The dashboard exposes two sync controls:
- Refresh Data runs an incremental sync — it processes only respondents whose surveys have been completed since the last sync. Use it to bring fresh answers in without reprocessing what is already there.
- Reset Data wipes all dashboard data for the survey, leaving the dashboard empty. After confirming the reset, press Refresh Data to repopulate from the first respondent. Use this when a backend update adds support for new question types (so already-synced respondents include the new fields) or when the data looks inconsistent.
9.9 Access Statistics¶
Displays respondent traffic data for the survey, including total accesses, completed surveys, in-progress count, early screen-outs, quota full, and quality terminates. Shows daily access trends and hourly distribution for the most recent day.
9.10 Attention Index Report¶
Downloads an Excel report with all Attention Index ratings for respondents in the survey. This report is available when the Attention Index feature is enabled in the survey settings. The report includes:

- Respondent ID and Step: Identifies which respondent and step each rating belongs to
- Rating and Justification: The attention score (0-10) and the LLM's reasoning
- Flags: Any attention flags detected (e.g., rushing, straightlining)
- Interaction Metrics: Click count, scroll count, focus/blur times, response changes
- Mouse/Touch Metrics: Mouse distance, speed, pauses, hover on options (desktop) or touch count, distance, taps (mobile)
- LLM Metrics: Latency, token usage, and model used for the evaluation
Respondent Gauge with Trend Indicator¶
When the Attention Index gauge is enabled, respondents see a small floating semicircular gauge that shows their average attention score (0-10). The gauge includes a trend indicator that compares the latest step rating with the previous one:

- ▲ Green arrow: The last evaluated step scored higher than the previous one
- ▼ Red arrow: The last evaluated step scored lower than the previous one
- ● Yellow dot: The last evaluated step scored the same as the previous one
This visual feedback encourages respondents to maintain or improve their attention throughout the survey.
Gauge Position¶
The gauge position is configurable from the survey Settings (Quality Control section). There are two available positions:
- Bottom-right corner (default): The gauge floats in the bottom-right corner of the screen. On desktop, it automatically shifts up when the Next/Finalizar button is visible to avoid overlapping.
- Above Next button: The gauge is displayed inline, directly to the left of the navigation buttons (Previous / Next). On mobile devices, the Previous button is shortened to an arrow icon (←) to save space.
To change the position, toggle the "Gauge position: above Next button" checkbox in Settings. When unchecked, the gauge appears in the bottom-right corner.
Mobile Layout¶
On mobile devices (screens narrower than 768px), the gauge automatically adapts:

- In above-button mode, it appears inline with the navigation buttons, aligned to the left.
- The gauge size and border match the navigation buttons for a consistent look.
Internationalization¶
The gauge label ("Your attention") is automatically displayed in the respondent's language. The following languages are supported: Spanish, English, Catalan, Basque, Galician, French, German, Italian, Polish, Portuguese, Swedish, Greek, Croatian, and Montenegrin.