Transparency
Methodology
This page describes how experiments are conducted, what data is stored, and the limitations that affect interpretation. We believe transparency strengthens trust.
5,080 anonymous responses across all experiments
Consent & Privacy
By participating, you consent to anonymous data collection.
No personal data is stored. This means no names, email addresses, IP addresses in a form that could identify you, device fingerprints, or behavioral tracking. Participation is entirely voluntary and you may close your browser at any time without submitting data.
We store only: (1) the experiment slug, (2) your response value, (3) the experimental variant you were assigned, and (4) a randomly generated session ID that is created fresh each browser session and cannot be linked to any person.
No cookies are used. No analytics providers are embedded. This site operates without any third-party tracking.
Data Collection
Responses are submitted via a standard HTTPS POST request to our API. Each request is rate-limited to 30 responses per 10-minute window per IP address to prevent data flooding.
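The per-IP limit described above amounts to a sliding-window counter. As a minimal sketch (the platform's actual server code is not published, so the names and structure here are illustrative assumptions):

```typescript
// Hypothetical sliding-window rate limiter: at most 30 responses
// per 10-minute window per IP, as described above.
const WINDOW_MS = 10 * 60 * 1000;
const MAX_REQUESTS = 30;

// IP address -> timestamps of recent requests (in-memory for illustration)
const requestLog = new Map<string, number[]>();

function isAllowed(ip: string, now: number = Date.now()): boolean {
  const cutoff = now - WINDOW_MS;
  // Discard timestamps that have fallen out of the current window.
  const recent = (requestLog.get(ip) ?? []).filter((t) => t > cutoff);
  if (recent.length >= MAX_REQUESTS) {
    requestLog.set(ip, recent);
    return false; // window is full: reject this submission
  }
  recent.push(now);
  requestLog.set(ip, recent);
  return true;
}
```

A production service would more likely keep these counters in a shared store rather than in process memory, but the windowing logic is the same.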
The database schema for each response:
{
experimentSlug: string, // e.g. "anchoring"
variant: string?, // e.g. "high-anchor"
choiceValue: string, // the user's response
numericValue: float?, // numeric form where applicable
sessionId: string, // random, ephemeral, non-identifying
createdAt: datetime
}
Variant assignment is performed client-side using Math.random(). This produces roughly equal group sizes over time but does not guarantee exact balance in small samples.
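The client-side assignment described above can be sketched as a uniform draw over the variant list. This is an illustrative assumption, not the platform's actual code; the variant names are taken from the schema example:

```typescript
// Hypothetical client-side variant assignment using Math.random().
// Each variant is equally likely, so group sizes balance only in
// expectation, not exactly in small samples.
function assignVariant(variants: string[]): string {
  const index = Math.floor(Math.random() * variants.length);
  return variants[index];
}

// e.g. assignVariant(["high-anchor", "low-anchor"])
```

Because Math.random() is not cryptographically secure and runs in the participant's browser, a motivated user could in principle influence which condition they receive, as noted under Limitations below.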
Limitations
Self-selection bias
Participants choose to visit this platform, likely because they have a prior interest in psychology or behavioral economics. This is not a random sample of any population. Results may underestimate biases (if visitors are unusually curious and reflective) or overestimate them (if visitors deliberately probe their own limits).
Non-random sample
The participant pool is an internet convenience sample — skewed toward people with access to computers, English fluency, and higher education. Findings cannot be generalized to the global population.
Social desirability & demand characteristics
Knowing the experiment is about cognitive biases may influence responses. Participants aware of the anchoring effect, for example, may attempt to compensate, potentially attenuating or reversing the effect.
No experimental control
Unlike laboratory studies, we cannot control the environment in which participants respond: distractions, time pressure, mood, and prior task engagement all vary freely. This introduces noise that reduces statistical power.
Sample size variability
Small experiments (N < 30 per condition) should be interpreted with caution. Aggregate statistics may be unstable and confidence intervals wide. We display raw counts to help users assess reliability.
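To make the small-sample caution concrete, here is a sketch of a normal-approximation 95% confidence interval for a proportion. The platform does not state how (or whether) it computes intervals, so this function and its name are illustrative:

```typescript
// Hypothetical 95% confidence interval for a proportion using the
// normal approximation: p ± 1.96 * sqrt(p(1-p)/n), clamped to [0, 1].
function proportionCI95(successes: number, n: number): [number, number] {
  const p = successes / n;
  const margin = 1.96 * Math.sqrt((p * (1 - p)) / n);
  return [Math.max(0, p - margin), Math.min(1, p + margin)];
}

// For p = 0.6 the margin is about ±0.19 at N = 25,
// but only about ±0.05 at N = 400.
```

This is why raw counts matter: the same observed rate is far less informative at N = 25 per condition than at N = 400.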
Client-side randomization
Condition assignment uses JavaScript's Math.random(), which is pseudorandom and seeded by the browser. In theory, a determined user could manipulate their condition assignment. In practice this is unlikely to affect aggregate results meaningfully.
Replication Fidelity
Each experiment is based on a canonical paradigm from the behavioral economics literature. Where possible, we use the original stimuli or close adaptations. Deviations are noted below:
| Experiment | Original | Adaptation |
|---|---|---|
| Anchoring | Spinning wheel + estimation (Tversky & Kahneman, 1974) | Random number display; same UN Africa question |
| Framing | Asian Disease Problem (Tversky & Kahneman, 1981) | Verbatim replication |
| Loss Aversion | Mixed gambles (Kahneman & Tversky, 1979) | 3 rounds with varying stake sizes |
| Overconfidence | Calibration curves (Lichtenstein, Fischhoff & Phillips, 1982) | Short 5-question quiz; prediction slider |