Project Leads: Hengchen Dai & Silvia Saccardo

Nudge Unit Team: Daniel Croymans, Sitaram Vangala

Collaborators: Maria A Han, Naveen Raja

Project description: Field experimentation and behavioral science have the potential to inform policy. Yet many initially promising interventions show substantially lower efficacy at scale, reflecting the broader problem that scientific findings are unstable across contexts. We identify key factors that explain variation in estimated intervention efficacy across evaluations and help policymakers better predict behavioral responses to interventions in their own settings. We leverage data from (1) 123 randomized controlled trials (RCTs), involving over 20 million people, that were conducted by either academics or a government agency to evaluate nudges, and (2) two RCTs (N=187,134 and N=149,720) that we conducted to nudge initial COVID-19 vaccinations at UCLA Health. Across these datasets, we find that nudges' estimated efficacy tends to be smaller (1) among individuals with low (vs. high) baseline motivation to engage in the target activity, a pattern that is masked when focusing only on average effects, and (2) when outcomes are defined more broadly (vs. narrowly) and measured over a longer (vs. shorter) horizon. These findings help reconcile seemingly inconsistent evidence in the literature, help explain the recent finding that interventions evaluated by academics (vs. government agencies) show substantially larger effect sizes on average, and have implications for selecting and scaling interventions.