Discussion paper

DP18794 How Much Should We Trust Observational Estimates? Accumulating Evidence Using RCTs with Imperfect Compliance

The use of observational methods remains common in program evaluation. How much should we trust these studies, which lack clear identifying variation? We propose adjusting confidence intervals to incorporate the uncertainty due to observational bias. Using data from 44 development RCTs with imperfect compliance (ICRCTs), we estimate the parameters required to construct our confidence intervals. The results show that, after accounting for potential bias, observational studies have low effective power. Using our adjusted confidence intervals, a hypothetical observational study with infinite sample size has a minimum detectable effect size of over 0.3 standard deviations. We conclude that, given current evidence, observational studies are uninformative about many programs that in truth have important effects. There is a silver lining: collecting data from more ICRCTs may help to reduce uncertainty about bias and increase the effective power of observational program evaluation in the future.
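The key quantitative point is that once bias uncertainty enters the confidence interval, the minimum detectable effect no longer shrinks to zero as the sample grows. The sketch below is a minimal, stylized illustration of that logic, not the paper's exact construction: it assumes the observational bias can be summarized as a normal draw with mean `bias_mean` and standard deviation `bias_sd` (parameters one might estimate from a collection of ICRCT benchmarks), and the function names and the 0.11 value used in the example are hypothetical.

```python
import numpy as np
from scipy.stats import norm


def bias_adjusted_ci(beta_hat, se, bias_mean, bias_sd, alpha=0.05):
    """Widen a conventional CI to reflect uncertainty about observational bias.

    Stylized assumption: bias is an independent normal draw with mean
    `bias_mean` and standard deviation `bias_sd`, so sampling and bias
    uncertainty add in quadrature. Illustrative only.
    """
    z = norm.ppf(1 - alpha / 2)
    total_sd = np.sqrt(se**2 + bias_sd**2)
    center = beta_hat - bias_mean
    return center - z * total_sd, center + z * total_sd


def infinite_sample_mde(bias_sd, alpha=0.05, power=0.80):
    """Minimum detectable effect as se -> 0: the floor set by bias uncertainty alone."""
    return (norm.ppf(1 - alpha / 2) + norm.ppf(power)) * bias_sd


# With a (hypothetical) bias SD of about 0.11 standard deviations,
# the infinite-sample MDE is already roughly 0.3 standard deviations.
print(infinite_sample_mde(bias_sd=0.11))                      # ~0.31
print(bias_adjusted_ci(beta_hat=0.15, se=0.05,
                       bias_mean=0.0, bias_sd=0.11))          # wide interval despite small se
```

The design point the sketch makes is that `se` can be driven to zero by more data, but `bias_sd` cannot; only more evidence on bias itself (e.g. additional ICRCTs) tightens that component.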

Citation

Bernard, D, G Bryan, S Chabe-Ferret, J de Quidt, J Fliegner and R Rathelot (2024), 'DP18794 How Much Should We Trust Observational Estimates? Accumulating Evidence Using RCTs with Imperfect Compliance', CEPR Discussion Paper No. 18794. CEPR Press, Paris & London. https://cepr.org/publications/dp18794