Abstract

This paper presents what we believe to be the most comprehensive suite of comparison criteria for multinomial discrete-choice experiment elicitation formats to date. We administer a choice experiment focused on ecosystem-service valuation to three independent samples, one per elicitation format: single-choice, repeated-choice, and best-worst. We test whether results differ in parameter estimates, scale factors, preference heterogeneity, status-quo/action bias, attribute non-attendance, and the magnitude and precision of welfare measures. Overall, we find very limited evidence of differences in attribute parameter estimates, scale factors, and attribute increment values across elicitation treatments. However, we find significant differences in status-quo/action bias across elicitation treatments, with repeated-choice resulting in greater proportions of “Yes” votes and, consequently, higher program-level welfare estimates. We also find that single-choice yields substantially less precise welfare estimates. Finally, we find significant differences in attribute non-attendance behavior across elicitation formats, although there appears to be little consistency in class shares even within a given elicitation treatment.
