Evaluation 1 of "Willful Ignorance and Moral Behavior"

Evaluation of "Willful Ignorance and Moral Behavior" for The Unjournal.

Published on Aug 08, 2024

Abstract

The paper is professionally written and the experiment is well-designed. The main result – that avoiders of a VR animal-advocacy message exhibit larger treatment effects on consumption than information seekers – is novel and important. I find the claims about the dynamics of the treatment effect less well supported by the evidence; they are more speculative.

Summary Measures

We asked evaluators to give some overall assessments, in addition to ratings across a range of criteria. See the evaluation summary “metrics” for a more detailed breakdown of this. See these ratings in the context of all Unjournal ratings, with some analysis, in our data presentation here.

                                      Rating    90% Credible Interval
Overall assessment                    90/100    75 - 96
Journal rank tier, normative rating   4.0/5     3.5 - 4.3

Overall assessment: We asked evaluators to rank this paper “heuristically” as a percentile “relative to all serious research in the same area that you have encountered in the last three years.” We requested they “consider all aspects of quality, credibility, importance to knowledge production, and importance to practice.”

Journal rank tier, normative rating (0-5): “On a ‘scale of journals’, what ‘quality of journal’ should this be published in? (See ranking tiers discussed here.)” Note: 0 = lowest/none, 5 = highest/best.

See here for the full evaluator guidelines, including further explanation of the requested ratings.

Written report

The authors conduct an experiment on the effect of a virtual-reality (VR) video depicting the course of a farmed pig’s life, from birth to slaughter. Student subjects have their demand for this video elicited with real stakes, in both a positive and a negative domain. Subjects are then conditionally randomly assigned to watch either the treatment VR video or a control VR video. There are two main outcome variables. After watching the video, subjects choose whether they would like to receive a voucher for a meat meal or a vegetarian meal, redeemable at the campus canteen. Subjects’ meal choices are also observed from the beginning of the academic year to March (the onset of the Covid pandemic); the analysis period runs from two weeks prior to three weeks post-intervention. The authors find that the video reduces the preference for the meat voucher by 15.6 percentage points (pp). The key result is that this effect is driven by the avoiders, with an effect on them of 34.1pp (relative to a non-significant 7.7pp for seekers). Results are consistent, if less precise, using the consumption data.

Finally, the authors present evidence on the mechanism. They show that avoidance is uncorrelated with beliefs about animal intelligence or factory farming, and with preferences for animal welfare or meat. Information shifts beliefs slightly. But avoiders self-report much more sensitivity to violence and experience more negative affect from the video. The evidence suggests that the intervention operates through an emotion channel.

There is much to like about this paper. I think understanding information preferences in the domain of farmed-animal welfare is important because information is readily available and actionable, yet there seems to be very little action on the part of consumers and voters. The experiment is well-designed, and the data set is substantial. The main result, that avoiders exhibit larger treatment effects, is consistent with several behavioral-economic models but not with “rational”/expected-utility models, so it is of basic scientific interest. It is also of policy interest because it suggests that pecuniary incentives may be required for an effective messaging campaign. The authors also push the claim that the field effects last only about one week and then attenuate. This may or may not be true, but I do not find the evidence on attenuation particularly convincing, as I discuss below. The paper is already well-polished, and I think it is ready to be published in a high-profile economics journal.

Comments

[Field results, dynamics, statistical methods, sample window]

  1. Perhaps my biggest concerns are about the field results. I believe that there is an effect in the time window, but I do not understand the choice of this window. Why not use the full sample window? Won’t this have greater power?

    • a. Why not use a panel regression or a difference-in-differences estimation where the observation is whether a specific meal is meat or not? This is the approach used in Jalil, Tasoff, and Vargas-Bustamante (2020, 2023)[1][2].

    • b. The analysis appears underpowered to estimate any sort of dynamics. How many meals are in the analysis window? How many meals are in the entire data set? It would help to have this reported in Table A.14.

      • i. If there are on average 3 meals in the two weeks prior to the intervention, that implies about 1.5 meals/week. If there are about 28 weeks in the sample with 261 subjects, that implies about 10,000 meals in the whole sample. However, a window five weeks long implies a sample of only about 2,000 meals. I think it is difficult to estimate dynamics from a sample so small.

        • 1. Indeed, Figs. 3 and 4 show that the 95% confidence intervals are quite wide, and the immediate post-treatment mean is within the confidence intervals of the mean a few weeks later.

        • 2. Jalil et al. (2020)[1] came to the incorrect conclusion from their larger sample (~50,000 meals) that the treatment effects attenuated over time; their 2023 follow-up[2] found no attenuation when the window was expanded to three years (~100,000 meals).

      • ii. What would happen if, rather than using a moving average, which counts the same observations multiple times, you estimated treatment effects for specific post-windows: week 1, week 2, week 3, or month 1 vs. month 2? (A sketch of this approach appears after this comment.)

    • c. That said, I still think there is value in the field study. I am surprised that there is any significant effect from such a short, 5-minute intervention. I think this shows robustness, as well as showing that the effects persist outside of the lab. It is truly impressive that such a quick intervention can last for weeks. That is my takeaway; I am less convinced by the claims about the dynamics.
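
To make 1.a and 1.b concrete, here is a minimal sketch of a meal-level difference-in-differences with week-specific treatment effects, run on simulated data built from the back-of-envelope rates in 1.b.i. Every variable name and the data-generating process are my own assumptions, not the authors’ data or specification; the point is only to illustrate the suggested estimator.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Back-of-envelope meal counts from the rates cited in 1.b.i:
meals_per_week = 3 / 2                        # ~3 meals in the two pre-weeks
n_subjects, n_weeks = 261, 28
print(f"full sample:   ~{meals_per_week * n_weeks * n_subjects:,.0f} meals")
print(f"5-week window: ~{meals_per_week * 5 * n_subjects:,.0f} meals")

# Simulated meal-level panel standing in for the real canteen data:
# weeks -2 and -1 are pre-intervention, weeks 0-2 are post-intervention.
rows = []
for sid in range(n_subjects):
    treated = int(rng.integers(0, 2))
    for week in range(-2, 3):
        for _ in range(rng.poisson(meals_per_week)):
            p_meat = 0.6 - 0.15 * treated * (week >= 0)  # assumed 15pp effect
            rows.append(dict(subject=sid, treated=treated, week=week,
                             is_meat=int(rng.random() < p_meat)))
df = pd.DataFrame(rows)

# Difference-in-differences with week-specific treatment effects
# (each meal enters once, unlike a moving average over overlapping windows);
# standard errors are clustered by subject.
m = smf.ols("is_meat ~ treated * C(week)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["subject"]})
print(m.params.filter(like="treated:"))
```

Week-specific coefficients estimated this way use each observation exactly once, so their confidence intervals give a direct read on whether the sample can actually distinguish week-1 effects from week-3 effects.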

    [Belief, emotion and preference questions]

  2. The battery of belief, emotion, and preference questions is a valuable addition to the paper. I often wonder how much interventions such as this shift behavior through the belief channel vs. a more emotional/attentional channel. I think this paper presents very nice evidence on the latter (PANAS and distaste for violence). This stands in contrast to other studies in this area, which focus on information content that is less emotionally charged: Schwitzgebel, Cokelet, and Singer (2020, 2021)[3][4] and Jalil, Tasoff, and Vargas-Bustamante (2020, 2023)[1][2].

    [IPW estimator vs standard approach]

  3. As far as design goes, I am not familiar with the IPW [inverse probability weighting] estimator, but I understand that the assignment is random conditional on WTP (but unbalanced). I wonder how well this method performs compared to alternative designs in experimental economics in which assignment is random for a large fraction of subjects and […] a small fraction (usually 5 or 10%) are placed in an “incentive-compatible group,” where they get assignment based on choices and chance. Usually, this IC group is thrown out. I wonder which method is more efficient. Or perhaps the obvious best approach is to use the above method (random assignment) and apply the IPW to the IC group only? (A sketch of the IPW estimator appears below.)
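
For concreteness, here is a minimal sketch of the IPW idea under the design described above: assignment is random conditional on elicited WTP, so each observation is reweighted by the inverse of its estimated assignment probability. The toy data and all names below are my own assumptions, not the authors’ implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def ipw_ate(df: pd.DataFrame) -> float:
    """Hajek-style ATE: reweight each unit by 1 / P(own assignment | WTP)."""
    X = sm.add_constant(df[["wtp"]])
    p = sm.Logit(df["treated"], X).fit(disp=0).predict(X)  # propensity score
    w_t = df["treated"] / p              # treated units get weight 1/p
    w_c = (1 - df["treated"]) / (1 - p)  # control units get weight 1/(1-p)
    return float((w_t * df["y"]).sum() / w_t.sum()
                 - (w_c * df["y"]).sum() / w_c.sum())

# Toy data: assignment probability rises with WTP, so the raw groups are
# unbalanced, but reweighting recovers the (assumed) true effect of 0.3.
rng = np.random.default_rng(1)
wtp = rng.normal(size=500)
treated = (rng.random(500) < 1 / (1 + np.exp(-wtp))).astype(int)
y = 0.3 * treated + 0.5 * wtp + rng.normal(size=500)
print(ipw_ate(pd.DataFrame(dict(wtp=wtp, treated=treated, y=y))))
```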

References

[1] Jalil, A. J., Tasoff, J., & Bustamante, A. V. (2020). Eating to save the planet: Evidence from a randomized controlled trial using individual-level food purchase data. Food Policy, 95, 101950.

[2] Jalil, A. J., Tasoff, J., & Bustamante, A. V. (2023). Low-cost climate-change informational intervention reduces meat consumption among students for 3 years. Nature Food, 4(3), 218-222.

[3] Schwitzgebel, E., Cokelet, B., & Singer, P. (2020). Do ethics classes influence student behavior? Case study: Teaching the ethics of eating meat. Cognition, 203, 104397.

[4] Schwitzgebel, E., Cokelet, B., & Singer, P. (2021). Students eat less meat after studying meat ethics. Review of Philosophy and Psychology, 14(1), 113-138.

Evaluator details

  1. How long have you been in this field?

    • 10-15 years

  2. How many proposals and papers have you evaluated?

    • In the vicinity of 100
