Description
Evaluation Summary and Metrics: "Willful Ignorance and Moral Behavior" for The Unjournal.
This study is an outstanding piece of work that will become a major reference in the empirical literature on information avoidance and meat consumption.

First, the authors make a highly valuable contribution to the economics discipline and, in particular, to research on dietary transitions, by determining the causal impact of information provision on individuals conditional on their a priori willingness to become informed. This is an important research question both for economics in general (where information avoidance has been discussed intensively over the past five years) and for dietary transitions, where researchers and society struggle to induce dietary changes (changes that are needed in light of the environmental, health, and ethical issues arising from large-scale meat consumption in developed countries).

Second, the authors came up with a lab-and-field design with real consumption choices, which has been relatively rare in the empirical literature on dietary change. Following individuals outside the lab is a key element for considering issues like displacement or long-term effects. In addition, looking at actual dietary choices is central, as the attitude-behavior gap in nutrition is a major limiting factor for this research literature. Here, the authors also show that the effect of watching the video on actual consumption choices is short-lived.

Third, on the estimation side, the authors propose a neat empirical strategy to elicit the conditional impact (addressing the issue of self-selection), discuss potential problems at length (e.g., displacement of meat consumption), clearly lay out the deviations from their pre-analysis plan, and carefully think through numerous design issues (e.g., the alternative video).

Overall, I am very supportive of this work.
We asked evaluators to give some overall assessments, in addition to ratings across a range of criteria. See the evaluation summary “metrics” for a more detailed breakdown of this. See these ratings in the context of all Unjournal ratings, with some analysis, in our data presentation here.
| Metric | Rating | 90% Credible Interval |
|---|---|---|
| Overall assessment | 88/100 | 83 - 93 |
| Journal rank tier, normative rating | 4.7/5 | 4.4 - 5.0 |
Overall assessment: We asked evaluators to rank this paper “heuristically” as a percentile “relative to all serious research in the same area that you have encountered in the last three years.” We requested they “consider all aspects of quality, credibility, importance to knowledge production, and importance to practice.”
Journal rank tier, normative rating (0-5): “On a ‘scale of journals’, what ‘quality of journal’ should this be published in? (See ranking tiers discussed here.)” Note: 0 = lowest/none, 5 = highest/best.
See here for the full evaluator guidelines, including further explanation of the requested ratings.
This work addresses the question of information avoidance related to meat consumption. More precisely, it analyzes to what extent individuals who are reluctant to become informed about farmed animals’ rearing conditions change their meat consumption when they are actually exposed to the information they seek to avoid. The authors investigate this question by offering participants in a lab experiment (N=330) the possibility of watching a video showing the rearing conditions of intensively farmed pigs and eliciting their reservation price for agreeing to watch it. Using a multiple-price-list design with random price selection, they identify the causal effect of watching the video at different reservation prices. The authors analyze the conditional treatment effect for both in-lab meal choices (a voucher) and out-of-lab meal choices (university canteens). They provide evidence suggesting that information avoiders are more likely to change their meat consumption after exposure to the video than information seekers.
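To fix ideas, my reading of this identification strategy can be sketched as follows. Note that the price grid, the function names, and the monotone-choice assumption below are all illustrative inventions of mine, not the paper's actual implementation:

```python
import random

# Illustrative price grid (EUR forgone for watching the animal-farming
# video instead of the paid alternative) -- assumed values, not the paper's.
PRICES = [0, 1, 2, 4, 8]

def elicit_reservation(choices):
    """choices[p] is True if the participant agrees to watch the video
    when doing so forgoes a payment of p. Assuming monotone choices,
    this returns the lowest forgone payment at which the participant
    refuses, i.e. an upper bound on their reservation price
    (None if they always agree)."""
    for p in PRICES:
        if not choices[p]:
            return p
    return None

def realize(choices, rng=random):
    """One price is drawn at random and the participant's stated choice
    at that price is implemented, which makes every row of the price
    list incentive-compatible."""
    p = rng.choice(PRICES)
    return p, choices[p]
```

Because the implemented price is drawn at random, whether a participant with a given reservation price actually ends up watching the video is exogenous, which is what allows a causal effect conditional on the reservation price (i.e., conditional on being an avoider or a seeker) rather than a self-selected comparison.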
While I find the work extremely well done, I report here some comments that arose while reading the manuscript. I hope that some of them can help the authors improve their manuscript or provide ideas for their future work.
My first major comment concerns one of the core elements of the paper, namely the willingness-to-pay (WTP) for watching the video. At first, I did not understand that the prices were relative, i.e., opportunity costs of watching the video. The authors write in the body of the manuscript that these are ‘relative prices’, but what they mean by that became clear only after looking at the instructions.
While this is not a concern for the validity of the paper, I have two behavioral concerns about the wording here. First, I think it is important to underline in the main body of the manuscript that the prices are opportunity costs. I suspect participants would give different answers if they had to pay EUR 8 to avoid watching the video about animal farming, compared to the current situation where they simply would not gain EUR 8 if they decided to watch the alternative video. Actively paying money to avoid watching a video is a more active behavior; it relates to the status quo and loss aversion, and the initial endowment would also matter here. The experimental design captures information avoidance as a more passive phenomenon than readers might understand from the current wording.
Second, I am not sure that what the authors measure is a WTP. Here, the authors offer participants some amount of money if they agree to watch a video. This is much more like compensating people for doing a task than asking them to actively pay to watch the video. Thus, I feel this is closer to a willingness-to-accept (WTA) than a WTP. I think this distinction is particularly important in the case of information acquisition, where different behaviors are at stake: actively looking for information, passively accepting information, actively avoiding information (and possibly passively avoiding information, if that makes sense).
Again, these questions do not affect the results of the paper, but I see them as important to understand what is measured and what we learn from it.
My second major comment relates to beliefs and belief updating. First of all, the belief items are not elicited as precisely as the participants’ behaviors (which is fine given that the focus is on behaviors). However, I would suggest being more careful when discussing the beliefs in light of this. First, the beliefs are self-reported by the participants. The authors cite one of my papers (Espinosa & Stoop, 2021).[1] As we show in that work, there are significant differences between incentivized and non-incentivized reported beliefs in the case of meat consumption. If people engage in motivated reasoning after watching the video to limit cognitive dissonance, if the video makes the belief question more salient, or if, on the contrary, cognitive dissonance becomes impossible after watching the video, the authors might misestimate the treatment effect on beliefs.
Second, I also think that the belief items are relatively vague, such that it is unclear what conclusions we can draw from them. Note that I did not find the precise wording of the belief questions in the paper or in the PAP (https://www.socialscienceregistry.org/trials/5015). I think that evaluating beliefs only on a 1-to-5 Likert scale about the ‘pigs’ living conditions’ is not the most effective design if we really care about understanding what cognitive process happens here and what participants learn from the video. These questions seem to reflect a relatively general evaluation of the pigs’ welfare rather than accurately assessing knowledge of their living conditions. While I do not see this point as a threat to the validity of the paper, I would have appreciated the authors mentioning this issue and acknowledging the possible limits on what we can learn from the results.
Third, I did not understand from the manuscript the actual design regarding the beliefs. On page 18, in the first paragraph, the authors discuss the difference in beliefs between information seekers and information avoiders. They say that the average belief deteriorated with the video and that the difference in belief updating between the two groups is not statistically significant. However, the authors said on page 13 that they asked the belief and preference questions before watching the video. So, how did the authors evaluate the change in beliefs? Was this question asked twice (within-subject) or conditional on exposure (between-subject)?
Assuming the authors evaluated the beliefs twice, I have some concerns here. One issue is that most participants are distributed at the highest level of the Likert scale on this question (about 70% of participants report the maximum value, looking at Figure A5). When assessing a difference between treatment groups or a treatment effect, ceiling effects matter because they can lead to considerably underestimating the difference. I would suggest using a Tobit model here to take this issue into account. (I assume there is no issue with combining it with inverse probability weighting.) Another related issue concerns the difference in beliefs. The authors write that the difference in beliefs is 0.15 for information avoiders and 0.20 for information seekers (page 18). Note, however, that this difference is non-negligible (it is about 33% larger for seekers). The lack of statistical significance does not mean that there is no difference (the well-known motto: absence of evidence is not evidence of absence). This is particularly true for underpowered tests, and, as I mentioned above, that is likely to be the case here because of ceiling effects. If we look at Table A9, we see that the average beliefs are 4.69 for information avoiders and 4.59 for information seekers. It seems that information seekers have more room to update their beliefs on this Likert scale than information avoiders (because of the ceiling effect).
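The attenuation mechanism is easy to see in a small simulation. The numbers below are purely illustrative (group means loosely inspired by Table A9, everything else invented): both groups receive an identical latent belief update, but the group starting closer to the top of the 1-to-5 scale shows a smaller observed update because more of its update is censored at 5.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent (uncensored) baseline beliefs: avoiders start slightly higher
# than seekers. Means/SDs are illustrative assumptions, not the data.
avoiders_pre = rng.normal(4.7, 0.5, n)
seekers_pre = rng.normal(4.4, 0.5, n)

true_update = 0.4  # identical latent belief update for both groups

def observe(latent):
    """Map latent beliefs onto the 1-to-5 Likert scale (censored at 5)."""
    return np.clip(latent, 1, 5)

obs_update_avoiders = (observe(avoiders_pre + true_update)
                       - observe(avoiders_pre)).mean()
obs_update_seekers = (observe(seekers_pre + true_update)
                      - observe(seekers_pre)).mean()
# Both observed updates are attenuated below 0.4, and more so for the
# group nearer the ceiling, despite identical latent updates.
```

A Tobit model treats the observed 5s as censored rather than exact values and can recover the latent difference, under the model's distributional assumptions.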
Overall, I would suggest being more careful about the conclusions the authors draw about the beliefs (e.g., ‘Hence, differences in baseline beliefs or in belief updating do not explain why some individuals engage in willful ignorance while others seek information.’).
My third major comment is about what drives the change in behavior and what the authors measure. I think that the authors test the overall effect of the video, which could (in theory) be decomposed into two effects: a pure informative effect (people learn about the state of the world) and an emotional/affective effect of the video (people feel negative affect when exposed to it). Of course, it is very difficult to decompose the overall effect into these two sub-effects. It might also be that such a decomposition is not relevant because the cognitive and affective processes are interconnected. However, the authors provide evidence supporting the idea that avoiders and seekers show different affects associated with the video (see Section 4.2 and Table A13). Given that (i) avoiders and seekers have similar priors, (ii) they have similar posteriors, and (iii) they have different affective/emotional reactions, it seems to me that the former are negative-affect avoiders more than information avoiders. So, are the authors evaluating willful ignorance in the sense that people do not want information that would change their beliefs, or are they measuring emotional protection, i.e., avoiding exposure to information they already know but which they expect to generate negative emotions? To be fully transparent: I think this question goes beyond current knowledge and discussions in economics, and while the paper might not be able to address it, it might open a path to new research on the topic.
I have two relatively minor comments:
First, the authors state several times in the paper that the canteens ‘typically serve meat from intensive farming’. While this might be correct, it would have been useful to have some numbers here. Ideally, I would have preferred to have the participants’ beliefs on this. In fact, in my second work cited by the authors,[2] we find that participants who watched such a video are likely to blame the intensive farming industry, but a large share of them then say that it is not representative of the animal industry in their country. So an obvious protective mechanism for maintaining one’s meat consumption is to say that it does not concern one’s own consumption. While this is not a concern for the paper (it would imply that the authors underestimate the treatment effect), I think it is an important point.
Second, on a side note, I would like to stress that increasing the costs of information avoidance (discussed at the end of the paper) could also lead to other behavioral issues. For instance, the authors mention activists’ strategy of displaying information on product packages or approaching pedestrians on the street. Increasing the level of coercion could lead to backlash effects such as reactance, where people feel restricted in their choices and start criticizing the activists rather than considering the information in the video. This effect could be even stronger for information avoiders. Thus, the strategies affecting information acquisition costs discussed in Section 5 might induce other, possibly backfiring, behaviors.
How long have you been in this field?
I have been working on animal welfare economics and meat consumption since 2018. So, about 6 years.
How many proposals and papers have you evaluated?
I have written 71 referee reports for journals, a handful of reports for funding agencies, and reviewed about a hundred applications for recruitment committees.