Evaluation Summary and Metrics: "Do Celebrity Endorsements Matter? A Twitter Experiment Promoting Vaccination In Indonesia"
Paper: “Do Celebrity Endorsements Matter? A Twitter Experiment Promoting Vaccination In Indonesia”
Note on versions:
Each evaluator considered the most recent version of the working paper that was available at the time of their evaluation.
Evaluator 1 (anonymous) considered the February 2022 (Stanford) working paper version, titled “Designing Effective Celebrity Messaging Results From a Nationwide Twitter Experiment Promoting Vaccination in Indonesia”.
Evaluator 2 (Anirudh Tagat) considered the May 2023 (MIT) working paper version, titled “Do Celebrity Endorsements Matter: A Twitter Experiment Promoting Vaccination In Indonesia”.
Authors: Vivi Alatas, Arun G. Chandrasekhar, Markus Mobius, Benjamin A. Olken, and Cindy Paladines
This paper was selected as part of our NBER direct evaluation track.
We organized two evaluations of this paper; to read them, please click the link at the bottom.
This work seems important methodologically and practically, both for understanding the effect of social media (and perhaps ‘polarization’ as well) and for health and other interventions involving debiasing and education (e.g., Development Media International).
We sought expertise in:
Empirical (econometric-style) analysis with peer-effects/networks, direct and indirect effects, causal inference
Field experiments (on social media), social media data (esp. Twitter)
Vaccine adoption, global health, Indonesian context
We shared this document with evaluators, suggesting some ways in which the paper might be considered in more detail.
This process took over eight months, far longer than we expected or targeted. Delays occurred because:
We had difficulty commissioning qualified evaluators.
One highly qualified evaluator agreed to the assignment but was not able to find the time to complete it.
A third evaluator (commissioned between the two presented here) considered a much earlier version of the paper (the 2019 NBER version) because of a communication error on our part; we are therefore not posting that evaluation.
Because of these delays, we asked Anirudh Tagat, a member of our management team, to write the second (final) evaluation. We do not see any obvious conflicts of interest here: Anirudh did not select this paper for evaluation, did not reach out to evaluators, and had no strong connection to the authors. He will recuse himself from consideration of (and from any adjudication of) the ‘most informative evaluation’ prize.
As per The Unjournal’s policy, the paper’s authors were invited and given two weeks to provide a public response to these evaluations before we posted them. They did not provide a response, but they are invited to do so in the future (and if they do, we will post and connect it here).
Evaluator 1: Anonymous
| Rating category | Rating (0–100) | Confidence (low to high)* | Additional comments (optional) |
|---|---|---|---|
| Overall assessment | 62 | 3 | I think this is a topic which really needs empirical research, and is also difficult to test empirically; bumped up a little bit because of this. |
| Advancing knowledge and practice | 55 | 3 | I think this paper advances our knowledge and tackles a real gap in the field, but it is also far from being implemented in policy (many uncertainties remain; unclear generalisability). |
| Methods: Justification, reasonableness, validity, robustness | 55 | 2 | I am unsure whether the potential methodological problems I spotted are real problems or not; I may change my judgement based on the authors’ response. |
| Logic & communication | 70 | 3 | |
| Open, collaborative, replicable | 45 | 2 | Could change this view if the code/data is available somewhere and I’ve missed it. |
| Engaging with real-world, impact quantification; practice, realism, and relevance | 55 | 3 | |
| Relevance to global priorities | 70 | 3 | |

*Evaluation manager’s note: evaluators were asked to give either a 90% CI or a ‘confidence rating’ on a scale of 1–5.
Evaluator 2: Tagat
| Rating category | Rating (0–100) | 90% CI for this rating* | Confidence (low to high) |
|---|---|---|---|
| Overall assessment | 85 | (78, 90) | 4 |
| Advancing knowledge and practice | 90 | (88, 92) | 4 |
| Methods: Justification, reasonableness, validity, robustness | 80 | (74, 83) | 3 |
| Logic & communication | 85 | (80, 89) | 4 |
| Open, collaborative, replicable | 80 | (70, 81) | 3 |
| Engaging with real-world, impact quantification; practice, realism, and relevance | 100 | (91, 100) | 4 |
| Relevance to global priorities | 100 | (89, 100) | 5 |

*Evaluators were asked to give a range: e.g., with a rating of 50, a CI of (42, 61).
See here for details on the categories above.
| Prediction metric | Evaluator 1 (Anonymous): Rating (0–5, low to high) | Evaluator 1: Confidence (0–5) | Evaluator 2 (Tagat): Rating (0–5) | Evaluator 2: Confidence (0–5) |
|---|---|---|---|---|
| What ‘quality journal’ do you expect this work will be published in? | 3 | 2 | 4 | 5 |
| On a ‘scale of journals’, what tier journal should this be published in? | 3 | 2 | 5 | 5 |
See here for details on the metrics above.