
Evaluation Summary and Metrics: "Economic vs. Epidemiological Approaches to Measuring the Human Capital Impacts of Infectious Disease Elimination"


Published on Jul 16, 2024

Abstract

We organized two evaluations of the paper: "Economic vs. Epidemiological Approaches to Measuring the Human Capital Impacts of Infectious Disease Elimination".[1] To read these evaluations, please see the links below.

Evaluations

1. Anonymous evaluator 1

2. Anonymous evaluator 2

Overall ratings

We asked evaluators to provide overall assessments as well as ratings for a range of specific criteria.

I. Overall assessment: We asked them to rank this paper “heuristically” as a percentile “relative to all serious research in the same area that you have encountered in the last three years.” We requested they “consider all aspects of quality, credibility, importance to knowledge production, and importance to practice.”

II. Journal rank tier, normative rating (0-5): On a ‘scale of journals’, what ‘quality of journal’ should this be published in? (See ranking tiers discussed here.) Note: 0 = lowest/none, 5 = highest/best.

|  | Overall assessment (0-100) | Journal rank tier, normative rating (0-5) |
| --- | --- | --- |
| Anonymous evaluation 1 | – | – |
| Anonymous evaluation 2 | 80 | 3.3 |

See “Metrics” below for a more detailed breakdown of the evaluators’ ratings across several categories. To see these ratings in the context of all Unjournal ratings, with some analysis, see our data presentation here.

See here for the current full evaluator guidelines, including further explanation of the requested ratings.

Evaluation summaries

Anonymous evaluator 1

This evaluator did not provide a summary. We present the following summary list (written with some assistance from AI tools; see this chatbot conversation for context).

Positives:

(1) Novel epidemiological method for imputing historical infection rates.

(2) Interesting comparison of epidemiological vs economic approaches.

Limitations/suggestions:

(3) Conclusion favoring the economic approach needs stronger support.

(4) Concerned about identifying assumptions for ‘sharp cohort design’ — would like to see tests for ‘no cohort pre-trends’.

(5) Needs more discussion of conceptual differences between mortality and infection rates.

(6) Could elaborate more on applicability to other diseases.

Anonymous evaluator 2

This paper presents an interesting comparison of two methods for estimating the impact of measles on long-term health and economic outcomes: an epidemiological model to estimate variation in measles infection across cohorts and, as more standard in economics, a reduced-form model that uses variation in pre-vaccination measles mortality across place. It is a fascinating and important study that highlights the strengths and weaknesses of the two approaches. However, pushing a bit further on the assumptions and measurement issues associated with each approach and having a fuller explanation of the differences in the empirical results would make for an even stronger contribution.

Metrics

Ratings

See here for details on the categories below, and the guidance given to evaluators.

| Rating category | Evaluator 1 (Anonymous): Rating (0-100), 90% CI | Evaluator 2 (Anonymous): Rating (0-100), 90% CI |
| --- | --- | --- |
| Overall assessment | – | 80 (65, 90) |
| Advancing knowledge and practice | – | 76 (64, 85) |
| Methods: Justification, reasonableness, validity, robustness | – | 77 (70, 83) |
| Logic & communication | – | 91 (81, 100) |
| Open, collaborative, replicable | – | 96 (92, 100) |
| Real-world relevance | – | 95 (90, 100) |
| Relevance to global priorities | – | 86 (77, 95) |

Journal ranking tiers

See here for more details on these tiers.

| Judgment | Evaluator 1 (Anonymous): Ranking tier (0-5), 90% CI | Evaluator 2 (Anonymous): Ranking tier (0-5), 90% CI |
| --- | --- | --- |
| On a ‘scale of journals’, what ‘quality of journal’ should this be published in? | – | 3.3 (2.6, 4.1) |
| What ‘quality journal’ do you expect this work will be published in? | – | 3.3 (2.6, 4.1) |


We summarize these as:

  • 0.0: Marginally respectable/Little to no value

  • 1.0: OK/Somewhat valuable

  • 2.0: Marginal B-journal/Decent field journal

  • 3.0: Top B-journal/Strong field journal

  • 4.0: Marginal A-Journal/Top field journal

  • 5.0: A-journal/Top journal

Evaluation manager’s discussion

The evaluators make several suggestions that could improve the paper.

Evaluator 1 points out that the paper lacks an event study testing for differential cohort trends. One would like to see evidence of an absence of pre-trends, and an absence of effects for cohorts too old to benefit from the vaccine (ideally with tight statistical bounds around zero).
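To make this concrete, the sketch below (in Python, using hypothetical file and column names rather than the authors’ data or exact specification) illustrates the kind of cohort event-study check Evaluator 1 has in mind: interact pre-vaccine measles mortality with indicators for each birth cohort relative to 1963 (a simplification of the exposure definition) and verify that the coefficients for cohorts born before the vaccine’s introduction are tight around zero.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical long-format data: one row per birth cohort x state, with an
# adult outcome (e.g. log earnings) and pre-vaccine measles mortality.
# The file name and column names are illustrative, not the authors' data.
df = pd.read_csv("cohort_state_outcomes.csv")

# Event time: birth cohort relative to the 1963 vaccine introduction.
# Negative values are cohorts born before the vaccine (treated here, as a
# simplification, as unexposed "placebo" cohorts).
df["event_time"] = df["birth_year"] - 1963

# Interact event-time indicators with pre-vaccine measles mortality,
# omitting event_time == -1 as the reference period.
interactions = {}
for k in sorted(df["event_time"].unique()):
    if k == -1:
        continue
    name = f"et_{k}".replace("-", "m")  # e.g. et_m5 for event time -5
    interactions[name] = (
        (df["event_time"] == k).astype(float) * df["pre_vaccine_mortality"]
    )
X = pd.DataFrame(interactions, index=df.index)

# Cohort and state fixed effects as dummies (drop_first avoids collinearity).
fixed_effects = pd.get_dummies(
    df[["birth_year", "state"]].astype(str), drop_first=True, dtype=float
)
X = sm.add_constant(pd.concat([X, fixed_effects], axis=1))

# Cluster standard errors at the state level, the level of the treatment.
result = sm.OLS(df["outcome"], X).fit(
    cov_type="cluster",
    cov_kwds={"groups": df["state"].astype("category").cat.codes},
)

# "No cohort pre-trends" would show up as et_m* coefficients (pre-vaccine
# cohorts) that are statistically and economically close to zero.
print(result.params.filter(like="et_m"))
```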

They also ask why the measles fatality rate would vary geographically. Since almost everyone was infected with measles in the pre-vaccine era, variation in mortality would seem to be driven by initial health levels or quality of health care; moreover, variation in reported measles cases could be driven by reporting capacity. (This raises questions about how to interpret the treatment variation in Atwood 2022 [2], which uses reported measles incidence.)

Finally, they ask how the epidemiological approach could be applied to other infectious diseases.

Evaluator 2 focuses on the relationship between measles mortality rates and infection rates. Given that the reduced-form results here (using mortality) agree with Atwood 2022 (using incidence), we would naturally expect mortality and infection rates to be positively correlated. However, this correlation is not reported, and the subsample presented in Figure 4 in fact suggests a negative correlation. The authors should be clearer in distinguishing their contribution from Atwood’s and in explaining why the reduced-form results nonetheless agree.
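As a rough illustration of the check Evaluator 2 asks for, one could simply report the cross-place correlations between pre-vaccine measles mortality, reported incidence, and the modelled infection rate. The sketch below uses hypothetical column names and is not drawn from the authors’ data.

```python
import pandas as pd

# Hypothetical place-level (e.g. state) data; the file and column names are
# illustrative assumptions, not the authors' variable names.
df = pd.read_csv("state_measles_rates.csv")

cols = ["pre_vaccine_mortality", "reported_incidence", "modelled_infection_rate"]
print(df[cols].corr(method="pearson"))   # linear association
print(df[cols].corr(method="spearman"))  # rank-based, robust to outliers
```

A clearly positive correlation would support treating pre-vaccine mortality as a proxy for infection exposure; a weak or negative one would sharpen the evaluator’s concern.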

Lastly, it is surprising that the results from the epidemiological model change sign once cohort fixed effects are included, given that the model demonstrates credibility by predicting future outbreaks.

Unjournal process notes

I (David Reinstein) served as the evaluation manager and guided correspondence with the authors. However, I benefitted from strong and detailed advice from several members of The Unjournal’s team who have a greater familiarity with this research area.

Why we chose this paper

This paper studies the long-term effects of the 1963 US measles vaccine and is relevant to decisions about funding vaccines in developing countries.

This line of research holds potential for improving our understanding of how to measure vaccine effects. This work might prompt updates to claims about the importance of vaccine rollouts or of related research.

Evaluation process and suggestions for evaluation

We shared the following content with evaluators.

Why does it need (more) review, and what are some key issues and claims to vet?

  1. The paper seems to interpret its finding as supporting the reduced-form approach. From the abstract: “Our results suggest that differences in disease severity are more relevant for long-term human capital impacts than raw differences in actual infection rates, supporting the reduced-form approach used in the economic literature.”

  2. The reduced-form approach in the literature uses reported incidence. Pre-vaccine mortality could arguably be a proxy for this, but the authors don’t seem to consider whether pre-vaccine mortality correlates with reported incidence. Does this matter?

  3. The authors claim that reported incidence is a proxy for disease burden. Is this justified? Could they test this empirically?

  4. How does the concept of ‘disease burden’ apply to a universal disease like measles (with nearly 100% infection rates)? In light of this, what is the identifying variation?

  5. Do the authors clearly state their identifying assumptions? Is their identification justified?

  6. They do not present an event study to test for pre-trends — is this an important limitation?

Author engagement

The authors responded to our initial note about evaluating their paper, shared their most recent draft, and offered useful feedback on our process. We shared the evaluations with them, and they acknowledged that they found the comments and suggestions interesting, and agreed with most of them. They decided not to write an authors’ response, but noted that they may consider a response in future.
