Evaluation summary and metrics: “Money (Not) to Burn: Payments for Ecosystem Services to Reduce Crop Residue Burning”

Published on Oct 30, 2023

Preamble

Paper: “Money (Not) to Burn: Payments for Ecosystem Services to Reduce Crop Residue Burning”

Authors: B. Kelsey Jack, Seema Jayachandran, Namrata Kala and Rohini Pande

We organized two evaluations of this paper. To read these evaluations, please click the link at the bottom.

Evaluation Manager’s Notes

Why we chose this paper

This paper was recommended to us. We judged it substantively important because it rigorously tests an intervention that may be competitive with GiveWell’s top charities in cost-effectiveness, in addition to having environmental benefits.

How we chose the evaluators

We sought out quantitative social scientists who, between them, would provide expertise in the substantive subject area, expertise in the methods used in the paper, and at least some familiarity with cost-benefit analysis. We found two economists who covered these areas.

Evaluation process

The process took about three months from start to finish, somewhat slower than our target. Some of the delay was due to difficulty finding reviewers, and some was due to parts of the process running more slowly than expected.

As per The Unjournal’s policy, the paper’s authors were invited and given two weeks to provide a public response to these evaluations before we posted them. The authors elected not to have a public response, but thanked the reviewers for their feedback.

Summary of evaluations

The evaluations were both quite positive. One reviewer highlighted some methodological details that could be improved, but both agreed that the paper, in the words of one reviewer, “cleanly identifies an attractive policy that solves a big problem.”

Metrics (all evaluators)

Ratings

Evaluator 1: Anonymous

| Rating category | Rating (0-100) | 90% CI for this rating | Additional comments (optional) |
| --- | --- | --- | --- |
| Overall assessment | 90 | (75, 95) | |
| Advancing knowledge and practice | 80 | (70, 85) | |
| Methods: justification, reasonableness, validity, robustness | 80 | (70, 95) | Robustness checks and testing mechanisms needed, but possible for the authors to do with the data they have. |
| Logic & communication | 90 | (88, 92) | |
| Open, collaborative, replicable | 80 | (75, 95) | “Replicable” in the normal sense doesn’t apply to a working paper whose underlying data is not yet public (but probably will be after publication). |
| Engaging with real-world, impact quantification; practice, realism, and relevance | 95 | (90, 100) | The authors mention that this project is in collaboration with the Punjab government; I think it would be great for the world if they worked more with the government or nonprofit funders to try and scale up this intervention. |
| Relevance to global priorities | 95 | (80, 100) | |

See here for details on the categories above.

Evaluator 2: Anonymous

| Rating category | Rating (0-100) | 90% CI for this rating |
| --- | --- | --- |
| Overall assessment | 85 | (80, 90) |
| Advancing knowledge and practice | 80 | (85, 95) |
| Methods: justification, reasonableness, validity, robustness | 80 | (65, 75) |
| Logic & communication | 90 | (80, 90) |
| Open, collaborative, replicable | 90 | (70, 80) |
| Engaging with real-world, impact quantification; practice, realism, and relevance | 90 | (90, 100) |
| Relevance to global priorities | 90 | (90, 100) |

See here for details on the categories above.

Predictions

| Prediction metric | Eval. 1 Anon.: Rating (0-5) (low to high) | Eval. 1: 90% CI (0-5) | Eval. 2 Anon.: Rating (0-5) | Eval. 2: 90% CI |
| --- | --- | --- | --- | --- |
| What ‘quality journal’ do you expect this work will be published in? | 4 [1] | (3, 5) | 4 | 80-90 [2] |
| On a ‘scale of journals’, what ‘quality of journal’ should this be published in? | 5 | (2, 5) | 4 | 80-90 [2] |

Comments (footnotes):

[1] Eval. 1: “I don’t know how much the peer review process values the best attributes of this paper (political feasibility, scalability of intervention, cost-effectiveness).”

[2] Ed. note: Evaluator 2 left these as confidence intervals, but the range did not correspond to our journal tier scale, so they are reproduced here as reported.

See here for details on the metrics above.
