
Older -- Evaluation summary and metrics: “Title of paper” (template)

Published on Mar 16, 2023

Evaluation summary of “Title”

[Add a citation to the relevant paper here.] [1]

We organized [number] evaluations of this paper. The author also responded. To read the evaluations and the response, click the links at the bottom.

Evaluation manager’s notes [Optional section]

Some things you may want to discuss (all optional):

- Why we chose the paper (or how we got it, e.g., through an author submission)

- Why we chose the evaluators

- What insights the process revealed

- A synthesis of the evaluations and authors’ responses

- Implications for global priorities research, for policy in general, for open science, for the discipline, etc.

Evaluators were asked to follow the general guidelines available here. In addition to a written evaluation (similar to journal peer review), we ask evaluators to provide quantitative metrics on several aspects of each article; these are compiled below.

[For this paper we did not give specific suggestions on ‘which aspects to evaluate’.]

[OR] [Evaluators were also provided with additional resources specific to this paper, the rationale for its selection, and an ‘editorial’ first pass on aspects of the paper to consider, along with specific notes about requests made to individual evaluators.]

The third evaluator was given a specific request [if this was the case]:

[Quote the specific request here.]

Metrics (all evaluators)

GitHub gist (and data access): [link the underlying ratings data here; a gist can contain a CSV file]
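As a minimal sketch of the data-access idea, assuming the ratings are published as a CSV file in a public gist (the raw-gist URL and column names below are hypothetical placeholders, not the actual data for any paper), the metrics could be loaded directly:

```python
# Sketch: load the evaluators' ratings from a CSV stored in a public GitHub gist.
# The raw-gist URL and column names are assumed placeholders for this template.
import pandas as pd

GIST_CSV_URL = "https://gist.githubusercontent.com/<user>/<gist-id>/raw/ratings.csv"

# pandas can read a CSV directly from a URL (or from a local copy of the gist file).
ratings = pd.read_csv(GIST_CSV_URL)

# Assumed tidy layout: one row per (category, evaluator) with a rating and 90% CI bounds.
print(ratings[["category", "evaluator", "rating", "ci_lower", "ci_upper"]].head())
```

Publishing the metrics this way would keep the quantitative data citable and machine-readable alongside the written evaluations.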

Ratings

Evaluators: Eval. 1 = John Smith; Eval. 2 = Anonymous; Eval. 3 = Jane Doe

| Category | Eval. 1: Rating (0-100) | Eval. 1: 90% CI (0-100) | Eval. 1: Comments (footnote) | Eval. 2: Rating (0-100) | Eval. 2: Confidence (0-5; high = 5, low = 0)* | Eval. 2: Comments | Eval. 3: Rating (0-100) | Eval. 3: 90% CI (0-100) | Eval. 3: Comments |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Overall assessment | 50 | (40, 65) | 1 | 50 | 4 | | 79 | (59, 94) | |
| Advancing knowledge and practice | 25 | (20, 40) | 2 | 90 | 5 | | 90 | (70, 100) | |
| Methods: Justification, reasonableness, validity, robustness | 95 | (85, 97.5) | | 80 | 4 | | 70 | (50, 90) | |
| Logic & communication | 75 | (60, 90) | | 80 | 4 | | 70 | (50, 90) | |
| Open, collaborative, replicable | N/A | N/A | 3 | 90 | 3 | | 50 | (30, 70) | |
| Engaging with real-world, impact quantification; practice, realism, and relevance | | | 4 | | 5 | | 90 | (70, 100) | |
| Relevance to global priorities | 60 | (40, 75) | | 95 | 3 | | 90 | (70, 100) | |

[* Evaluation manager’s (“editor’s”) note (NAME): Evaluator 2 indicated a ‘level of confidence’ on a 0-5 scale rather than a 90% CI.]

Predictions

Evaluators: Eval. 1 = John Smith; Eval. 2 = Anonymous; Eval. 3 = Jane Doe

| Prediction metric | Eval. 1: Rating (0-5, low to high) | Eval. 1: 90% CI (0-5)* | Eval. 1: Comments (footnotes) | Eval. 2: Rating (0-5) | Eval. 2: Confidence (0-5; high = 5, low = 0)* | Eval. 2: Comments | Eval. 3: Rating (0-5) | Eval. 3: Confidence | Eval. 3: Comments |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| What ‘quality journal’ do you expect this work will be published in? (0 = lowest/none, 5 = highest/best) | 3 | (2.5, 4.5) | 6 | 4 | 5 | 7 | 5 | High | |
| On a ‘scale of journals’, what ‘quality of journal’ should this be published in? (0 = lowest/none, 5 = highest/best) | 3 | (2.5, 4.5) | | 4 | 5 | | 5 | High | |
