Abstract
This is a strong statistical analysis of the literature on meat and animal product consumption. It shows weaker evidence than previous reviews, purportedly because of a stricter set of inclusion criteria that focused on RCTs. The authors have not followed standard methods for systematic reviews, basing their analysis on previous meta-analyses with which they were familiar. They searched Google Scholar, which is not in fact a bibliographic database, in place of traditional databases such as Scopus, Web of Science Core Collection, CAB Abstracts, etc. I feel there is a strong likelihood that studies have been missed. The authors also fail to conduct important checks of consistency in screening, data extraction, and appraisal of risk of bias. Their risk-of-bias assessment also seems less robust than the standard, peer-reviewed tools used in evidence synthesis.
Summary Measures
We asked evaluators to give some overall assessments, in addition to ratings across a range of criteria. See the evaluation summary “metrics” for a more detailed breakdown of this. See these ratings in the context of all Unjournal ratings, with some analysis, in our data presentation here.
| | Rating | 90% Credible Interval |
|---|---|---|
| Overall assessment | 75/100 | 60 - 80 |
| Journal rank tier, normative rating | 3.5/5 | 3.0 - 4.0 |
Overall assessment (See footnote)
Journal rank tier, normative rating (0-5): On a ‘scale of journals’, what ‘quality of journal’ should this be published in? Note: 0 = lowest/none, 5 = highest/best.
Claim identification and assessment
I. Identify the most important and impactful factual claim this research makes
Claim: Meat and animal product reduction interventions don’t seem to be as effective as previously suggested by evidence synthesis.
II. To what extent do you *believe* the claim you stated above?
Belief/confidence: It’s hard to say given the methods used. Maybe they missed some important studies; maybe they still included some that were not robust.
III. Suggested robustness checks
Suggested robustness test: Conduct searches in Scopus, Web of Science Core Collection (WoSCC), and CAB Abstracts to see which search results overlap and what might have been missed. Provide cross-checking tests of screening, data extraction, and critical appraisal. Provide more detail on the appraisal of risk of bias using a peer-reviewed tool. Include a PRISMA or ROSES checklist to demonstrate procedural objectivity.
IV. Important ‘implication’, policy, credibility
Policy/funding implication: Impossible to say given my level of confidence in the findings. It looks great, but I don’t really trust the methods.
Written report
Evaluation manager’s note: The evaluator provided a range of comments as annotations on the PDF of the original paper. We present these below, divided by section of the paper and following quotations of the relevant content. Some of these comments request additional methodological details; some of these details were already available in the paper’s supplemental file. The evaluator acknowledged that this addressed some of their “more minor questions” but suggested these details should have been included in the main text, or that the supplement should have been better signposted.
Methodological Framing
I know that you are experienced meta-analysts, but please do not refer to meta-analysis outside the context of a systematic review. Meta-analysis (MA) is a tool used to statistically combine multiple effect sizes and their variances from different sources. You cannot do this reliably without following a systematic methodology, which is why this should be referred to as a 'systematic review' at least SOMEWHERE in the article. You don't acknowledge any systematic review or evidence synthesis methodology anywhere, which is why your methods are missing some key steps.
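To be concrete about what the statistical combination step involves, here is a minimal sketch of inverse-variance, DerSimonian-Laird random-effects pooling in Python. It is purely illustrative: the function name and toy inputs are hypothetical, and this is not the authors' actual model.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """Pool study-level effect sizes with DerSimonian-Laird random-effects weights.

    effects, variances: study-level effect sizes and their sampling
    variances (hypothetical toy inputs below, for illustration only).
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    k = len(effects)

    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)

    # Cochran's Q and the DerSimonian-Laird estimate of tau^2
    Q = np.sum(w * (effects - fixed) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)

    # Random-effects weights add between-study heterogeneity to each variance
    w_re = 1.0 / (variances + tau2)
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Three hypothetical standardized effect sizes and their variances
print(random_effects_pool([0.10, 0.25, -0.05], [0.02, 0.03, 0.01]))
```

The combination step itself is mechanical; the reliability of the pooled estimate depends almost entirely on how systematically the inputs were identified, screened, and appraised, which is the point of the comments below.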
Methods
Study Selection
“Given our interdisciplinary research question and previous work indicating a large grey literature (Mathur et al., 2021a), we designed and carried out a customized search process. We: 1) reviewed 156 prior reviews, nine of which yielded included articles”
Where did you find these reviews? How did you ensure that you, as authors, didn't promote your own work? I note that one of you is the author of an included review...
Evaluation manager note: This evaluator also added, in their subsequent response, “I don't agree with their justification for their search strategy and not searching for meat reduction because terms are diverse isn't really a strong justification to dispense with best practice”. The evaluator noted their own experience with a meta-analysis in which the systematic search involved a list of hundreds of synonyms.
“2) conducted backwards and forward citation search”
Using which articles as a starting point?
“3) reviewed published articles by authors with papers in the meta-analysis”
This is confusing wording: did you perform the meta-analysis and then review the analysed authors' other work? Should you not have done this alongside citation chasing, before analysis?
“4) crowdsourced potentially missing papers from leading researchers in the field”
Please explain what this means: asking for submissions of evidence? I don't think this is crowdsourcing.
“5) searched Google Scholar for terms that had come up in studies repeatedly”
Google Scholar is a black-box resource with no transparency, and its use in systematic reviews and meta-analyses as a main source of data really shouldn't be promoted. I'm assuming, because you haven't searched any other databases, that this is your main source.
“6) used an AI search tool to search for gray literature”
Used how? Please provide more details.
“7) checked two databases emerging from ongoing nonprofit projects that both seek to identify all papers on meat-reducing interventions.”
Which databases?
“All three authors contributed to the search.”
In what way?
“Inclusion/exclusion decisions were primarily made by the first author, with all authors contributing to discussions about borderline cases.”
What were your inclusion criteria? How did you know they were operationalisable before use? Did you test for consistency across reviewers? What were the results? What do you define as a borderline case?
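For what I mean by testing consistency: a common approach is dual independent screening of at least a sample of records, with agreement quantified before proceeding. A minimal sketch follows; the reviewer labels and decisions are hypothetical, purely for illustration.

```python
# Two reviewers independently screen the same sample of records;
# agreement is then quantified with Cohen's kappa before full screening.
from sklearn.metrics import cohen_kappa_score

reviewer_1 = ["include", "exclude", "exclude", "include", "exclude", "include"]
reviewer_2 = ["include", "exclude", "include", "include", "exclude", "include"]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
print(f"Cohen's kappa for dual screening: {kappa:.2f}")
```

Reporting a figure like this, along with how disagreements were resolved, is what I would expect in place of “decisions were primarily made by the first author”.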
“Figure 1 is a PRISMA diagram depicting the sources of included and excluded studies, which is detailed further in the Supplement.”
If you're using a PRISMA flow diagram, please also use the PRISMA checklist or ROSES checklist, which would highlight areas where your methods are deficient in detail.
“The first author extracted all data. We extracted an effect size for one outcome per intervention: the measure of net MAP or RPM consumption that had the longest follow-up time after intervention. Additional variables coded included information about publication, details of the interventions, length of follow-ups, intervention theories, and additional details about interventions’ methods, contexts, and open science practices”
How did you test that the tool was appropriate for data extraction? Did you check for consistent application? If so, what were the results?
“To assess risk of bias, we collected data on whether outcomes were self-reported or objectively measured, publication status, and presence of a pre-analysis plan and/or open data”
Is this a robust method of RoB assessment relative to the wide array of RoB tools developed for systematic reviews and meta-analyses? I suspect it's not comprehensive, and I would like to see more discussion of the details of your RoB assessment in the manuscript. For example: what tool did you use, how did you test it, what was the level of consistency across reviewers, how many reviewers assessed each study, etc.?
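To illustrate the contrast with domain-based tools: Cochrane's RoB 2, for example, asks for a judgement in each of five domains and derives an overall rating from them. A simplified sketch of that structure follows; the study and judgements are hypothetical, and the "worst domain" rule below is a simplification of the actual RoB 2 algorithm.

```python
# Domain-based risk-of-bias coding in the style of Cochrane's RoB 2 tool.
# Judgements per domain are "low", "some concerns", or "high".
ROB2_DOMAINS = [
    "randomization_process",
    "deviations_from_intended_interventions",
    "missing_outcome_data",
    "measurement_of_the_outcome",
    "selection_of_the_reported_result",
]

def overall_judgement(domain_judgements: dict) -> str:
    """Simplified overall rating: take the worst domain judgement.
    (The real RoB 2 algorithm is more nuanced, e.g. several
    'some concerns' domains can also yield an overall 'high'.)"""
    levels = {"low": 0, "some concerns": 1, "high": 2}
    return max(domain_judgements.values(), key=lambda j: levels[j])

example_study = {
    "randomization_process": "low",
    "deviations_from_intended_interventions": "some concerns",
    "missing_outcome_data": "low",
    "measurement_of_the_outcome": "high",  # e.g. self-reported consumption
    "selection_of_the_reported_result": "low",
}
print(overall_judgement(example_study))  # -> "high"
```

The indicators the authors collected (self-report, publication status, pre-analysis plan, open data) only partially map onto these domains.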
Robustness Checks
“As a robustness check, we also coded and meta-analyzed a supplementary dataset of 22 marginal studies, comprising 35 point estimates. Marginal studies were those whose methods fell short of our inclusion criteria, but typically met all but one, e.g. the control group received some aspect of treatment”
Please provide MUCH more detail here: how were these studies assessed, what were the results for each study, and how was the assessment checked for consistent application?
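On presentation: what would help here is a side-by-side sensitivity comparison of the pooled estimate with and without the marginal studies. A minimal sketch of that comparison follows; the values are hypothetical toys, and simple common-effect inverse-variance weights are used purely for illustration, not the paper's actual data or model.

```python
import numpy as np

def inverse_variance_pool(effects, variances):
    """Common-effect (inverse-variance) pooled estimate and its standard error."""
    w = 1.0 / np.asarray(variances, dtype=float)
    y = np.asarray(effects, dtype=float)
    pooled = np.sum(w * y) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, se

# Hypothetical toy effect sizes and sampling variances
primary_effects, primary_vars = [0.08, 0.15, -0.02], [0.010, 0.020, 0.015]
marginal_effects, marginal_vars = [0.30, 0.22], [0.040, 0.050]

p_only, se_only = inverse_variance_pool(primary_effects, primary_vars)
p_all, se_all = inverse_variance_pool(primary_effects + marginal_effects,
                                      primary_vars + marginal_vars)
print(f"Primary studies only:  {p_only:.3f} (SE {se_only:.3f})")
print(f"Primary + marginal:    {p_all:.3f} (SE {se_all:.3f})")
```

Alongside a table showing which inclusion criterion each marginal study failed and how that judgement was checked, this would make the robustness claim much easier to assess.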
Evaluator details
What is your research field or area of expertise, as relevant to this research?
How long have you been in your field of expertise?
How many proposals, papers, and projects have you evaluated/reviewed (for journals, grants, or other peer-review)?