Zheln.com

A Protocol for a Universal Living Overview of Health-Related Systematic Reviews

Zheln Systematic Review Appraisals on Open Science Framework (DOI 10.17605/OSF.IO/EJKFC)

This page is also available as the short links:

🏷 p1m.org/ssb or p1m.org/systematic

🇷🇺 Russian-language version (no longer maintained; last updated September 19, 2020)

Background

What Is This Repo?

On July 11, 2019, I initiated this study with the aim of creating an exhaustive registry of systematic reviews in orthopedics (in Russian). Owing to the large volume of records to screen, it was not very successful; nevertheless, it continued until February 17, 2020, when it went on a 200-day hiatus.

Since September 6, 2020, the study has continued as Zheln.com, a crowdfunded project; this repository was chosen as the methods repo, while the website itself is hosted in a separate Zheln repo.

Version History

| Date | Version |
| --- | --- |
| Oct 4, 2020 | Last Pre-Protocol |
| Sep 7, 2019 | Last Pre-Zheln |
| Jul 11, 2019 | First Info Added |
| Feb 12, 2019 | Repo Initialized |

What Has Changed with Zheln?

  1. The focus of the study has been widened to include not only orthopedics but any evidence-based practice.
  2. Critical appraisal and replication of reviews are now also performed as part of this project.
  3. Grouping of appraised systematic reviews by health care specialty has been recognized as a critical part of the project.

The search & screening methods have essentially not changed, whereas the methods of critical appraisal may be described in one phrase: Informal study of the review documentation with replication of some of the elements of the review by a single appraiser to formulate their expert impression as to whether the review is reproducible and whether it is useful for evidence-based practice.

Crowdfunding details are available from the Zheln website.

Who Am I & Why Zheln?

Please refer to the Zheln main repo README.

Zheln’s Mission

Every systematic review added to PubMed each day will receive a rigorous, independent, and open-access critical appraisal.

Objectives

  1. Monitor most published systematic reviews in order to rapidly identify new ones. Systematic reviews, in turn, are currently regarded as the best source of information for evidence-based practice.
  2. Critically appraise and, where possible, replicate those identified systematic reviews that, in the appraiser’s judgment, are useful for evidence-based practice.
  3. Tag the records under review with physician specialties so that practising physicians can easily follow the updates.

Pragmatic worth:

  • Disseminate, early on, the findings of well-conducted systematic reviews with a large practice impact to a general English-speaking audience.
  • Disseminate, early on, new systematic review records to practising physicians according to their specialty.
  • Provide both the general audience and practising physicians, who usually lack expertise in evidence synthesis, with open-access, quality critical appraisals of systematic reviews.

Methods

The Ten Steps

For clarity, I have summarized the methods as the Zheln Review Appraisal in 10 Steps. They ship in two versions: Explanatory and Pragmatic. The former sheds light on what a Zheln appraisal is about, whereas the latter guides how it is actually done. Throughout Zheln, when a Step is mentioned without qualification, it is usually safe to assume an Explanatory Step.

Ten Explanatory Steps

  1. ℹ️ Downloaded from the PubMed Systematic Subset Daily Updates
  2. ℹ️ Meets Shojania & Bero 2001 True Positive Criteria for Systematic Reviews by Either Title or Abstract?
  3. ℹ️ Full Text or Other Reports Collected by Zheln
  4. ℹ️ Generates Pragmatic Evidence Directly Relevant to Evidence-Based Practice?
  5. ℹ️ Is Duplicate?
  6. ℹ️ Passed or Failed Replication?
  7. ℹ️ Has Critical Conduct Flaws?
  8. ℹ️ Liked or Disliked by Zheln?
  9. ℹ️ Practical Implications Summarized by Zheln
  10. ℹ️ Appraisal Published & Call for Crowdfunding

Each step is also marked with an appraisal status icon:

  • 🔄 Not Started or In Progress
  • ❌ Failed Appraisal
  • ✅ Passed Appraisal

⚠️ Important!

  • Methods for specialty-tagging are not included in the steps because there is no specific time point by which tagging must be complete: it starts when the record review starts and ends when the record review ends.
  • These methods are nonetheless important and are therefore summarized in this document.

Ten Pragmatic Steps

To conduct an appraisal, compiled editable versions of the records will be required. You can either take precompiled records from the posts-edit directory in this repository or compile them yourself. When you are ready, here is the pathway of primary appraisal.

  1. Open an MD file from the posts-edit directory with any plain-text editor that supports GitHub-Flavored Markdown preview, such as Visual Studio Code with the Markdown Preview Enhanced extension, or Epsilon Notes.
  2. Look at the citation title and check it against the Zheln eligibility criteria at Explanatory Step 2. If it matches, tick the 2. ✅ Meets Shojania & Bero 2001 True Positive Criteria for Systematic Reviews by Title checkbox.
  3. Otherwise, click the PMID link to open the PubMed record page and inspect the abstract. If it matches, tick 2. ✅ Meets Shojania & Bero 2001 True Positive Criteria for Systematic Reviews by Abstract; else tick 2. ❌ Does Not Meet Shojania & Bero 2001 True Positive Criteria for Systematic Reviews by Neither Title nor Abstract.
  4. If the abstract is needed to assess eligibility but is unavailable, do not go looking for the full text; assess based on the title alone instead. By contrast, looking up the abstract at the publisher’s site is sometimes warranted, usually when the article is a commentary; consider PMID 32977958, to give you an idea. Also, please don’t use any information PubMed loads from PubMed Central, as none of it is actually part of the MEDLINE record.
  5. Once you have assessed eligibility, proceed to specialty-tagging unless you found the record ineligible; in that case, close the record, as its appraisal is finished.
  6. To specialty-tag, use the specialty-tagging guidance. In short, you tick at least one tag in the specialty-tag list that best suits the record. Think about the specialty whose specialists are likely to be most interested in this record. You will normally use just the record abstract on PubMed to underpin these decisions. If unsure about the scope of a specialty, research it until comfortable. To reiterate, please use the full guidance when actually specialty-tagging.
  7. Next, decide whether this record warrants a full appraisal regardless of crowdfunding. As a rule, it does when the topic covered is likely to have a universal or very large practice impact, whether on global health care workers and the public at large or on minorities. In other words, Zheln does not include studies of narrow topics that are likely of interest to specialists only, unless these appraisals are specifically crowdfunded. Essentially, these are the Zheln inclusion criteria. The language of publication must not influence this decision. Exclusion criterion: COVID-19 publications are not selected.
  8. If you have selected the record for full appraisal, go to the very bottom of the record to find more checkboxes; follow Explanatory Steps 3 thru 9 and tick the relevant ones. Otherwise, if you haven’t selected the record for immediate full appraisal and believe it is narrow enough to reasonably await crowdfunding, just proceed further.
  9. Finally, at Explanatory Step 10, ensure you are ready to submit the record for final publication. If any additional appraisal work on the record is pending, simply do not check this item.
  10. The appraisal is now complete.
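
For a quick sense of screening progress from the command line, here is a minimal sketch; it assumes the editable records live in the posts-edit directory and mark ticked boxes with GFM task-list syntax (- [x]) in front of the checkbox labels quoted above — both are assumptions, not guaranteed by this protocol.

```bash
# Count editable records whose Step 2 eligibility box has been ticked
# (assumes "- [x]" task-list syntax precedes the checkbox label).
grep -rl -e '\[x\] 2\. ✅ Meets Shojania' posts-edit/ | wc -l
```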

PubMed Search

(aka Explanatory Step 1)

Record Screening

(aka Explanatory Step 2)

  • This is the step where record eligibility for Zheln appraisal is assessed.

  • As appraising all retrieved records turned out to be very resource-intensive, I will probably appraise only a random 10% of records at first (see the randomization protocol; a minimal sketch of the idea appears after this list). When enough resources are available, all records will be appraised. Current information on this matter is available from the latest Zheln summary posts.

  • Record eligibility is assessed by checking the record title and, if the title fails, the abstract against the ‘true positive criteria’ for systematic reviews taken from the publication by Shojania & Bero 2001 (Open Access) and extended further for Zheln (see below).

  • This is a diagnostic test accuracy (DTA) assessment for the authors’ systematic review search strategy—the very search strategy that the PubMed Systematic Subset was originally based on. Incidentally, this is why I chose these criteria for Zheln in the first place.

  • I quote:

    We regarded an article as a true positive only if the title or abstract explicitly identified the article as a systematic review or meta-analysis or if the article abstract indicated a strategy for locating the literature reviewed. Thus, an article that contained the phrase “literature review” in the title but merely stated that “relevant literature was reviewed” in the abstract would not count as a true positive. MEDLINE records without an abstract could be counted as true positives only if the title contained the words “meta-analysis,” “metaanalysis,” or “systematic review.”

  • One important limitation of these criteria that became apparent during appraisals is their failure to comprehensively account for practice guidelines, which are often informed by a systematic review.

  • It has also become evident during the appraisal process that these criteria fail to capture other review types that use systematic methodology. See PMID 33162676, a scoping review that implements systematic methodology yet fails the original Shojania & Bero criteria. Also see PMID 33148320, which references an evidence synthesis with no explicit identification as a systematic review.

  • I could have modified the Shojania & Bero criteria to include these; however, that would require editing and likely retesting the search query to account for the changes, reappraising all records already appraised, and many other modifications. As I am not currently ready to engage in that, I will not modify the original criteria.

  • Therefore, it is worth noting that Zheln’s screening strategy is inadequate for identifying practice guidelines and their coupled systematic reviews, so Zheln works well only for standalone-published systematic reviews. Some reviews that follow other evidence-based methodologies, such as scoping or integrative reviews, are also missed.

  • That said, the rate at which Zheln misses such records is currently unclear and requires pragmatic estimation. This would best be done along with estimating the fraction of published systematic reviews that PubMed misses.

  • There are other special cases to be discussed.

    • Commentaries, reprints, and such related to an original systematic review are included as they reference a systematic review.
    • Meta-analyses outside the context of a systematic review (variants on data pooling) are included if they are explicitly termed meta-analyses, as this is how the Shojania & Bero criteria are worded. These constitute a few false positives; see PMID 33180528, for instance.
    • Protocols for systematic reviews are included as they reference a systematic review.
    • Studies nesting a systematic review are naturally included.
  • Anyway, those records that do not meet these criteria are marked as Failed Step 2 (see this supplementary file for the current exact wording) and are not appraised further.

  • Technically, screening entails converting the citation list downloaded from PubMed in the Summary (text) format (see the list of all original PubMed exports made by Zheln) into individual editable records, then compiling the edited records into their published versions, which are available from Zheln.

  • To do both, I wrote a dedicated bash script from scratch, which I named General Makeposti.

  • To see what an editable record looks like, take any record in the directory of editable posts. Alternatively, you could use General Makeposti to do the conversion yourself.

  • Text editors I use for record screening: Visual Studio Code (with the Markdown Preview Enhanced extension) and Epsilon Notes (see Pragmatic Step 1).

  • On Zheln, records that pass Step 2 of the appraisal process are automatically (since General Makeposti 2.2.1) assigned the awaiting crowdfunding status tag. That is because, had I found the record appealing enough to autonomously embark on its full appraisal, I would have pushed it to Step 3 right away.

  • On the other hand, records that fail the appraisal process at Step 2 are reassigned the awaiting appraisal status tag. This is because I consider these akin to a ghost town: theoretically, the appraisal could be continued, but for now, it will not be.

  • Also, a corresponding footer is attached to the record based on its Step status.
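
For illustration, a minimal sketch of the 10% idea (the linked randomization protocol remains authoritative, and the file names here are hypothetical):

```bash
# Take the first 10% of a pre-shuffled index list (e.g., a RANDOM.ORG
# sequence of 1..count) as the subset of records to appraise first.
count=472252                 # records retrieved by the search
subset=$(( count / 10 ))     # a random 10% to appraise at first
head -n "$subset" rnd/random-sequence.lst > rnd/appraise-first.lst
```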

Specialty-Tagging

(done at Explanatory Steps 2 thru 10)

  • Specialty-tagging is done by the appraiser themselves based on whatever information they acquired during appraisal.

  • The tags are chosen from 191 specialty tags made from a list of 171 AMA Masterfile Physician Specialties.

  • See the methods used to compile the lists of specialties and specialty tags in the commit history.

  • While tagging, the tagger should consider whether the record would be accessible from its most relevant specialty page and ensure that it is.

  • Namely, the tagger must check the record against each specialty on the list, assess whether the record could be of interest to a typical physician working in that specialty, and choose the one specialty that looks most relevant in this regard.

  • If choosing one such specialty proves hard, choosing more than one is acceptable. That said, adding more than one specialty tag is actively discouraged unless absolutely unavoidable, as it remarkably compromises appraisal speed. Therefore, multiple specialties are warranted if and only if multiselecting is perceived to be faster than choosing one among several.

  • I do recognize that the best approach to specialty-grouping would be an evidence-based one involving feedback from the physicians themselves. However, this is unavailable at the moment, so specialty-grouping will be theory-based. Introducing empirical testing will be of interest in the future.

  • If the tagger is uncertain about either the article subject or any specialty scope, they should consult Google or other information sources until both seem perfectly clear.

  • There is no limit to the number of specialty tags attached, but at least one tag should be chosen for each record. Exceptions:

    • Records that failed and were excluded at Step 2 do not require specialty-tagging.
    • It became evident during appraisals that some records on PubMed cover topics rather distant from health care, e.g. PMID 33035990—see its Zheln counterpart. For such records, no specialty tags need be added when each and every specialty tag clearly does not apply.
  • I do not plan to add any new specialty tags. However, if I hear about any changes to the AMA Masterfile Physician Specialties list, I will consider updating the Zheln specialty lists accordingly.

  • Specialty tags may be assigned or removed throughout further full appraisal if this is deemed appropriate to better reflect the content of the record.
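
When hunting for a candidate tag, the specialty-tag list can simply be searched as plain text. A minimal sketch, assuming zheln_ama_specialty_tags.lst (the accessory file shipped alongside General Makeposti) stores one tag per line:

```bash
# Case-insensitively list candidate specialty tags matching a keyword.
grep -i 'orthop' zheln_ama_specialty_tags.lst
```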

Collection of Reports

(aka Explanatory Step 3)

  • I will try to collect the original full text associated with the record under appraisal, either using publicly available electronic resources or via private subscriptions or communication.
  • For some records, collecting other reports (written either by the same authors or by different authors, as in the case of important referenced research) will be required and will be done: 3. ✅ Full Text & Other Reports Collected by Zheln.
  • For some, only other reports, but not the original record full text, will be available: 3. ✅ Other Reports Collected by Zheln.
  • If I have collected the original full-text report and this report only, I will put the 3. ✅ Full Text Collected by Zheln label. If no full-text report is available for the record, I will put the 3. ❌ No Full Text Available to Zheln label, place the record in the awaiting crowdfunding category, and reference it in the Full Text Wanted section of the twice-weekly summary posts.
  • I will not routinely contact the authors or search extensively for additional reports; however, if the study is specifically important or its additional appraisal is crowdfunded, I will do so.

Data Extraction

(done at Explanatory Step 4)

  • This involves extracting the pragmatic outcomes that I deem relevant for evidence-based practice.
  • This is done only if the Step 9 summary is conducted. If so, I will extract the data directly in the text of the appraisal.
  • I will decide which outcomes to extract depending on the research question. As a rule, I will choose one among all the outcomes mentioned in the study reports and contrast it with the outcomes usually used in similar studies to decide whether it is an acceptably practice-important and question-relevant outcome.
  • To learn which outcomes are best suited for similar research questions, I plan to use an informal electronic search (Google, PubMed, etc.), which I do not plan to document; however, I will summarize important information on this matter, including references where applicable, in the text of the appraisal.
  • If I find such an appropriate outcome in the study reports, I will use it as the primary outcome to assess the effectiveness of the interventions. I will also tick the study as 4. ✅ Generates Pragmatic Evidence Directly Relevant to Evidence-Based Practice.
  • Otherwise, if I do not find any appropriate outcomes reported, I will put the 4. ❌ Does Not Generate Pragmatic Evidence Directly Relevant to Evidence-Based Practice label and abstain from further appraisal.

Critical Appraisal

(done at Explanatory Steps 4 thru 8)

  • The methods of critical appraisal may be described in one phrase:

    Informal study of the review documentation with replication of some of the elements of the review by a single appraiser to formulate their expert impression as to whether the review is reproducible and whether it is useful for evidence-based practice.

  • Please find a step-by-step description below.

  • Step 4 has been described in detail previously.

  • Step 5:

    • First of all, I will assess whether the study under appraisal looks like a duplicate. To find out, I plan to use an informal electronic search (Google, PubMed, etc.), which I do not plan to document; however, I will summarize important information on this matter, including references where applicable, in the text of the appraisal.
    • If I find evidence that the study is a duplicate (same population, same context, same interventions, same outcomes, with no reasonable reference to previous research), I should naturally not start answering the research question with this study but turn to the previous research first. Therefore, I will abstain from appraising duplicate studies and will mark them as 5. ❌ Is Duplicate.
    • Otherwise, I will mark the study accordingly as 5. ✅ Not Found Duplicate by Zheln and go on with the appraisal.
  • Step 6:

    • If a systematic review features practice-important outcomes and does not look like a duplicate, it feels safe and appropriate to embark on its replication.
    • In the course of Zheln, I do not plan to conduct exhaustive replications. Instead, I am going to selectively replicate those review steps that look both easiest and most natural to redo.
    • For example, rerunning PubMed/MEDLINE searches and replicating the review study set is usually simple enough and appealing, as is reproducing a random couple of data extraction forms. I will document these replication processes in the text of the appraisal.
    • If I deem the replication attempts more or less successful, I will label the study as 6. ✅ Passed Replication.
    • Otherwise, if the replication has largely failed, I will mark the study as 6. ❌ Failed Replication and abstain from further appraisal because an irreproducible review is hardly systematic anymore.
  • Step 7:

    • Replicability is a sign of sound conduct and good reporting but does not guarantee robustness of the review. Therefore, additional quality-of-conduct assessment is required.
    • Some tools have been developed to assess risk of bias in systematic reviews, such as ROBIS or CINeMA. However, they are rather recent, and there is evidence that agreement is low for at least some of them (Gates 2020).
    • In contrast, the MECIR standards have been around for quite some time now. They have also been neatly integrated into the Cochrane Handbook 6, which provides further insight into these issues.
    • Thus, I elect to use the MECIR conduct standards to assess quality of conduct. I will go over all 75 MECIR conduct items at Step 7 to get an understanding and will document for each item if it was followed, in my view. I will also provide rationale where relevant. All the documentation will take place directly in the appraisal text.
    • In general, I expect mandatory MECIR items to be followed, whereas highly desirable items may be ignored. However, I acknowledge that MECIR standards (1) are not absolute and (2) were developed for intervention reviews only (whereas Zheln may feature other systematic reviews as well). Also, some of the items would look critical for one review and not critical for another.
    • Hence, the final decision about whether or not I have observed evidence of critical conduct flaws is always mine to make; at the same time, I will do my best to accurately substantiate my findings in the appraisal text. My decision may be 7. ❌ Has Critical Conduct Flaws, 7. ✅ No Critical Conduct Flaws Identified by Zheln, or 7. ✅ No Conduct Flaws Identified by Zheln, depending on those findings.
  • Step 8 was designed to be purely subjective: 8. 👍 Liked by Zheln if I find the review useful overall and 8. 👎 Disliked by Zheln if I find otherwise. I will provide a personal comment if my decision does not seem obvious.

Data Synthesis

(aka Explanatory Step 9)

  • This is only done if Steps 2 thru 7 are checked green. Otherwise, I do not find the review robust enough to rely on or to disseminate its findings.
  • It involves formulating explicit practice-relevant statements based on the health outcomes extracted at Step 4 and the quality-of-conduct assessment at Step 7.
  • This will be done directly in the appraisal text, emphasized, and placed at its top.

Publication

(done at Explanatory Step 10)

  • This involves creating a citation for each post on Zheln; see this issue.
  • In reality, the record is published as soon as it is first uploaded after Step 2 and is then updated in the course of its appraisal.
  • When I feel I am ready to submit the record for final publication and no additional appraisal work on the record is pending, I will check it as 10. ✅ Appraisal Published & Call for Crowdfunding. Unless Step 10 is ticked, the record is not to be considered final.
  • Any record updates made after the Step 10 tick has been set and published will be reported in the appraisal text.
  • Also, Zheln appraisals are disseminated on public video platforms, such as YouTube and TikTok. One full appraisal recorded on video is uploaded every working day.

Crowdfunding

(done at Explanatory Step 10 and throughout Zheln)

Project Management

(irrespective of Steps)

  • Zheln uses public GitHub kanban boards to manage its workflow.
  • Namely, important tasks needed to accomplish Zheln’s mission are created as public Projects on the Zheln GitHub page, e.g. Streamline Zheln or Register with PROSPERO.
  • Further, these tasks are broken down into smaller blocks (GitHub Issues) and are displayed on the projects’ kanban boards.
  • Finally, each week, short-term GitHub Milestones are created and then assigned to the Issues to track weekly progress.
  • All of these elements are public. Issues are open for public comment (GitHub registration needed).
  • To maximize publicity, Zheln additionally endeavors to publish summary posts twice a week, in which I overview the events that have happened on Zheln since the last summary and that I consider important.

Results

  • Monitoring has continued since September 1, 2020.
  • Currently, I do not plan to go back to whatever material I left unreviewed, as I do not have the capacity for such an undertaking.
  • Please see all current information on the results in the latest Zheln summary posts.
  • The appraisal log is available separately.

Appendix: PubMed Systematic Review Subset Query, Zheln Edition

Live Version

  • Up-To-Date Indefinitely Until Any Interfering PubMed Updates
  • Run on Sep 11, 2020, to Retrieve 474,043 Records
(
    (
        (((systematic review[ti] OR systematic literature review[ti] OR systematic scoping review[ti] OR systematic narrative review[ti] OR systematic qualitative review[ti] OR systematic evidence review[ti] OR systematic quantitative review[ti] OR systematic meta-review[ti] OR systematic critical review[ti] OR systematic mixed studies review[ti] OR systematic mapping review[ti] OR systematic cochrane review[ti] OR systematic search and review[ti] OR systematic integrative review[ti]) NOT comment[pt] NOT (protocol[ti] OR protocols[ti])) NOT MEDLINE [subset]) OR (Cochrane Database Syst Rev[ta] AND review[pt]) OR systematic review[pt]
    )
    OR
    (
        (((systematic review[ti] OR meta-analysis[pt] OR meta-analysis[ti] OR systematic literature review[ti] OR this systematic review[tw] OR pooling project[tw] OR (systematic review[tiab] AND review[pt]) OR meta synthesis[ti] OR meta-analy*[ti] OR integrative review[tw] OR integrative research review[tw] OR rapid review[tw] OR umbrella review[tw] OR consensus development conference[pt] OR practice guideline[pt] OR drug class reviews[ti] OR cochrane database syst rev[ta] OR acp journal club[ta] OR health technol assess[ta] OR evid rep technol assess summ[ta] OR jbi database system rev implement rep[ta]) OR (clinical guideline[tw] AND management[tw]) OR ((evidence based[ti] OR evidence-based medicine[mh] OR best practice*[ti] OR evidence synthesis[tiab]) AND (review[pt] OR diseases category[mh] OR behavior and behavior mechanisms[mh] OR therapeutics[mh] OR evaluation study[pt] OR validation study[pt] OR guideline[pt] OR pmcbook)) OR ((systematic[tw] OR systematically[tw] OR critical[tiab] OR (study selection[tw]) OR (predetermined[tw] OR inclusion[tw] AND criteri*[tw]) OR exclusion criteri*[tw] OR main outcome measures[tw] OR standard of care[tw] OR standards of care[tw]) AND (survey[tiab] OR surveys[tiab] OR overview*[tw] OR review[tiab] OR reviews[tiab] OR search*[tw] OR handsearch[tw] OR analysis[ti] OR critique[tiab] OR appraisal[tw] OR (reduction[tw] AND (risk[mh] OR risk[tw]) AND (death[mh] OR "death"[all] OR recurrence[mh] OR "recurrence"[all]))) AND (literature[tiab] OR articles[tiab] OR publications[tiab] OR publication[tiab] OR bibliography[tiab] OR bibliographies[tiab] OR published[tiab] OR pooled data[tw] OR unpublished[tw] OR citation[tw] OR citations[tw] OR database[tiab] OR internet[tiab] OR textbooks[tiab] OR references[tw] OR scales[tw] OR papers[tw] OR datasets[tw] OR trials[tiab] OR meta-analy*[tw] OR (clinical[tiab] AND studies[tiab]) OR treatment outcome[mh] OR treatment outcome[tw] OR pmcbook)) NOT (letter[pt] OR newspaper article[pt])))
    )
)

Replicated Version

  • Up-To-Date by Sep 9, 2020
  • Run on Sep 12, 2020, to Retrieve 472,252 Records

How to use?

  • Replace all the upper-limit dates with the date needed, using any text editor.
  • If you need to filter just the records indexed on that date (instead of all the records indexed by that date), add the following fragment to the query (either before the first or after the last parenthesis): (2020/09/09:2020/09/09[crdt] OR 2020/09/09:2020/09/09[dcom] OR 2020/09/09:2020/09/09[mhda]), where the date is your required date (it should be the same as the upper-limit date).
  • If everything is done correctly and the query itself still works, you will get a consistent set of records each time, on whatever date you run the query. A minimal scripted sketch follows.
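
For illustration, a minimal bash sketch of both steps, assuming the full query below has been saved as query.txt (a hypothetical file name) and the new upper-limit date is 2020/10/07:

```bash
newdate='2020/10/07'                       # your required date
sed "s|2020/09/09|${newdate}|g" query.txt > query-updated.txt

# Optionally restrict to records indexed exactly on that date by
# appending the single-day fragment after the last parenthesis:
printf '(%s:%s[crdt] OR %s:%s[dcom] OR %s:%s[mhda])' \
  "$newdate" "$newdate" "$newdate" "$newdate" "$newdate" "$newdate" \
  >> query-updated.txt
```
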
(
    (
        (
            (
                (
                    (
                        "systematic review"[ti] OR "systematic literature review"[ti] OR "systematic scoping review"[ti] OR "systematic narrative review"[ti] OR "systematic qualitative review"[ti] OR "systematic evidence review"[ti] OR "systematic quantitative review"[ti] OR "systematic meta review"[ti] OR "systematic critical review"[ti] OR "systematic mixed studies review"[ti] OR "systematic mapping review"[ti] OR "systematic cochrane review"[ti] OR "systematic search and review"[ti] OR "systematic integrative review"[ti]
                    )
                    AND
                    1865/01/01:2020/09/09[crdt]
                )
                NOT
                (
                    "comment"[pt]
                    AND
                    1865/01/01:2020/09/09[dcom]
                )
                NOT
                (
                    (
                        "protocol"[ti] OR "protocols"[ti]
                    )
                    AND
                    1865/01/01:2020/09/09[crdt]
                )
            )
            NOT
            (
                "medline"[sb]
                AND
                1865/01/01:2020/09/09[dcom]
            )
        )
        OR
        (
            (
                "cochrane database syst rev"[ta]
                AND
                1865/01/01:2020/09/09[crdt]
            )
            AND
            (
                "review"[pt]
                AND
                1865/01/01:2020/09/09[dcom]
            )
        )
        OR
        (
            "systematic review"[pt]
            AND
            1865/01/01:2020/09/09[dcom]
        )
    )
    OR
    (
        (
            (
                "systematic review"[ti]
                AND
                1865/01/01:2020/09/09[crdt]
            )
            OR
            (
                "meta-analysis"[pt]
                AND
                1865/01/01:2020/09/09[dcom]
            )
            OR
            (
                (
                    "meta analysis"[ti] OR "systematic literature review"[ti]
                )
                AND
                1865/01/01:2020/09/09[crdt]
            )
            OR
            (
                (
                    "this systematic review"[tw] OR "pooling project"[tw]
                )
                AND
                (
                    1865/01/01:2020/09/09[dcom]
                    OR
                    1865/01/01:2020/09/09[mhda]
                )
            )
            OR
            (
                (
                    "this systematic review"[tiab] OR "pooling project"[tiab]
                )
                AND
                1865/01/01:2020/09/09[crdt]
            )
            OR
            (
                (
                    "systematic review"[tiab]
                    AND
                    1865/01/01:2020/09/09[crdt]
                )
                AND
                (
                    "review"[pt]
                    AND
                    1865/01/01:2020/09/09[dcom]
                )
            )
            OR
            (
                (
                    "meta synthesis"[ti] OR "meta analy*"[ti]
                )
                AND
                1865/01/01:2020/09/09[crdt]
            )
            OR
            (
                (
                    "integrative review"[tw] OR "integrative research review"[tw] OR "rapid review"[tw] OR "umbrella review"[tw]
                )
                AND
                (
                    1865/01/01:2020/09/09[dcom]
                    OR
                    1865/01/01:2020/09/09[mhda]
                )
            )
            OR
            (
                (
                    "integrative review"[tiab] OR "integrative research review"[tiab] OR "rapid review"[tiab] OR "umbrella review"[tiab]
                )
                AND
                1865/01/01:2020/09/09[crdt]
            )
            OR
            (
                (
                    "consensus development conference"[pt] OR "practice guideline"[pt]
                )
                AND
                1865/01/01:2020/09/09[dcom]
            )
            OR
            (
                (
                    "drug class reviews"[ti] OR "cochrane database syst rev"[ta] OR "acp journal club"[ta] OR "health technol assess"[ta] OR "evid rep technol assess summ"[ta] OR "jbi database system rev implement rep"[ta]
                )
                AND
                1865/01/01:2020/09/09[crdt]
            )
            OR
            (
                "clinical guideline"[tw]
                AND
                "management"[tw]
                AND
                (
                    1865/01/01:2020/09/09[dcom]
                    OR
                    1865/01/01:2020/09/09[mhda]
                )
            )
            OR
            (
                "clinical guideline"[tiab]
                AND
                "management"[tiab]
                AND
                1865/01/01:2020/09/09[crdt]
            )
            OR
            (
                (
                    (
                        "evidence based"[ti]
                        AND
                        1865/01/01:2020/09/09[crdt]
                    )
                    OR
                    (
                        "evidence-based medicine"[mh]
                        AND
                        1865/01/01:2020/09/09[mhda]
                    )
                    OR
                    (
                        (
                            "best practice*"[ti] OR "evidence synthesis"[tiab]
                        )
                        AND
                        1865/01/01:2020/09/09[crdt]
                    )
                )
                AND
                (
                    (
                        (
                            "review"[pt] OR "evaluation study"[pt] OR "validation study"[pt] OR "guideline"[pt] OR "pmcbook"[all]
                        )
                        AND
                        1865/01/01:2020/09/09[dcom]
                    )
                    OR
                    (
                        (
                            "diseases category"[mh] OR "behavior and behavior mechanisms"[mh] OR "therapeutics"[mh]
                        )
                        AND
                        1865/01/01:2020/09/09[mhda]
                    )
                )
            )
            OR
            (
                (
                    (
                        (
                            "systematic"[tw] OR "systematically"[tw] OR "study selection"[tw]
                            OR
                            (
                                (
                                    "predetermined"[tw] OR "inclusion"[tw]
                                )
                                AND
                                "criteri*"[tw]
                            )
                            OR
                            "exclusion criteri*"[tw] OR "main outcome measures"[tw] OR "standard of care"[tw] OR "standards of care"[tw]
                        )
                        AND
                        (
                            1865/01/01:2020/09/09[dcom]
                            OR
                            1865/01/01:2020/09/09[mhda]
                        )
                    )
                    OR
                    (
                        (
                            "systematic"[tiab] OR "systematically"[tiab] OR "study selection"[tiab]
                            OR
                            (
                                (
                                    "predetermined"[tiab] OR "inclusion"[tiab]
                                )
                                AND
                                "criteri*"[tiab]
                            )
                            OR
                            "exclusion criteri*"[tiab] OR "main outcome measures"[tiab] OR "standard of care"[tiab] OR "standards of care"[tiab]
                        )
                        AND
                        1865/01/01:2020/09/09[crdt]
                    )
                    OR
                    (
                        "critical"[tiab] 
                        AND
                        1865/01/01:2020/09/09[crdt]
                    )
                )
                AND
                (
                    (
                        (
                            "survey"[tiab] OR "surveys"[tiab] OR "review"[tiab] OR "reviews"[tiab] OR "analysis"[ti] OR "critique"[tiab]
                        )
                        AND
                        1865/01/01:2020/09/09[crdt]
                    )
                    OR
                    (
                        (
                            "overview*"[tw] OR "search*"[tw] OR "handsearch"[tw] OR "appraisal"[tw]
                        )
                        AND
                        (
                            1865/01/01:2020/09/09[dcom]
                            OR
                            1865/01/01:2020/09/09[mhda]
                        )
                    )
                    OR
                    (
                        (
                            "overview*"[tiab] OR "search*"[tiab] OR "handsearch"[tiab] OR "appraisal"[tiab]
                        )
                        AND
                        1865/01/01:2020/09/09[crdt]
                    )
                    OR
                    (
                        (
                            (
                                "reduction"[tw]
                                AND
                                (
                                    1865/01/01:2020/09/09[dcom]
                                    OR
                                    1865/01/01:2020/09/09[mhda]
                                )
                            )
                            OR
                            (
                                "reduction"[tiab]
                                AND
                                1865/01/01:2020/09/09[crdt]
                            )
                        )
                        AND
                        (
                            (
                                "risk"[mh]
                                AND
                                1865/01/01:2020/09/09[mhda]
                            )
                            OR
                            (
                                (
                                    "risk"[tw]
                                    AND
                                    (
                                        1865/01/01:2020/09/09[dcom]
                                        OR
                                        1865/01/01:2020/09/09[mhda]
                                    )
                                )
                                OR
                                (
                                    "risk"[tiab]
                                    AND
                                    1865/01/01:2020/09/09[crdt]
                                )
                            )
                        )
                        AND
                        (
                            (
                                (
                                    "death"[mh] OR "recurrence"[mh]
                                )
                                AND
                                1865/01/01:2020/09/09[mhda]
                            )
                            OR
                            (
                                (
                                    "death"[all] OR "recurrence"[all]
                                )
                                AND
                                1865/01/01:2020/09/09[dcom]
                            )
                        )
                    )
                )
                AND
                (
                    (
                        (
                            "literature"[tiab] OR "articles"[tiab] OR "publications"[tiab] OR "publication"[tiab] OR "bibliography"[tiab] OR "bibliographies"[tiab] OR "published"[tiab] OR "database"[tiab] OR "internet"[tiab] OR "textbooks"[tiab] OR "trials"[tiab]
                        )
                        AND
                        1865/01/01:2020/09/09[crdt]
                    )
                    OR
                    (
                        (
                            "pooled data"[tw] OR "unpublished"[tw] OR "citation"[tw] OR "citations"[tw] OR "references"[tw] OR "scales"[tw] OR "papers"[tw] OR "datasets"[tw] OR "meta analy*"[tw]
                        )
                        AND
                        (
                            1865/01/01:2020/09/09[dcom]
                            OR
                            1865/01/01:2020/09/09[mhda]
                        )
                    )
                    OR
                    (
                        (
                            "pooled data"[tiab] OR "unpublished"[tiab] OR "citation"[tiab] OR "citations"[tiab] OR "references"[tiab] OR "scales"[tiab] OR "papers"[tiab] OR "datasets"[tiab] OR "meta analy*"[tiab]
                        )
                        AND
                        1865/01/01:2020/09/09[crdt]
                    )
                    OR
                    (
                        "clinical"[tiab]
                        AND
                        "studies"[tiab]
                        AND
                        1865/01/01:2020/09/09[crdt]
                    )
                    OR
                    (
                        "treatment outcome"[mh]
                        AND
                        1865/01/01:2020/09/09[mhda]
                    )
                    OR
                    (
                        "treatment outcome"[tw]
                        AND
                        (
                            1865/01/01:2020/09/09[dcom]
                            OR
                            1865/01/01:2020/09/09[mhda]
                        )
                    )
                    OR
                    (
                        "treatment outcome"[tiab]
                        AND
                        1865/01/01:2020/09/09[crdt]
                    )
                    OR
                    (
                        "pmcbook"[all]
                        AND
                        1865/01/01:2020/09/09[dcom]
                    )
                )
            )
        )
        NOT
        (
            (
                "letter"[pt] OR "newspaper article"[pt]
            )
            AND
            1865/01/01:2020/09/09[dcom]
        )
    )
)

Appendix: How to Compile Zheln Records?

General Notes

  • Below you will find guidance on how to compile both editable and published versions of Zheln records for any given date.
  • You will need bash to run the record-maker script aka General Makeposti. Set up bash on your platform before you get down to compilation.
  • I wrote this guidance for General Makeposti 2.3.2 on Oct 7, 2020. It may not work with other versions of the script or if other important changes happen after that date.

Editable Version

  1. Go to PubMed; copy, paste, and run the replicated version of the Zheln search query.

  2. Press Display options and ensure the following values are set:

    • Format Summary
    • Sort by Most recent (decreasing order)
  3. Press Save and set the following values:

    • Selection: All results
    • Format: Summary (text)
  4. Press Create file.

  5. Wait until the file is generated and downloaded. Afterwards, a summary-systematic-set.txt file will appear on your device.

  6. Create a summary-systematic-set directory on your device and put this file there. Then rename the file to follow the summary-systematic-set_%Y-%m-%d_%count.txt convention, where:

    • %Y is the year used in the replicated search
    • %m is the month used in the replicated search
    • %d is the day used in the replicated search
    • %count is the number of results retrieved when running the replicated search

    See examples of how these look in the summary-systematic-set directory in this repository.

  7. Download all accessory files needed to run the compilation from the zheln directory in this repository:

    • All footer files
    • All header files
    • general-makeposti.sh
    • zheln_ama_specialty_tags.lst
  8. Open the general-makeposti.sh file with any plain-text editor. Then look at the first 10 lines and set the following:

    • edit=true
    • Set date to the date you used in the replicated search in the yyyy-mm-dd format
    • Set count to the number of results retrieved when running the replicated search
    • Set coreutils to false if you are using native GNU Bash, or to true if you are using the CoreUtils package
    • Other variables are irrelevant to successful compilation
  9. Run general-makeposti.sh with bash (a consolidated sketch of steps 6 thru 14 appears after this list).

  10. If everything goes fine, the script will produce the following message when finished:

    > Populate the random list first.

    Also, a plain-text LST file will appear in the newly created rnd directory (see the rnd directory in this repository for comparison).

  11. Go to the RANDOM.ORG Random Sequence Generator (Advanced Mode) and set the following parameters:

    • Leave Smallest value at 1
    • Set Largest value to the value of your count variable you set up earlier
    • Leave Format in … column(s) at 1
    • Choose Output Format: As a bare-bones text document (type text/plain)
    • Choose Randomization: Generate your own personal randomization right now
  12. Press Get Sequence; open your LST file in the rnd directory with any plain-text editor and copy & paste the generated list of random numbers into it.

  13. Run general-makeposti.sh with bash again.

  14. If everything goes fine, the script will produce the following message when finished:

    > So uncivilized.

    In this case, you’d find your editable records in the posts-edit directory.

  15. If unsuccessful, follow the instructions given by the script, or feel free to contact me here on GitHub or by email.
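
For orientation, here is a consolidated bash sketch of steps 6 thru 14 (the values are illustrative, and the exact variable syntax inside the script may differ):

```bash
# Step 6: file the PubMed export under the naming convention, here for
# a search replicated for 2020-09-09 that retrieved 472,252 records.
mkdir -p summary-systematic-set
mv summary-systematic-set.txt \
   summary-systematic-set/summary-systematic-set_2020-09-09_472252.txt

# Step 8: the variables near the top of general-makeposti.sh should
# then read roughly as follows (illustrative excerpt):
#   edit=true
#   date=2020-09-09
#   count=472252
#   coreutils=false   # true if you use the CoreUtils package

# Steps 9-14: two runs of the script around the RANDOM.ORG step.
bash general-makeposti.sh   # 1st run: "Populate the random list first."
# ...paste the RANDOM.ORG sequence into the new LST file in rnd/...
bash general-makeposti.sh   # 2nd run: "So uncivilized." on success;
                            # editable records appear in posts-edit/
```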

Published Version

  1. Procure editable versions of the records you want first. You can either take precompiled records from the posts-edit directory in this repository or compile them yourself.

  2. Open the general-makeposti.sh file with any plain-text editor. Then look at the first 10 lines and set the following:

    • edit=false
    • Set date to the date of your editable records in the yyyy-mm-dd format
    • Set count to the number of your editable records (should equal the number of records retrieved by the replicated search on the date of your editable records)
    • Set coreutils to false if you are using native GNU Bash, or to true if you are using the CoreUtils package
    • Other variables are irrelevant to successful compilation
  3. Run general-makeposti.sh in bash (see the sketch after this list).

  4. If everything goes fine, the script will produce the following message when finished:

    > So uncivilized.

    In this case, you’d find your published records in the posts directory.

  5. If unsuccessful, follow the instructions given by the script, or feel free to contact me here on GitHub or by email.
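
A minimal sketch of the published-version pass, under the same assumptions as the editable-version sketch above:

```bash
# In general-makeposti.sh, flip the mode and keep date/count matched
# to your editable records (illustrative excerpt):
#   edit=false
#   date=2020-09-09
#   count=472252
#   coreutils=false
bash general-makeposti.sh   # prints "So uncivilized." on success;
                            # published records appear in posts/
```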