I recently completed a short course on Systematic Review and Meta-Analysis offered by the James P. Grant School of Public Health (11th-13th July). It cleared up many of the misconceptions I previously had about the scientific papers called systematic reviews, and publishing a systematic review myself no longer seems like such a distant prospect. Either way, in this post I talk briefly about what systematic reviews and meta-analyses are. I don’t know yet whether I’ll write more on this particular topic.
WHAT IS A SYSTEMATIC REVIEW?
Everyone who has written a journal paper or even a thesis knows that we need to preface our work with a “literature review”: a survey of the work that has already been done on the topic. This is done to identify gaps in knowledge in a particular field, so that the new knowledge generated by the publication can try to fill those gaps.
A systematic review is a very specialized sort of literature review. In a regular review, the author has considerable freedom in choosing which studies he wants to include in his analysis, and also the extent to which he will analyze them. A systematic review, on the other hand, must incorporate *all* available studies on the topic that can be found in the major scientific databases (Medline, Embase, Web of Science, etc.). This is why a systematic review demands so much methodological rigor.
For example, the author, after setting out his intended research question, must explicitly specify how he conducted the literature search by listing the keywords he looked for, the databases he mined, and so forth. In this paper, for instance, the authors describe their search strategy as follows:
In accordance with the PRISMA guidelines, we identified published studies through a systematic review of Medline (via PubMed), Cochrane database, and EMBASE (via Ovid) from the inception to June 31, 2015, with the following search terms: (“Chlamydia trachomatis”) AND (“cervical carcinoma OR cervical cancer OR cancer of the cervix OR carcinoma of the cervix OR cervical neoplasm OR cervical dysplasia OR cervical intraepithelial neoplasia”). We also checked reference lists and citation histories during the search.
This sort of specialized search strategy can rarely be devised without expert help, which is why systematic review projects must enlist the help of a search coordinator or information scientist. The author must then go on to report how many studies the search yielded, how many of them were included in the systematic review, how many were excluded, and on what basis.
This involves a lot of record-keeping, data tracking, and collaboration between groups with different expertise, so a number of software tools are used for searching, screening, and assessing the quality of the studies.
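The bookkeeping itself is conceptually simple. Here is a minimal sketch, with entirely hypothetical numbers, of the counts a PRISMA-style flow tracks as records move from the initial search to the final set of included studies:

```python
# Hypothetical screening counts for a PRISMA-style flow.
# (Illustrative numbers only, not from any real review.)
records = {
    "identified_via_databases": 412,   # e.g. combined Medline + EMBASE hits
    "duplicates_removed": 97,
    "excluded_on_title_abstract": 265, # clearly off-topic at first glance
    "excluded_full_text": 38,          # wrong population, design, etc.
}

after_dedup = records["identified_via_databases"] - records["duplicates_removed"]
full_text_assessed = after_dedup - records["excluded_on_title_abstract"]
included = full_text_assessed - records["excluded_full_text"]

print(f"Screened after de-duplication: {after_dedup}")
print(f"Assessed in full text:         {full_text_assessed}")
print(f"Included in the review:        {included}")
```

In a real project these numbers (and the reason for every exclusion) are logged in screening software as the review proceeds, not reconstructed afterwards.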
In short, the hassle involved in writing a systematic review is comparable to that of conducting original research: a detailed protocol needs to be submitted, teams need to be built, funding needs to be specified, etc. It is often said that a top-notch systematic review cannot be conducted without at least three people (one of them an information scientist), at least a year of time, and at least 100,000 US dollars.
WHY ARE SYSTEMATIC REVIEWS SO COMPLEX?
It may seem odd that so much time, money, and effort could be poured into writing a review that requires no lab space or reagent costs, but once we realize the purpose of systematic reviews, the mystery disappears. A systematic review is written to provide conclusive evidence on an issue. When there’s a veritable information overload in a particular field (not surprising, given that PubMed indexes somewhere in the neighborhood of 27 million citations), experts are expected to conduct a thorough systematic review. Decisions such as whether a particular drug gets prescribed, whether a certain intervention is carried out, and whether a statistical relationship exists between different variables (e.g. Zika virus infection and microcephaly, chlamydia and cervical cancer, urbanization and violence, and so forth) all hinge on the results provided by systematic reviews. Single studies are hardly ever definitive, and even a combination of multiple studies in the form of a regular literature review may suffer from reporting bias. It’s only when a systematic review is conducted, with its thorough analysis of all the extant evidence synthesized into a concrete conclusion, that policymakers and stakeholders take note. Given these ambitious purposes, it’s hardly surprising that a systematic review is so methodologically cautious, labor-intensive, and often costly.
WHAT IS A META-ANALYSIS?
A meta-analysis (literally: an analysis of analyses) is a kind of statistical analysis that is often carried out in systematic reviews of quantitative studies. It is used to statistically combine the results of all the studies included in the systematic review and produce a single, overall result. This is usually done by first tallying the results of the studies (either as means/standard deviations or as odds ratios), giving a “weight” to each study’s data depending on how narrow its confidence interval is (i.e. how “confident” we can be in that study’s conclusion), and then pooling the weighted results into a single overall estimate, conventionally displayed in a forest plot. In addition, a meta-analysis also assesses publication bias by means of a funnel plot, to check whether the systematic review took studies of different outcomes and sample sizes into consideration.
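To make the weighting concrete, here is a minimal Python sketch of inverse-variance, fixed-effect pooling of odds ratios, which is the sort of calculation a forest plot summarizes. The study names and numbers are hypothetical, and real meta-analyses typically use dedicated packages (and often random-effects models) rather than hand-rolled code:

```python
import math

# Hypothetical study data: odds ratios with their 95% confidence intervals.
studies = [
    {"name": "Study A", "or": 1.8, "ci_low": 1.2, "ci_high": 2.7},
    {"name": "Study B", "or": 2.4, "ci_low": 1.1, "ci_high": 5.2},
    {"name": "Study C", "or": 1.5, "ci_low": 0.9, "ci_high": 2.5},
]

def pool_fixed_effect(studies):
    """Inverse-variance fixed-effect pooling on the log odds ratio scale."""
    z = 1.96  # normal quantile for a 95% confidence interval
    total_weight, weighted_sum = 0.0, 0.0
    for s in studies:
        log_or = math.log(s["or"])
        # Recover the standard error from the width of the 95% CI on the log scale.
        se = (math.log(s["ci_high"]) - math.log(s["ci_low"])) / (2 * z)
        w = 1.0 / se**2          # narrower CI -> larger weight
        total_weight += w
        weighted_sum += w * log_or
    pooled_log_or = weighted_sum / total_weight
    pooled_se = math.sqrt(1.0 / total_weight)
    # Transform back from the log scale to report an odds ratio with its CI.
    return (math.exp(pooled_log_or),
            math.exp(pooled_log_or - z * pooled_se),
            math.exp(pooled_log_or + z * pooled_se))

or_pooled, lo, hi = pool_fixed_effect(studies)
print(f"Pooled OR = {or_pooled:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Note how the pooled confidence interval ends up narrower than any individual study’s: combining studies is precisely what buys the extra precision.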
IS META-ANALYSIS ESSENTIAL FOR A SYSTEMATIC REVIEW?
Although a meta-analysis is often done as part of a systematic review, it isn’t an essential component. It may not even be feasible: the data may be qualitative, which precludes rigorous statistical analysis of this type, or the studies may be so methodologically varied that it’s impossible to combine their results into a single statistical output. For example, in this systematic review, the authors mention:
We deemed a meta-analysis inappropriate due to the heterogeneous nature of the available publications.
In such cases, researchers opt instead for writing what is called a “narrative synthesis”.
CAN YOU PUBLISH A SYSTEMATIC REVIEW?
Despite every impression given above, a systematic review doesn’t always have to be so labor- and cost-intensive. It can be, but it doesn’t have to be. For one, there’s a difference between systematic reviews that target sociological questions (e.g. does urbanization lead to violence?) and clinical/environmental ones (e.g. does Vitamin C reduce sore throat?). The latter are almost always simpler than the former, because hard science involves fewer variables, and the data is often easier to interpret. Also, while a systematic review that tries to answer a question with a wide scope (e.g. the association between the causative agents of STDs and cervical cancer) is very complex, one with a narrow scope (e.g. the association between Chlamydia trachomatis and cervical cancer) may not be. In other words, there can be labor-intensive, expensive, high-impact systematic reviews, and there can be single-author, free, medium-impact systematic reviews.
For example, this systematic review has only one author and was published in a journal with an impact factor of 0.982. So while systematic reviews are indeed complex, it’s not fair to say they’re completely out of reach for everyday researchers with day jobs of their own.
So yes, I think it’s safe to say that anyone can indeed write a systematic review, provided they’re willing to give the effort a bit of time and energy. It’s a publication strategy not beyond the reach of any of us.