The University of Melbourne

Which review is that? A guide to review types.

  • Which review is that?
  • Review Comparison Chart
  • Decision Tool
  • Critical Review
  • Integrative Review
  • Narrative Review
  • State of the Art Review
  • Narrative Summary
  • Systematic Review
  • Meta-analysis
  • Comparative Effectiveness Review
  • Diagnostic Systematic Review
  • Network Meta-analysis
  • Prognostic Review
  • Psychometric Review
  • Review of Economic Evaluations
  • Systematic Review of Epidemiology Studies
  • Living Systematic Reviews
  • Umbrella Review
  • Review of Reviews
  • Rapid Review
  • Rapid Evidence Assessment
  • Rapid Realist Review
  • Qualitative Evidence Synthesis
  • Qualitative Interpretive Meta-synthesis
  • Qualitative Meta-synthesis
  • Qualitative Research Synthesis
  • Framework Synthesis - Best-fit Framework Synthesis
  • Meta-aggregation
  • Meta-ethnography
  • Meta-interpretation
  • Meta-narrative Review
  • Meta-summary
  • Thematic Synthesis
  • Mixed Methods Synthesis
  • Narrative Synthesis
  • Bayesian Meta-analysis
  • EPPI-Centre Review
  • Critical Interpretive Synthesis
  • Realist Synthesis - Realist Review
  • Scoping Review
  • Mapping Review
  • Systematised Review
  • Concept Synthesis
  • Expert Opinion - Policy Review
  • Technology Assessment Review

Methodological Review

  • Systematic Search and Review

A methodological review is "a type of systematic secondary research (i.e., research synthesis) which focuses on summarising the state-of-the-art methodological practices of research in a substantive field or topic" (Chong & Reinders, 2021).

Methodological reviews "can be performed to examine any methodological issues relating to the design, conduct and review of research studies and also evidence syntheses" (Munn et al., 2018).

Further Reading/Resources

Clarke, M., Oxman, A. D., Paulsen, E., Higgins, J. P. T., & Green, S. (2011). Appendix A: Guide to the contents of a Cochrane Methodology protocol and review. In Cochrane Handbook for Systematic Reviews of Interventions.

Aguinis, H., Ramani, R. S., & Alabduljader, N. (2023). Best-practice recommendations for producers, evaluators, and users of methodological literature reviews. Organizational Research Methods, 26(1), 46-76. https://doi.org/10.1177/1094428120943281

Jha, C. K., & Kolekar, M. H. (2021). Electrocardiogram data compression techniques for cardiac healthcare systems: A methodological review. IRBM.

References

Chong, S. W., & Reinders, H. (2021). A methodological review of qualitative research syntheses in CALL: The state-of-the-art. System, 103, 102646.

Munn, Z., Stern, C., Aromataris, E., Lockwood, C., & Jordan, Z. (2018). What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Medical Research Methodology, 18(1), 1-9.

  • Last Updated: Feb 9, 2024 1:54 PM
  • URL: https://unimelb.libguides.com/whichreview


How to Write a Literature Review | Guide, Examples, & Templates

Published on January 2, 2023 by Shona McCombes. Revised on September 11, 2023.

What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic.

There are five key steps to writing a literature review:

  • Search for relevant literature
  • Evaluate sources
  • Identify themes, debates, and gaps
  • Outline the structure
  • Write your literature review

A good literature review doesn’t just summarize sources—it analyzes, synthesizes, and critically evaluates to give a clear picture of the state of knowledge on the subject.


Table of contents

  • What is the purpose of a literature review?
  • Examples of literature reviews
  • Step 1 – Search for relevant literature
  • Step 2 – Evaluate and select sources
  • Step 3 – Identify themes, debates, and gaps
  • Step 4 – Outline your literature review’s structure
  • Step 5 – Write your literature review
  • Free lecture slides
  • Other interesting articles
  • Frequently asked questions

What is the purpose of a literature review?

When you write a thesis, dissertation, or research paper, you will likely have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:

  • Demonstrate your familiarity with the topic and its scholarly context
  • Develop a theoretical framework and methodology for your research
  • Position your work in relation to other researchers and theorists
  • Show how your research addresses a gap or contributes to a debate
  • Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic.

Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We’ve written a step-by-step guide that you can follow below.

Literature review guide


Examples of literature reviews

Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.

  • Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” (Theoretical literature review about the development of economic migration theory from the 1950s to today.)
  • Example literature review #2: “Literature review as a research methodology: An overview and guidelines” (Methodological literature review about interdisciplinary knowledge acquisition and production.)
  • Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” (Thematic literature review about the effects of technology on language acquisition.)
  • Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” (Chronological literature review about how the concept of listening skills has changed over time.)

You can also check out our templates with literature review examples and sample outlines.

Step 1 – Search for relevant literature

Before you begin searching for literature, you need a clearly defined topic.

If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research problem and questions.

Make a list of keywords

Start by creating a list of keywords related to your research question. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list as you discover new keywords in the process of your literature search. For example, for a project on social media and body image among Generation Z, your keywords might include:

  • Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
  • Body image, self-perception, self-esteem, mental health
  • Generation Z, teenagers, adolescents, youth

Search for relevant sources

Use your keywords to begin searching for sources. Some useful databases to search for journals and articles include:

  • Your university’s library catalogue
  • Google Scholar
  • Project Muse (humanities and social sciences)
  • Medline (life sciences and biomedicine)
  • EconLit (economics)
  • Inspec (physics, engineering and computer science)

You can also use Boolean operators to help narrow down your search.
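For illustration, the keyword groups listed earlier might be combined into a single search string, using OR to link synonyms within a concept, AND to link the concepts themselves, and * for truncation (the exact terms here are illustrative, and syntax varies by database, so check each database's help pages):

```
("social media" OR Instagram OR TikTok OR Snapchat)
AND ("body image" OR "self-esteem" OR "self-perception")
AND (adolescen* OR teenager* OR "Generation Z")
```

A search like this returns only records that mention at least one term from each of the three concept groups.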

Make sure to read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.

Step 2 – Evaluate and select sources

You likely won’t be able to read absolutely everything that has been written on your topic, so it will be necessary to evaluate which sources are most relevant to your research question.

For each publication, ask yourself:

  • What question or problem is the author addressing?
  • What are the key concepts and how are they defined?
  • What are the key theories, models, and methods?
  • Does the research use established frameworks or take an innovative approach?
  • What are the results and conclusions of the study?
  • How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
  • What are the strengths and weaknesses of the research?

Make sure the sources you use are credible, and make sure you read any landmark studies and major theories in your field of research.

You can use our template to summarize and evaluate sources you’re thinking about using.

Take notes and cite your sources

As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.

It is important to keep track of your sources with citations to avoid plagiarism . It can be helpful to make an annotated bibliography , where you compile full citation information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.


Step 3 – Identify themes, debates, and gaps

To begin organizing your literature review’s argument and structure, be sure you understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:

  • Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
  • Themes: what questions or concepts recur across the literature?
  • Debates, conflicts and contradictions: where do sources disagree?
  • Pivotal publications: are there any influential theories or studies that changed the direction of the field?
  • Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?

This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge.

For example, in a review of literature on social media and body image, you might find that:

  • Most research has focused on young women.
  • There is an increasing interest in the visual aspects of social media.
  • But there is still a lack of robust research on highly visual platforms like Instagram and Snapchat—this is a gap that you could address in your own research.

Step 4 – Outline your literature review’s structure

There are various approaches to organizing the body of a literature review. Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).

Chronological

The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarizing sources in order.

Try to analyze patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.

Thematic

If you have found some recurring central themes, you can organize your literature review into subsections that address different aspects of the topic.

For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.

Methodological

If you draw your sources from different disciplines or fields that use a variety of research methods, you might want to compare the results and conclusions that emerge from different approaches. For example:

  • Look at what results have emerged in qualitative versus quantitative research
  • Discuss how the topic has been approached by empirical versus theoretical scholarship
  • Divide the literature into sociological, historical, and cultural sources

Theoretical

A literature review is often the foundation for a theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts.

You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.

Step 5 – Write your literature review

Like any other academic text, your literature review should have an introduction, a main body, and a conclusion. What you include in each depends on the objective of your literature review.

The introduction should clearly establish the focus and purpose of the literature review.

Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.

As you write, you can follow these tips:

  • Summarize and synthesize: give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: don’t just paraphrase other researchers — add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: use transition words and topic sentences to draw connections, comparisons and contrasts

In the conclusion, you should summarize the key findings you have taken from the literature and emphasize their significance.

When you’ve finished writing and revising your literature review, don’t forget to proofread thoroughly before submitting.

This article has been adapted into lecture slides that you can use to teach your students about writing a literature review.

Scribbr slides are free to use, customize, and distribute for educational purposes.


If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

Frequently asked questions

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarize yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

The literature review usually comes near the beginning of your thesis or dissertation. After the introduction, it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

McCombes, S. (2023, September 11). How to Write a Literature Review | Guide, Examples, & Templates. Scribbr. Retrieved February 15, 2024, from https://www.scribbr.com/dissertation/literature-review/

Shona McCombes

Other students also liked:

  • What Is a Theoretical Framework? | Guide to Organizing
  • What Is a Research Methodology? | Steps & Tips
  • How to Write a Research Proposal | Examples & Templates

Purdue Online Writing Lab (Purdue OWL®), College of Liberal Arts

Writing a Literature Review


This page is brought to you by the OWL at Purdue University. When printing this page, you must include the entire legal notice.

Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.

A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis ). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays). When we say “literature review” or refer to “the literature,” we are talking about the research ( scholarship ) in a given field. You will often see the terms “the research,” “the scholarship,” and “the literature” used mostly interchangeably.

Where, when, and why would I write a lit review?

There are a number of different situations where you might write a literature review, each with slightly different expectations; different disciplines, too, have field-specific expectations for what a literature review is and does. For instance, in the humanities, authors might include more overt argumentation and interpretation of source material in their literature reviews, whereas in the sciences, authors are more likely to report study designs and results in their literature reviews; these differences reflect these disciplines’ purposes and conventions in scholarship. You should always look at examples from your own discipline and talk to professors or mentors in your field to be sure you understand your discipline’s conventions, for literature reviews as well as for any other genre.

A literature review can be a part of a research paper or scholarly article, usually falling after the introduction and before the research methods sections. In these cases, the lit review just needs to cover scholarship that is important to the issue you are writing about; sometimes it will also cover key sources that informed your research methodology.

Lit reviews can also be standalone pieces, either as assignments in a class or as publications. In a class, a lit review may be assigned to help students familiarize themselves with a topic and with scholarship in their field, get an idea of the other researchers working on the topic they’re interested in, find gaps in existing research in order to propose new projects, and/or develop a theoretical framework and methodology for later research. As a publication, a lit review usually is meant to help make other scholars’ lives easier by collecting and summarizing, synthesizing, and analyzing existing research on a topic. This can be especially helpful for students or scholars getting into a new research area, or for directing an entire community of scholars toward questions that have not yet been answered.

What are the parts of a lit review?

Most lit reviews use a basic introduction-body-conclusion structure; if your lit review is part of a larger paper, the introduction and conclusion pieces may be just a few sentences while you focus most of your attention on the body. If your lit review is a standalone piece, the introduction and conclusion take up more space and give you a place to discuss your goals, research methods, and conclusions separately from where you discuss the literature itself.

Introduction:

  • An introductory paragraph that explains what your working topic and thesis are
  • A forecast of key topics or texts that will appear in the review
  • Potentially, a description of how you found sources and how you analyzed them for inclusion and discussion in the review (more often found in published, standalone literature reviews than in lit review sections in an article or research paper)
Body:

  • Summarize and synthesize: Give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: Don’t just paraphrase other researchers – add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically Evaluate: Mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: Use transition words and topic sentences to draw connections, comparisons, and contrasts.

Conclusion:

  • Summarize the key findings you have taken from the literature and emphasize their significance
  • Connect it back to your primary research question

How should I organize my lit review?

Lit reviews can take many different organizational patterns depending on what you are trying to accomplish with the review. Here are some examples:

  • Chronological : The simplest approach is to trace the development of the topic over time, which helps familiarize the audience with the topic (for instance if you are introducing something that is not commonly known in your field). If you choose this strategy, be careful to avoid simply listing and summarizing sources in order. Try to analyze the patterns, turning points, and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred (as mentioned previously, this may not be appropriate in your discipline — check with a teacher or mentor if you’re unsure).
  • Thematic : If you have found some recurring central themes that you will continue working with throughout your piece, you can organize your literature review into subsections that address different aspects of the topic. For example, if you are reviewing literature about women and religion, key themes can include the role of women in churches and the religious attitude towards women.
  • Methodological : If your sources come from different disciplines or fields that use a variety of research methods, you might want to compare the results and conclusions that emerge from different approaches, for example:
  • Qualitative versus quantitative research
  • Empirical versus theoretical scholarship
  • Divide the research by sociological, historical, or cultural sources
  • Theoretical : In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts. You can argue for the relevance of a specific theoretical approach or combine various theoretical concepts to create a framework for your research.

What are some strategies or tips I can use while writing my lit review?

Any lit review is only as good as the research it discusses; make sure your sources are well-chosen and your research is thorough. Don’t be afraid to do more research if you discover a new thread as you’re writing. More info on the research process is available in our "Conducting Research" resources .

As you’re doing your research, create an annotated bibliography (see our page on this type of document). Much of the information used in an annotated bibliography can also be used in a literature review, so you’ll be not only partially drafting your lit review as you research, but also developing your sense of the larger conversation going on among scholars, professionals, and any other stakeholders in your topic.

Usually you will need to synthesize research rather than just summarizing it. This means drawing connections between sources to create a picture of the scholarly conversation on a topic over time. Many student writers struggle to synthesize because they feel they don’t have anything to add to the scholars they are citing; here are some strategies to help you:

  • It often helps to remember that the point of these kinds of syntheses is to show your readers how you understand your research, to help them read the rest of your paper.
  • Writing teachers often say synthesis is like hosting a dinner party: imagine all your sources are together in a room, discussing your topic. What are they saying to each other?
  • Look at the in-text citations in each paragraph. Are you citing just one source for each paragraph? This usually indicates summary only. When you have multiple sources cited in a paragraph, you are more likely to be synthesizing them (not always, but often).

The most interesting literature reviews are often written as arguments (again, as mentioned at the beginning of the page, this is discipline-specific and doesn’t work for all situations). Often, the literature review is where you can establish your research as filling a particular gap or as relevant in a particular way. You have some chance to do this in your introduction in an article, but the literature review section gives a more extended opportunity to establish the conversation in the way you would like your readers to see it. You can choose the intellectual lineage you would like to be part of and whose definitions matter most to your thinking (mostly humanities-specific, but this goes for sciences as well). In addressing these points, you argue for your place in the conversation, which tends to make the lit review more compelling than a simple reporting of other sources.

Sage Research Methods Community

Methods and the Literature Review

by Janet Salmons, PhD Research Community Manager for Sage Research Methods Community

How do you know where you're going if you don't know where you've been?


Reviewing literature to situate it in a research tradition is an essential step in the process of planning and designing research. A literature review shows the reader where your research is coming from, and how it is situated in relation to prior scholarship. Attention is necessarily given to literature about the research problem, which places the study in one or more disciplines. To situate the study within a scholarly milieu, we must also review literature about methodology and methods.

In other words, the literature review should include not only what has been studied, but how it was studied. Has this problem been largely investigated using quantitative methods? If so, how might a qualitative study contribute to the field? Has the problem generally been studied from individual perspectives, using interviews to collect data? If so, how might the problem be studied from a group, organizational, or societal perspective? Or how might it be studied using Big Data and computational methods? If you are studying a problem in a sociological discipline, might it be useful to look at how the problem has been researched in other disciplines, such as education or public policy? By exploring these kinds of questions you can find new perspectives on how to study your selected problem, and substantiate the approach you decide to take.

What is a literature review?

Literature reviews are foundational to research proposals, theses and dissertations, as well as scholarly books and articles. In addition to their place within larger pieces of writing, literature reviews are also published as a type of free-standing article. (See Designing Review Research.)

Let's use a simple definition for a literature review: "a systematic synthesis of previous work around a particular topic" (Salkind, 2010, p. 726). The key words suggest questions we need to answer:

What systems will we use to find, organize, and analyze the literature?

How will we synthesize the literature?

Which previous work should be included or excluded?

What topics are relevant?

Specific to a review of methodology and methods literature, we might ask:

What systems will we use to find, organize, and analyze literature about the methodology and methods central to the study? How can we find the respected methodologists and theorists whose work is most relevant to our research? Does it make sense to organize this literature chronologically, or thematically?

Which previous work should be included or excluded? Whose thinking should inform the approaches we use in our own research? What disciplines should be included, beyond our own field of study? Who are the respected methodologists and theorists in this type of research? Do these thinkers agree or disagree, and if so, why? Are there multiple schools of thought that we should consider? How do we define "previous" in terms of time frame: how far back should we go? How do we think about previous work when we are using emerging methods?

How will we synthesize the literature? What critical questions should we ask? How can we pull essential concepts, theoretical constructs, and methods practices from different sources to substantiate design decisions regarding the research approach?

What topics are relevant? How do I find methodological and methods literature that relates to the proposed study?

What types of sources should inform the methodological and methods section of a literature review?

While literature reviews typically draw on articles from peer-reviewed journals, the methodological and methods section of a review may also contain books. The lengthy, in-depth descriptions of methodologies and methods are more often presented in book form. When reading journal articles about your topic of study, check the reference list for sources about the methods used.

System or chaos?


The first key word in Salkind's definition is system. Let's start by exploring systems we can establish to find, organize, and analyze literature about the methodology and methods central to the study. Without a workable system, the other tasks associated with the literature review can become more difficult to achieve. I've experienced this dilemma: “I know I've read just the right resource, but where is it in the midst of piles and files of documents?” Here are some open access chapters and companion sites for a few relevant SAGE books about literature reviews.

Systematic approaches to a successful literature review (Booth, Sutton, & Papaioannou, 2016). Available resources include a Companion site and Chapter 2 . This book includes steps for students and experienced scholars, with discussion of a variety of literature review types.

Conducting Research Literature Reviews: From the Internet to Paper (Fink, 2019). Available resources include Chapters 1 and 2. This edition includes recommendations for organizing literature reviews using online resources. Chapter 2 discusses how to look for methodological quality in the sources you select.

Doing Your Literature Review: Traditional and Systematic Techniques (Jesson, Matheson, & Lacey, 2011). Chapter 1 is available to download. It offers foundational definitions and exercises you can try.

Salkind, N. J. (2010). Literature review. In Encyclopedia of Research Design. Thousand Oaks, CA: Sage.

More Sage Research Methods Community posts about the literature review

Methods Literature as Part of a Review

The process for researching literature on research methods differs somewhat from the process used for researching literature about the topic, problem, or questions. What should we keep in mind when selecting methods literature?

The importance of critical appraisal

Critical appraisal of research papers is part of everyday academic life, whether you are a student completing an assignment, a researcher conducting a literature review, or a teacher preparing a lecture. Learn more from this post.

Analyzing Published Literature Across Paradigms

Missed the Methodspace webinar “Analyzing Published Literature Across Paradigms and Disciplines”? View it here and find related resources.

Three Benefits of a Literature Review

Bondy Valdovinos Kaye, co-researcher for “The impact of algorithmically driven recommendation systems on music consumption and production - a literature review,” offers insights about the literature review process.

Literature Review or State of the Science Review?

What is the difference between a literature review and a state of the science review? See an article by Dr. Joan Dodgson.

Computational Literature Reviews

In this interview David Antons and Oliver Salge discuss the roles humans and machines can take to plan and conduct computational literature reviews.

Citation Context Analysis

In this interview Dr. Marc Anderson explains how and why to use citation context analysis to track the impact of scholarly publications over time.

Systematicity in Literature Reviews

Dr. Brian Fox explains why systematicity is important in literature reviews.

Theorizing Through Literature Reviews: The Miner-Prospector Continuum

In the article “Theorizing Through Literature Reviews: The Miner-Prospector Continuum” Dermot Breslin and Caroline Gatrell pose an intriguing question: do you approach the literature review as a miner or as a prospector? They discuss options in an interview.

Sample Selection in Systematic Literature Reviews

How do you decide what literature you need for a review? See this post featuring an interview with Martin Hiebl and a related open-access article about sample selection.

Partnering Up: Including Managers (and Practitioners) as Research Partners in Systematic Reviews

Garima Sharma and Pratima (Tima) Bansal discuss ways to engage with managers, professionals, or practitioners to learn from the literature using a systematic review process.

Learn about Methodological Literature Reviews

In this interview Dr. Herman Aguinis and Dr. Ravi Ramani discuss the article they wrote with Dr. Nawaf Alabduljader, “Best-Practice Recommendations for Producers, Evaluators, and Users of Methodological Literature Reviews.”

Synthesizing Methodological Literature

Find tips for organizing and synthesizing methodological sources for your literature review.

Types of Inquiry with Review Research

How can you use published literature as data? In this Methodspace interview Dr. David Denyer explains how and why to use review research.

Methods and the Literature Review

A critical step in planning and designing research entails reviewing literature to situate the study in a research tradition.

Literature Reviews and Review Research

Want to design and plan a review study? Find open-access examples of systematic reviews, meta-syntheses, meta-analyses, and integrative literature reviews. Also, learn more with related SAGE books.

Designing Literature Reviews as a Research Project

Review research has become a credible and legitimate form of scientific inquiry in various fields of science including management and organizational sciences. Find open-access articles with practical advice about planning a review study.

Ethics and Your Literature Review

Dr. Helen Kara offers suggestions for taking an ethical approach to your literature review.

About the Lit Review: SRM to the Rescue!

Designing Qualitative Research with the Total Quality Framework



How to Write, Evaluate, and Use Methodological Literature Reviews



One of the things we’ve learned from our sister website MethodSpace is the genuine hunger among social and behavioral scientists for how-to information. Today Business and Management INK brings you an essay from the president-elect of the Academy of Management, Herman Aguinis of the George Washington University Business School’s Department of Management; Ravi S. Ramani, an assistant professor of organizational behavior and human resource management at Morgan State University; and Nawaf Alabduljader, professor of management at Kuwait University-College of Business Administration, that, as its title proclaims, offers best practices for methodological literature reviews.

The essay is based on the trio’s paper in the journal Organizational Research Methods, and both the paper and the essay make clear that, despite some established conventions and a pressing need for clear, thorough, and transparent methodological literature reviews to avoid questionable research practices, mastery of this technique is widely lacking and concrete advice is necessary. The paper’s abstract and video abstract appear below, followed by the essay. (And yes, we’re reposting this essay on MethodSpace!)

We categorized and content-analyzed 168 methodological literature reviews published in 42 management and applied psychology journals. First, our categorization uncovered that the majority of published reviews (i.e., 85.10%) belong in three categories (i.e., critical, narrative, and descriptive reviews), which points to opportunities and promising directions for additional types of methodological literature reviews in the future (e.g., meta-analytic and umbrella reviews). Second, our content analysis uncovered implicit features of published methodological literature reviews. Based on the results of our content analysis, we created a checklist of actionable recommendations regarding 10 components to include to enhance a methodological literature review’s thoroughness, clarity, and ultimately, usefulness. Third, we describe choices and judgment calls in published reviews and provide detailed explications of exemplars that illustrate how those choices and judgment calls can be made explicit. Overall, our article offers recommendations that are useful for three methodological literature review stakeholder groups: producers (i.e., potential authors), evaluators (i.e., journal editors and reviewers), and users (i.e., substantive researchers interested in learning about a particular methodological issue and individuals tasked with training the next generation of scholars).

Groundbreaking discoveries in business management and many other fields are fueled by methodological innovations. From increases in computing power and the emergence of Big Data to the introduction of sophisticated new analytic techniques, we now have a smorgasbord of new methods and techniques to collect and analyze data. But, the picture is not all rosy. There is an overwhelming and even frantic pace of methodological developments, decreased funding for doctoral programs and researcher training in general, and increased competition to publish in high-prestige journals from around the world. As a result, many researchers find themselves struggling to stay up-to-date on methods-related issues.


So how do we researchers keep up with these advances? We often rely on methodological literature reviews. These are reviews that summarize a methodological issue and provide best-practice recommendations. Simply put, they describe “how to do things right.” They are valuable because researchers use them to understand how to apply new and existing methods, instructors rely on them to train doctoral students, and they are a resource for journal editors and reviewers to identify potentially questionable research practices (QRPs) in the manuscripts they evaluate. But, despite their popularity, the details of how to write and also evaluate methodological literature reviews are not clear. Our project started when we realized there was a need for guidance on how to write and evaluate methodological literature reviews.

Our article published in Organizational Research Methods titled “Best-Practice Recommendations for Producers, Evaluators, and Users of Methodological Literature Reviews” addresses this need. We spell out what goes into preparing a methodological literature review and what criteria can be used to judge them. We describe different types of reviews and their purposes, and uncover hidden features in published methodological literature reviews. Also, on a very practical note, our article includes a table with a checklist of actionable recommendations on how to write and evaluate a methodological literature review.

Our analysis of 168 methodological literature reviews published in 42 different journals revealed some interesting and surprising facts. For example, there are many different ways to write a successful review—one that is published in a highly visible and prestigious journal. And, despite increased attention to issues of research transparency and reproducibility, clear reporting about the process used to conduct a review is mostly absent. We also recommend specific tools (e.g., tables, specific type of language) that can be used to increase readability and accessibility, but which are underutilized.

Overall, our article helps improve the thoroughness, clarity, and usefulness of methodological literature reviews. Authors, journal editors, and reviewers can use our recommendations to enhance transparency and clarity, and to avoid potential QRPs. Readers of methodological literature reviews can use our recommendations to evaluate the trustworthiness of a review, and perhaps even to author one themselves!

We look forward to hearing your reactions to our article and hope that it will serve as a catalyst to further enhance the quality of literature reviews of methods in management, applied psychology, and other fields.


Herman Aguinis, Ravi S. Ramani, and Nawaf Alabduljader

Herman Aguinis is the Avram Tucker Distinguished Scholar, professor of management, and chairperson of the Department of Management at The George Washington University School of Business. He has been elected to the presidency track of the Academy of Management. Ravi S. Ramani is an assistant professor of organizational behavior and human resource management in the Business Administration department of the Earl G. Graves School of Business and Management at Morgan State University. Nawaf Alabduljader is a professor of management at Kuwait University-College of Business Administration.

Related Articles

Good Governance, Strong Trust: Building Community Among an Australian City Rebuilding Project

Good Governance, Strong Trust: Building Community Among an Australian City Rebuilding Project

A Black History Addendum to the American Music Industry

A Black History Addendum to the American Music Industry

Organizational Learning in Remote Teams: Harnessing the Power of Games for Meaningful Online Exchanges

Organizational Learning in Remote Teams: Harnessing the Power of Games for Meaningful Online Exchanges

Environmental and Social Sustainability Methods in Online and In-Person Shopping

Environmental and Social Sustainability Methods in Online and In-Person Shopping

Revitalizing Entrepreneurship to Benefit Low-Income Communities

Revitalizing Entrepreneurship to Benefit Low-Income Communities

While entrepreneurship scholarship increasingly illustrates the limits of an individualized approach in commercial businesses, this thinking has not yet filtered through to how we strategize entrepreneurship in low income-areas.

The Key to Dismantling Oppressive Global Systems

The Key to Dismantling Oppressive Global Systems

In this article, Nazarina Jamil, Maria Humphries-Kil, and Kahurangi Dey explore Paulo Freire’s call for responsibility for those who are marginalized and his Pedagogy of Hope to encourage action and inspiration around the dismantling of oppressive global systems.

Using Affective Displays to Predict Customer Satisfaction

Using Affective Displays to Predict Customer Satisfaction

In this article, Shelly Ashtar reflects on her longstanding interest in service-related work and how it connects to her research interest in customer satisfaction. Ashtar explores this topic with collaborators Galit B. Yom-Tov, Anat Rafaeli and Jochen Wirtz in “Affect-as-Information: Customer and Employee Affective Displays as Expeditious Predictors of Customer Satisfaction,” in the Journal of Service Research.

guest

This site uses Akismet to reduce spam. Learn how your comment data is processed .

Electronic Resources and Libraries 19th Annual Conference

Health science library group conference 2024, 2024 western political science association annual meeting, virtual: 2024 british sociological association annual conference.

methodological review of the literature

Join Social Science Space in marking Black History Month in the U.S.

Imagine a graphic representation of your research's impact on public policy that you could share widely: Sage Policy Profiles

Customize your experience

Select your preferred categories.

  • Announcements

Communication

Higher education reform, open access, recent appointments, research ethics, interdisciplinarity, international debate.

  • Academic Funding

Public Engagement

  • Recognition

Presentations

Science & social science, social science bites, the data bulletin.

Social, Behavioral Scientists Eligible to Apply for NSF S-STEM Grants

Social, Behavioral Scientists Eligible to Apply for NSF S-STEM Grants

Solicitations are now being sought for the National Science Foundation’s Scholarships in Science, Technology, Engineering, and Mathematics program, and in an unheralded […]

With COVID and Climate Change Showing Social Science’s Value, Why Cut it Now?

With COVID and Climate Change Showing Social Science’s Value, Why Cut it Now?

What are the three biggest challenges Australia faces in the next five to ten years? What role will the social sciences play in resolving these challenges? The Academy of the Social Sciences in Australia asked these questions in a discussion paper earlier this year. The backdrop to this review is cuts to social science disciplines around the country, with teaching taking priority over research.

Testing-the-Waters Policy With Hypothetical Investment: Evidence From Equity Crowdfunding

Testing-the-Waters Policy With Hypothetical Investment: Evidence From Equity Crowdfunding

While fundraising is time-consuming and entails costs, entrepreneurs might be tempted to “test the water” by simply soliciting investors’ interest before going through the lengthy process. Digitalization of finance has made it possible for small business to run equity crowdfunding campaigns, but also to initiate a TTW process online and quite easily.

SSRC Links with U.S. Treasury on Evaluation Projects

SSRC Links with U.S. Treasury on Evaluation Projects

Thanks to a partnership between the SSRC and the US Department of the Treasury, two new research opportunities in program evaluation – the Homeowner Assistance Fund Project and the State and Local Fiscal Recovery Funds Project – have opened.

NSF Responsible Tech Initiative Looking at AI, Biotech and Climate

NSF Responsible Tech Initiative Looking at AI, Biotech and Climate

The U.S. National Science Foundation’s new Responsible Design, Development, and Deployment of Technologies (ReDDDoT) program supports research, implementation, and educational projects for multidisciplinary, multi-sector teams

Gabe Miller Leaving CFHSS for Universities Canada

Gabe Miller Leaving CFHSS for Universities Canada

Gabriel Miller, currently the president and chief executive officer of the Canadian Federation for the Humanities and Social Sciences, has been named the president and chief executive officer of Universities Canada effective March 18.

Big Think Podcast Series Launched by Canadian Federation of Humanities and Social Sciences

Big Think Podcast Series Launched by Canadian Federation of Humanities and Social Sciences

The Canadian Federation of Humanities and Social Sciences has launched the Big Thinking Podcast, a show series that features leading researchers in the humanities and social sciences in conversation about the most important and interesting issues of our time.

The We Society Explores Intersectionality and Single Motherhood

The We Society Explores Intersectionality and Single Motherhood

In a recently released episode of The We Society podcast, Ann Phoenix, a psychologist at University College London’s Institute of Education, spoke […]

The Social Science Podcast Guide

The Social Science Podcast Guide

This compilation of podcast series covers a host of topics and focus on specific subjects that pertain to the social and behavioral […]

New Report Finds Social Science Key Ingredient in Innovation Recipe

New Report Finds Social Science Key Ingredient in Innovation Recipe

A new report from Britain’s Academy of Social Sciences argues that the key to success for physical science and technology research is a healthy helping of relevant social science.

Too Many ‘Gray Areas’ In Workplace Culture Fosters Racism And Discrimination

Too Many ‘Gray Areas’ In Workplace Culture Fosters Racism And Discrimination

The new president of the American Sociological Association spent more than 10 years interviewing over 200 Black workers in a variety of roles – from the gig economy to the C-suite. I found that many of the problems they face come down to organizational culture. Too often, companies elevate diversity as a concept but overlook the internal processes that disadvantage Black workers.

Harnessing the Power of Social Learning in Teaching Marketing

Harnessing the Power of Social Learning in Teaching Marketing

Dr. Tracy L. Tuten explores the power of social learning in teaching marketing, emphasizing the importance of collaboration, resource sharing, and the use of platforms like Perusall to foster a sense of community and enhance the educational experience, based on an online reading group of her book ‘Principles of Marketing for a Digital Age.’

A Social Scientist Looks at the Irish Border and Its Future

A Social Scientist Looks at the Irish Border and Its Future

‘What Do We Know and What Should We Do About the Irish Border?’ is a new book from Katy Hayward that applies social science to the existing issues and what they portend.

Brexit and the Decline of Academic Internationalism in the UK

Brexit and the Decline of Academic Internationalism in the UK

Brexit seems likely to extend the hostility of the UK immigration system to scholars from European Union countries — unless a significant change of migration politics and prevalent public attitudes towards immigration politics took place in the UK. There are no indications that the latter will happen anytime soon.

Brexit and the Crisis of Academic Cosmopolitanism

Brexit and the Crisis of Academic Cosmopolitanism

A new report from the Royal Society about the effects on Brexit on science in the United Kingdom has our peripatetic Daniel Nehring mulling the changes that will occur in higher education and academic productivity.

Good Governance, Strong Trust: Building Community Among an Australian City Rebuilding Project

In this article, co-authors Johan Ninan, Stewart Clegg, Ashwin Mahalingam, and Shankar Sankaran reflect on their research interests and the inspiration behind their recent […]

A Black History Addendum to the American Music Industry

The new editor of the case study series on the music industry discusses the history of Black Americans in the recording industry.

Organizational Learning in Remote Teams: Harnessing the Power of Games for Meaningful Online Exchanges

Could we make workplace online exchanges more meaningful, especially in the early weeks of global lockdowns when we still lacked the protocols for online interaction? This was the question the authors set out to investigate.

Marc Augé, 1935-2023: Anthropologist Founder Of ‘Non-Places’

Marc Augé, 1935-2023: Anthropologist Founder Of ‘Non-Places’

French anthropologist Marc Augé, who died on July 24, is renowned for his concept of “non-places”. His 1993 text of the same name describes a reality that is very much relevant to our everyday lives.

Jane M. Simoni Named New Head of OBSSR

Jane M. Simoni Named New Head of OBSSR

Clinical psychologist Jane M. Simoni has been named to head the U.S. National Institutes of Health’s Office of Behavioral and Social Sciences Research

Amitai Etzioni, 1929-2023: Father of Communitarianism

Amitai Etzioni, 1929-2023: Father of Communitarianism

Amitai Etzioni, an Israeli-American sociologist, senior policy adviser, educator and father of the communitarianism philosophy, died May 31. He was 94.

National Academies Seeks Experts to Assess 2020 U.S. Census

National Academies Seeks Experts to Assess 2020 U.S. Census

The National Academies’ Committee on National Statistics seeks nominations for members of an ad hoc consensus study panel — sponsored by the U.S. Census Bureau — to review and evaluate the quality of the 2020 Census.

Will the 2020 Census Be the Last of Its Kind?

Will the 2020 Census Be the Last of Its Kind?

Could the 2020 iteration of the United States Census, the constitutionally mandated count of everyone present in the nation, be the last of its kind?

Will We See A More Private, But Less Useful, Census?

Will We See A More Private, But Less Useful, Census?

Census data can be pretty sensitive – it’s not just how many people live in a neighborhood, a town, a state or […]

The Use of Bad Data Reveals a Need for Retraction in Governmental Data Bases

The Use of Bad Data Reveals a Need for Retraction in Governmental Data Bases

Retractions are generally framed as a negative: as science not working properly, as an embarrassment for the institutions involved, or as a flaw in the peer review process. They can be all those things. But they can also be part of a story of science working the right way: finding and correcting errors, and publicly acknowledging when information turns out to be incorrect.

Safiya Noble on Search Engines

Safiya Noble on Search Engines

In an age where things like facial recognition or financial software algorithms are shown to uncannily reproduce the prejudices of their creators, this was much less obvious earlier in the century, when researchers like Safiya Umoja Noble were dissecting search engine results and revealing the sometimes appalling material they were highlighting.

Did Turing Miss the Point? Should He Have Thought of the Limerick Test?

Did Turing Miss the Point? Should He Have Thought of the Limerick Test?

David Canter is horrified by the power of readily available large language technology.

Research Integrity Should Not Mean Its Weaponization

Research Integrity Should Not Mean Its Weaponization

Commenting on the trend for the politically motivated forensic scrutiny of the research records of academics, Till Bruckner argues that singling out individuals in this way has a chilling effect on academic freedom and distracts from efforts to address more important systemic issues in research integrity.

What Do We Know about Plagiarism These Days?

What Do We Know about Plagiarism These Days?

In the following Q&A, Roger J. Kreuz, a psychology professor who is working on a manuscript about the history and psychology of plagiarism, explains the nature and prevalence of plagiarism and the challenges associated with detecting it in the age of AI.

The Silver Lining in Bulk Retractions

The Silver Lining in Bulk Retractions

This is the opening from a longer post by Adya Misra, the research integrity and inclusion manager at Social Science Space’s parent, Sage. The full post, which addresses the hows and the whys of bulk retractions in Sage’s academic journals, appears at Retraction Watch.

Webinar: Responsible Design, Development, and Deployment of Technologies (ReDDDoT)

Webinar: Responsible Design, Development, and Deployment of Technologies (ReDDDoT)

Ensuring responsibility in the design and development in technologies is of growing concern, especially in a world filled to the brim with […]

Workshop: Advancing Antiracism, Diversity, Equity, and Inclusion in STEMM Organizations

Workshop: Advancing Antiracism, Diversity, Equity, and Inclusion in STEMM Organizations

The National Academies of Sciences, Engineering, and Medicine are private and nonprofit organizations that provide advice on some of the world’s greatest […]

SCECSAL 2024: Transforming Libraries, Empowering Communities

SCECSAL 2024: Transforming Libraries, Empowering Communities

Standing Conference of Eastern, Central and Southern African Library and Information Associations (SCECSAL) is a regional forum for information and library associations […]

Returning Absentee Ballots during the 2020 Election – A Surprise Ending?

Returning Absentee Ballots during the 2020 Election – A Surprise Ending?

One of the most heavily contested voting-policy issues in the 2020 election, in both the courts and the political arena, was the deadline […]

Overconsumption or a Move Towards Minimalism?

Overconsumption or a Move Towards Minimalism?

(Over)consumption, climate change and working from home. These are a few of the concerns at the forefront of consumers’ minds and three […]

To Better Serve Students and Future Workforces, We Must Diversify the Syllabi

To Better Serve Students and Future Workforces, We Must Diversify the Syllabi

Ellen Hutti and Jenine Harris have quantified the extent to which female authors are represented in assigned course readings. In this blog post, they emphasize that more equal exposure to experts with whom they can identify will better serve our students and foster the growth, diversity and potential of this future workforce. They also present one repository currently being built for readings by underrepresented authors that are Black, Indigenous or people of color.

Addressing the United Kingdom’s Lack of Black Scholars

Addressing the United Kingdom’s Lack of Black Scholars

In the UK, out of 164 university vice-chancellors, only two are Black. Professor David Mba was recently appointed as the first Black vice-chancellor […]

When University Decolonization in Canada Mends Relationships with Indigenous Nations and Lands

When University Decolonization in Canada Mends Relationships with Indigenous Nations and Lands

Community-based work and building and maintaining relationships with nations whose land we live upon is at the heart of what Indigenizing is. It is not simply hiring more faculty, or putting the titles “decolonizing” and “Indigenizing” on anything that might connect to Indigenous peoples.

Connecting Legislators and Researchers, Leads to Policies Based on Scientific Evidence

Connecting Legislators and Researchers, Leads to Policies Based on Scientific Evidence

The author’s team is developing ways to connect policymakers with university-based researchers – and studying what happens when these academics become the trusted sources, rather than those with special interests who stand to gain financially from various initiatives.

Your Data Likely Isn’t Best Served in a Pie Chart

Your Data Likely Isn’t Best Served in a Pie Chart

Overall, it is best to use pie charts sparingly, especially when there is a more “digestible” alternative – the bar chart.

Philip Rubin: FABBS’ Accidental Essential Man Linking Research and Policy

Philip Rubin: FABBS’ Accidental Essential Man Linking Research and Policy

As he stands down from a two-year stint as the president of the Federation of Associations in Behavioral & Brain Sciences, or FABBS, Social Science Space took the opportunity to download a fraction of the experiences of cognitive psychologist Philip Rubin, especially his experiences connecting science and policy.

Infrastructure

Using Forensic Anthropology to Identify the Unknown Dead

Using Forensic Anthropology to Identify the Unknown Dead

Anthropology is the holistic study of human culture, environment and biology across time and space. Biological anthropology focuses on the physiological aspects of people and our nonhuman primate relatives. Forensic anthropology is a further subspecialty that analyzes skeletal remains of the recently deceased within a legal setting.

How Intelligent is Artificial Intelligence?

How Intelligent is Artificial Intelligence?

Cryptocurrencies are so last year. Today’s moral panic is about AI and machine learning. Governments around the world are hastening to adopt […]

National Academies’s Committee On Law And Justice Seeks Experts

National Academies’s Committee On Law And Justice Seeks Experts

The National Academies of Sciences, Engineering and Medicine is seeking suggestions for experts interested in its Committee on Law and Justice (CLAJ) […]

Kohrra on Netflix – Policing and Everyday Life in Contemporary India

Kohrra on Netflix – Policing and Everyday Life in Contemporary India

Even Social Science Space bloggers occasionally have downtime when they log in to Netflix and crash out. One of my favourite themes […]

Jonathan Breckon On Knowledge Brokerage and Influencing Policy

Jonathan Breckon On Knowledge Brokerage and Influencing Policy

Overton spoke with Jonathan Breckon to learn about knowledge brokerage, influencing policy and the potential for technology and data to streamline the research-policy interface.

Research for Social Good Means Addressing Scientific Misconduct

Research for Social Good Means Addressing Scientific Misconduct

Social Science Space’s sister site, Methods Space, explored the broad topic of Social Good this past October, with guest Interviewee Dr. Benson Hong. Here Janet Salmons and him talk about the Academy of Management Perspectives journal article.

Six Principles for Scientists Seeking Hiring, Promotion, and Tenure

Six Principles for Scientists Seeking Hiring, Promotion, and Tenure

The negative consequences of relying too heavily on metrics to assess research quality are well known, potentially fostering practices harmful to scientific research such as p-hacking, salami science, or selective reporting. To address this systemic problem, Florian Naudet, and collegues present six principles for assessing scientists for hiring, promotion, and tenure.

Latest Golden Goose Award Winners Focused on DNA Applications, and Chickens  

Latest Golden Goose Award Winners Focused on DNA Applications, and Chickens  

Five scientists who received federal funding earlier in their research journeys honored for their unexpected discoveries.

Digital Transformation Needs Organizational Talent and Leadership Skills to Be Successful

Digital Transformation Needs Organizational Talent and Leadership Skills to Be Successful

Who drives digital change – the people of the technology? Katharina Gilli explains how her co-authors worked to address that question.

Book Review: The Oxford Handbook of Creative Industries

Book Review: The Oxford Handbook of Creative Industries

Candace Jones, Mark Lorenzen, Jonathan Sapsed , eds.: The Oxford Handbook of Creative Industries. Oxford: Oxford University Press, 2015. 576 pp. $170.00, […]

There’s Something In the Air…But Is It a Virus? Part 1

There’s Something In the Air…But Is It a Virus? Part 1

The historic Hippocrates has become an iconic figure in the creation myths of medicine. What can the body of thought attributed to him tell us about modern responses to COVID?

The Social Sciences Are Under Attack in Higher Education

The social sciences have been a consistent target for political operatives around the United States in recent years, and recent actions at the state level have opened a new front in the long-running conflict.

Canadian Librarians Suggest Secondary Publishing Rights to Improve Public Access to Research

The Canadian Federation of Library Associations recently proposed providing secondary publishing rights to academic authors in Canada.

Webinar: How Can Public Access Advance Equity and Learning?

The U.S. National Science Foundation and the American Association for the Advancement of Science have teamed up to present a 90-minute online session examining how to balance public access to federally funded research results with an equitable publishing environment.

Open Access in the Humanities and Social Sciences in Canada: A Conversation

Five organizations representing knowledge networks, research libraries, and publishing platforms joined the Federation for the Humanities and Social Sciences to review the present and future of open access, in policy and in practice, in Canada.

Book Review: A Memoir Highlighting Scientific Complexity

In this brief, crisply written memoir, “In a Flight of Starlings: The Wonders of Complex Systems,” Parisi takes the reader on a journey through his scientific life in the realm of complex, disordered systems, from fundamental particles to migratory birds. He argues that science’s struggle to understand and master the universe’s complexity, and especially to communicate it to an ever-more skeptical public, holds the key to humanity’s future well-being.

The Added Value of Latinx and Black Teachers

As the U.S. Congress debates the reauthorization of the Higher Education Act, a new paper in Policy Insights from the Behavioral and Brain Sciences urges lawmakers to focus on provisions aimed at increasing the numbers of Black and Latinx teachers.

A Collection: Behavioral Science Insights on Addressing COVID’s Collateral Effects

To help in decisions surrounding the effects and aftermath of the COVID-19 pandemic, the journal ‘Policy Insights from the Behavioral and Brain Sciences’ offers this collection of articles as a free resource.

Susan Fiske Connects Policy and Research in Print

Psychologist Susan Fiske was the founding editor of the journal Policy Insights from the Behavioral and Brain Sciences. In trying to reach a lay audience with research findings that matter, she counsels stepping a bit outside your academic comfort zone.

Mixed Methods As A Tool To Research Self-Reported Outcomes From Diverse Treatments Among People With Multiple Sclerosis

What does heritage mean to you?

Personal Information Management Strategies in Higher Education

Working Alongside Artificial Intelligence Key Focus at Critical Thinking Bootcamp 2022

SAGE Publishing, the parent of Social Science Space, will hold its Third Annual Critical Thinking Bootcamp on August 9. Learn more and register here.

Watch the Forum: A Turning Point for International Climate Policy

On May 13, the American Academy of Political and Social Science hosted an online seminar, co-sponsored by SAGE Publishing, that featured presentations […]

Event: Living, Working, Dying: Demographic Insights into COVID-19

On Friday, April 23rd, join the Population Association of America and the Association of Population Centers for a virtual congressional briefing. The […]

Involving patients – or abandoning them?

The Covid-19 pandemic seems to be subsiding into a low-level endemic respiratory infection – although the associated pandemics of fear and action […]

Public Policy

Canada’s Federation For Humanities and Social Sciences Welcomes New Board Members

Annie Pilote, dean of the faculty of graduate and postdoctoral studies at the Université Laval, was named chair of the Federation for the Humanities and Social Sciences at its 2023 virtual annual meeting last month. Members also elected Debra Thompson as a new director on the board.

Federal Health and Human Services Department Names Research Integrity Head

After a two-year vacancy, the United States Office of Research Integrity has named a permanent director, Sheila Garrity.

Berggruen Philosophy Prize Awarded to Sociologist Patricia Hill Collins

Patricia Hill Collins, a sociologist and social theorist whose work helped set the stage for theoretical examinations of intersectionality, especially for African-American women, was awarded the 2023 Berggruen Prize for Philosophy and Culture

The Many Wins Represented by Claudia Goldin’s Nobel Prize

Decades of research have seen economic historian Claudia Goldin methodically collate data and archival stories, detective style, to uncover explanations for the rise and fall (and rise again) of women’s paid employment over the centuries

Harvard’s Claudia Goldin Receives Nobel for Work on Gender Labor Gap

Economic historian and labor economist Claudia Goldin on Monday received the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2023, commonly known as the Nobel in economics. The Nobel Committee cited Goldin “for having advanced our understanding of women’s labor market outcomes.”

National Academies Looks at How to Reduce Racial Inequality In Criminal Justice System

To address racial and ethnic inequalities in the U.S. criminal justice system, the National Academies of Sciences, Engineering and Medicine just released “Reducing Racial Inequality in Crime and Justice: Science, Practice and Policy.”

Survey Examines Global Status Of Political Science Profession

The ECPR-IPSA World of Political Science Survey 2023 assesses political science scholars’ viewpoints on the global status of the discipline and the challenges it faces, specifically targeting the phenomena of cancel culture, self-censorship and threats to academic freedom of expression.

Report: Latest Academic Freedom Index Sees Global Declines

The latest update of the global Academic Freedom Index finds improvements in only five countries

The Risks Of Using Research-Based Evidence In Policymaking

With research-based evidence increasingly being seen in policy, we should acknowledge that there are risks that the research or ‘evidence’ used isn’t suitable or can be accidentally misused for a variety of reasons. 

Surveys Provide Insight Into Three Factors That Encourage Open Data and Science

Over a 10-year period Carol Tenopir of DataONE and her team conducted a global survey of scientists, managers and government workers involved in broad environmental science activities about their willingness to share data and their opinion of the resources available to do so (Tenopir et al., 2011, 2015, 2018, 2020). Comparing the responses over that time shows a general increase in the willingness to share data (and thus engage in Open Science).

Unskilled But Aware: Rethinking The Dunning-Kruger Effect

As a math professor who teaches students to use data to make informed decisions, I am familiar with common mistakes people make when dealing with numbers. The Dunning-Kruger effect is the idea that the least skilled people overestimate their abilities more than anyone else. This sounds convincing on the surface and makes for excellent comedy. But in a recent paper, my colleagues and I suggest that the mathematical approach used to show this effect may be incorrect.

Maintaining Anonymity In Double-Blind Peer Review During The Age of Artificial Intelligence

The double-blind review process, adopted by many publishers and funding agencies, plays a vital role in maintaining fairness and unbiasedness by concealing the identities of authors and reviewers. However, in the era of artificial intelligence (AI) and big data, a pressing question arises: can an author’s identity be deduced even from an anonymized paper (in cases where the authors do not advertise their submitted article on social media)?

Hype Terms In Research: Words Exaggerating Results Undermine Findings

The claim that academics hype their research is not news. The use of subjective or emotive words that glamorize, publicize, embellish or exaggerate results and promote the merits of studies has been noted for some time and has drawn criticism from researchers themselves. Some argue hyping practices have reached a level where objectivity has been replaced by sensationalism and manufactured excitement. By exaggerating the importance of findings, writers are seen to undermine the impartiality of science, fuel skepticism and alienate readers.

Five Steps to Protect – and to Hear – Research Participants

Jasper Knight identifies five key issues that underlie working with human subjects in research and which transcend institutional or disciplinary differences.

New Dataset Collects Instances of ‘Contentious Politics’ Around the World

The European Research Council is funding the Global Contentious Politics Dataset, or GLOCON, a state-of-the-art automated database curating information on political events — including confrontations, political turbulence, strikes, rallies, and protests.

Tejendra Pherali on Education and Conflict

Tejendra Pherali, a professor of education, conflict and peace at University College London, researches the intersection of education and conflict around the world.

Dimitris Xygalatas on Ritual

In this Social Science Bites podcast, cognitive anthropologist Dimitris Xygalatas details how ritual often serves a positive purpose for individuals – synchronizing them with their communities or relieving their stress.

Gamification as an Effective Instructional Strategy

Gamification—the use of video game elements such as achievements, badges, ranking boards, avatars, adventures, and customized goals in non-game contexts—is certainly not a new thing.

Harnessing the Tide, Not Stemming It: AI, HE and Academic Publishing

Who will use AI-assisted writing tools — and what will they use them for? The short answer, says Katie Metzler, is everyone and for almost every task that involves typing.

Immigration Court’s Active Backlog Surpasses One Million

In the first post from a series of bulletins on public data that social and behavioral scientists might be interested in, Gary Price links to an analysis from the Transactional Records Access Clearinghouse.

Webinar Discusses Promoting Your Article

The next in SAGE Publishing’s How to Get Published webinar series focuses on promoting your writing after publication. The free webinar is set for November 16 at 4 p.m. BT/11 a.m. ET/8 a.m. PT.

Webinar Examines Open Access and Author Rights

The next in SAGE Publishing’s How to Get Published webinar series honors International Open Access Week (October 24-30). The free webinar is […]

Ping, Read, Reply, Repeat: Research-Based Tips About Breaking Bad Email Habits

At a time when there are so many concerns being raised about always-on work cultures and our right to disconnect, email is the bane of many of our working lives.

Matchmaking Research to Policy: Introducing Britain’s Areas of Research Interest Database

Kathryn Oliver discusses the recent launch of the United Kingdom’s Areas of Research Interest Database, a new tool that promises to link researchers, funders and policymakers more effectively, collaboratively and transparently.

How ChatGPT Could Transform Higher Education 

ChatGPT is by no means a perfect accessory for the modern academic – but it might just get there.

Watch The Lecture: The ‘E’ In Science Stands For Equity

According to the National Science Foundation, the percentage of American adults with a great deal of trust in the scientific community dropped […]

Watch a Social Scientist Reflect on the Russian Invasion of Ukraine

“It’s very hard,” explains Sir Lawrence Freedman, “to motivate people when they’re going backwards.”

Dispatches from Social and Behavioral Scientists on COVID

Has the ongoing COVID-19 pandemic impacted how social and behavioral scientists view and conduct research? If so, how exactly? And what are […]

New Thought Leadership Webinar Series Opens with Regional Looks at Research Impact

Research impact will be the focus of a new webinar series from Epigeum, which provides online courses for universities and colleges. The […]

Watch! Methodspace Roundtables Examine Threats To Intellectual And Academic Freedom

Janet Salmons, the research community director of our sister site, Sage Methodspace, coordinated a series of research roundtables to discuss the obstacles facing academic freedom and how to navigate them.


Duke University Libraries

Literature Reviews

  • Types of reviews
  • Getting started

Types of reviews and examples

Choosing a review type.

  • 1. Define your research question
  • 2. Plan your search
  • 3. Search the literature
  • 4. Organize your results
  • 5. Synthesize your findings
  • 6. Write the review
  • Thompson Writing Studio
  • Need to write a systematic review?


Overview of types of literature reviews


  • Literature (narrative)
  • Rapid
  • Umbrella
  • Scoping / Evidence map
  • Systematic
  • Meta-analysis

Literature (narrative) review

Characteristics:

  • Provides examination of recent or current literature on a wide range of subjects
  • Varying levels of completeness / comprehensiveness, non-standardized methodology
  • May or may not include comprehensive searching, quality assessment or critical appraisal

Mitchell, L. E., & Zajchowski, C. A. (2022). The history of air quality in Utah: A narrative review. Sustainability, 14(15), 9653. https://doi.org/10.3390/su14159653

Rapid review

  • Assessment of what is already known about an issue
  • Similar to a systematic review but conducted within a time-constrained setting
  • Typically employs methodological shortcuts, increasing the risk of introducing bias; includes a basic level of quality assessment
  • Best suited for issues needing quick decisions and solutions (e.g., policy recommendations)

Learn more about the method:

Khangura, S., Konnyu, K., Cushman, R., Grimshaw, J., & Moher, D. (2012). Evidence summaries: The evolution of a rapid review approach. Systematic Reviews, 1(1), 1-9. https://doi.org/10.1186/2046-4053-1-10

Virginia Commonwealth University Libraries. (2021). Rapid Review Protocol.

Quarmby, S., Santos, G., & Mathias, M. (2019). Air quality strategies and technologies: A rapid review of the international evidence. Sustainability, 11(10), 2757. https://doi.org/10.3390/su11102757

Umbrella review

  • Compiles evidence from multiple reviews into one document
  • Often defines a broader question than is typical of a traditional systematic review

Choi, G. J., & Kang, H. (2022). The umbrella review: A useful strategy in the rain of evidence. The Korean Journal of Pain, 35(2), 127–128. https://doi.org/10.3344/kjp.2022.35.2.127

Aromataris, E., Fernandez, R., Godfrey, C. M., Holly, C., Khalil, H., & Tungpunkom, P. (2015). Summarizing systematic reviews: Methodological development, conduct and reporting of an umbrella review approach. International Journal of Evidence-Based Healthcare, 13(3), 132–140. https://doi.org/10.1097/XEB.0000000000000055

Rojas-Rueda, D., Morales-Zamora, E., Alsufyani, W. A., Herbst, C. H., Al Balawi, S. M., Alsukait, R., & Alomran, M. (2021). Environmental risk factors and health: An umbrella review of meta-analyses. International Journal of Environmental Research and Public Health, 18(2), 704. https://doi.org/10.3390/ijerph18020704

Scoping review / Evidence map

  • Main purpose is to map out and categorize existing literature and identify gaps in the literature
  • Search comprehensiveness is determined by time/scope constraints; can take longer than a systematic review
  • No formal quality assessment or critical appraisal

Learn more about the methods:

Arksey, H., & O'Malley, L. (2005). Scoping studies: Towards a methodological framework. International Journal of Social Research Methodology, 8(1), 19-32. https://doi.org/10.1080/1364557032000119616

Levac, D., Colquhoun, H., & O’Brien, K. K. (2010). Scoping studies: Advancing the methodology. Implementation Science, 5, 69. https://doi.org/10.1186/1748-5908-5-69

Miake-Lye, I. M., Hempel, S., Shanman, R., & Shekelle, P. G. (2016). What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products. Systematic Reviews, 5(1), 1-21. https://doi.org/10.1186/s13643-016-0204-x

Example:

Rahman, A., Sarkar, A., Yadav, O. P., Achari, G., & Slobodnik, J. (2021). Potential human health risks due to environmental exposure to nano- and microplastics and knowledge gaps: A scoping review. Science of the Total Environment, 757, 143872. https://doi.org/10.1016/j.scitotenv.2020.143872

Systematic review

  • Seeks to systematically search for, appraise, and synthesize research evidence
  • Adheres to strict guidelines, protocols, and frameworks
  • Time-intensive; often takes months to a year or more to complete
  • The most commonly referenced type of evidence synthesis; sometimes mistakenly used as a blanket term for other types of reviews

Gascon, M., Triguero-Mas, M., Martínez, D., Dadvand, P., Forns, J., Plasència, A., & Nieuwenhuijsen, M. J. (2015). Mental health benefits of long-term exposure to residential green and blue spaces: A systematic review. International Journal of Environmental Research and Public Health, 12(4), 4354–4379. https://doi.org/10.3390/ijerph120404354

Meta-analysis

  • Statistical technique for combining the results of quantitative studies to provide a more precise estimate of an effect
  • Aims for exhaustive, comprehensive searching
  • Quality assessment may determine inclusion/exclusion criteria
  • May be conducted independently or as part of a systematic review

Berman, N. G., & Parker, R. A. (2002). Meta-analysis: Neither quick nor easy. BMC Medical Research Methodology, 2(1), 10. https://doi.org/10.1186/1471-2288-2-10

Hites, R. A. (2004). Polybrominated diphenyl ethers in the environment and in people: A meta-analysis of concentrations. Environmental Science & Technology, 38(4), 945–956. https://doi.org/10.1021/es035082g

Flowchart of review types

  • Review Decision Tree (Cornell University): For more information, check out Cornell's review methodology decision tree.
  • LitR-Ex.com (Eight literature review methodologies): Learn more about eight different review types (including systematic reviews and scoping reviews), with practical tips about the strengths and weaknesses of different methods.


Organizing Your Social Sciences Research Paper

The Literature Review

A literature review surveys prior research published in books, scholarly articles, and any other sources relevant to a particular issue, area of research, or theory, and by so doing, provides a description, summary, and critical evaluation of these works in relation to the research problem being investigated. Literature reviews are designed to provide an overview of sources you have used in researching a particular topic and to demonstrate to your readers how your research fits within existing scholarship about the topic.

Fink, Arlene. Conducting Research Literature Reviews: From the Internet to Paper . Fourth edition. Thousand Oaks, CA: SAGE, 2014.

Importance of a Good Literature Review

A literature review may consist of simply a summary of key sources, but in the social sciences, a literature review usually has an organizational pattern and combines both summary and synthesis, often within specific conceptual categories . A summary is a recap of the important information of the source, but a synthesis is a re-organization, or a reshuffling, of that information in a way that informs how you are planning to investigate a research problem. The analytical features of a literature review might:

  • Give a new interpretation of old material or combine new with old interpretations,
  • Trace the intellectual progression of the field, including major debates,
  • Depending on the situation, evaluate the sources and advise the reader on the most pertinent or relevant research, or
  • Usually in the conclusion of a literature review, identify where gaps exist in how a problem has been researched to date.

Given this, the purpose of a literature review is to:

  • Place each work in the context of its contribution to understanding the research problem being studied.
  • Describe the relationship of each work to the others under consideration.
  • Identify new ways to interpret prior research.
  • Reveal any gaps that exist in the literature.
  • Resolve conflicts amongst seemingly contradictory previous studies.
  • Identify areas of prior scholarship to prevent duplication of effort.
  • Point the way in fulfilling a need for additional research.
  • Locate your own research within the context of existing literature [very important].

Fink, Arlene. Conducting Research Literature Reviews: From the Internet to Paper. 2nd ed. Thousand Oaks, CA: Sage, 2005; Hart, Chris. Doing a Literature Review: Releasing the Social Science Research Imagination . Thousand Oaks, CA: Sage Publications, 1998; Jesson, Jill. Doing Your Literature Review: Traditional and Systematic Techniques . Los Angeles, CA: SAGE, 2011; Knopf, Jeffrey W. "Doing a Literature Review." PS: Political Science and Politics 39 (January 2006): 127-132; Ridley, Diana. The Literature Review: A Step-by-Step Guide for Students . 2nd ed. Los Angeles, CA: SAGE, 2012.

Types of Literature Reviews

It is important to think of knowledge in a given field as consisting of three layers. First, there are the primary studies that researchers conduct and publish. Second are the reviews of those studies that summarize and offer new interpretations built from and often extending beyond the primary studies. Third, there are the perceptions, conclusions, opinions, and interpretations that are shared informally among scholars and become part of the body of epistemological traditions within the field.

In composing a literature review, it is important to note that it is often this third layer of knowledge that is cited as "true" even though it often has only a loose relationship to the primary studies and secondary literature reviews. Given this, while literature reviews are designed to provide an overview and synthesis of pertinent sources you have explored, there are a number of approaches you could adopt depending upon the type of analysis underpinning your study.

Argumentative Review This form examines literature selectively in order to support or refute an argument, deeply embedded assumption, or philosophical problem already established in the literature. The purpose is to develop a body of literature that establishes a contrarian viewpoint. Given the value-laden nature of some social science research [e.g., educational reform; immigration control], argumentative approaches to analyzing the literature can be a legitimate and important form of discourse. However, note that they can also introduce problems of bias when they are used to make summary claims of the sort found in systematic reviews [see below].

Integrative Review Considered a form of research that reviews, critiques, and synthesizes representative literature on a topic in an integrated way such that new frameworks and perspectives on the topic are generated. The body of literature includes all studies that address related or identical hypotheses or research problems. A well-done integrative review meets the same standards as primary research in regard to clarity, rigor, and replication. This is the most common form of review in the social sciences.

Historical Review Few things rest in isolation from historical precedent. Historical literature reviews focus on examining research throughout a period of time, often starting with the first time an issue, concept, theory, or phenomenon emerged in the literature, then tracing its evolution within the scholarship of a discipline. The purpose is to place research in a historical context to show familiarity with state-of-the-art developments and to identify the likely directions for future research.

Methodological Review A review does not always focus on what someone said [findings], but on how they came to say it [method of analysis]. Reviewing methods of analysis provides a framework of understanding at different levels [i.e., those of theory, substantive fields, research approaches, and data collection and analysis techniques], showing how researchers draw upon a wide variety of knowledge, from the conceptual level to practical documents used in fieldwork, in areas such as ontological and epistemological considerations, quantitative and qualitative integration, sampling, interviewing, data collection, and data analysis. This approach also helps highlight ethical issues you should be aware of and consider as you go through your own study.

Systematic Review This form consists of an overview of existing evidence pertinent to a clearly formulated research question, which uses pre-specified and standardized methods to identify and critically appraise relevant research, and to collect, report, and analyze data from the studies that are included in the review. The goal is to deliberately document, critically evaluate, and summarize scientifically all of the research about a clearly defined research problem . Typically it focuses on a very specific empirical question, often posed in a cause-and-effect form, such as "To what extent does A contribute to B?" This type of literature review is primarily applied to examining prior research studies in clinical medicine and allied health fields, but it is increasingly being used in the social sciences.

Theoretical Review The purpose of this form is to examine the corpus of theory that has accumulated in regard to an issue, concept, theory, or phenomenon. The theoretical literature review helps to establish what theories already exist, the relationships between them, and to what degree the existing theories have been investigated, and to develop new hypotheses to be tested. Often this form is used to help establish a lack of appropriate theories or reveal that current theories are inadequate for explaining new or emerging research problems. The unit of analysis can focus on a theoretical concept or a whole theory or framework.

NOTE : Most often the literature review will incorporate some combination of types. For example, a review that examines literature supporting or refuting an argument, assumption, or philosophical problem related to the research problem will also need to include writing supported by sources that establish the history of these arguments in the literature.

Baumeister, Roy F. and Mark R. Leary. "Writing Narrative Literature Reviews." Review of General Psychology 1 (September 1997): 311-320; Fink, Arlene. Conducting Research Literature Reviews: From the Internet to Paper. 2nd ed. Thousand Oaks, CA: Sage, 2005; Hart, Chris. Doing a Literature Review: Releasing the Social Science Research Imagination. Thousand Oaks, CA: Sage Publications, 1998; Kennedy, Mary M. "Defining a Literature." Educational Researcher 36 (April 2007): 139-147; Petticrew, Mark and Helen Roberts. Systematic Reviews in the Social Sciences: A Practical Guide. Malden, MA: Blackwell Publishers, 2006; Torraco, Richard. "Writing Integrative Literature Reviews: Guidelines and Examples." Human Resource Development Review 4 (September 2005): 356-367; Rocco, Tonette S. and Maria S. Plakhotnik. "Literature Reviews, Conceptual Frameworks, and Theoretical Frameworks: Terms, Functions, and Distinctions." Human Resource Development Review 8 (March 2009): 120-130; Sutton, Anthea. Systematic Approaches to a Successful Literature Review. Los Angeles, CA: Sage Publications, 2016.

Structure and Writing Style

I.  Thinking About Your Literature Review

The structure of a literature review should include the following in support of understanding the research problem:

  • An overview of the subject, issue, or theory under consideration, along with the objectives of the literature review,
  • Division of works under review into themes or categories [e.g. works that support a particular position, those against, and those offering alternative approaches entirely],
  • An explanation of how each work is similar to and how it varies from the others,
  • Conclusions as to which works present the strongest arguments, are most convincing, and make the greatest contribution to the understanding and development of their area of research.

The critical evaluation of each work should consider:

  • Provenance -- what are the author's credentials? Are the author's arguments supported by evidence [e.g. primary historical material, case studies, narratives, statistics, recent scientific findings]?
  • Methodology -- were the techniques used to identify, gather, and analyze the data appropriate to addressing the research problem? Was the sample size appropriate? Were the results effectively interpreted and reported?
  • Objectivity -- is the author's perspective even-handed or prejudicial? Is contrary data considered or is certain pertinent information ignored to prove the author's point?
  • Persuasiveness -- which of the author's theses are most convincing or least convincing?
  • Validity -- are the author's arguments and conclusions convincing? Does the work ultimately contribute in any significant way to an understanding of the subject?

II.  Development of the Literature Review

Four Basic Stages of Writing

1.  Problem formulation -- which topic or field is being examined and what are its component issues?
2.  Literature search -- finding materials relevant to the subject being explored.
3.  Data evaluation -- determining which literature makes a significant contribution to the understanding of the topic.
4.  Analysis and interpretation -- discussing the findings and conclusions of pertinent literature.

Consider the following issues before writing the literature review:

Clarify
If your assignment is not specific about what form your literature review should take, seek clarification from your professor by asking these questions:
1.  Roughly how many sources would be appropriate to include?
2.  What types of sources should I review (books, journal articles, websites; scholarly versus popular sources)?
3.  Should I summarize, synthesize, or critique sources by discussing a common theme or issue?
4.  Should I evaluate the sources in any way beyond evaluating how they relate to understanding the research problem?
5.  Should I provide subheadings and other background information, such as definitions and/or a history?

Find Models
Use the exercise of reviewing the literature to examine how authors in your discipline or area of interest have composed their literature review sections. Read them to get a sense of the types of themes you might want to look for in your own research or to identify ways to organize your final review. The bibliography or reference section of sources you've already read, such as required readings in the course syllabus, are also excellent entry points into your own research.

Narrow the Topic
The narrower your topic, the easier it will be to limit the number of sources you need to read in order to obtain a good survey of relevant resources. Your professor will probably not expect you to read everything that's available about the topic, but you'll make the act of reviewing easier if you first limit the scope of the research problem. A good strategy is to begin by searching the USC Libraries Catalog for recent books about the topic and review the table of contents for chapters that focus on specific issues. You can also review the indexes of books to find references to specific issues that can serve as the focus of your research. For example, a book surveying the history of the Israeli-Palestinian conflict may include a chapter on the role Egypt has played in mediating the conflict, or you can look in the index for the pages where Egypt is mentioned in the text.

Consider Whether Your Sources are Current
Some disciplines require that you use information that is as current as possible. This is particularly true in medicine and the sciences, where research becomes obsolete very quickly as new discoveries are made. However, when writing a review in the social sciences, a survey of the history of the literature may be required. In other words, a complete understanding of the research problem requires you to deliberately examine how knowledge and perspectives have changed over time. Sort through other current bibliographies or literature reviews in the field to get a sense of what your discipline expects. You can also use this method to explore what is considered by scholars to be a "hot topic" and what is not.

III.  Ways to Organize Your Literature Review

Chronology of Events
If your review follows the chronological method, you could write about the materials according to when they were published. This approach should only be followed if a clear path of research building on previous research can be identified and these trends follow a clear chronological order of development. An example would be a literature review that traces continuing research about the emergence of German economic power after the fall of the Soviet Union.

By Publication
Order your sources by publication chronology only if the order demonstrates a more important trend. For instance, you could order a review of literature on environmental studies of brownfields in this way if the progression revealed, for example, a change in the soil collection practices of the researchers who wrote and/or conducted the studies.

Thematic ["conceptual categories"]
A thematic literature review is the most common approach to summarizing prior research in the social and behavioral sciences. Thematic reviews are organized around a topic or issue, rather than the progression of time, although the progression of time may still be incorporated into a thematic review. For example, a review of the Internet's impact on American presidential politics could focus on the development of online political satire. While the study focuses on one topic, the Internet's impact on American presidential politics, it could still be organized chronologically, reflecting technological developments in media. The difference in this example between a "chronological" and a "thematic" approach lies in what is emphasized the most: themes related to the role of the Internet in presidential politics. Note that more authentic thematic reviews tend to break away from chronological order. A review organized in this manner would shift between time periods within each section according to the point being made.

Methodological
A methodological approach focuses on the methods utilized by the researcher. For the Internet in American presidential politics project, one methodological approach would be to look at cultural differences between the portrayal of American presidents on American, British, and French websites. Or the review might focus on the fundraising impact of the Internet on a particular political party. A methodological scope will influence either the types of documents in the review or the way in which these documents are discussed.

Other Sections of Your Literature Review Once you've decided on the organizational method for your literature review, the sections you need to include in the paper should be easy to figure out because they arise from your organizational strategy. In other words, a chronological review would have subsections for each vital time period; a thematic review would have subtopics based upon factors that relate to the theme or issue. However, sometimes you may need to add additional sections that are necessary for your study, but do not fit in the organizational strategy of the body. What other sections you include in the body is up to you. However, only include what is necessary for the reader to locate your study within the larger scholarship about the research problem.

Here are examples of other sections, usually in the form of a single paragraph, you may need to include depending on the type of review you write:

  • Current Situation : Information necessary to understand the current topic or focus of the literature review.
  • Sources Used : Describes the methods and resources [e.g., databases] you used to identify the literature you reviewed.
  • History : The chronological progression of the field, the research literature, or an idea that is necessary to understand the literature review, if the body of the literature review is not already a chronology.
  • Selection Methods : Criteria you used to select (and perhaps exclude) sources in your literature review. For instance, you might explain that your review includes only peer-reviewed [i.e., scholarly] sources.
  • Standards : Description of the way in which you present your information.
  • Questions for Further Research : What questions about the field has the review sparked? How will you further your research as a result of the review?

IV.  Writing Your Literature Review

Once you've settled on how to organize your literature review, you're ready to write each section. When writing your review, keep in mind these issues.

Use Evidence
A literature review section is, in this sense, just like any other academic research paper. Your interpretation of the available sources must be backed up with evidence [citations] that demonstrates that what you are saying is valid.

Be Selective
Select only the most important points in each source to highlight in the review. The type of information you choose to mention should relate directly to the research problem, whether it is thematic, methodological, or chronological. Related items that provide additional information, but that are not key to understanding the research problem, can be included in a list of further readings.

Use Quotes Sparingly
Some short quotes are appropriate if you want to emphasize a point, or if what an author stated cannot be easily paraphrased. Sometimes you may need to quote certain terminology that was coined by the author, is not common knowledge, or is taken directly from the study. Do not use extensive quotes as a substitute for using your own words in reviewing the literature.

Summarize and Synthesize
Remember to summarize and synthesize your sources within each thematic paragraph as well as throughout the review. Recapitulate important features of a research study, but then synthesize it by rephrasing the study's significance and relating it to your own work and the work of others.

Keep Your Own Voice
While the literature review presents others' ideas, your voice [the writer's] should remain front and center. For example, weave references to other sources into what you are writing but maintain your own voice by starting and ending the paragraph with your own ideas and wording.

Use Caution When Paraphrasing
When paraphrasing a source, be sure to represent the author's information or opinions accurately and in your own words. Even when paraphrasing an author's work, you still must provide a citation to that work.

V.  Common Mistakes to Avoid

These are the most common mistakes made in reviewing social science research literature.

  • Sources in your literature review do not clearly relate to the research problem;
  • You do not take sufficient time to define and identify the most relevant sources to use in the literature review related to the research problem;
  • Your review relies exclusively on secondary analytical sources rather than including relevant primary research studies or data;
  • Your review uncritically accepts another researcher's findings and interpretations as valid, rather than examining critically all aspects of the research design and analysis;
  • Your review does not describe the search procedures that were used in identifying the literature to review;
  • Your review reports isolated statistical results rather than synthesizing them using chi-squared or meta-analytic methods; and,
  • Your review only includes research that validates assumptions and does not consider contrary findings and alternative interpretations found in the literature.

Cook, Kathleen E. and Elise Murowchick. “Do Literature Review Skills Transfer from One Course to Another?” Psychology Learning and Teaching 13 (March 2014): 3-11; Fink, Arlene. Conducting Research Literature Reviews: From the Internet to Paper . 2nd ed. Thousand Oaks, CA: Sage, 2005; Hart, Chris. Doing a Literature Review: Releasing the Social Science Research Imagination . Thousand Oaks, CA: Sage Publications, 1998; Jesson, Jill. Doing Your Literature Review: Traditional and Systematic Techniques . London: SAGE, 2011; Literature Review Handout. Online Writing Center. Liberty University; Literature Reviews. The Writing Center. University of North Carolina; Onwuegbuzie, Anthony J. and Rebecca Frels. Seven Steps to a Comprehensive Literature Review: A Multimodal and Cultural Approach . Los Angeles, CA: SAGE, 2016; Ridley, Diana. The Literature Review: A Step-by-Step Guide for Students . 2nd ed. Los Angeles, CA: SAGE, 2012; Randolph, Justus J. “A Guide to Writing the Dissertation Literature Review." Practical Assessment, Research, and Evaluation. vol. 14, June 2009; Sutton, Anthea. Systematic Approaches to a Successful Literature Review . Los Angeles, CA: Sage Publications, 2016; Taylor, Dena. The Literature Review: A Few Tips On Conducting It. University College Writing Centre. University of Toronto; Writing a Literature Review. Academic Skills Centre. University of Canberra.

Writing Tip

Break Out of Your Disciplinary Box!

Thinking interdisciplinarily about a research problem can be a rewarding exercise in applying new ideas, theories, or concepts to an old problem. For example, what might cultural anthropologists say about the continuing conflict in the Middle East? How might geographers view the need for better distribution of social service agencies in large cities differently from how social workers might study the issue? You don't want to substitute studies conducted in other fields for a thorough review of core research literature in your own discipline. However, particularly in the social sciences, thinking about research problems from multiple vectors is a key strategy for finding new solutions to a problem or gaining a new perspective. Consult with a librarian about identifying research databases in other disciplines; almost every field of study has at least one comprehensive database devoted to indexing its research literature.

Frodeman, Robert. The Oxford Handbook of Interdisciplinarity . New York: Oxford University Press, 2010.

Another Writing Tip

Don't Just Review for Content!

While conducting a review of the literature, maximize the time you devote to writing this part of your paper by thinking broadly about what you should be looking for and evaluating. Review not just what scholars are saying, but how they are saying it. Some questions to ask:

  • How are they organizing their ideas?
  • What methods have they used to study the problem?
  • What theories have been used to explain, predict, or understand their research problem?
  • What sources have they cited to support their conclusions?
  • How have they used non-textual elements [e.g., charts, graphs, figures, etc.] to illustrate key points?

When you begin to write your literature review section, you'll be glad you dug deeper into how the research was designed and constructed because it establishes a means for developing more substantial analysis and interpretation of the research problem.

Hart, Chris. Doing a Literature Review: Releasing the Social Science Research Imagination. Thousand Oaks, CA: Sage Publications, 1998.

Yet Another Writing Tip

When Do I Know I Can Stop Looking and Move On?

Here are several strategies you can utilize to assess whether you've thoroughly reviewed the literature:

  • Look for repeating patterns in the research findings. If the same thing is being said, just by different people, then this likely demonstrates that the research problem has hit a conceptual dead end. At this point consider: Does your study extend current research? Does it forge a new path? Or, does it merely add more of the same thing being said?
  • Look at the sources the authors cite in their work. If you begin to see the same researchers cited again and again, then this is often an indication that no new ideas have been generated to address the research problem.
  • Search Google Scholar to identify who has subsequently cited leading scholars already identified in your literature review. This is called citation tracking and there are a number of sources that can help you identify who has cited whom, particularly scholars from outside of your discipline. Here again, if the same authors are being cited again and again, this may indicate no new literature has been written on the topic.

Onwuegbuzie, Anthony J. and Rebecca Frels. Seven Steps to a Comprehensive Literature Review: A Multimodal and Cultural Approach . Los Angeles, CA: Sage, 2016; Sutton, Anthea. Systematic Approaches to a Successful Literature Review . Los Angeles, CA: Sage Publications, 2016.

  • Last Updated: Feb 8, 2024 1:57 PM
  • URL: https://libguides.usc.edu/writingguide

Auraria Library

Research Methods: Literature Reviews

  • Annotated Bibliographies
  • Literature Reviews
  • Scoping Reviews
  • Systematic Reviews
  • Scholarship of Teaching and Learning
  • Persuasive Arguments
  • Subject Specific Methodology

A literature review involves researching, reading, analyzing, evaluating, and summarizing scholarly literature (typically journals and articles) about a specific topic. The results of a literature review may be an entire report or article, or may be part of an article, thesis, dissertation, or grant proposal. A literature review helps the author learn about the history and nature of their topic, and identify research gaps and problems.

Steps & Elements

Problem formulation

  • Determine your topic and its components by asking a question
  • Research: locate literature related to your topic to identify the gap(s) that can be addressed
  • Read: read the articles or other sources of information
  • Analyze: assess the findings for relevancy
  • Evaluate: determine how the articles are relevant to your research and what the key findings are
  • Synthesize: write about the key findings and how they are relevant to your research

Elements of a Literature Review

  • Summarize subject, issue or theory under consideration, along with objectives of the review
  • Divide works under review into categories (e.g. those in support of a particular position, those against, those offering alternative theories entirely)
  • Explain how each work is similar to and how it varies from the others
  • Conclude which works present the strongest arguments, are most convincing, and make the greatest contribution to the understanding and development of an area of research

Writing a Literature Review Resources

  • How to Write a Literature Review From the Wesleyan University Library
  • Write a Literature Review From the University of California Santa Cruz Library. A brief overview of a literature review, including a list of stages for writing one.
  • Literature Reviews From the University of North Carolina Writing Center. Detailed information about writing a literature review.
  • Undertaking a literature review: a step-by-step approach Cronin, P., Ryan, F., & Coughlan, M. (2008). Undertaking a literature review: A step-by-step approach. British Journal of Nursing, 17(1), 38-43.



  • Last Updated: Jan 26, 2024 2:37 PM
  • URL: https://guides.auraria.edu/researchmethods

1100 Lawrence Street, Denver, CO 80204 | 303-315-7700

BMJ Journals

Rapid reviews methods series: guidance on rapid qualitative evidence synthesis

  • Andrew Booth (http://orcid.org/0000-0003-4808-3880) 1, 2,
  • Isolde Sommer 3, 4,
  • Jane Noyes 2, 5,
  • Catherine Houghton 2, 6,
  • Fiona Campbell 1, 7
  • The Cochrane Rapid Reviews Methods Group and Cochrane Qualitative and Implementation Methods Group (CQIMG)
  • 1 EnSyGN Sheffield Evidence Synthesis Group, University of Sheffield, Sheffield, UK
  • 2 Cochrane Qualitative and Implementation Methods Group (CQIMG), London, UK
  • 3 Department for Evidence-based Medicine and Evaluation, University for Continuing Education Krems, Krems, Austria
  • 4 Cochrane Rapid Reviews Group & Cochrane Austria, Krems, Austria
  • 5 Bangor University, Bangor, UK
  • 6 University of Galway, Galway, Ireland
  • 7 University of Newcastle upon Tyne, Newcastle upon Tyne, UK
  • Correspondence to Professor Andrew Booth, University of Sheffield, Sheffield, UK; a.booth{at}sheffield.ac.uk

This paper forms part of a series of methodological guidance from the Cochrane Rapid Reviews Methods Group and addresses rapid qualitative evidence syntheses (QESs), which use modified systematic, transparent and reproducible methods to accelerate the synthesis of qualitative evidence when faced with resource constraints. This guidance covers the review process as it relates to synthesis of qualitative research. ‘Rapid’ or ‘resource-constrained’ QES require use of templates and targeted knowledge user involvement. Clear definition of perspectives and decisions on indirect evidence, sampling and use of existing QES help in targeting eligibility criteria. Involvement of an information specialist, especially in prioritising databases, targeting grey literature and planning supplemental searches, can prove invaluable. Use of templates and frameworks in study selection and data extraction can be accompanied by quality assurance procedures targeting areas of likely weakness. Current Cochrane guidance informs selection of tools for quality assessment and of synthesis method. Thematic and framework synthesis facilitate efficient synthesis of large numbers of studies or plentiful data. Finally, judicious use of the Grading of Recommendations Assessment, Development and Evaluation approach for assessing the Confidence in the Evidence from Reviews of Qualitative research (GRADE-CERQual), and of software as appropriate, helps to achieve a timely and useful review product.

  • Systematic Reviews as Topic
  • Patient Care

Data availability statement

No data are available. Not applicable. All data are from published articles.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:  http://creativecommons.org/licenses/by-nc/4.0/ .

https://doi.org/10.1136/bmjebm-2023-112620


WHAT IS ALREADY KNOWN ON THIS TOPIC

Rapid Qualitative Evidence Synthesis (QES) is a relatively recent innovation in evidence synthesis and few published examples currently exist.

Guidance for authoring a rapid QES is scattered and requires compilation and summary.

WHAT THIS STUDY ADDS

This paper represents the first attempt to compile current guidance, illustrated by the experience of several international review teams.

We identify features of rapid QES methods that could be accelerated or abbreviated and where methods resemble those for conventional QESs.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

This paper offers guidance for researchers when conducting a rapid QES and informs commissioners of research and policy-makers what to expect when commissioning such a review.

Introduction

This paper forms part of a series from the Cochrane Rapid Reviews Methods Group providing methodological guidance for rapid reviews. While other papers in the series 1–4 focus on generic considerations, we aim to provide in-depth recommendations specific to a resource-constrained (or rapid) qualitative evidence synthesis (rQES). 5 This paper is accompanied by recommended resources ( online supplemental appendix A ) and an elaboration with practical considerations ( online supplemental appendix B ).

Supplemental material

The role of qualitative evidence in decision-making is increasingly recognised. 6 This, in turn, has led to appreciation of the value of qualitative evidence syntheses (QESs) that summarise findings across multiple contexts. 7 Recognition of the need for such syntheses to be available at the time most useful to decision-making has, in turn, driven demand for rapid qualitative evidence syntheses. 8 The breadth of potential rQES mirrors the versatility of QES in general (from focused questions to broad overviews) and outputs range from descriptive thematic maps through to theory-informed syntheses (see table 1 ).

Table 1. Glossary of important terms (alphabetically)

As with other resource-constrained reviews, no one size fits all. A team should start by specifying the phenomenon of interest, the review question, 9 the perspectives to be included 9 and the sample to be determined and selected. 10 Subsequently, the team must finalise the appropriate choice of synthesis. 11 Above all, the review team should consider the intended knowledge users, 3 including requirements of the funder.

An rQES team, in particular, cannot afford any extra time or resource requirements that might arise from either a misunderstanding of the review question, an unclear picture of user requirements or an inappropriate choice of methods. The team seeks to align the review question and the requirements of the knowledge user with available time and resources. They also need to ensure that the choice of data and choice of synthesis are appropriate to the intended ‘knowledge claims’ (epistemology) made by the rQES. 11 This involves the team asking ‘what types of data are meaningful for this review question?’, ‘what types of data are trustworthy?’ and ‘is the favoured synthesis method appropriate for this type of data?’. 12 This paper aims to help rQES teams to choose methods that best fit their project while understanding the limitations of those choices. Our recommendations derive from current QES guidance, 5 evidence on modified QES methods, 8 13 and practical experience. 14 15

This paper presents an overview of considerations and recommendations as described in table 2. Supplemental materials, including additional resources, details of our recommendations and practical examples, are provided in online supplemental appendices A and B.

Table 2. Recommendations for resource-constrained qualitative evidence synthesis (rQES)

Setting the review question and topic refinement

Rapid reviews summarise information from multiple research studies to produce evidence for ‘the public, researchers, policymakers and funders in a systematic, resource-efficient manner’. 16 Involvement of knowledge users is critical. 3 Given time constraints, individual knowledge users could be asked to give feedback only on very specific decisions and tasks or on selective sections of the protocol. Specifically, whenever a QES is abbreviated or accelerated, a team should ensure that the review question is agreed by a minimum number of knowledge users with expertise or experience that reflects all the important review perspectives and with authority to approve the final version 2 5 11 ( table 2 , item R1).

Involvement of topic experts can ensure that the rQES is responsive to need. 14 17 One Cochrane rQES saved considerable time by agreeing the review topic within a single meeting and one-phase iteration. 9 Decisions on topics to be omitted are also informed by a knowledge of existing QESs. 17

An information specialist can help to manage the quantity and quality of available evidence by setting conceptual boundaries and logistic limits. A structured question format, such as Setting-Perspective-Interest (phenomenon of)-Comparison-Evaluation (SPICE) or Population-Interest (phenomenon of)-Context (PICo), helps in communicating the scope and, subsequently, in operationalising study selection. 9 18

Scoping (of review parameters) and mapping (of key types of evidence and likely richness of data) helps when planning the review. 5 19 The option to choose purposive sampling over comprehensive sampling approaches, as offered by standard QES, may be particularly helpful in the context of a rapid QES. 8 Once a team knows the approximate number and distribution of studies (perhaps mapping them against country, age, ethnicity, etc), they can decide whether or not to use purposive sampling. 12 An rQES for the WHO combined purposive with variation sampling. Sampling in two stages started by reducing the initial number of studies to a more manageable sampling frame and then sampling approximately a third of the remaining studies from within the sampling frame. 20

Sampling may target richer studies and/or privilege diversity. 8 21 A rich qualitative study typically illustrates findings with verbatim extracts from transcripts from interviews or textual responses from questionnaires. Rich studies are often found in specialist qualitative research or social science journals. In contrast, less rich studies may itemise themes with an occasional indicative text extract and tend to summarise findings. In clinical or biomedical journals, less rich findings may be placed within a single table or box.

No rule exists on an optimal number of studies; too many studies make it challenging to ‘maintain insight’, 22 while too few do not sustain rigorous analysis. 23 Guidance on sampling is available from the forthcoming Cochrane-Campbell QES Handbook.

A review team can use templates to fast-track writing of a protocol. The protocol should always be publicly available ( table 2 , item R2). 24 25 Formal registration may require that the team has not commenced data extraction but should be considered if it does not compromise the rQES timeframe. Time pressures may require that methods are left suitably flexible to allow well-justified changes to be made as a detailed picture of the studies and data emerge. 26 The first Cochrane rQES drew heavily on text from a joint protocol/review template previously produced within Cochrane. 24

Setting eligibility criteria

An rQES team may need to limit the number of perspectives, focusing on those most important for decision-making 5 9 27 ( table 2 , item R3). Beyond the patients/clients, each additional perspective (eg, family members, health professionals, other professionals, etc) multiplies the additional effort involved.

A rapid QES may require strict date and setting restrictions 17 and language restrictions that accommodate the specific requirements of the review. Specifically, the team should consider whether changes in context over time or substantive differences between geographical regions could be used to justify a narrower date range or a limited coverage of countries and/or languages. The team should also decide if ‘indirect evidence’ is to substitute for the absence of direct evidence. An rQES typically focuses on direct evidence, except when only indirect evidence is available 28 ( table 2 , item R4). Decisions on relevance are challenging: precautions for swine influenza may inform precautions for bird influenza, 28 a smoking ban may operate similarly to seat belt legislation, etc. A review team should identify where such shared mechanisms might operate. 28 An rQES team must also decide whether to use frameworks or models to focus the review. Theories may be unearthed within the topic search or be already known to team members, for example, the Theory of Planned Behaviour. 29

Options for managing the quantity and quality of studies and data emerge during the scoping (see above). In summary, the review team should consider privileging rich qualitative studies 2 ; consider a stepwise approach to inclusion of qualitative data and explore the possibility of sampling ( table 2 , item R5). For example, where data is plentiful an rQES may be limited to qualitative research and/or to mixed methods studies. Where data is less plentiful, surveys or other qualitative data sources may need to be included. Where plentiful reviews already exist, a team may decide to conduct a review of reviews 5 by including multiple QES within a mega-synthesis 28 29 ( table 2 , item R6).

Searching for QES merits its own guidance; 21–23 30 this section reinforces important considerations specific to qualitative research. Generic guidance for rapid reviews in this series broadly applies to rapid QES. 1

In addition to journal articles, by far the most plentiful source, qualitative research is found in book chapters, theses and in published and unpublished reports. 21 Searches to support an rQES can (a) limit the number of databases searched, deliberately selecting databases from diverse disciplines, (b) use abbreviated study filters to retrieve qualitative designs and (c) employ high yield complementary methods (eg, reference checking, citation searching and Related Articles features). An information specialist (eg, librarian) should be involved in prioritising sources and search methods ( table 2 , item R7). 11 14

According to empirical evidence, optimal database combinations include Scopus plus CINAHL or Scopus plus ProQuest Dissertations and Theses Global (two-database combinations), and Scopus plus CINAHL plus ProQuest Dissertations and Theses Global (three-database combination); both choices retrieve between 89% and 92% of relevant studies. 30

If resources allow, searches should include one or two specialised databases ( table 2 , item R8) from different disciplines or contexts 21 (eg, social science databases, specialist discipline databases or regional or institutional repositories). Even when resources are limited, the information specialist should factor in time for peer review of at least one search strategy ( table 2 , item R9). 31 Searches for ‘grey literature’ should selectively target appropriate types of grey literature (such as theses or process evaluations) and supplemental searches, including citation chaining or Related Articles features ( table 2 , item R10). 32 The first Cochrane rQES reported that searching reference lists of key papers yielded an extra 30 candidate papers for review. However, the team documented exclusion of grey literature as a limitation of their review. 15

Study selection

Consistency in study selection is achieved by using templates, by gaining a shared team understanding of the audience and purpose, and by ongoing communication within, and beyond, the team. 2 33 Individuals may work in parallel on the same task, as in the first Cochrane rQES, or follow a ‘segmented’ approach where each reviewer is allocated a different task. 14 The use of machine learning in the specific context of rQES remains experimental. However, the possibility of developing qualitative study classifiers comparable to those for randomised controlled trials offers an achievable aspiration. 34

Title and abstract screening

The entire screening team should use pre-prepared, pretested title and abstract templates to limit the scale of piloting, calibration and testing ( table 2 , item R11). 1 14 The first Cochrane rQES team double-screened titles and abstracts within Covidence review software. 14 Disagreements were resolved with reference to a third reviewer, achieving a shared understanding of the eligibility criteria and enhancing familiarity with the target studies and insight from the data. 14 The team should target and prioritise identified risks of either over-zealous inclusion or over-exclusion specific to each rQES ( table 2 , item R12). 14 The team should maximise opportunities to capture divergent views and perspectives within study findings. 35

Full-text screening

Full-text screening similarly benefits from using a pre-prepared pretested standardised template where possible 1 14 ( table 2 , item R11). If a single reviewer undertakes full-text screening, 8 the team should identify likely risks to trustworthiness of findings and focus quality control procedures (eg, use of additional reviewers and percentages for double screening) on specific threats 14 ( table 2 , item R13). The Cochrane rQES team opted for double screening to assist their immersion within the topic. 14

Data extraction

Data extraction of descriptive/contextual data may be facilitated by review management software (eg, EPPI-Reviewer) or home-made approaches using Google Forms or other survey software. 36 Where extraction of qualitative findings requires line-by-line coding with multiple iterations of the data, a qualitative data analysis package, such as QSR NVivo, reaps dividends. 36 The team must decide if, collectively, they favour extracting data to a template or coding directly within an electronic version of an article.

Quality control must be fit for purpose but not excessive. Published examples typically use a single reviewer for data extraction, 8 with use of two independent reviewers being the exception. The team could limit data extraction to minimal essential items. They may also consider re-using descriptive details and findings already extracted within previous well-conducted QES ( table 2 , item R14). A pre-existing framework, where readily identified, may help to structure the data extraction template. 15 37 The same framework may be used to present the findings. Some organisations may specify a preferred framework, such as an evidence-to-decision-making framework. 38

Assessment of methodological limitations

The QES community assesses ‘methodological limitations’ rather than using ‘risk of bias’ terminology. An rQES team should pick an approach appropriate to their specific review. For example, a thematic map may not require assessment of individual studies; a brief statement of the generic limitations of the set of studies may be sufficient. However, for any synthesis that underpins practice recommendations, 39 assessment of included studies is integral to the credibility of findings. In any decision-making context that involves recommendations or guidelines, an assessment of methodological limitations is mandatory. 40 41

Each review team should work with knowledge users to determine a review-specific approach to quality assessment. 27 While ‘traffic lights’, similar to the outputs from the Cochrane Risk of Bias tool, may facilitate rapid interpretation, accompanying textual notes are invaluable in highlighting specific areas for concern. In particular, the rQES team should demonstrate that they are aware (a) that research designs for qualitative research seek to elicit divergent views, rather than control for variation; (b) that, for qualitative research, the selection of the sample is far more informative than the size of the sample; and (c) that researchers from primary research, and equally reviewers for the qualitative synthesis, need to be thoughtful and reflexive about their possible influences on interpretation of either the primary data or the synthesised findings.

Selection of checklist

Numerous scales and checklists exist for assessing the quality of qualitative studies. In the absence of validated risk of bias tools for qualitative studies, the team should choose a tool according to Cochrane Qualitative and Implementation Methods Group (CQIMG) guidance together with expediency (ease of use, prior familiarity, etc) ( table 2 , item R15). 41 In comparison with the Critical Appraisal Skills Programme checklist, which was never designed for use in synthesis, 42 the Cochrane qualitative tool is similarly easy to use and was designed for QES use. Work is underway to identify an assessment process that is compatible with QESs that support decision-making. 41 For now the choice of a checklist remains determined by interim Cochrane guidance and, beyond this, by personal preference and experience. For an rQES, a team could use a single reviewer to assess methodological limitations, with verification of judgements (and support statements) by a second reviewer ( table 2 , item R16).

The CQIMG endorses three types of synthesis: thematic synthesis, framework synthesis and meta-ethnography ( box 1 ). 43 44 Rapid QES favour descriptive thematic synthesis 45 or framework synthesis, 46 47 except when theory generation (meta-ethnography 48 49 or analytical thematic synthesis) is a priority ( table 2 , item R17).

Choosing a method for rapid qualitative synthesis

Thematic synthesis: the first-choice method for rQES. 45 For example, in their rapid QES, Crooks and colleagues 44 used a thematic synthesis to understand the experiences of both academic and lived experience coresearchers within palliative and end of life research. 45

Framework synthesis: alternative where a suitable framework can be speedily identified. 46 For example, Bright and colleagues 46 considered ‘best-fit framework synthesis’ as appropriate for mapping study findings to an ‘a priori framework of dimensions measured by prenatal maternal anxiety tools’ within their ‘streamlined and time-limited evidence review’. 47

Less commonly, an adapted meta-ethnographical approach was used for an implementation model of social distancing where supportive data (29 studies) was plentiful. 48 However, this QES demonstrates several features that subsequently challenge its original identification as ‘rapid’. 49

Abbreviations: QES, qualitative evidence synthesis; rQES, rapid qualitative evidence synthesis.

The team should consider whether a conceptual model, theory or framework offers a rapid way of organising, coding, interpreting and presenting findings ( table 2 , item R18). If the extracted data appears rich enough to sustain further interpretation, data from a thematic or framework synthesis can be explored within a subsequent meta-ethnography. 43 However, this requires a team with substantial interpretative expertise. 11

Assessments of confidence in the evidence 4 are central to any rQES that seeks to support decision-making, and the QES-specific Grading of Recommendations Assessment, Development and Evaluation approach for assessing the Confidence of Evidence from Reviews of Qualitative research (GRADE-CERQual) is designed to assess confidence in qualitative evidence. 50 This can be performed by a single reviewer and confirmed by a second reviewer. 26 Additional reviewers could verify all, or a sample of, assessments. For a rapid assessment a team must prioritise findings using objective criteria; a WHO rQES focused only on the three ‘highly synthesised findings’. 20 The team could consider reusing GRADE-CERQual assessments from published QESs if the findings are relevant and of demonstrably high quality ( table 2 , item R19). 50 No rapid approach to full application of GRADE-CERQual currently exists.

Reporting and record management

Little is written on optimal use of technology. 8 A rapid review is not a good time to learn review management software or qualitative analysis management software. Using such software for all general QES processes ( table 2 , item R20), and then harnessing these skills and tools when specifically under resource pressures, is a sounder strategy. Good file labelling and folder management and a ‘develop once, re-use multi-times’ approach facilitates resource savings.

Reporting requirements include the meta-ethnography reporting guidance (eMERGe) 51 and the Enhancing transparency in reporting the synthesis of qualitative research (ENTREQ) statement. 52 An rQES should describe limitations and their implications for confidence in the evidence even more thoroughly than a regular QES, detailing the consequences of fast-tracking, streamlining or omitting processes altogether. 8 Time spent documenting reflexivity is similarly important. 27 If QES methodology is to remain credible, rapid approaches must be applied with insight and documented with circumspection. 53 54

Ethics statements

Patient consent for publication: Not applicable.

Ethics approval


Supplementary materials

Supplementary data.

This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

  • Data supplement 1

Contributors All authors (AB, IS, JN, CH, FC) have made substantial contributions to the conception and design of the guidance document. AB led on drafting the work and revising it critically for important intellectual content. All other authors (IS, JN, CH, FC) contributed to revisions of the document. All authors (AB, IS, JN, CH, FC) have given final approval of the version to be published. As members of the Cochrane Qualitative and Implementation Methods Group and/or the Cochrane Rapid Reviews Methods Group all authors (AB, IS, JN, CH, FC) agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests AB is co-convenor of the Cochrane Qualitative and Implementation Methods Group. In the last 36 months, he received royalties from Systematic Approaches To a Successful Literature Review (Sage 3rd edition), honoraria from the Agency for Healthcare Research and Quality, and travel support from the WHO. JN is lead convenor of the Cochrane Qualitative and Implementation Methods Group. In the last 36 months, she has received honoraria from the Agency for Healthcare Research and Quality and travel support from the WHO. CH is co-convenor of the Cochrane Qualitative and Implementation Methods Group.

Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

Provenance and peer review Not commissioned; internally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.


  • Indian J Sex Transm Dis AIDS
  • v.35(2); Jul-Dec 2014

Reviewing literature for research: Doing it the right way

Shital Amin Poojary

Department of Dermatology, K J Somaiya Medical College, Mumbai, Maharashtra, India

Jimish Deepak Bagadia

In an era of information overload, it is important to know how to obtain the required information and also to ensure that it is reliable information. Hence, it is essential to understand how to perform a systematic literature search. This article focuses on reliable literature sources and how to make optimum use of these in dermatology and venereology.

INTRODUCTION

A thorough review of literature is not only essential for selecting research topics, but also enables the right applicability of a research project. Most importantly, a good literature search is the cornerstone of the practice of evidence based medicine. Today, everything is available at the click of a mouse or at the tip of the fingertips (or the stylus). Google is often the go-to search website, the supposed answer to all questions in the universe. However, the deluge of information available comes with its own set of problems: how much of it is actually reliable information? How many of the results that the search string threw up are actually relevant? Did we actually find what we were looking for? Lack of a systematic approach can lead to a literature review ending up as a time-consuming and at times frustrating process. Hence, whether it is for research projects, theses/dissertations, case studies/reports or the mere wish to obtain information, knowing where to look, and more importantly, how to look, is of prime importance today.

Literature search

Fink has defined research literature review as a “systematic, explicit and reproducible method for identifying, evaluating, and synthesizing the existing body of completed and recorded work produced by researchers, scholars and practitioners.”[ 1 ]

Review of research literature can be summarized into a seven step process: (i) Selecting research questions/purpose of the literature review (ii) Selecting your sources (iii) Choosing search terms (iv) Running your search (v) Applying practical screening criteria (vi) Applying methodological screening criteria/quality appraisal (vii) Synthesizing the results.[ 1 ]

This article will primarily concentrate on refining techniques of literature search.

Sources for literature search are enumerated in Table 1 .

Sources for literature search


PubMed is currently the most widely used among these as it contains over 23 million citations for biomedical literature and has been made available free by the National Center for Biotechnology Information (NCBI), U.S. National Library of Medicine. However, the availability of free full text articles depends on the sources. Use of options such as advanced search, medical subject headings (MeSH) terms, free full text, PubMed tutorials, and the single citation matcher makes the database extremely user-friendly [ Figure 1 ]. It can also be accessed on the go on mobiles through “PubMed Mobile.” One can also create an NCBI account to save searches and to use certain PubMed tools.


PubMed home page showing location of different tools which can be used for an efficient literature search

Tips for efficient use of PubMed search:[ 2 , 3 , 4 ]

Use of field and Boolean operators

When one searches using key words, all articles containing the words show up, many of which may not be related to the topic. Hence, the use of operators while searching makes the search more specific and less cumbersome. Operators are of two types: field operators and Boolean operators, the latter enabling us to combine more than one concept, thereby making the search highly accurate. A few key operators that can be used in PubMed are shown in Tables 2 and 3 and illustrated in Figures 2 and 3.
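The way field tags and Boolean operators compose into a single query string can be sketched in a few lines of Python. This is a hypothetical helper (the `field` and `combine` functions are our own, not part of any official PubMed client); it reproduces the lepra reaction example described in the figures, where `NOT` omits articles on steroid treatment:

```python
# Hypothetical helpers for composing PubMed query strings; the function
# names are illustrative and not part of any official PubMed API.

def field(term: str, tag: str) -> str:
    """Attach a PubMed field tag, eg [TIAB] for title/abstract."""
    # Phrases are quoted so PubMed treats them as a unit.
    return f'"{term}"[{tag}]' if " " in term else f"{term}[{tag}]"

def combine(left: str, op: str, right: str) -> str:
    """Join two sub-queries with a Boolean operator (AND, OR, NOT)."""
    return f"({left}) {op} ({right})"

# Treatment of lepra reaction, excluding articles on steroid treatment:
query = combine(
    combine(field("lepra reaction", "TIAB"), "AND", field("treatment", "TIAB")),
    "NOT",
    field("steroids", "TIAB"),
)
print(query)
# (("lepra reaction"[TIAB]) AND (treatment[TIAB])) NOT (steroids[TIAB])
```

The resulting string is exactly what one would paste into the PubMed search box or build step by step in the Advanced search builder.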

Field operators used in PubMed search


Boolean operators used in PubMed search


PubMed search results page showing articles on donovanosis using the field operator [TIAB]; it shows all articles which have the keyword “donovanosis” in either title or abstract of the article


PubMed search using Boolean operators ‘AND’, ‘NOT’; To search for articles on treatment of lepra reaction other than steroids, after clicking the option ‘Advanced search’ on the home page, one can build the search using ‘AND’ option for treatment and ‘NOT’ option for steroids to omit articles on steroid treatment in lepra reaction

Use of medical subject headings terms

These are very specific and standardized terms used by indexers to describe every article in PubMed and are added to the record of every article. A search using MeSH will show all articles about the topic (or keywords), but will not show articles only containing these keywords (these articles may be about an entirely different topic, but still may contain your keywords in another context in any part of the article). This will make your search more specific. Within the topic, specific subheadings can be added to the search builder to refine your search [ Figure 4 ]. For example, MeSH terms for treatment are therapy and therapeutics.


PubMed search using medical subject headings (MeSH) terms for management of gonorrhea. Click on MeSH database ( Figure 1 ) → in the MeSH search box, type gonorrhea and click search. Under the MeSH term gonorrhea there will be a list of subheadings (therapy, prevention and control); click the relevant check boxes and add to the search builder → click on search → all articles on therapy, prevention and control of gonorrhea will be displayed. Below the subheadings, there are two options: (1) Restrict to MeSH major topic and (2) Do not include MeSH terms found below this term in the MeSH hierarchy. These can be used to further refine the search results so that only articles which are majorly about treatment of gonorrhea will be displayed

Two additional options can be used to further refine MeSH searches. These are located below the subheadings for a MeSH term: (1) Restrict to MeSH major topic; checking this box will retrieve articles which are majorly about the search term and are therefore, more focused and (2) Do not include MeSH terms found below this term in the MeSH hierarchy. This option will again give you more focused articles as it excludes the lower specific terms [ Figure 4 ].
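The subheading and refinement options above map onto PubMed's query syntax: `[Mesh]`, the major-topic tag `[Majr]` and the `:NoExp` qualifier are standard PubMed search tags, while the `mesh` helper below is purely illustrative:

```python
# Sketch of MeSH query-string construction. The [Mesh] and [Majr] tags
# and the :NoExp qualifier are standard PubMed syntax; the helper
# function itself is our own illustration.

def mesh(term, subheading=None, major=False, explode=True):
    """Build a MeSH search term, optionally with a subheading,
    restricted to the major topic and/or without explosion."""
    base = f"{term}/{subheading}" if subheading else term
    tag = "Majr" if major else "Mesh"
    if not explode:
        tag += ":NoExp"          # exclude narrower terms in the hierarchy
    return f'"{base}"[{tag}]'

print(mesh("Gonorrhea", "therapy"))              # "Gonorrhea/therapy"[Mesh]
print(mesh("Gonorrhea", "therapy", major=True))  # "Gonorrhea/therapy"[Majr]
print(mesh("Gonorrhea", explode=False))          # "Gonorrhea"[Mesh:NoExp]
```

The first string retrieves all articles indexed with the therapy subheading of gonorrhea; the second restricts to articles that are majorly about that topic; the third excludes the lower, more specific terms in the MeSH hierarchy.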

A similar feature is available in the Cochrane Library (also called MeSH), EMBASE (known as EMTREE) and PsycINFO (Thesaurus of Psychological Index Terms).

Saving your searches

Any search that one has performed can be saved as a simple word file by using the ‘Send to’ option [ Figure 5 ]. Alternatively, the ‘Save Search’ button (just below the search box) can be used. However, it is essential to set up an NCBI account and log in to NCBI for this. One can even choose to have E-mail updates of new articles in the topic of interest.


Saving PubMed searches. A simple option is to click on the dropdown box next to ‘Send to’ option and then choose among the options. It can be saved as a text or word file by choosing ‘File’ option. Another option is the “Save search” option below the search box but this will require logging into your National Center for Biotechnology Information account. This however allows you to set up alerts for E-mail updates for new articles

Single citation matcher

This is another important tool that helps to find the genuine original source of a particular research work (when few details are known about the title/author/publication date/place/journal) and cite the reference in the most correct manner [ Figure 6 ].


Single citation matcher: Click on “Single citation matcher” on PubMed Home page. Type available details of the required reference in the boxes to get the required citation

Full text articles

In any search, clicking on the link “free full text” (if present) gives you free access to the article. In some instances, though the published article may not be available free, the author manuscript may be available free of charge. Furthermore, PubMed Central articles are available free of charge.

Managing filters

Filters can be used to refine a search according to the type of article required or the subjects of research. One can specify the type of article required, such as clinical trial, reviews or free full text; these options are available on a typical search results page. Further specialised filters are available under “Manage filters”: eg, articles confined to certain age groups (properties option), “Links” to other databases, articles specific to particular journals, etc. However, one needs to have an NCBI account and log in to access this option [ Figure 7 ].


Managing filters. Simple filters are available on the ‘search results’ page. One can choose type of article, e.g., clinical trial, reviews etc. Further options are available in the “Manage filters” option, but this requires logging into National Center for Biotechnology Information account

The Cochrane library

Although reviews are available in PubMed, for systematic reviews and meta-analysis, Cochrane library is a much better resource. The Cochrane library is a collection of full length systematic reviews, which can be accessed for free in India, thanks to Indian Council of Medical Research renewing the license up to 2016, benefitting users all over India. It is immensely helpful in finding detailed high quality research work done in a particular field/topic [ Figure 8 ].


Cochrane library is a useful resource for reliable, systematic reviews. One can choose the type of reviews required, including trials

An important tool that must be used while searching for research work is screening, which helps to improve the accuracy of search results. It is of two types: (1) Practical: to identify a broad range of potentially useful studies; examples include date of publication (last 5 years only, for the most recent updates), participants or subjects (humans above 18 years) and publication language (English only). (2) Methodological: to identify the best available studies, for example, by excluding studies without a control group or restricting results to randomized controlled trials.
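As a toy illustration of the two screening types, the criteria above can be applied programmatically to a set of retrieved records. Everything in this sketch (the record fields, the cut-off year, the design labels) is an assumption chosen to mirror the examples in the text, not a real screening tool:

```python
# Illustrative screening of retrieved search records; record fields and
# thresholds are assumptions for the sketch, mirroring the text's examples.
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    year: int
    language: str
    design: str

def practical_screen(records, since=2010, language="English"):
    """Practical screening: date and language limits."""
    return [r for r in records if r.year >= since and r.language == language]

def methodological_screen(records, designs=("RCT",)):
    """Methodological screening: keep only the best available designs."""
    return [r for r in records if r.design in designs]

hits = [
    Record("Trial A", 2014, "English", "RCT"),
    Record("Series B", 2013, "English", "case series"),
    Record("Trial C", 2005, "English", "RCT"),
    Record("Essai D", 2014, "French", "RCT"),
]
kept = methodological_screen(practical_screen(hits))
print([r.title for r in kept])  # ['Trial A']
```

Only the recent, English-language randomised trial survives both screens, which is exactly the funnelling effect practical and methodological criteria are meant to produce.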

Selecting the right quality of literature is the key to a successful research literature review. The quality can be estimated by what is known as “The Evidence Pyramid.” The levels of evidence of references obtained from the aforementioned search tools are depicted in Figure 9 . Systematic reviews obtained from the Cochrane library constitute level 1 evidence.


Evidence pyramid: Depicting the level of evidence of references obtained from the aforementioned search tools

Thus, a systematic literature review can help not only in setting up the basis of a good research with optimal use of available information, but also in practice of evidence-based medicine.

Source of Support: Nil.

Conflict of Interest: None declared.

  • Volume 14, Issue 2
  • Tools for assessing quality of studies investigating health interventions using real-world data: a literature review and content analysis

  • http://orcid.org/0000-0001-6546-0778 Li Jiu 1 ,
  • Michiel Hartog 1 ,
  • Junfeng Wang 1 ,
  • Rick A Vreman 1 ,
  • Olaf H Klungel 1 ,
  • http://orcid.org/0000-0002-8782-0698 Aukje K Mantel-Teeuwisse 1 ,
  • Wim G Goettsch 1 , 2
  • 1 Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences , Utrecht University , Utrecht , Netherlands
  • 2 National Health Care Institute , Diemen , Netherlands
  • Correspondence to Dr Wim G Goettsch; w.g.goettsch{at}uu.nl ; Dr Junfeng Wang; j.wang5{at}uu.nl

Objectives We aimed to identify existing appraisal tools for non-randomised studies of interventions (NRSIs) and to compare the criteria that the tools provide at the quality-item level.

Design Literature review through three approaches: systematic search of journal articles, snowballing search of reviews on appraisal tools and grey literature search on websites of health technology assessment (HTA) agencies.

Data sources Systematic search: Medline; Snowballing: starting from three articles (D’Andrea et al , Quigley et al and Faria et al ); Grey literature: websites of European HTA agencies listed by the International Network of Agencies for Health Technology Assessment. Appraisal tools were searched through April 2022.

Eligibility criteria for selecting studies We included a tool if it addressed quality concerns of NRSIs and was published in English (unless from grey literature). A tool was excluded if it was only for diagnostic, prognostic, qualitative or secondary studies.

Data extraction and synthesis Two independent researchers searched, screened and reviewed all included studies and tools, summarised quality items and scored whether and to what extent a quality item was described by a tool, for either methodological quality or reporting.

Results Forty-nine tools met inclusion criteria and were included for the content analysis. Concerns regarding the quality of NRSIs were categorised into 4 domains and 26 items. The Research Triangle Institute Item Bank (RTI Item Bank) and STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) were the most comprehensive tools for methodological quality and reporting, respectively, as they addressed (n=20; 17) and sufficiently described (n=18; 13) the highest numbers of items. However, none of the tools covered all items.

Conclusion Most of the tools have their own strengths, but none of them could address all quality concerns relevant to NRSIs. Even the most comprehensive tools can be complemented by several items. We suggest decision-makers, researchers and tool developers consider the quality-item level heterogeneity, when selecting a tool or identifying a research gap.

OSF registration number DOI: https://doi.org/10.17605/OSF.IO/KCSGX

  • Systematic Review
  • EPIDEMIOLOGY
  • HEALTH ECONOMICS

Data availability statement

All data relevant to the study are included in the article or uploaded as supplementary information.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:  http://creativecommons.org/licenses/by-nc/4.0/ .

https://doi.org/10.1136/bmjopen-2023-075173


STRENGTHS AND LIMITATIONS OF THIS STUDY

This literature review identified 49 appraisal tools for non-randomised studies of interventions, through both the systematic approach (ie, database search) and the non-systematic approaches (ie, snowballing and grey literature search).

Our study compared how sufficiently appraisal tools describe each quality item, for either methodological quality or reporting.

We only searched health technology assessment agencies for grey literature, so some tools only mentioned by clinical guideline or regulatory organisations might have been overlooked.

The usefulness of categorising a quality item as ‘sufficient’ or ‘brief’ for each tool, based on whether an explanation was provided for the criteria, has not been tested in previous studies.

Introduction

Real-world data (RWD) generally refer to data collected during routine clinical practice, but their definition can vary across settings. 1 According to Makady et al, one definition of RWD is data collected without interference with treatment assignment. 1 RWD that fit this definition are normally analysed in non-randomised studies of interventions (NRSIs), which estimate the effectiveness of a health intervention without randomising intervention groups. 2 3

NRSIs provide evidence on the clinical effectiveness and cost-effectiveness of health interventions for decision-making, in clinical and health technology assessment (HTA) settings. 4–9 For example, NRSIs could inform clinicians on what diagnosis or treatment strategies to adopt. 4 5 In addition, with NRSIs, HTA agencies could gain more certainty about the validity of evidence from randomised controlled trials (RCTs) when deciding which health intervention to reimburse and which pricing strategy to adopt. 6 7 Moreover, HTA stakeholders could exploit NRSIs to evaluate highly innovative or complex interventions, for which RCTs may be considered infeasible or unethical. 8 9 Generally speaking, NRSIs have become increasingly useful, as they complement, and sometimes replace, RCTs when RCTs are scarce or infeasible to conduct. 2 10

However, the usefulness of NRSIs is often questioned due to quality concerns, in terms of risk of bias (RoB) and reporting. According to the Cochrane Handbook, NRSIs have higher RoB than RCTs and are vulnerable to various types of bias, such as confounding, selection and information bias. 11 Also, the Professional Society for Health Economics and Outcomes Research (ISPOR) published a report in 2020, which stated that insufficient reporting on how an NRSI was generated was a major barrier for decision-makers to adopt NRSIs. 12

To address NRSIs’ quality concerns and to build decision-makers’ confidence, NRSIs need to be rigorously appraised, which rationalises the development and use of appraisal tools. According to systematic reviews of appraisal tools for NRSIs, tens of tools have been developed over the past five decades. 13–15 The growing number of tools has brought a new challenge to users: how to select the best tool. To address this challenge, previous reviews have summarised quality items (ie, groups of criteria or signalling questions for methodological quality or reporting) and compared whether existing tools addressed these items. 13–15 Example items include ‘measurement of outcomes’, ‘loss to follow-up bias’, ‘inclusion and exclusion criteria of target population’ and ‘sampling strategies to correct selection bias’. 13 In addition, these reviews provided some general recommendations on tool selection, such as referring to multiple tools for quality appraisal. 14 However, information is still lacking on the extent to which the tools address each quality item and on the heterogeneity of tools at the quality-item level. Taking outcome measurement as an example, the Academy of Nutrition and Dietetics Quality Criteria (ANDQ) checklist mentions that outcomes should be measured with ‘standard, valid and reliable data collection instruments, tests and procedures’ and ‘at an appropriate level of precision’. 16 In contrast, the Good ReseArch for Comparative Effectiveness (GRACE) checklist considers a ‘valid and reliable’ measurement to be ‘objective rather than subject to clinical judgement’, 17 while the Risk Of Bias In Non-randomised Studies—of Interventions (ROBINS-I) checklist interprets the ‘standard’ way as ‘comparable across study groups’ and ‘valid and reliable’ as low detection bias without ‘systematic errors’ in outcome measurement. 18 In summary, the heterogeneity in the level of detail with which a tool addresses a quality item, and the heterogeneity in the content and format of signalling questions, can pose a challenge when tools are selected or even merged.

Hence, our study aimed to summarise and compare, at the quality-item level, the signalling questions or criteria provided in the tools, through a content analysis. This research was performed as part of the HTx project. 19 The project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 825162.

To ensure credibility of the review and the content analysis, we registered a study protocol in the OSF registry (registration DOI: https://doi.org/10.17605/OSF.IO/KCSGX ) on 30 June 2022. The OSF registry is an online repository that accepts registration of all types of research projects, including reviews and content analyses. 20

Patient and public involvement

Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

In our study, appraisal tools refer to tools, guidelines, instruments or standards that provide guidance on how to report or assess any quality concern of NRSIs. NRSIs, according to the Cochrane Handbook, are quantitative studies estimating the effectiveness of an intervention without using randomisation to allocate patients to intervention groups. 2 According to Makady et al, data collected in such NRSIs belong to the second category of RWD, that is, data collected without interference with treatment assignment, patient monitoring or follow-up, or selection of the study population. 1

Search strategy

To identify appraisal tools for NRSIs from various potential sources, we adopted three approaches. A diagram illustrating how the three approaches complemented each other is shown in online supplemental appendix 1 .

Supplemental material

Database search

In the first approach, we conducted a systematic review to identify articles on appraisal tools, through a database search using Medline. Since D’Andrea et al had already conducted a systematic review to identify appraisal tools for all types of non-randomised studies published before November 2019, 13 we updated their review by searching for articles published between November 2019 and April 2022, using their search strings.

Snowballing

In the second approach, we searched for published reviews on appraisal tools for NRSIs. To identify all published reviews, we adopted a snowballing approach described by Wohlin. 21 Snowballing refers to using the citations of articles to identify additional articles, and it is considered a good extension of a database search. 21 To implement the snowballing approach, three researchers (LJ, MH and JW) first conducted a pilot search of articles using Google Scholar, reviewed the full texts and judged eligibility through a group discussion, identifying three reviews (ie, those by D’Andrea et al, 13 Quigley et al 14 and Faria et al 15 ). Next, the three reviews were used as a starting set and were uploaded to the website Connected Papers, which provides an online tool for snowballing. 22 For each uploaded review, Connected Papers analysed approximately 50 000 articles and returned the 40 articles with the highest level of similarity, based on factors such as overlapping citations. After their eligibility had been judged, eligible articles were uploaded to Connected Papers for a second round of snowballing.
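The iterative citation-chasing loop described above can be sketched roughly as follows. This is an illustrative sketch only: `similar_articles` and `is_eligible` are hypothetical placeholders standing in for the Connected Papers similarity service and the reviewers' full-text eligibility judgement, and the article identifiers are made up.

```python
def similar_articles(article_id):
    """Placeholder for a citation-similarity service (eg, Connected Papers),
    which returns the most similar articles for an uploaded review."""
    graph = {
        "dandrea2019": ["review_a", "ineligible_1"],
        "quigley2019": ["review_a", "review_b"],
        "faria2015": ["ineligible_2"],
        "review_a": ["review_c"],
        "review_b": [],
        "review_c": [],
    }
    return graph.get(article_id, [])

def is_eligible(article_id):
    """Placeholder for the reviewers' full-text eligibility judgement."""
    return not article_id.startswith("ineligible")

def snowball(starting_set, rounds=2):
    """Run a fixed number of snowballing rounds from a starting set of reviews."""
    included = set(starting_set)
    frontier = list(starting_set)
    for _ in range(rounds):
        # Collect similar articles for every review in the current frontier.
        candidates = {hit for art in frontier for hit in similar_articles(art)}
        new = {c for c in candidates - included if is_eligible(c)}
        if not new:
            break
        included |= new
        frontier = list(new)  # only newly included reviews seed the next round
    return included

reviews = snowball(["dandrea2019", "quigley2019", "faria2015"])
```

The study used two rounds, which corresponds to `rounds=2` here; in practice each round also involved independent screening by multiple researchers.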

Grey literature

In the third approach, we searched for grey literature on the websites of European HTA agencies. Our rationale was that some appraisal tools may exist in the format of grey literature, such as agency reports and technical support documents. The list of European HTA agencies was derived from the International Network of Agencies for Health Technology Assessment. 23 On each agency website, two researchers (MH and LJ) independently searched for grey literature using four concepts: ‘quality’, ‘RoB’, ‘appraisal’ and ‘methodology’. For each concept, only the first 10 hits, sorted by relevance where that option was available, were included (ie, a maximum of 40 hits per website).

Eligibility criteria for articles and grey literature to identify relevant tools

An article or grey literature document was included if it described one or more appraisal tools. It was excluded if it only described tools for RCTs, or only tools for diagnostic, prognostic, qualitative or secondary studies (eg, systematic reviews and cost-effectiveness analyses). We only included articles identified through the database search and snowballing if they were published in English, whereas grey literature could be in any language, as many HTA agencies publish only in their national languages. Relevant documents obtained through this approach were translated using Google Translate.

The process of identifying studies and appraisal tools

Two researchers (MH and LJ) independently screened all titles and abstracts of the identified hits, then reviewed the full texts with Rayyan 24 and Excel. After identifying the eligible studies, one researcher (MH) extracted the names of the tools and downloaded them by tracking study citations. A pilot search with Google was conducted to ensure we downloaded the most up-to-date versions. Next, two researchers (MH and LJ) independently reviewed the full text of each tool and judged its eligibility. An appraisal tool was included if it (1) was designed for non-randomised studies, (2) was used for assessing either methodological quality or reporting and (3) was developed or updated after 2002. A tool was excluded if it was designed for non-randomised studies of exposures not controlled by investigators (eg, diets). All discrepancies were resolved through discussion among the three researchers (MH, LJ and JW).

One researcher (MH) extracted tool characteristics using a prespecified Excel form. The data items included publication year, tool format (eg, checklist or rating scale), targeted study design (eg, all NRSIs, cohort studies, etc), target interventions (eg, all or surgical interventions), originality (ie, whether a tool was developed based on an existing tool) and scope. The scope referred to whether the tools were designed for assessing methodological quality (eg, RoB and external validity) and/or for ensuring adequate reporting of research details that could be used for assessing methodological quality. 25

For the content analysis, we adopted both deductive and inductive coding techniques. 26 First, we derived a list of candidate quality items from the three reviews that formed the starting set for the snowballing. 13–15 Then, in a pilot coding process, we reviewed all identified appraisal tools and judged whether each candidate quality item was described. After the pilot coding, we summarised signalling questions or criteria that were not covered by the candidate items and coded them as new items. After updating the list of candidate items, three researchers (JW, LJ and MH) finalised the items in four group meetings. During the meetings, we merged items with overlapping content, split items containing too much content and renamed items so that they were self-explanatory.

To score whether and to what extent a quality item was described by a tool, we again reviewed all identified tools. If an item was described by a tool in one or several signalling questions, we judged whether the question(s) related to methodological quality, reporting or both, independently of what the original studies claimed. Additionally, we judged whether an item was described sufficiently or briefly. A description was scored as ‘brief’ if the corresponding signalling question(s) did not explain how to improve or assess methodological quality, or did not specify the elements needed for reporting. For example, ‘outcomes should be measured appropriately’ and ‘outcome measurement should be adequately described’ are ‘brief’ descriptions if no additional explanation is provided. The scoring process was conducted independently by two researchers (LJ and MH) using NVivo V.12, and all discrepancies were resolved through discussion between the two.
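The scoring and ranking logic described above can be sketched as a small tool-by-item matrix. This is a hedged illustration, not the study's data: the tool names, item names and cell values below are invented, and in the actual study the values were assigned separately for methodological quality and reporting.

```python
# Each (tool, item) cell holds "none", "brief" or "sufficient".
# A tool "addresses" an item if the cell is brief or sufficient.
scores = {
    "Tool A": {"Outcome measurement": "sufficient",
               "Loss to follow-up": "brief",
               "Sensitivity analysis": "none"},
    "Tool B": {"Outcome measurement": "brief",
               "Loss to follow-up": "sufficient",
               "Sensitivity analysis": "sufficient"},
}

def tally(tool_scores):
    """Return (items addressed, items sufficiently described) for one tool."""
    addressed = sum(v != "none" for v in tool_scores.values())
    sufficient = sum(v == "sufficient" for v in tool_scores.values())
    return addressed, sufficient

# Rank tools by items addressed, breaking ties by items sufficiently described,
# mirroring the rankings reported in the supplemental appendices.
ranking = sorted(scores, key=lambda t: tally(scores[t]), reverse=True)
```

Counting an item as addressed whenever it is at least briefly described, and separately counting sufficient descriptions, yields the two criteria used to rank the 49 tools in the Results.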

Tool selection

As shown in figure 1 , we identified 1738 articles after removing duplicates and excluded 1645 articles after subsequently reviewing titles, abstracts and full-text. From the remaining 27 eligible studies, we identified 417 appraisal tools. After removing duplicates and reviewing full-texts, we included 49 tools which met our criteria. References of the included studies and appraisal tools are shown in online supplemental appendix 2 and 3 , respectively.


Flow chart for the inclusion and exclusion of appraisal tools for non-randomised studies of interventions

Characteristics of appraisal tools

As shown in table 1 , 18 (37%) tools were published between 2002 and 2010, while 31 (63%) were published thereafter. Of the 49 tools, 30 (61%), 6 (12%) and 5 (10%) were designed for addressing methodological quality, reporting and both, respectively, while 7 (14%) did not report their intended use. About three quarters of the tools were designed for all types of NRSIs, while the others were designed for one or several NRSI types, such as cohort (16%) and case–control studies (16%). Regarding sources, 44 (90%) tools were described in articles that developed a tool, in grey literature (eg, an online checklist or report), or in both, while the other five tools were extensions of existing tools, created when researchers conducted systematic reviews of non-randomised studies. Finally, 9 (18%) tools were designed for specific interventions or diseases, while all the others were generic in nature.


Characteristics of the 49 included appraisal tools for non-randomised studies of interventions

Quality domains and items

We identified 44 criteria describing study quality from the three previous reviews. 13–15 After merging criteria with similar content (eg, ‘Follow-up’ and ‘Loss to follow-up’) and incorporating items into those with wider meanings (eg, ‘Loss to follow-up bias’ into ‘Loss to follow-up’), we obtained a list of 18 items. After the pilot coding, we summarised the criteria of appraisal tools not covered by these 18 items into another 8 items. Following the general order of conducting an NRSI (from study design to results presentation), these 26 items were categorised into four domains: Study design, Data quality, Data analysis and Results presentation. As shown in figure 2 and table 2 , all domains and most items were addressed by existing tools, but for each item, the number of tools with sufficient descriptions was relatively small. Three items in methodology and nine items in reporting were addressed by fewer than five tools and sufficiently described by none.

The extent to which the appraisal tools addressed quality items on methodological quality or reporting.

Overview of the 4 domains and 26 quality items, with numbers and proportions of appraisal tools that addressed or sufficiently described them

Figure 2 illustrates whether and to what extent the identified tools addressed quality items in terms of methodological quality or reporting. The 26 columns represent the 26 quality items shown in table 2 . The ranking of appraisal tools based on the number of items addressed or sufficiently described, either overall or segmented by quality domain, is shown in online supplemental appendix 4–6 . Regarding methodological quality, the Research Triangle Institute Item Bank (RTI Item Bank) 27 addressed (n=20) and sufficiently described (n=18) the highest number of items. In addition, the tools that ranked in the top 10 on both criteria (number of items addressed and number sufficiently described) included the Methodology Index for Non-randomized Studies (MINORS), 28 the tool by Faillie et al, 29 ROBINS-I, 18 ANDQ, 16 the Comparative Effectiveness Research Collaborative Initiative Questionnaire (CER-CI) 30 and the Joanna Briggs Institute’s Critical Appraisal Tool (JBI). 31 These tools addressed at least 10 items and sufficiently described at least 5. In the Study design domain, the RTI Item Bank 27 sufficiently described the most items (n=7), while in the Data quality domain, the RTI Item Bank 27 and MINORS 28 ranked top two, each sufficiently describing at least 5 of the 10 items. In the Data analysis domain, only Faillie et al 29 and Handu et al 32 sufficiently described all three included items. In the Results presentation domain, the two relevant items were sufficiently described by Faillie et al, 29 Handu et al 32 and ANDQ. 16 Regarding reporting, STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) 33 addressed (n=17) and sufficiently described (n=14) the highest number of items. The tools that ranked in the top 10 on both criteria included Transparent Reporting of Evaluations with Non-randomized Designs (TREND), 34 the tool by Genaidy et al, 35 REporting of studies Conducted using Observational Routinely-collected Data (RECORD), 36 the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP), 36 the International Society of Pharmacoepidemiology (ISPE), 37 the tool by Tseng et al 38 and the Joint Task Force between the International Society for Pharmacoepidemiology and the International Society for Pharmacoeconomics and Outcomes Research (ISPE-ISPOR). 39 These tools addressed at least seven quality items and sufficiently described at least three. Across all four quality domains, STROBE 33 sufficiently described the (equally) most items compared with other tools. In the Study design domain, ENCePP 36 and RECORD 40 sufficiently described at least 4 of the 11 items, while in the Data quality domain, TREND 34 and Genaidy et al 35 sufficiently described at least 4 of the 10 items. In the Data analysis and Results presentation domains, STROBE was the only tool that sufficiently described two of the three items, while 7 and 12 other tools, respectively, sufficiently described only one item.

Methodological quality

Among the four domains, Study design was the most neglected by appraisal tools, as only 4 of its 11 items were described in sufficient detail by more than four tools. More specifically, no tool described methodological quality for Ethical approval or Study objective in sufficient detail. For example, the guidelines manual of the National Institute for Health and Care Excellence (NICE) stated: “The study addresses an appropriate and clearly focused question”. 41 The tool did not explain what counts as appropriate or clearly focused.

In addition, although one-third of the tools discussed what a good study design was, only three defined it. 42–44 For example, the NHS Wales Questions to Assist with the Critical Appraisal of a Cross-Sectional Study (NHS Wales) stated that the choice of study design should be appropriate to the research question and ensure the reliability of study results. 44 Outcome selection was also ignored by most tools, as only three tools (ie, the RTI Item Bank, 27 MINORS 28 and the tool by Faillie et al 29 ) sufficiently described it. Similarly, only the RTI Item Bank, 27 the tool by Genaidy et al 35 and NICE 41 sufficiently described the item Outcome definition. For example, Genaidy et al 35 stated that a definition was clear only if ‘definitions of all outcome variables were clearly described’, and partially clear if not all variables were clearly described but ‘sufficient information was provided for the reader to understand the intent’. 35 Other items that were rarely addressed or insufficiently described included Intervention definition and Data source. The tools with sufficient descriptions of these items included SURE, 45 ROBINS-I, 18 MINORS, 28 CER-CI, 30 GRACE 17 and the tool by Faillie et al. 29

The Data quality domain was also neglected by most tools, as 4 of its 10 items were sufficiently addressed by fewer than three tools. In particular, the items Intervention measurement and Length of follow-up were sufficiently addressed by none of the tools for reporting: JBI was the only tool stating that the method of measuring interventions should be clearly reported, 31 while the 19 tools addressing Intervention measurement focused only on methodological quality. Other items that were rarely or insufficiently addressed included Outcome blinding and Loss to follow-up. Regarding Outcome blinding, only three tools provided sufficient descriptions, namely MINORS, TREND and ISPE. 28 34 37 Similarly, only the tool by Genaidy et al, 35 TREND and STROBE sufficiently described Loss to follow-up. 32 35 36

We conducted a review of appraisal tools for NRSIs and assessed whether, and how sufficiently, these tools addressed quality concerns, in terms of methodological quality or reporting, across 4 quality domains and 26 items. Our study identified 49 tools and showed that the RTI Item Bank and STROBE were the most comprehensive, with the highest number of items addressed and sufficiently described, for methodological quality and reporting, respectively. However, none of the tools addressed the concerns in all items, even briefly. The items least addressed for methodological quality included Outcome selection, Outcome definition and Ethical approval, and for reporting included Intervention selection, Intervention measurement and Length of follow-up.

To our knowledge, this is the first study to compare how sufficiently appraisal tools describe each quality item. Previous reviews also compared appraisal tools, but from different perspectives. D’Andrea et al identified 44 tools evaluating the comparative safety and effectiveness of medications, and only assessed whether or not these tools addressed methodological quality in eight domains. 13 In another review, Ma et al elaborated on what types of study design a tool suited. 46 For example, for cohort studies, they encouraged using five tools while discouraging the use of another two. However, they did not clarify why some tools were more suitable than others. Quigley et al identified 48 tools for appraising the quality of systematic reviews of non-randomised studies, listed the five most commonly used tools and assessed whether they addressed 12 quality domains, such as ‘appropriate design’ and ‘appropriate statistical analysis’. 14 Although the tools were compared using different criteria, some results were consistent across all studies. For example, both D’Andrea et al 13 and our study found that intervention measurement, outcome measurement and confounding were frequently addressed by existing tools. Also, Ma et al 46 and Quigley et al 14 both recommended ROBINS-I, MINORS and JBI, and all these tools ranked in the top 10 in our study for addressing and sufficiently describing methodological quality. With detailed information on how sufficiently each tool describes each quality item, we add value to previous reviews by listing the quality concerns that such commonly recommended tools cannot adequately address.

We also found some discrepancies in the tools identified or recommended. For example, 27 of the 44 tools identified by D’Andrea et al 13 were published between 2003 and 2019, while our study identified 47 tools published in that period. This discrepancy could be explained by additional tools identified through other reviews, tools from grey literature and differences in eligibility criteria (eg, exclusion of non-pharmacological interventions or of tools assessing only one or a few specific types of bias). Another discrepancy was that some tools that ranked top in our study were less recommended by previous reviews, such as the RTI Item Bank 27 and the tool by Faillie et al 29 for methodological quality and the tool by Genaidy et al 35 for reporting. This might be explained by the novel criterion (ie, how sufficiently quality items were addressed) we used to evaluate these tools.

We found that information on how sufficiently a tool describes a quality item might broaden users’ view of the quality concerns of non-randomised studies that should be considered. For example, if ROBINS-I 18 is used for assessing methodological quality, the quality concerns known to users will be RoB in eight domains (eg, confounding and selection bias). However, as shown in figure 2 , quality concerns in 16 items (eg, Intervention selection and Outcome definition) may not be sufficiently described in ROBINS-I but are in other tools, such as the RTI Item Bank, 27 the NICE checklist 41 and the tool by NHS Wales. 44 Similarly, if users check the ENCePP 36 and ISPE 37 tools, in addition to STROBE, for reporting concerns, they may gain a more comprehensive understanding of concerns about Ethical approval, Outcome definition, Study objective and Data source. The tool users who may benefit from such information are not only researchers who conduct non-randomised studies and decision-makers who assess study quality, but also tool developers who may identify a research gap.

While the needs of tool users vary, our research can serve each of them to some extent. For example, it is important for researchers to ensure sufficient reporting of the strengths and weaknesses of an NRSI, as such information will ultimately be used to determine the eligibility of their studies for decision-making. 32 47 For HTA agencies, NRSIs can be used to extrapolate long-term drug effectiveness and to identify drug-related costs, and a deep and consistent understanding among the agencies of how to assess NRSI quality is important for promoting the use of RWD. 48 For regulators, a comprehensive understanding of how to evaluate NRSI quality may promote a structured pattern of using RWD to support drug regulation. 49 While researchers focus more on reporting and decision-makers (eg, HTA agencies) place more emphasis on methodological quality, we suggest all users pay attention to the linkage between methodology and reporting for each quality item, as illustrated in our research, as it could help them understand the necessity of investigating each item.

Another finding of our research was that whether and to what extent a quality concern was addressed by a tool partly depended on the tool’s purpose. For example, the GRACE checklist was designed as a ‘screening tool’ to exclude studies that did not meet basic quality requirements, 17 and ROBINS-I focused on RoB rather than all methodological quality issues, such as the appropriateness of study objectives or of statistical analyses for patient matching. 18 Some tools, such as JBI Cohort, 31 were specific to one type of study design; while they addressed less than half of the quality items defined in our research, they have proven robust in many studies. 14 Additionally, for several quality items we found some heterogeneity in the content of signalling questions or criteria among the tools with sufficient descriptions. For example, to assess the methodological quality of a sensitivity analysis, CER-CI 30 stated that key assumptions or definitions of outcomes should be tested, while the tool by Viswanathan et al 50 emphasised the importance of reducing uncertainty in individual judgements. Given the heterogeneity of tools, we suggest users follow a two-step approach when selecting a tool. First, users may narrow down the scope of tools based on their own needs, for example, excluding tools for a different study design. This step could be achieved by referring to synthesised results and recommendations from existing reviews. 13 14 Second, users could use the overview we provide ( figure 2 ) to see which tool(s) could offer the complementary insights that their first-choice tool lacks.

Furthermore, we found that appraisal tools designed for specific interventions had the potential to be transferred to general interventions. In our research, the tools described by Tseng et al 38 and Blagojevic et al 51 and ANDQ 16 were originally designed for a surgical intervention, for knee osteoarthritis and for the field of diabetes, respectively. All these tools ranked in the top 15 in our study for addressing either methodological quality or reporting ( online supplemental appendix 4–6 ), and many of their criteria could be generalisable. For example, Tseng et al 38 stated that interventions could be adequately described with specifically referenced articles ( online supplemental appendix 7 ). 38 Though such tools could be transferred, they often used disease- or intervention-specific concepts in their criteria, which might need to be adjusted before being applied more widely.

Moreover, we noticed that some quality items, such as Study objective, Ethical approval and Sensitivity analysis, were addressed less frequently than others. This might be explained by the fact that some items are more closely related to a particular user need than others. For example, a tool addressing concerns about RoB may pay less attention to Study objective, which is harder to link directly to a well-defined type of bias. Still, since these quality items are related to NRSI quality and are rarely sufficiently described, particular efforts to investigate them may be needed in future tool development. In contrast, while some quality items have been frequently addressed, such as Length of follow-up and Intervention measurement, they are not necessarily relevant to all types of user needs. For example, as shown in table 2 and online supplemental appendix 7 , 14 tools highlighted that the follow-up should be long enough to detect an association between intervention and outcome, but none of these tools linked Length of follow-up to RoB. Therefore, we recommend that tool developers clarify not only the purpose of their tools but also the relevance of their signalling questions to specific user needs (eg, RoB assessment). We also advise that future research investigate the relationships between quality items and user needs in more detail.

Our study has a number of limitations. One limitation is that some of the tools identified by our study were originally developed for purposes beyond assessing the methodological quality or reporting of NRSIs, so our study could not cover the full potential of these tools. For example, the GRADE framework was mainly designed for addressing the certainty of evidence, such as indirectness (ie, whether interventions were compared directly), and for making relevant clinical practice recommendations. While it mentions RoB (eg, publication bias), its main purpose is to illustrate how to grade the quality of evidence, rather than to function as a quality appraisal tool per se. In other words, GRADE allows users to use any additional tools to assess NRSI quality. 52 Also, the GRADE checklist was designed for both RCTs and NRSIs, so some criteria might be relatively brief compared with specifically designed tools, such as the RTI Item Bank. 27 Finally, GRADE can be used to estimate and score the quality of the full body of evidence, not only of individual primary studies. Therefore, tool users who assess NRSIs beyond methodological quality or reporting should consider criteria in addition to those mentioned in our study when selecting a tool. Another limitation is that some tools were predecessors of others, but we did not exclude them if they met the inclusion criteria. For example, the ROBINS-I tool was developed from the Cochrane Risk Of Bias Assessment Tool: for Non-Randomized Studies of Interventions (ACROBAT-NRSI), 53 and some of their signalling questions differed. Such information on tool lineage may also be considered for tool selection, if available from the tools.
Another limitation is that we only searched HTA agencies for grey literature, and the hits returned by the snowballing approach depended on the starting-set articles, so tools mentioned only by clinical guideline or regulatory organisations, or tools missed by the previous reviews, might have been overlooked. Also, only one researcher (MH) traced versions of tools, by following the reference lists of the identified studies and by visiting the websites of the online tools. Consequently, the most up-to-date version of a tool might be missing, and the extent to which a quality item was described by a tool might be underestimated. As existing appraisal tools are continuously improved and new tools are being developed (eg, the HARmonized Protocol Template to Enhance Reproducibility (HARPER) and Authentic Transparent Relevant Accurate Track-Record (ATRAcTR)), 54 55 an online platform that automatically identifies appraisal tools and summarises tool information is promising. Such platforms have already been established for tools for assessing observational studies of exposures not controlled by investigators (eg, dietary patterns). 56 Another limitation is that we categorised the criteria of a quality item as ‘sufficient’ or ‘brief’ for each tool, based on whether an explanation was provided for the criteria. Though consensus was reached among the authors, and all tool criteria were independently reviewed by two researchers, tool users might question the feasibility of such categorisation when selecting a tool. Additionally, as we categorised quality items based on the order of conducting an NRSI (ie, from study design to results presentation), we did not provide specific suggestions on how to select tools based on bias categories. For example, motivational bias, which can occur when judgements are influenced by the desirability or undesirability of events or outcomes, may affect the reporting and measurement of patient outcomes and adherence to healthcare interventions. 
57 58 Although the items Conflict of interest and Outcome measurement are relevant to motivational bias, we did not investigate their relationships. Hence, we recommend that future research bridge our quality items to all potential categories of bias, and then test whether a tool selected based on such categorisation, together with recommendations from previous reviews, truly satisfies tool users. It is also worth noting that the target audience of this review and content analysis could be decision-makers who assess the general quality of an NRSI, NRSI performers who may report the quality of their studies, or developers of relevant appraisal tools. However, when users focus on a specific type of concern (eg, causal effect or data quality), methodological guidance investigating that specific issue, as well as tools from beyond the healthcare field (eg, social science), does exist 59 60 and may be consulted. In addition, tools for diagnosis studies, prognosis studies and secondary studies were beyond the scope of our study, and relevant users may refer to other studies, such as Quigley et al 14 , for further information. Moreover, some frameworks specifically designed for assessing data quality, for example, in terms of data structures and completeness, have been published, and some of their instructions may also be considered as criteria for assessing NRSI quality. 61–65 While evaluating these frameworks is beyond the scope of this study, we recommend that tool developers refer to them when defining relevant criteria or signalling questions in the future.

Most of the appraisal tools for NRSIs have their own strengths, but none of them could address all quality concerns relevant to these studies. Even the most comprehensive tools could be complemented with items from other tools. With information on how sufficiently a tool describes each quality item, tool users might broaden their view of the quality concerns of non-randomised studies to be considered and might select a tool that more completely satisfies their needs. We suggest that decision-makers, researchers and tool developers consider this quality-item-level heterogeneity when selecting a tool or identifying a research gap.

Acknowledgments

This research was previously presented as an abstract at ISPOR Europe 2022 (November 2022, Vienna, Austria). The citation is: Jiu L, Hartog MK, Wang J, et al . OP18 applicability of appraisal tools of real-world evidence in health technology assessment: a literature review and content analysis. Value Health . 2022 Dec 1;25(12):S389.


Supplementary materials

Supplementary data.


  • Data supplement 1

Contributors LJ designed the study protocol, identified appraisal tools, conducted the content analysis and wrote the manuscript; MH identified appraisal tools, collected data on appraisal tools and conducted the content analysis; JW designed the study protocol, solved the discrepancies on identification of appraisal tools and edited the manuscript; RAV designed the study protocol and edited the manuscript; OK provided assistance on coding of quality items and edited the manuscript; AM-T edited the manuscript; WG edited the manuscript, and was responsible for the overall content as the guarantor.

Funding This research was performed as part of the HTx project. The project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 825162.

Competing interests None declared.

Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

Provenance and peer review Not commissioned; externally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.


Literature Reviews


Additional Tools

Google Slides

GSlides can create concept maps using its Diagram feature. Insert > Diagram > Hierarchy will give you some editable templates to use.

Tutorial on diagrams in GSlides .

Microsoft Word

MS Word can create concept maps using Insert > SmartArt Graphic. Select Process, Cycle, Hierarchy, or Relationship to see templates.

NVivo is software for qualitative analysis that has a concept map feature. Zotero libraries can be uploaded using RIS files. NVivo Concept Map information.
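As an aside on the RIS format mentioned above, a minimal RIS record is just a plain-text file of two-letter field tags. The sketch below (using standard RIS tags; the reference details are hypothetical placeholders) writes one such record with Python:

```python
# Write a minimal RIS record of the kind citation managers export.
# The reference itself (author, title, year) is a hypothetical placeholder.
record = "\n".join([
    "TY  - JOUR",                      # record type: journal article
    "AU  - Doe, Jane",                 # author
    "TI  - An example article title",  # title
    "PY  - 2020",                      # publication year
    "ER  - ",                          # end of record
])

with open("example.ris", "w") as f:
    f.write(record + "\n")
```

Each record starts with a `TY` tag and ends with `ER`; a library export simply concatenates one such block per reference.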

A concept map or mind map is a visual representation of knowledge that illustrates relationships between concepts or ideas. It is a tool for organizing and representing information in a hierarchical and interconnected manner. At its core, a concept map consists of nodes, which represent individual concepts or ideas, and links, which depict the relationships between these concepts.
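The node-and-link structure just described can be sketched in a few lines of Python (a minimal illustration only; the concepts and relationship labels are hypothetical examples, not tied to any particular tool):

```python
# Minimal sketch of a concept map: nodes are concepts, links are
# labelled relationships between them. All names here are hypothetical.
nodes = {"research question", "search strategy", "literature review"}

# Each link is a (source, relationship label, target) triple.
links = [
    ("research question", "guides", "search strategy"),
    ("search strategy", "feeds", "literature review"),
]

def neighbours(concept):
    """Return (label, target) pairs for links leaving a concept."""
    return [(label, target) for source, label, target in links
            if source == concept]

print(neighbours("research question"))  # [('guides', 'search strategy')]
```

The tools listed below provide a drawing canvas for exactly this structure: boxes for the nodes and labelled arrows for the links.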

Below is a non-exhaustive list of tools that can facilitate the creation of concept maps.


www.canva.com

Canva is a user-friendly graphic design platform that enables individuals to create visual content quickly and easily. It offers a diverse array of customizable templates, design elements, and tools, making it accessible to users with varying levels of design experience. 

Pros: comes with many pre-made concept map templates to get you started

Cons: not all features are available in the free version

Explore Canva concept map templates here .

Note: Although Canva advertises an "education" option, this is for K-12 only and does not apply to university users.


www.lucidchart.com

Lucid has two tools that can create mind maps (the term used inside Lucid): Lucidchart is the place to build, document, and diagram, and Lucidspark is the place to ideate, connect, and plan.

Lucidchart is a collaborative online diagramming and visualization tool that allows users to create a wide range of diagrams, including flowcharts, org charts, wireframes, and mind maps. Its mind-mapping feature provides a structured framework for brainstorming ideas, organizing thoughts, and visualizing relationships between concepts. 

Lucidspark works as a virtual whiteboard. Here, you can add sticky notes, develop ideas through freehand drawing, and collaborate with your teammates. It has only one template for mind mapping.

Explore Lucid mind map creation here .


Note: U-M students have access to Lucid through ITS. [ info here ] Choose the "Login w Google" option, use your @umich.edu account, and access should happen automatically.


www.figma.com

Figma is a cloud-based design tool that enables collaborative interface design and prototyping. It's widely used by UI/UX designers to create, prototype, and iterate on digital designs. Figma is the main design tool, and FigJam is their virtual whiteboard:

Figma  is a comprehensive design tool that enables designers to create and prototype high-fidelity designs

FigJam focuses on collaboration and brainstorming, providing a virtual whiteboard-like experience, best for concept maps

Explore FigJam concept maps here .


Note: There is a " Figma for Education " version for students that will provide access. Choose the "Login w Google" option, use your @umich.edu account, and access should happen automatically.


www.mindmeister.com

MindMeister  is an online mind mapping tool that allows users to visually organize their thoughts, ideas, and information in a structured and hierarchical format. It provides a digital canvas where users can create and manipulate nodes representing concepts or topics, and connect them with lines to show relationships and associations.

Features: collaborative, permits multiple co-authors, and supports multiple export formats. The free version allows up to 3 mind maps.

Explore  MindMeister templates here .

  • Last Updated: Feb 15, 2024 1:47 PM
  • URL: https://guides.lib.umich.edu/litreview
  • Methodology
  • Open access
  • Published: 11 October 2016

Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research

  • Stephen J. Gentles 1 , 4 ,
  • Cathy Charles 1 ,
  • David B. Nicholas 2 ,
  • Jenny Ploeg 3 &
  • K. Ann McKibbon 1  

Systematic Reviews volume 5, Article number: 172 (2016)


Overviews of methods are potentially useful means to increase clarity and enhance collective understanding of specific methods topics that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness. This type of review represents a distinct literature synthesis method, although to date, its methodology remains relatively undeveloped despite several aspects that demand unique review procedures. The purpose of this paper is to initiate discussion about what a rigorous systematic approach to reviews of methods, referred to here as systematic methods overviews , might look like by providing tentative suggestions for approaching specific challenges likely to be encountered. The guidance offered here was derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research.

The guidance is organized into several principles that highlight specific objectives for this type of review given the common challenges that must be overcome to achieve them. Optional strategies for achieving each principle are also proposed, along with discussion of how they were successfully implemented in the overview on sampling. We describe seven paired principles and strategies that address the following aspects: delimiting the initial set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology used to describe specific methods topics, and generating rigorous verifiable analytic interpretations. Since a broad aim in systematic methods overviews is to describe and interpret the relevant literature in qualitative terms, we suggest that iterative decision making at various stages of the review process, and a rigorous qualitative approach to analysis are necessary features of this review type.

Conclusions

We believe that the principles and strategies provided here will be useful to anyone choosing to undertake a systematic methods overview. This paper represents an initial effort to promote high quality critical evaluations of the literature regarding problematic methods topics, which have the potential to promote clearer, shared understandings, and accelerate advances in research methods. Further work is warranted to develop more definitive guidance.


While reviews of methods are not new, they represent a distinct review type whose methodology remains relatively under-addressed in the literature despite the clear implications for unique review procedures. One of the few examples to describe it is a chapter containing the reflections of two contributing authors in a book of 21 reviews on methodological topics compiled for the British National Health Service, Health Technology Assessment Program [ 1 ]. Notable is their observation of how the differences between the methods reviews and conventional quantitative systematic reviews, specifically attributable to their varying content and purpose, have implications for defining what qualifies as systematic. While the authors describe general aspects of “systematicity” (including rigorous application of a methodical search, abstraction, and analysis), they also describe a high degree of variation within the category of methods reviews itself and so offer little in the way of concrete guidance. In this paper, we present tentative concrete guidance, in the form of a preliminary set of proposed principles and optional strategies, for a rigorous systematic approach to reviewing and evaluating the literature on quantitative or qualitative methods topics. For purposes of this article, we have used the term systematic methods overview to emphasize the notion of a systematic approach to such reviews.

The conventional focus of rigorous literature reviews (i.e., review types for which systematic methods have been codified, including the various approaches to quantitative systematic reviews [ 2 – 4 ], and the numerous forms of qualitative and mixed methods literature synthesis [ 5 – 10 ]) is to synthesize empirical research findings from multiple studies. By contrast, the focus of overviews of methods, including the systematic approach we advocate, is to synthesize guidance on methods topics. The literature consulted for such reviews may include the methods literature, methods-relevant sections of empirical research reports, or both. Thus, this paper adds to previous work published in this journal—namely, recent preliminary guidance for conducting reviews of theory [ 11 ]—that has extended the application of systematic review methods to novel review types that are concerned with subject matter other than empirical research findings.

Published examples of methods overviews illustrate the varying objectives they can have. One objective is to establish methodological standards for appraisal purposes. For example, reviews of existing quality appraisal standards have been used to propose universal standards for appraising the quality of primary qualitative research [ 12 ] or evaluating qualitative research reports [ 13 ]. A second objective is to survey the methods-relevant sections of empirical research reports to establish current practices in methods use and reporting, which Moher and colleagues [ 14 ] recommend as a means for establishing the needs to be addressed in reporting guidelines (see, for example [ 15 , 16 ]). A third objective for a methods review is to offer clarity and enhance collective understanding regarding a specific methods topic that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness within the available methods literature. An example of this is an overview whose objective was to review the inconsistent definitions of intention-to-treat analysis (the methodologically preferred approach to analyze randomized controlled trial data) that have been offered in the methods literature and propose a solution for improving conceptual clarity [ 17 ]. Such reviews are warranted because students and researchers who must learn or apply research methods typically lack the time to systematically search, retrieve, review, and compare the available literature to develop a thorough and critical sense of the varied approaches regarding certain controversial or ambiguous methods topics.

While systematic methods overviews , as a review type, include both reviews of the methods literature and reviews of methods-relevant sections from empirical study reports, the guidance provided here is primarily applicable to reviews of the methods literature since it was derived from the experience of conducting such a review [ 18 ], described below. To our knowledge, there are no well-developed proposals on how to rigorously conduct such reviews. Such guidance would have the potential to improve the thoroughness and credibility of critical evaluations of the methods literature, which could increase their utility as a tool for generating understandings that advance research methods, both qualitative and quantitative. Our aim in this paper is thus to initiate discussion about what might constitute a rigorous approach to systematic methods overviews. While we hope to promote rigor in the conduct of systematic methods overviews wherever possible, we do not wish to suggest that all methods overviews need be conducted to the same standard. Rather, we believe that the level of rigor may need to be tailored pragmatically to the specific review objectives, which may not always justify the resource requirements of an intensive review process.

The example systematic methods overview on sampling in qualitative research

The principles and strategies we propose in this paper are derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research [ 18 ]. The main objective of that methods overview was to bring clarity and deeper understanding of the prominent concepts related to sampling in qualitative research (purposeful sampling strategies, saturation, etc.). Specifically, we interpreted the available guidance, commenting on areas lacking clarity, consistency, or comprehensiveness (without proposing any recommendations on how to do sampling). This was achieved by a comparative and critical analysis of publications representing the most influential (i.e., highly cited) guidance across several methodological traditions in qualitative research.

The specific methods and procedures for the overview on sampling [ 18 ] from which our proposals are derived were developed both after soliciting initial input from local experts in qualitative research and an expert health librarian (KAM) and through ongoing careful deliberation throughout the review process. To summarize, in that review, we employed a transparent and rigorous approach to search the methods literature, selected publications for inclusion according to a purposeful and iterative process, abstracted textual data using structured abstraction forms, and analyzed (synthesized) the data using a systematic multi-step approach featuring abstraction of text, summary of information in matrices, and analytic comparisons.

For this article, we reflected on both the problems and challenges encountered at different stages of the review and our means for selecting justifiable procedures to deal with them. Several principles were then derived by considering the generic nature of these problems, while the generalizable aspects of the procedures used to address them formed the basis of optional strategies. Further details of the specific methods and procedures used in the overview on qualitative sampling are provided below to illustrate both the types of objectives and challenges that reviewers will likely need to consider and our approach to implementing each of the principles and strategies.

Organization of the guidance into principles and strategies

For the purposes of this article, principles are general statements outlining what we propose are important aims or considerations within a particular review process, given the unique objectives or challenges to be overcome with this type of review. These statements follow the general format, “considering the objective or challenge of X, we propose Y to be an important aim or consideration.” Strategies are optional and flexible approaches for implementing the previous principle outlined. Thus, generic challenges give rise to principles, which in turn give rise to strategies.

We organize the principles and strategies below into three sections corresponding to processes characteristic of most systematic literature synthesis approaches: literature identification and selection ; data abstraction from the publications selected for inclusion; and analysis , including critical appraisal and synthesis of the abstracted data. Within each section, we also describe the specific methodological decisions and procedures used in the overview on sampling in qualitative research [ 18 ] to illustrate how the principles and strategies for each review process were applied and implemented in a specific case. We expect this guidance and accompanying illustrations will be useful for anyone considering engaging in a methods overview, particularly those who may be familiar with conventional systematic review methods but may not yet appreciate some of the challenges specific to reviewing the methods literature.

Results and discussion

Literature identification and selection.

The identification and selection process includes search and retrieval of publications and the development and application of inclusion and exclusion criteria to select the publications that will be abstracted and analyzed in the final review. Literature identification and selection for overviews of the methods literature is challenging and potentially more resource-intensive than for most reviews of empirical research. This is true for several reasons that we describe below, alongside discussion of the potential solutions. Additionally, we suggest in this section how the selection procedures can be chosen to match the specific analytic approach used in methods overviews.

Delimiting a manageable set of publications

One aspect of methods overviews that can make identification and selection challenging is the fact that the universe of literature containing potentially relevant information regarding most methods-related topics is expansive and often unmanageably so. Reviewers are faced with two large categories of literature: the methods literature , where the possible publication types include journal articles, books, and book chapters; and the methods-relevant sections of empirical study reports , where the possible publication types include journal articles, monographs, books, theses, and conference proceedings. In our systematic overview of sampling in qualitative research, exhaustively searching (including retrieval and first-pass screening) all publication types across both categories of literature for information on a single methods-related topic was too burdensome to be feasible. The following proposed principle follows from the need to delimit a manageable set of literature for the review.

Principle #1:

Considering the broad universe of potentially relevant literature, we propose that an important objective early in the identification and selection stage is to delimit a manageable set of methods-relevant publications in accordance with the objectives of the methods overview.

Strategy #1:

To limit the set of methods-relevant publications that must be managed in the selection process, reviewers have the option to initially review only the methods literature, and exclude the methods-relevant sections of empirical study reports, provided this aligns with the review’s particular objectives.

We propose that reviewers are justified in choosing to select only the methods literature when the objective is to map out the range of recognized concepts relevant to a methods topic, to summarize the most authoritative or influential definitions or meanings for methods-related concepts, or to demonstrate a problematic lack of clarity regarding a widely established methods-related concept and potentially make recommendations for a preferred approach to the methods topic in question. For example, in the case of the methods overview on sampling [ 18 ], the primary aim was to define areas lacking in clarity for multiple widely established sampling-related topics. In the review on intention-to-treat in the context of missing outcome data [ 17 ], the authors identified a lack of clarity based on multiple inconsistent definitions in the literature and went on to recommend separating the issue of how to handle missing outcome data from the issue of whether an intention-to-treat analysis can be claimed.

In contrast to strategy #1, it may be appropriate to select the methods-relevant sections of empirical study reports when the objective is to illustrate how a methods concept is operationalized in research practice or reported by authors. For example, one could review all the publications in 2 years’ worth of issues of five high-impact field-related journals to answer questions about how researchers describe implementing a particular method or approach, or to quantify how consistently they define or report using it. Such reviews are often used to highlight gaps in the reporting practices regarding specific methods, which may be used to justify items to address in reporting guidelines (for example, [ 14 – 16 ]).

It is worth recognizing that other authors have advocated broader positions regarding the scope of literature to be considered in a review, expanding on our perspective. Suri [ 10 ] (who, like us, emphasizes how different sampling strategies are suitable for different literature synthesis objectives) has, for example, described a two-stage literature sampling procedure (pp. 96–97). First, reviewers use an initial approach to conduct a broad overview of the field—for reviews of methods topics, this would entail an initial review of the research methods literature. This is followed by a second more focused stage in which practical examples are purposefully selected—for methods reviews, this would involve sampling the empirical literature to illustrate key themes and variations. While this approach is seductive in its capacity to generate more in-depth and interpretive analytic findings, some reviewers may consider it too resource-intensive to include the second step no matter how selective the purposeful sampling. In the overview on sampling where we stopped after the first stage [ 18 ], we discussed our selective focus on the methods literature as a limitation that left opportunities for further analysis of the literature. We explicitly recommended, for example, that theoretical sampling was a topic for which a future review of the methods sections of empirical reports was justified to answer specific questions identified in the primary review.

Ultimately, reviewers must make pragmatic decisions that balance resource considerations, combined with informed predictions about the depth and complexity of literature available on their topic, with the stated objectives of their review. The remaining principles and strategies apply primarily to overviews that include the methods literature, although some aspects may be relevant to reviews that include empirical study reports.

Searching beyond standard bibliographic databases

An important reality affecting identification and selection in overviews of the methods literature is the increased likelihood for relevant publications to be located in sources other than journal articles (which is usually not the case for overviews of empirical research, where journal articles generally represent the primary publication type). In the overview on sampling [ 18 ], out of 41 full-text publications retrieved and reviewed, only 4 were journal articles, while 37 were books or book chapters. Since many books and book chapters did not exist electronically, their full text had to be physically retrieved in hardcopy, while 11 publications were retrievable only through interlibrary loan or purchase request. The tasks associated with such retrieval are substantially more time-consuming than electronic retrieval. Because a substantial proportion of methods-related guidance may be located in publication types that are less comprehensively indexed in standard bibliographic databases, identification and retrieval become complicated processes.

Principle #2:

Considering that important sources of methods guidance can be located in non-journal publication types (e.g., books, book chapters) that tend to be poorly indexed in standard bibliographic databases, it is important to consider alternative search methods for identifying relevant publications to be further screened for inclusion.

Strategy #2:

To identify books, book chapters, and other non-journal publication types not thoroughly indexed in standard bibliographic databases, reviewers may choose to consult one or more of the following less standard sources: Google Scholar, publisher web sites, or expert opinion.

In the case of the overview on sampling in qualitative research [ 18 ], Google Scholar had two advantages over standard bibliographic databases: it indexes and returns records of books and book chapters likely to contain guidance on qualitative research methods topics; and it has been validated as providing higher citation counts than ISI Web of Science (a producer of numerous bibliographic databases accessible through institutional subscription) for several non-biomedical disciplines, including the social sciences where qualitative research methods are prominently used [ 19 – 21 ]. While we identified numerous useful publications by consulting experts, the author publication lists generated through Google Scholar searches were uniquely useful for identifying more recent editions of methods books identified by experts.

Searching without relevant metadata

Determining what publications to select for inclusion in the overview on sampling [ 18 ] could only rarely be accomplished by reviewing the publication’s metadata. This was because for the many books and other non-journal type publications we identified as possibly relevant, the potential content of interest would be located in only a subsection of the publication. In this common scenario for reviews of the methods literature (as opposed to methods overviews that include empirical study reports), reviewers will often be unable to employ standard title, abstract, and keyword database searching or screening as a means for selecting publications.

Principle #3:

Considering that the presence of information about the topic of interest may not be indicated in the metadata for books and similar publication types, it is important to consider other means of identifying potentially useful publications for further screening.

Strategy #3:

One approach to identifying potentially useful books and similar publication types is to consider what classes of such publications (e.g., all methods manuals for a certain research approach) are likely to contain relevant content, then identify, retrieve, and review the full text of corresponding publications to determine whether they contain information on the topic of interest.

In the example of the overview on sampling in qualitative research [ 18 ], the topic of interest (sampling) was one of numerous topics covered in general qualitative research methods manuals. Consequently, examples from this class of publications first had to be identified for retrieval according to non-keyword-dependent criteria. Thus, all methods manuals within the three research traditions reviewed (grounded theory, phenomenology, and case study) that might contain discussion of sampling were sought through Google Scholar and expert opinion; their full text was then obtained and hand-searched for relevant content to determine eligibility. We used tables of contents and index sections of books to aid this hand searching.

Purposefully selecting literature on conceptual grounds

A final consideration in methods overviews relates to the type of analysis used to generate the review findings. Unlike quantitative systematic reviews where reviewers aim for accurate or unbiased quantitative estimates—something that requires identifying and selecting the literature exhaustively to obtain all relevant data available (i.e., a complete sample)—in methods overviews, reviewers must describe and interpret the relevant literature in qualitative terms to achieve review objectives. In other words, the aim in methods overviews is to seek coverage of the qualitative concepts relevant to the methods topic at hand. For example, in the overview of sampling in qualitative research [ 18 ], achieving review objectives entailed providing conceptual coverage of eight sampling-related topics that emerged as key domains. The following principle recognizes that literature sampling should therefore support generating qualitative conceptual data as the input to analysis.

Principle #4:

Since the analytic findings of a systematic methods overview are generated through qualitative description and interpretation of the literature on a specified topic, selection of the literature should be guided by a purposeful strategy designed to achieve adequate conceptual coverage (i.e., representing an appropriate degree of variation in relevant ideas) of the topic according to objectives of the review.

Strategy #4:

One strategy for choosing a purposeful approach to selecting the literature is to consider whether the review objectives imply exploring concepts at a broad overview level, in which case combining maximum variation selection with a strategy that limits yield (e.g., critical case, politically important, or sampling for influence—described below) may be appropriate, or in depth, in which case purposeful approaches aimed at revealing innovative cases will likely be necessary.

In the methods overview on sampling, the implied scope was broad since we set out to review publications on sampling across three divergent qualitative research traditions—grounded theory, phenomenology, and case study—to facilitate making informative conceptual comparisons. Such an approach would be analogous to maximum variation sampling.

At the same time, the purpose of that review was to critically interrogate the clarity, consistency, and comprehensiveness of literature from these traditions that was “most likely to have widely influenced students’ and researchers’ ideas about sampling” (p. 1774) [ 18 ]. In other words, we explicitly set out to review and critique the most established and influential (and therefore dominant) literature, since this represents a common basis of knowledge among students and researchers seeking understanding or practical guidance on sampling in qualitative research. To achieve this objective, we purposefully sampled publications according to the criterion of influence, which we operationalized as how often an author or publication has been referenced in print or informal discourse. This second sampling approach also limited the literature we needed to consider within our broad scope review to a manageable amount.

To operationalize this strategy of sampling for influence, we sought to identify both the most influential authors within a qualitative research tradition (all of whose citations were subsequently screened) and the most influential publications on the topic of interest by non-influential authors. This involved a flexible approach that combined multiple indicators of influence to avoid the dilemma that any single indicator might provide inadequate coverage. These indicators included bibliometric data (h-index for author influence [ 22 ]; citation counts for publication influence), expert opinion, and cross-references in the literature (i.e., snowball sampling). As a final selection criterion, a publication was included only if it made an original contribution in terms of novel guidance regarding sampling or a related concept; purely secondary sources were thus excluded. Publish or Perish software (Anne-Wil Harzing; available at http://www.harzing.com/resources/publish-or-perish ) was used to generate bibliometric data via the Google Scholar database. Figure 1 illustrates how identification and selection in the methods overview on sampling was a multi-faceted and iterative process. The authors selected as influential and the publications selected for inclusion or exclusion are listed in Additional file 1 (Matrices 1, 2a, 2b).
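The h-index indicator mentioned above has a simple definition: the largest h such that an author has h publications each cited at least h times. A minimal sketch of the computation, for illustration only (the review itself used Publish or Perish against Google Scholar rather than custom code):

```python
def h_index(citation_counts):
    """Return the largest h such that the author has h publications
    with at least h citations each (0 for an empty list)."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical author with five publications and these citation counts:
print(h_index([25, 8, 5, 3, 3]))  # → 3 (three publications cited ≥ 3 times)
```

Because a single metric like this can misrepresent influence (e.g., for authors of a few highly cited books), the review combined it with expert opinion and snowball sampling, as described above.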

Fig. 1 Literature identification and selection process used in the methods overview on sampling [ 18 ]

In summary, the strategies of seeking maximum variation and sampling for influence were employed in the sampling overview to meet the specific review objectives described. Reviewers will need to consider the full range of purposeful literature sampling approaches at their disposal in deciding what best matches the specific aims of their own reviews. Suri [ 10 ] has recently retooled Patton’s well-known typology of purposeful sampling strategies (originally intended for primary research) for application to literature synthesis, providing a useful resource in this respect.

Data abstraction

The purpose of data abstraction in rigorous literature reviews is to locate and record all data relevant to the topic of interest from the full text of included publications, making them available for subsequent analysis. Conventionally, a data abstraction form—consisting of numerous distinct conceptually defined fields to which corresponding information from the source publication is recorded—is developed and employed. There are several challenges, however, to the processes of developing the abstraction form and abstracting the data itself when conducting methods overviews, which we address here. Some of these problems and their solutions may be familiar to those who have conducted qualitative literature syntheses, which are similarly conceptual.

Iteratively defining conceptual information to abstract

In the overview on sampling [ 18 ], while we surveyed multiple sources beforehand to develop a list of concepts relevant for abstraction (e.g., purposeful sampling strategies, saturation, sample size), there was no way for us to anticipate some concepts prior to encountering them in the review process. Indeed, in many cases, reviewers are unable to determine the complete set of methods-related concepts that will be the focus of the final review a priori without having systematically reviewed the publications to be included. Thus, defining what information to abstract beforehand may not be feasible.

Principle #5:

Considering the potential impracticality of defining a complete set of relevant methods-related concepts from a body of literature one has not yet systematically read, selecting and defining fields for data abstraction must often be undertaken iteratively. Thus, concepts to be abstracted can be expected to grow and change as data abstraction proceeds.

Strategy #5:

Reviewers can develop an initial form or set of concepts for abstraction purposes according to standard methods (e.g., incorporating expert feedback, pilot testing) and remain attentive to the need to iteratively revise it as concepts are added or modified during the review. Reviewers should document revisions and return to re-abstract data from previously abstracted publications as the new data requirements are determined.

In the sampling overview [ 18 ], we developed and maintained the abstraction form in Microsoft Word. We derived the initial set of abstraction fields from our own knowledge of relevant sampling-related concepts, consultation with local experts, and review of a pilot sample of publications. Since the publications in this review included a large proportion of books, the abstraction process often began by flagging the broad sections within a publication containing topic-relevant information for detailed review to identify text to abstract. When reviewing flagged text, the reviewer occasionally encountered an unanticipated concept significant enough to warrant being added as a new field to the abstraction form. For example, a field was added to capture how authors described the timing of sampling decisions: before (a priori) or after (ongoing) the start of data collection, or unclear. In these cases, we systematically documented the modification to the form and returned to previously abstracted publications to abstract any information that might be relevant to the new field.
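The iterative abstraction workflow described above can be sketched as a simple data structure in which adding a field mid-review identifies the publications that must be revisited. All publication names and fields here are hypothetical placeholders, not data from the review:

```python
class AbstractionForm:
    """Minimal sketch of an iteratively revised data abstraction form."""

    def __init__(self, fields):
        self.fields = set(fields)
        self.records = {}       # publication id -> {field: abstracted text}
        self.revision_log = []  # documented modifications to the form

    def abstract(self, pub_id, data):
        """Record abstracted text for a publication."""
        self.records[pub_id] = data

    def add_field(self, field, note):
        """Add a field mid-review, document the revision, and return
        the previously abstracted publications needing re-abstraction."""
        self.fields.add(field)
        self.revision_log.append((field, note))
        return [pub for pub, rec in self.records.items() if field not in rec]


form = AbstractionForm(["purposeful sampling", "saturation", "sample size"])
form.abstract("Publication-A", {"purposeful sampling": "...", "saturation": "..."})
form.abstract("Publication-B", {"saturation": "..."})

# An unanticipated concept emerges during review and becomes a new field;
# both previously abstracted publications are flagged for re-abstraction.
to_revisit = form.add_field("timing of sampling decisions",
                            "concept encountered mid-review")
print(to_revisit)
```

The point of the sketch is the return value of `add_field`: iterative form revision only stays rigorous if every revision is logged and earlier publications are systematically re-abstracted against the new field.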

The logic of this strategy is analogous to the logic used in a form of research synthesis called best-fit framework synthesis (BFFS) [ 23 – 25 ]. In that method, reviewers initially code evidence using an a priori framework they have selected. When evidence cannot be accommodated by the selected framework, reviewers then develop new themes or concepts from which they construct a new expanded framework. Both the strategy proposed here and the BFFS approach to research synthesis are notable for their rigorous and transparent means of adapting a final set of concepts to the content under review.

Accounting for inconsistent terminology

An important complication affecting the abstraction process in methods overviews is that the language used by authors to describe methods-related concepts can easily vary across publications. For example, authors from different qualitative research traditions often use different terms for similar methods-related concepts. Furthermore, as we found in the sampling overview [ 18 ], there may be cases where no identifiable term, phrase, or label for a methods-related concept is used at all, and a description of it is given instead. This can make searching the text for relevant concepts based on keywords unreliable.

Principle #6:

Since accepted terms may not be used consistently to refer to methods concepts, it is necessary to rely on the definitions for concepts, rather than keywords, to identify relevant information in the publication to abstract.

Strategy #6:

An effective means to systematically identify relevant information is to develop and iteratively adjust written definitions for key concepts (corresponding to abstraction fields) that are consistent with, and as inclusive as possible of, the literature reviewed. Reviewers then seek information that matches these definitions (rather than keywords) when scanning a publication for relevant data to abstract.

In the abstraction process for the sampling overview [ 18 ], we noted several concepts of interest to the review for which abstraction by keyword was particularly problematic due to inconsistent terminology across publications: sampling, purposeful sampling, sampling strategy, and saturation (for examples, see Additional file 1, Matrices 3a, 3b, 4). We iteratively developed definitions for these concepts by abstracting text from publications that either provided an explicit definition or from which an implicit definition could be derived, recording this text in fields dedicated to the concept’s definition. Using a method of constant comparison, we used text from definition fields to inform and modify a centrally maintained definition of the corresponding concept to optimize its fit and inclusiveness with the literature reviewed. Table 1 shows, as an example, the final definition constructed in this way for one of the central concepts of the review, qualitative sampling.

We applied iteratively developed definitions when making decisions about what specific text to abstract for an existing field, which allowed us to abstract concept-relevant data even if no recognized keyword was used. For example, this was the case for the sampling-related concept saturation, where the relevant text available for abstraction in one publication [ 26 ]—“to continue to collect data until nothing new was being observed or recorded, no matter how long that takes”—was not accompanied by any term or label whatsoever.

This comparative analytic strategy (and our approach to analysis more broadly as described in strategy #7, below) is analogous to the process of reciprocal translation, a technique first introduced for meta-ethnography by Noblit and Hare [ 27 ] that has since been recognized as a common element in a variety of qualitative metasynthesis approaches [ 28 ]. Reciprocal translation, taken broadly, involves making sense of a study’s findings in terms of the findings of the other studies included in the review. In practice, it has been operationalized in different ways. Melendez-Torres and colleagues developed a typology from their review of the metasynthesis literature, describing four overlapping categories of specific operations undertaken in reciprocal translation: visual representation, key paper integration, data reduction and thematic extraction, and line-by-line coding [ 28 ]. The approaches suggested in both strategies #6 and #7, with their emphasis on constant comparison, appear to fall within the line-by-line coding category.

Generating credible and verifiable analytic interpretations

The analysis in a systematic methods overview must support its more general objective, which we suggested above is often to offer clarity and enhance collective understanding regarding a chosen methods topic. In our experience, this involves describing and interpreting the relevant literature in qualitative terms. Furthermore, any interpretative analysis required may entail reaching different levels of abstraction, depending on the more specific objectives of the review. For example, in the overview on sampling [ 18 ], we aimed to produce a comparative analysis of how multiple sampling-related topics were treated differently within and among different qualitative research traditions. To promote credibility of the review, however, not only should one seek a qualitative analytic approach that facilitates reaching varying levels of abstraction but that approach must also ensure that abstract interpretations are supported and justified by the source data and not solely the product of the analyst’s speculative thinking.

Principle #7:

Considering the qualitative nature of the analysis required in systematic methods overviews, it is important to select an analytic method whose interpretations can be verified as being consistent with the literature selected, regardless of the level of abstraction reached.

Strategy #7:

We suggest employing the constant comparative method of analysis [ 29 ] because it supports developing and verifying analytic links to the source data throughout progressively interpretive or abstract levels. In applying this method, we advise rigorously documenting how supportive quotes or references to the original texts are carried forward through the successive steps of analysis to allow for easy verification.

The analytic approach used in the methods overview on sampling [ 18 ] comprised four explicit steps, progressing in level of abstraction—data abstraction, matrices, narrative summaries, and final analytic conclusions (Fig.  2 ). While we have positioned data abstraction as the second stage of the generic review process (prior to Analysis), above, we also considered it as an initial step of analysis in the sampling overview for several reasons. First, it involved a process of constant comparisons and iterative decision-making about the fields to add or define during development and modification of the abstraction form, through which we established the range of concepts to be addressed in the review. At the same time, abstraction involved continuous analytic decisions about what textual quotes (ranging in size from short phrases to numerous paragraphs) to record in the fields thus created. This constant comparative process was analogous to open coding in which textual data from publications was compared to conceptual fields (equivalent to codes) or to other instances of data previously abstracted when constructing definitions to optimize their fit with the overall literature as described in strategy #6. Finally, in the data abstraction step, we also recorded our first interpretive thoughts in dedicated fields, providing initial material for the more abstract analytic steps.

Fig. 2 Summary of progressive steps of analysis used in the methods overview on sampling [ 18 ]

In the second step of the analysis, we constructed topic-specific matrices, or tables, by copying relevant quotes from abstraction forms into the appropriate cells of matrices (for the complete set of analytic matrices developed in the sampling review, see Additional file 1 (matrices 3 to 10)). Each matrix ranged from one to five pages; row headings, nested three-deep, identified the methodological tradition, author, and publication, respectively; and column headings identified the concepts, which corresponded to abstraction fields. Matrices thus allowed us to make further comparisons across methodological traditions, and between authors within a tradition. In the third step of analysis, we recorded our comparative observations as narrative summaries, in which we used illustrative quotes more sparingly. In the final step, we developed analytic conclusions based on the narrative summaries about the sampling-related concepts within each methodological tradition for which clarity, consistency, or comprehensiveness of the available guidance appeared to be lacking. Higher levels of analysis thus built logically from the lower levels, enabling us to easily verify analytic conclusions by tracing support for claims back to the original text of the publications reviewed.
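The matrix construction described in this step can be sketched as follows. The traditions, authors, publications, and quotes are hypothetical placeholders, not content from the review; the structure (rows nested by tradition, author, and publication; columns as concepts) follows the description above:

```python
from collections import defaultdict

def build_matrix(abstractions, concepts):
    """Build a topic-specific matrix from abstraction records.

    abstractions: list of (tradition, author, publication, {concept: quote})
    Returns {(tradition, author, publication): {concept: quote or ""}},
    with an empty cell wherever a publication offered no guidance.
    """
    matrix = defaultdict(dict)
    for tradition, author, pub, data in abstractions:
        for concept in concepts:
            matrix[(tradition, author, pub)][concept] = data.get(concept, "")
    return dict(matrix)


# Hypothetical abstraction records from two traditions:
rows = [
    ("grounded theory", "Author A", "Book A (2008)",
     {"saturation": "quote on saturation", "sample size": "quote on size"}),
    ("phenomenology", "Author B", "Book B (2012)",
     {"saturation": "quote on saturation"}),
]
matrix = build_matrix(rows, ["saturation", "sample size"])
# Empty cells make gaps visible, supporting comparison across traditions
# and between authors within a tradition.
```

Keeping verbatim quotes in the cells, rather than paraphrases, is what lets the later narrative summaries and analytic conclusions be traced back to the source text.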

Integrative versus interpretive methods overviews

The analytic product of systematic methods overviews is comparable to qualitative evidence syntheses, since both involve describing and interpreting the relevant literature in qualitative terms. Most qualitative synthesis approaches strive to produce new conceptual understandings that vary in level of interpretation. Dixon-Woods and colleagues [ 30 ] elaborate on a useful distinction, originating from Noblit and Hare [ 27 ], between integrative and interpretive reviews. Integrative reviews focus on summarizing available primary data and involve using largely secure and well-defined concepts to do so; definitions are used from an early stage to specify categories for abstraction (or coding) of data, which in turn supports their aggregation; they do not seek as their primary focus to develop or specify new concepts, although they may achieve some theoretical or interpretive functions. For interpretive reviews, meanwhile, the main focus is to develop new concepts and theories that integrate them, with the implication that the concepts developed become fully defined towards the end of the analysis. These two forms are not completely distinct, and “every integrative synthesis will include elements of interpretation, and every interpretive synthesis will include elements of aggregation of data” [ 30 ].

The example methods overview on sampling [ 18 ] could be classified as predominantly integrative because its primary goal was to aggregate influential authors’ ideas on sampling-related concepts; there were also, however, elements of interpretive synthesis since it aimed to develop new ideas about where clarity in guidance on certain sampling-related topics is lacking, and definitions for some concepts were flexible and not fixed until late in the review. We suggest that most systematic methods overviews will be classifiable as predominantly integrative (aggregative). Nevertheless, more highly interpretive methods overviews are also quite possible—for example, when the review objective is to provide a highly critical analysis for the purpose of generating new methodological guidance. In such cases, reviewers may need to sample more deeply (see strategy #4), specifically by selecting empirical research reports (i.e., to go beyond dominant or influential ideas in the methods literature) that are likely to feature innovations or instructive lessons in employing a given method.

In this paper, we have outlined tentative guidance in the form of seven principles and strategies on how to conduct systematic methods overviews, a review type in which methods-relevant literature is systematically analyzed with the aim of offering clarity and enhancing collective understanding regarding a specific methods topic. Our proposals include strategies for delimiting the set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology, and generating credible and verifiable analytic interpretations. We hope the suggestions proposed will be useful to others undertaking reviews on methods topics in future.

As far as we are aware, this is the first published source of concrete guidance for conducting this type of review. It is important to note that our primary objective was to initiate methodological discussion by stimulating reflection on what rigorous methods for this type of review should look like, leaving the development of more complete guidance to future work. While derived from the experience of reviewing a single qualitative methods topic, we believe the principles and strategies provided are generalizable to overviews of qualitative and quantitative methods topics alike. However, additional challenges and insights for conducting such reviews have yet to be identified. Thus, we propose that next steps for developing more definitive guidance should involve an attempt to collect and integrate other reviewers’ perspectives and experiences in conducting systematic methods overviews on a broad range of qualitative and quantitative methods topics. Formalized guidance and standards would improve the quality of future methods overviews, something we believe has important implications for advancing qualitative and quantitative methodology. When undertaken to a high standard, rigorous critical evaluations of the available methods guidance have significant potential to make implicit controversies explicit, and improve the clarity and precision of our understandings of problematic qualitative or quantitative methods issues.

A review process central to most types of rigorous reviews of empirical studies, which we did not explicitly address in a separate review step above, is quality appraisal. The reason we have not treated this as a separate step stems from the different objectives of the primary publications included in overviews of the methods literature (i.e., providing methodological guidance) compared to the primary publications included in the other established review types (i.e., reporting findings from single empirical studies). This is not to say that appraising quality of the methods literature is not an important concern for systematic methods overviews. Rather, appraisal is much more integral to (and difficult to separate from) the analysis step, in which we advocate appraising clarity, consistency, and comprehensiveness—the quality appraisal criteria that we suggest are appropriate for the methods literature. As a second important difference regarding appraisal, we currently advocate appraising the aforementioned aspects at the level of the literature in aggregate rather than at the level of individual publications. One reason for this is that methods guidance from individual publications generally builds on previous literature, and thus we feel that ahistorical judgments about comprehensiveness of single publications lack relevance and utility. Additionally, while different methods authors may express themselves less clearly than others, their guidance can nonetheless be highly influential and useful, and should therefore not be downgraded or ignored based on considerations of clarity—which raises questions about the alternative uses that quality appraisals of individual publications might have. Finally, legitimate variability in the perspectives that methods authors wish to emphasize, and the levels of generality at which they write about methods, makes critiquing individual publications based on the criterion of clarity a complex and potentially problematic endeavor that is beyond the scope of this paper to address. By appraising the current state of the literature at a holistic level, reviewers stand to identify important gaps in understanding that represent valuable opportunities for further methodological development.

To summarize, the principles and strategies provided here may be useful to those seeking to undertake their own systematic methods overview. Additional work is needed, however, to establish guidance that is comprehensive by comparing the experiences from conducting a variety of methods overviews on a range of methods topics. Efforts that further advance standards for systematic methods overviews have the potential to promote high-quality critical evaluations that produce conceptually clear and unified understandings of problematic methods topics, thereby accelerating the advance of research methodology.

Hutton JL, Ashcroft R. What does “systematic” mean for reviews of methods? In: Black N, Brazier J, Fitzpatrick R, Reeves B, editors. Health services research methods: a guide to best practice. London: BMJ Publishing Group; 1998. p. 249–54.

Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0. The Cochrane Collaboration; 2011.

Centre for Reviews and Dissemination. Systematic reviews: CRD’s guidance for undertaking reviews in health care. York: Centre for Reviews and Dissemination; 2009.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700.

Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol. 2009;9(1):59.

Kastner M, Tricco AC, Soobiah C, Lillie E, Perrier L, Horsley T, Welch V, Cogo E, Antony J, Straus SE. What is the most appropriate knowledge synthesis method to conduct a review? Protocol for a scoping review. BMC Med Res Methodol. 2012;12(1):1–1.

Booth A, Noyes J, Flemming K, Gerhardus A. Guidance on choosing qualitative evidence synthesis methods for use in health technology assessments of complex interventions. In: Integrate-HTA. 2016.

Booth A, Sutton A, Papaioannou D. Systematic approaches to successful literature review. 2nd ed. London: Sage; 2016.

Hannes K, Lockwood C. Synthesizing qualitative research: choosing the right approach. Chichester: Wiley-Blackwell; 2012.

Suri H. Towards methodologically inclusive research syntheses: expanding possibilities. New York: Routledge; 2014.

Campbell M, Egan M, Lorenc T, Bond L, Popham F, Fenton C, Benzeval M. Considering methodological options for reviews of theory: illustrated by a review of theories linking income and health. Syst Rev. 2014;3(1):1–11.

Cohen DJ, Crabtree BF. Evaluative criteria for qualitative research in health care: controversies and recommendations. Ann Fam Med. 2008;6(4):331–9.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7(2):e1000217.

Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med. 2007;4(3):e78.

Chan AW, Altman DG. Epidemiology and reporting of randomised trials published in PubMed journals. Lancet. 2005;365(9465):1159–62.

Alshurafa M, Briel M, Akl EA, Haines T, Moayyedi P, Gentles SJ, Rios L, Tran C, Bhatnagar N, Lamontagne F, et al. Inconsistent definitions for intention-to-treat in relation to missing outcome data: systematic review of the methods literature. PLoS One. 2012;7(11):e49163.

Gentles SJ, Charles C, Ploeg J, McKibbon KA. Sampling in qualitative research: insights from an overview of the methods literature. Qual Rep. 2015;20(11):1772–89.

Harzing A-W, Alakangas S. Google Scholar, Scopus and the Web of Science: a longitudinal and cross-disciplinary comparison. Scientometrics. 2016;106(2):787–804.

Harzing A-WK, van der Wal R. Google Scholar as a new source for citation analysis. Ethics Sci Environ Polit. 2008;8(1):61–73.

Kousha K, Thelwall M. Google Scholar citations and Google Web/URL citations: a multi‐discipline exploratory analysis. J Assoc Inf Sci Technol. 2007;58(7):1055–65.

Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A. 2005;102(46):16569–72.

Booth A, Carroll C. How to build up the actionable knowledge base: the role of ‘best fit’ framework synthesis for studies of improvement in healthcare. BMJ Quality Safety. 2015;24(11):700–8.

Carroll C, Booth A, Leaviss J, Rick J. “Best fit” framework synthesis: refining the method. BMC Med Res Methodol. 2013;13(1):37.

Carroll C, Booth A, Cooper K. A worked example of “best fit” framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents. BMC Med Res Methodol. 2011;11(1):29.

Cohen MZ, Kahn DL, Steeves DL. Hermeneutic phenomenological research: a practical guide for nurse researchers. Thousand Oaks: Sage; 2000.

Noblit GW, Hare RD. Meta-ethnography: synthesizing qualitative studies. Newbury Park: Sage; 1988.

Melendez-Torres GJ, Grant S, Bonell C. A systematic review and critical appraisal of qualitative metasynthetic practice in public health to develop a taxonomy of operations of reciprocal translation. Res Synthesis Methods. 2015;6(4):357–71.

Glaser BG, Strauss A. The discovery of grounded theory. Chicago: Aldine; 1967.

Dixon-Woods M, Agarwal S, Young B, Jones D, Sutton A. Integrative approaches to qualitative and quantitative evidence. In: UK National Health Service. 2004. p. 1–44.

Acknowledgements

Not applicable.

Funding

There was no funding for this work.

Availability of data and materials

The systematic methods overview used as a worked example in this article (Gentles SJ, Charles C, Ploeg J, McKibbon KA: Sampling in qualitative research: insights from an overview of the methods literature. The Qual Rep 2015, 20(11):1772-1789) is available from http://nsuworks.nova.edu/tqr/vol20/iss11/5.

Authors’ contributions

SJG wrote the first draft of this article, with CC contributing to drafting. All authors contributed to revising the manuscript. All authors except CC (deceased) approved the final draft. SJG, CC, KAB, and JP were involved in developing methods for the systematic methods overview on sampling.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Ethics approval and consent to participate

Authors and affiliations

Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada

Stephen J. Gentles, Cathy Charles & K. Ann McKibbon

Faculty of Social Work, University of Calgary, Alberta, Canada

David B. Nicholas

School of Nursing, McMaster University, Hamilton, Ontario, Canada

Jenny Ploeg

CanChild Centre for Childhood Disability Research, McMaster University, 1400 Main Street West, IAHS 408, Hamilton, ON, L8S 1C7, Canada

Stephen J. Gentles

Corresponding author

Correspondence to Stephen J. Gentles.

Additional information

Cathy Charles is deceased

Additional file

Additional file 1: Submitted: Analysis_matrices. (DOC 330 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

Reprints and permissions

About this article

Cite this article.

Gentles, S.J., Charles, C., Nicholas, D.B. et al. Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research. Syst Rev 5, 172 (2016). https://doi.org/10.1186/s13643-016-0343-0

Download citation

Received : 06 June 2016

Accepted : 14 September 2016

Published : 11 October 2016

DOI : https://doi.org/10.1186/s13643-016-0343-0

Keywords

  • Systematic review
  • Literature selection
  • Research methods
  • Research methodology
  • Overview of methods
  • Systematic methods overview
  • Review methods

Systematic Reviews

ISSN: 2046-4053
