
How to Write a Literature Review | Guide, Examples, & Templates

Published on January 2, 2023 by Shona McCombes. Revised on September 11, 2023.

What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic.

There are five key steps to writing a literature review:

  • Search for relevant literature
  • Evaluate sources
  • Identify themes, debates, and gaps
  • Outline the structure
  • Write your literature review

A good literature review doesn’t just summarize sources—it analyzes, synthesizes, and critically evaluates to give a clear picture of the state of knowledge on the subject.


Table of contents

  • What is the purpose of a literature review?
  • Examples of literature reviews
  • Step 1 – Search for relevant literature
  • Step 2 – Evaluate and select sources
  • Step 3 – Identify themes, debates, and gaps
  • Step 4 – Outline your literature review’s structure
  • Step 5 – Write your literature review
  • Free lecture slides
  • Other interesting articles
  • Frequently asked questions


When you write a thesis, dissertation, or research paper, you will likely have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:

  • Demonstrate your familiarity with the topic and its scholarly context
  • Develop a theoretical framework and methodology for your research
  • Position your work in relation to other researchers and theorists
  • Show how your research addresses a gap or contributes to a debate
  • Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic

Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We’ve written a step-by-step guide that you can follow below.



Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.

  • Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” (Theoretical literature review about the development of economic migration theory from the 1950s to today.)
  • Example literature review #2: “Literature review as a research methodology: An overview and guidelines” (Methodological literature review about interdisciplinary knowledge acquisition and production.)
  • Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” (Thematic literature review about the effects of technology on language acquisition.)
  • Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” (Chronological literature review about how the concept of listening skills has changed over time.)

You can also check out our templates with literature review examples and sample outlines at the links below.

  • Download Word doc
  • Download Google doc

Before you begin searching for literature, you need a clearly defined topic.

If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research problem and questions.

Make a list of keywords

Start by creating a list of keywords related to your research question. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list as you discover new keywords in the process of your literature search.

For example, for a research question about social media and body image among young people, your keywords might include:

  • Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
  • Body image, self-perception, self-esteem, mental health
  • Generation Z, teenagers, adolescents, youth

Search for relevant sources

Use your keywords to begin searching for sources. Some useful databases to search for journals and articles include:

  • Your university’s library catalogue
  • Google Scholar
  • Project Muse (humanities and social sciences)
  • Medline (life sciences and biomedicine)
  • EconLit (economics)
  • Inspec (physics, engineering and computer science)

You can also use Boolean operators to help narrow down your search.
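In practice, this usually means grouping synonyms with OR and linking distinct concepts with AND, as in ("social media" OR Instagram) AND ("body image" OR self-esteem). As a rough sketch, you could even generate such query strings from a keyword list; the helper function and keywords below are illustrative only, not part of any database’s interface:

```python
# Sketch: build a Boolean search string from a keyword list.
# Synonyms within one concept are joined with OR; concepts with AND.
# The function name and keyword lists are illustrative examples only.

def boolean_query(concepts):
    """Combine lists of synonyms into one Boolean search string."""
    groups = []
    for group in concepts:
        # Quote multi-word phrases so databases treat them as one term.
        terms = [f'"{kw}"' if " " in kw else kw for kw in group]
        groups.append("(" + " OR ".join(terms) + ")")
    return " AND ".join(groups)

query = boolean_query([
    ["social media", "Instagram", "TikTok"],
    ["body image", "self-esteem"],
])
print(query)
# ("social media" OR Instagram OR TikTok) AND ("body image" OR self-esteem)
```

Note that exact Boolean syntax varies by database (some expect AND/OR keywords, others symbols or menus), so check the search help of the catalogue you are using.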

Make sure to read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.

You likely won’t be able to read absolutely everything that has been written on your topic, so it will be necessary to evaluate which sources are most relevant to your research question.

For each publication, ask yourself:

  • What question or problem is the author addressing?
  • What are the key concepts and how are they defined?
  • What are the key theories, models, and methods?
  • Does the research use established frameworks or take an innovative approach?
  • What are the results and conclusions of the study?
  • How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
  • What are the strengths and weaknesses of the research?

Make sure the sources you use are credible, and make sure you read any landmark studies and major theories in your field of research.

You can use our template to summarize and evaluate sources you’re thinking about using.

Take notes and cite your sources

As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.

It is important to keep track of your sources with citations to avoid plagiarism. It can be helpful to make an annotated bibliography, where you compile full citation information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.
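If you like keeping such notes in structured form, even a small script can hold one record per source. This is only a sketch; the field names and the author-date format below are arbitrary choices for illustration, not a prescribed citation style:

```python
from dataclasses import dataclass

@dataclass
class AnnotatedEntry:
    """One annotated bibliography record: citation information plus
    a short paragraph of summary and analysis."""
    author: str
    year: int
    title: str
    annotation: str

    def citation(self) -> str:
        # Rough author-date citation; adapt to APA, MLA, or Chicago as needed.
        return f"{self.author} ({self.year}). {self.title}."

entry = AnnotatedEntry(
    author="McCombes, S.",
    year=2023,
    title="How to Write a Literature Review",
    annotation="Step-by-step guide to searching, evaluating, and structuring.",
)
print(entry.citation())
# McCombes, S. (2023). How to Write a Literature Review.
```

The point is less the tooling than the discipline: one consistent record per source, captured while you read, is what saves time later.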

To begin organizing your literature review’s argument and structure, be sure you understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:

  • Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
  • Themes: what questions or concepts recur across the literature?
  • Debates, conflicts and contradictions: where do sources disagree?
  • Pivotal publications: are there any influential theories or studies that changed the direction of the field?
  • Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?

This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge.

For example, in a review of literature on social media and body image, you might find that:

  • Most research has focused on young women.
  • There is an increasing interest in the visual aspects of social media.
  • But there is still a lack of robust research on highly visual platforms like Instagram and Snapchat—this is a gap that you could address in your own research.

There are various approaches to organizing the body of a literature review. Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).

Chronological

The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarizing sources in order.

Try to analyze patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.

Thematic

If you have found some recurring central themes, you can organize your literature review into subsections that address different aspects of the topic.

For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.

Methodological

If you draw your sources from different disciplines or fields that use a variety of research methods , you might want to compare the results and conclusions that emerge from different approaches. For example:

  • Look at what results have emerged in qualitative versus quantitative research
  • Discuss how the topic has been approached by empirical versus theoretical scholarship
  • Divide the literature into sociological, historical, and cultural sources

Theoretical

A literature review is often the foundation for a theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts.

You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.

Like any other academic text, your literature review should have an introduction, a main body, and a conclusion. What you include in each depends on the objective of your literature review.

The introduction should clearly establish the focus and purpose of the literature review.

Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.

As you write, you can follow these tips:

  • Summarize and synthesize: give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: don’t just paraphrase other researchers — add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: use transition words and topic sentences to draw connections, comparisons and contrasts

In the conclusion, you should summarize the key findings you have taken from the literature and emphasize their significance.

When you’ve finished writing and revising your literature review, don’t forget to proofread thoroughly before submitting. Not a language expert? Check out Scribbr’s professional proofreading services !

This article has been adapted into lecture slides that you can use to teach your students about writing a literature review.

Scribbr slides are free to use, customize, and distribute for educational purposes.

  • Open Google Slides
  • Download PowerPoint

If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.

Methodology

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

Frequently asked questions

What is a literature review?

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

Why is it important to write a literature review?

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarize yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

Where does the literature review go in a dissertation?

The literature review usually comes near the beginning of your thesis or dissertation. After the introduction, it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

What is an annotated bibliography?

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

McCombes, S. (2023, September 11). How to Write a Literature Review | Guide, Examples, & Templates. Scribbr. Retrieved April 15, 2024, from https://www.scribbr.com/dissertation/literature-review/


  • UWF Libraries

Literature Review: Conducting & Writing

  • Sample Literature Reviews
  • Steps for Conducting a Lit Review
  • Finding "The Literature"
  • Organizing/Writing
  • APA Style
  • Chicago: Notes Bibliography
  • MLA Style

Sample Lit Reviews from Communication Arts

Have an exemplary literature review.

  • Literature Review Sample 1
  • Literature Review Sample 2
  • Literature Review Sample 3

Have you written a stellar literature review you care to share for teaching purposes?

Are you an instructor who has received an exemplary literature review and have permission from the student to post?

Please contact Britt McGowan at [email protected] for inclusion in this guide. All disciplines welcome and encouraged.

  • Last Updated: Mar 22, 2024 9:37 AM
  • URL: https://libguides.uwf.edu/litreview

Purdue Online Writing Lab Purdue OWL® College of Liberal Arts

Writing a Literature Review


Welcome to the Purdue OWL

This page is brought to you by the OWL at Purdue University. When printing this page, you must include the entire legal notice.

Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.

A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis ). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays). When we say “literature review” or refer to “the literature,” we are talking about the research ( scholarship ) in a given field. You will often see the terms “the research,” “the scholarship,” and “the literature” used mostly interchangeably.

Where, when, and why would I write a lit review?

There are a number of different situations where you might write a literature review, each with slightly different expectations; different disciplines, too, have field-specific expectations for what a literature review is and does. For instance, in the humanities, authors might include more overt argumentation and interpretation of source material in their literature reviews, whereas in the sciences, authors are more likely to report study designs and results in their literature reviews; these differences reflect these disciplines’ purposes and conventions in scholarship. You should always look at examples from your own discipline and talk to professors or mentors in your field to be sure you understand your discipline’s conventions, for literature reviews as well as for any other genre.

A literature review can be a part of a research paper or scholarly article, usually falling after the introduction and before the research methods sections. In these cases, the lit review just needs to cover scholarship that is important to the issue you are writing about; sometimes it will also cover key sources that informed your research methodology.

Lit reviews can also be standalone pieces, either as assignments in a class or as publications. In a class, a lit review may be assigned to help students familiarize themselves with a topic and with scholarship in their field, get an idea of the other researchers working on the topic they’re interested in, find gaps in existing research in order to propose new projects, and/or develop a theoretical framework and methodology for later research. As a publication, a lit review usually is meant to help make other scholars’ lives easier by collecting and summarizing, synthesizing, and analyzing existing research on a topic. This can be especially helpful for students or scholars getting into a new research area, or for directing an entire community of scholars toward questions that have not yet been answered.

What are the parts of a lit review?

Most lit reviews use a basic introduction-body-conclusion structure; if your lit review is part of a larger paper, the introduction and conclusion pieces may be just a few sentences while you focus most of your attention on the body. If your lit review is a standalone piece, the introduction and conclusion take up more space and give you a place to discuss your goals, research methods, and conclusions separately from where you discuss the literature itself.

Introduction:

  • An introductory paragraph that explains your working topic and thesis
  • A forecast of key topics or texts that will appear in the review
  • Potentially, a description of how you found sources and how you analyzed them for inclusion and discussion in the review (more often found in published, standalone literature reviews than in lit review sections in an article or research paper)
Body:

  • Summarize and synthesize: Give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: Don’t just paraphrase other researchers – add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically Evaluate: Mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: Use transition words and topic sentences to draw connections, comparisons, and contrasts.

Conclusion:

  • Summarize the key findings you have taken from the literature and emphasize their significance
  • Connect it back to your primary research question

How should I organize my lit review?

Lit reviews can take many different organizational patterns depending on what you are trying to accomplish with the review. Here are some examples:

  • Chronological : The simplest approach is to trace the development of the topic over time, which helps familiarize the audience with the topic (for instance if you are introducing something that is not commonly known in your field). If you choose this strategy, be careful to avoid simply listing and summarizing sources in order. Try to analyze the patterns, turning points, and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred (as mentioned previously, this may not be appropriate in your discipline — check with a teacher or mentor if you’re unsure).
  • Thematic : If you have found some recurring central themes that you will continue working with throughout your piece, you can organize your literature review into subsections that address different aspects of the topic. For example, if you are reviewing literature about women and religion, key themes can include the role of women in churches and the religious attitude towards women.
  • Methodological : If you draw on sources from different disciplines or fields that use a variety of research methods, you can compare the results and conclusions that emerge from different approaches, for example:
  • Qualitative versus quantitative research
  • Empirical versus theoretical scholarship
  • Divide the research by sociological, historical, or cultural sources
  • Theoretical : In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts. You can argue for the relevance of a specific theoretical approach or combine various theoretical concepts to create a framework for your research.

What are some strategies or tips I can use while writing my lit review?

Any lit review is only as good as the research it discusses; make sure your sources are well-chosen and your research is thorough. Don’t be afraid to do more research if you discover a new thread as you’re writing. More info on the research process is available in our "Conducting Research" resources .

As you’re doing your research, create an annotated bibliography (see our page on this type of document). Much of the information used in an annotated bibliography can also be used in a literature review, so you’ll not only be partially drafting your lit review as you research, but also developing your sense of the larger conversation going on among scholars, professionals, and any other stakeholders in your topic.

Usually you will need to synthesize research rather than just summarizing it. This means drawing connections between sources to create a picture of the scholarly conversation on a topic over time. Many student writers struggle to synthesize because they feel they don’t have anything to add to the scholars they are citing; here are some strategies to help you:

  • It often helps to remember that the point of these kinds of syntheses is to show your readers how you understand your research, to help them read the rest of your paper.
  • Writing teachers often say synthesis is like hosting a dinner party: imagine all your sources are together in a room, discussing your topic. What are they saying to each other?
  • Look at the in-text citations in each paragraph. Are you citing just one source for each paragraph? This usually indicates summary only. When you have multiple sources cited in a paragraph, you are more likely to be synthesizing them (not always, but often).
  • Read more about synthesis here.

The most interesting literature reviews are often written as arguments (again, as mentioned at the beginning of the page, this is discipline-specific and doesn’t work for all situations). Often, the literature review is where you can establish your research as filling a particular gap or as relevant in a particular way. You have some chance to do this in your introduction in an article, but the literature review section gives a more extended opportunity to establish the conversation in the way you would like your readers to see it. You can choose the intellectual lineage you would like to be part of and whose definitions matter most to your thinking (mostly humanities-specific, but this goes for sciences as well). In addressing these points, you argue for your place in the conversation, which tends to make the lit review more compelling than a simple reporting of other sources.


What is a Literature Review? | Guide, Template, & Examples

Published on 22 February 2022 by Shona McCombes. Revised on 7 June 2022.

What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research.

There are five key steps to writing a literature review:

  • Search for relevant literature
  • Evaluate sources
  • Identify themes, debates and gaps
  • Outline the structure
  • Write your literature review

A good literature review doesn’t just summarise sources – it analyses, synthesises, and critically evaluates to give a clear picture of the state of knowledge on the subject.


Table of contents

  • Why write a literature review?
  • Examples of literature reviews
  • Step 1: Search for relevant literature
  • Step 2: Evaluate and select sources
  • Step 3: Identify themes, debates and gaps
  • Step 4: Outline your literature review’s structure
  • Step 5: Write your literature review
  • Frequently asked questions about literature reviews


When you write a dissertation or thesis, you will have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:

  • Demonstrate your familiarity with the topic and scholarly context
  • Develop a theoretical framework and methodology for your research
  • Position yourself in relation to other researchers and theorists
  • Show how your dissertation addresses a gap or contributes to a debate

You might also have to write a literature review as a stand-alone assignment. In this case, the purpose is to evaluate the current state of research and demonstrate your knowledge of scholarly debates around a topic.

The content will look slightly different in each case, but the process of conducting a literature review follows the same steps. We’ve written a step-by-step guide that you can follow below.



Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.

  • Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” (Theoretical literature review about the development of economic migration theory from the 1950s to today.)
  • Example literature review #2: “Literature review as a research methodology: An overview and guidelines” (Methodological literature review about interdisciplinary knowledge acquisition and production.)
  • Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” (Thematic literature review about the effects of technology on language acquisition.)
  • Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” (Chronological literature review about how the concept of listening skills has changed over time.)

You can also check out our templates with literature review examples and sample outlines at the links below.

  • Download Word doc
  • Download Google doc

Before you begin searching for literature, you need a clearly defined topic.

If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research objectives and questions.

If you are writing a literature review as a stand-alone assignment, you will have to choose a focus and develop a central question to direct your search. Unlike a dissertation research question, this question has to be answerable without collecting original data. You should be able to answer it based only on a review of existing publications.

Make a list of keywords

Start by creating a list of keywords related to your research topic. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list if you discover new keywords in the process of your literature search.

For example, for a research question about social media and body image among young people, your keywords might include:

  • Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
  • Body image, self-perception, self-esteem, mental health
  • Generation Z, teenagers, adolescents, youth

Search for relevant sources

Use your keywords to begin searching for sources. Some databases to search for journals and articles include:

  • Your university’s library catalogue
  • Google Scholar
  • Project Muse (humanities and social sciences)
  • Medline (life sciences and biomedicine)
  • EconLit (economics)
  • Inspec (physics, engineering and computer science)

You can use Boolean operators to help narrow down your search.

Read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.

To identify the most important publications on your topic, take note of recurring citations. If the same authors, books or articles keep appearing in your reading, make sure to seek them out.
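One low-tech way to spot those recurring works is simply to tally every citation you jot down while reading. A minimal sketch (the works named below are placeholders, not real recommendations):

```python
from collections import Counter

# Hypothetical reading notes: every work you saw cited, one entry per mention.
# The names are placeholder examples only.
cited_in_reading = [
    "Smith 2010", "Lee 2015", "Smith 2010",
    "Garcia 2018", "Lee 2015", "Smith 2010",
]

# Works cited most often are likely landmark studies worth seeking out.
for work, n in Counter(cited_in_reading).most_common(2):
    print(f"{work}: cited {n} times")
# Smith 2010: cited 3 times
# Lee 2015: cited 2 times
```

A spreadsheet column and a sort achieve the same thing; what matters is recording the citations as you encounter them rather than trusting memory.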

You probably won’t be able to read absolutely everything that has been written on the topic – you’ll have to evaluate which sources are most relevant to your questions.

For each publication, ask yourself:

  • What question or problem is the author addressing?
  • What are the key concepts and how are they defined?
  • What are the key theories, models and methods? Does the research use established frameworks or take an innovative approach?
  • What are the results and conclusions of the study?
  • How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
  • How does the publication contribute to your understanding of the topic? What are its key insights and arguments?
  • What are the strengths and weaknesses of the research?

Make sure the sources you use are credible, and make sure you read any landmark studies and major theories in your field of research.

You can find out how many times an article has been cited on Google Scholar – a high citation count means the article has been influential in the field, and should certainly be included in your literature review.

The scope of your review will depend on your topic and discipline: in the sciences you usually only review recent literature, but in the humanities you might take a long historical perspective (for example, to trace how a concept has changed in meaning over time).

Remember that you can use our template to summarise and evaluate sources you’re thinking about using!

Take notes and cite your sources

As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.

It’s important to keep track of your sources with references to avoid plagiarism . It can be helpful to make an annotated bibliography, where you compile full reference information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.

You can use our free APA Reference Generator for quick, correct, consistent citations.

To begin organising your literature review’s argument and structure, you need to understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:

  • Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
  • Themes: what questions or concepts recur across the literature?
  • Debates, conflicts and contradictions: where do sources disagree?
  • Pivotal publications: are there any influential theories or studies that changed the direction of the field?
  • Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?

This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge.

For example, in reviewing the literature on social media, you might note that:

  • Most research has focused on young women.
  • There is an increasing interest in the visual aspects of social media.
  • But there is still a lack of robust research on highly visual platforms like Instagram and Snapchat – this is a gap that you could address in your own research.

There are various approaches to organising the body of a literature review. You should have a rough idea of your strategy before you start writing.

Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).

Chronological

The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarising sources in order.

Try to analyse patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.

Thematic

If you have found some recurring central themes, you can organise your literature review into subsections that address different aspects of the topic.

For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.

Methodological

If you draw your sources from different disciplines or fields that use a variety of research methods , you might want to compare the results and conclusions that emerge from different approaches. For example:

  • Look at what results have emerged in qualitative versus quantitative research
  • Discuss how the topic has been approached by empirical versus theoretical scholarship
  • Divide the literature into sociological, historical, and cultural sources

Theoretical

A literature review is often the foundation for a theoretical framework . You can use it to discuss various theories, models, and definitions of key concepts.

You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.

Like any other academic text, your literature review should have an introduction , a main body, and a conclusion . What you include in each depends on the objective of your literature review.

The introduction should clearly establish the focus and purpose of the literature review.

If you are writing the literature review as part of your dissertation or thesis, reiterate your central problem or research question and give a brief summary of the scholarly context. You can emphasise the timeliness of the topic (“many recent studies have focused on the problem of x”) or highlight a gap in the literature (“while there has been much research on x, few researchers have taken y into consideration”).

Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.

As you write, make sure to follow these tips:

  • Summarise and synthesise: give an overview of the main points of each source and combine them into a coherent whole.
  • Analyse and interpret: don’t just paraphrase other researchers – add your own interpretations, discussing the significance of findings in relation to the literature as a whole.
  • Critically evaluate: mention the strengths and weaknesses of your sources.
  • Write in well-structured paragraphs: use transitions and topic sentences to draw connections, comparisons and contrasts.

In the conclusion, you should summarise the key findings you have taken from the literature and emphasise their significance.

If the literature review is part of your dissertation or thesis, reiterate how your research addresses gaps and contributes new knowledge, or discuss how you have drawn on existing theories and methods to build a framework for your research. This can lead directly into your methodology section.

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .

It is often written as part of a dissertation , thesis, research paper , or proposal .

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarise yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

The literature review usually comes near the beginning of your dissertation. After the introduction, it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology.

Cite this Scribbr article

McCombes, S. (2022, June 07). What is a Literature Review? | Guide, Template, & Examples. Scribbr. Retrieved 15 April 2024, from https://www.scribbr.co.uk/thesis-dissertation/literature-review/



How To Write A Literature Review - A Complete Guide

Deeptanshu D


A literature review is much more than just another section in your research paper. It forms the very foundation of your research. It is a formal piece of writing where you analyze the existing theoretical framework, principles, and assumptions and use that as a base to shape your approach to the research question.

Curating and drafting a solid literature review section not only lends more credibility to your research paper but also makes your research tighter and better focused. But writing literature reviews is a difficult task. It requires extensive reading, and you also have to account for market trends and technological and political developments, which can shift in the blink of an eye.

Now streamline your literature review process with the help of SciSpace Copilot. With this AI research assistant, you can efficiently synthesize and analyze a vast amount of information, identify key themes and trends, and uncover gaps in the existing research. Get real-time explanations, summaries, and answers to your questions for the paper you're reviewing, making navigating and understanding the complex literature landscape easier.


In this comprehensive guide, we will explore everything from the definition of a literature review, its appropriate length, various types of literature reviews, and how to write one.

What is a literature review?

A literature review is a survey, critical evaluation, and assessment of the existing literature in a chosen domain.

Eminent researcher and academic Arlene Fink, in her book Conducting Research Literature Reviews , defines it as the following:

“A literature review surveys books, scholarly articles, and any other sources relevant to a particular issue, area of research, or theory, and by so doing, provides a description, summary, and critical evaluation of these works in relation to the research problem being investigated.

Literature reviews are designed to provide an overview of sources you have explored while researching a particular topic, and to demonstrate to your readers how your research fits within a larger field of study.”

Simply put, a literature review can be defined as a critical discussion of relevant pre-existing research around your research question and carving out a definitive place for your study in the existing body of knowledge. Literature reviews can be presented in multiple ways: a section of an article, the whole research paper itself, or a chapter of your thesis.

A literature review paper

A literature review does function as a summary of sources, but it also allows you to analyze further, interpret, and examine the stated theories, methods, viewpoints, and, of course, the gaps in the existing content.

As an author, you can discuss and interpret the research question and its various aspects and debate your adopted methods to support the claim.

What is the purpose of a literature review?

A literature review is meant to help your readers understand the relevance of your research question and where it fits within the existing body of knowledge. As a researcher, you should use it to set the context, build your argument, and establish the need for your study.

What is the importance of a literature review?

The literature review is a critical part of research papers because it helps you:

  • Gain an in-depth understanding of your research question and the surrounding area
  • Convey that you have a thorough understanding of your research area and are up-to-date with the latest changes and advancements
  • Establish how your research is connected or builds on the existing body of knowledge and how it could contribute to further research
  • Elaborate on the validity and suitability of your theoretical framework and research methodology
  • Identify and highlight gaps and shortcomings in the existing body of knowledge and how things need to change
  • Convey to readers how your study is different or how it contributes to the research area

How long should a literature review be?

Ideally, the literature review should take up 15%-40% of the total length of your manuscript. So, if you have a 10,000-word research paper, the minimum word count could be 1500.

Your literature review format depends heavily on the kind of manuscript you are writing: from an entire chapter in a doctoral thesis, to part of the introductory section of a research article, to a full-fledged review article that examines the previously published research on a topic.

Another determining factor is the type of research you are doing. The literature review section tends to be longer for secondary research projects than primary research projects.
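As a quick illustration of the length guideline above, here is a minimal sketch that turns the 15%-40% rule of thumb into a word-count range. The function name is illustrative, not from any library:

```python
def review_length_range(total_words, low_pct=15, high_pct=40):
    """Apply the 15%-40% rule of thumb to a manuscript's total word
    count and return the suggested (minimum, maximum) review length."""
    return total_words * low_pct // 100, total_words * high_pct // 100

# For the 10,000-word research paper mentioned above:
print(review_length_range(10_000))  # prints (1500, 4000)
```

Integer arithmetic is used here so the percentages come out exact; adjust the bounds to match your supervisor's or journal's expectations.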

What are the different types of literature reviews?

Not all literature reviews are the same. There are a variety of possible approaches you can take, depending on the type of research you are pursuing.

Here are the different types of literature reviews:

Argumentative review

It is called an argumentative review when you carefully present literature that only supports or counters a specific argument or premise to establish a viewpoint.

Integrative review

It is a type of literature review focused on building a comprehensive understanding of a topic by combining available theoretical frameworks and empirical evidence.

Methodological review

This approach delves into the "how" and the "what" of the research question — you cannot look at the outcome in isolation; you should also review the methodology used.

Systematic review

This form consists of an overview of existing evidence pertinent to a clearly formulated research question, which uses pre-specified and standardized methods to identify and critically appraise relevant research and collect, report, and analyze data from the studies included in the review.

Meta-analysis review

Meta-analysis uses statistical methods to summarize the results of independent studies. By combining information from all relevant studies, meta-analysis can provide more precise estimates of the effects than those derived from the individual studies included within a review.

Historical review

Historical literature reviews focus on examining research throughout a period, often starting with the first time an issue, concept, theory, or phenomenon emerged in the literature, then tracing its evolution within the scholarship of a discipline. The purpose is to place research in a historical context, show familiarity with state-of-the-art developments, and identify likely directions for future research.

Theoretical Review

This form aims to examine the corpus of theory that has accumulated around an issue, concept, theory, or phenomenon. The theoretical literature review helps to establish what theories exist, the relationships between them, the degree to which existing approaches have been investigated, and to develop new hypotheses to be tested.

Scoping Review

The Scoping Review is often used at the beginning of an article, dissertation, or research proposal. It is conducted before the research to highlight gaps in the existing body of knowledge and to explain why the project should be greenlit.

State-of-the-Art Review

The State-of-the-Art review is conducted periodically, focusing on the most recent research. It describes what is currently known, understood, or agreed upon regarding the research topic and highlights where there are still disagreements.

Can you use the first person in a literature review?

When writing literature reviews, you should avoid the usage of first-person pronouns. It means that instead of "I argue that" or "we argue that," the appropriate expression would be "this research paper argues that."

Do you need an abstract for a literature review?

Ideally, yes. It is always good to have a condensed summary that is self-contained and independent of the rest of your review. When drafting one, you can follow the same fundamental approach as for any abstract. It should include:

  • The research topic and your motivation behind selecting it
  • A one-sentence thesis statement
  • An explanation of the kinds of literature featured in the review
  • Summary of what you've learned
  • Conclusions you drew from the literature you reviewed
  • Potential implications and future scope for research


Is a literature review written in the past tense?

Yes, a literature review should generally be written in the past tense; avoid the present or future tense. The exceptions are statements describing events that happened earlier than the literature you are reviewing, or events that are still occurring; for these, you can use the past perfect or present perfect tense.

How many sources for a literature review?

There are multiple approaches to deciding how many sources to include in a literature review section. The first is to consider the level you are at as a researcher. For instance, a doctoral thesis might need 60+ sources, whereas at the undergraduate level you might only need to refer to 5-15 sources.

The second approach is based on the kind of literature review you are doing — whether it is merely a chapter of your paper or if it is a self-contained paper in itself. When it is just a chapter, sources should equal the total number of pages in your article's body. In the second scenario, you need at least three times as many sources as there are pages in your work.
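The two page-based rules of thumb above can be expressed as a tiny helper. This is only a sketch of the guide's heuristic (one source per page for a chapter, three per page for a standalone paper), not a rule from any style guide:

```python
def suggested_source_count(pages, standalone=False):
    """Heuristic: a review chapter cites roughly one source per page
    of the body; a standalone review paper needs at least three times
    as many sources as it has pages."""
    return pages * 3 if standalone else pages

print(suggested_source_count(10))                   # prints 10
print(suggested_source_count(10, standalone=True))  # prints 30
```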

Quick tips on how to write a literature review

To know how to write a literature review, you must clearly understand its impact and role in establishing your work as substantive research material.

To write a literature review, follow these steps:

  • Outline the purpose behind the literature review
  • Search relevant literature
  • Examine and assess the relevant resources
  • Discover connections by drawing deep insights from the resources
  • Structure planning to write a good literature review

1. Outline and identify the purpose of a literature review

As a first step, you must know what the research question or topic is and what shape you want your literature review to take. Ensure you understand the research topic inside out, or seek clarification where needed. You should be able to answer the questions below before you start:

  • How many sources do I need to include?
  • What kind of sources should I analyze?
  • How much should I critically evaluate each source?
  • Should I summarize, synthesize or offer a critique of the sources?
  • Do I need to include any background information or definitions?

Additionally, bear in mind that the narrower your research topic is, the easier it will be to limit the number of sources you need to analyze.

2. Search relevant literature

Dig deeper into search engines to discover what has already been published around your chosen topic. Make sure you thoroughly go through appropriate reference sources like books, reports, journal articles, government docs, and web-based resources.

You must prepare a list of keywords and their different variations. You can start your search from any library’s catalog, provided you are an active member of that institution. The exact keywords can be extended to widen your research over other databases and academic search engines like:

  • Google Scholar
  • Microsoft Academic
  • Science.gov
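A keyword list and its variations can also be combined into boolean query strings programmatically. The sketch below is a hypothetical helper (not tied to any particular database's API) that joins synonym groups into the `("a" OR "b") AND ("c")` form most academic search engines accept:

```python
def build_query(*synonym_groups):
    """Combine groups of synonyms into a boolean search string:
    terms within a group are OR-ed, groups are AND-ed together."""
    clauses = (
        "(" + " OR ".join(f'"{term}"' for term in group) + ")"
        for group in synonym_groups
    )
    return " AND ".join(clauses)

query = build_query(
    ["social media", "Instagram", "Snapchat"],
    ["body image", "self-esteem"],
)
print(query)
# prints ("social media" OR "Instagram" OR "Snapchat") AND ("body image" OR "self-esteem")
```

The example topic and terms are illustrative; substitute your own keyword variations and check each database's documentation for its exact operator syntax.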

That said, it is not advisable to go through every resource word by word. Instead, start by reading the abstract and then decide whether the source is relevant to your research.

Additionally, spend ample time assessing the quality and relevance of your resources. It helps to prepare a list of citations as you go, to ensure that no authors, publications, or articles are repeated in the literature review.

3. Examine and assess the sources

It is nearly impossible to go through every detail of each research article. Rather than trying to capture everything, analyze and decide which sources are closest and most relevant to your chosen domain.

While analyzing the sources, you should look to find out answers to questions like:

  • What question or problem is the author describing and debating?
  • How are the key concepts defined?
  • How well are the theories, approach, and methodology explained?
  • Does the research take a conventional approach or an innovative one?
  • How relevant are the key findings of the work?
  • In what ways does it relate to other sources on the same topic?
  • What challenges does this research paper pose to existing theory?
  • What possible contributions or benefits does it add to the subject domain?

Always be mindful that you refer only to credible and authentic resources, and take references from a range of publications to validate your theory.

Keep track of important information or data you can present in your literature review right from the beginning. This will help you steer clear of plagiarism and make it easier to curate an annotated bibliography or reference section.

4. Discover connections

At this stage, you must start deciding on the argument and structure of your literature review. To accomplish this, identify the relationships and connections between the various resources as you draft your review.

A few aspects that you should be aware of while writing a literature review include:

  • Rise to prominence: theories and methods that have gained reputation and supporters over time.
  • Constant scrutiny: concepts or theories that have repeatedly come under examination.
  • Contradictions and conflicts: both the supporting and the contradicting theories for the research topic.
  • Knowledge gaps: what the existing literature fails to address, and how further research could bridge those gaps.
  • Influential resources: significant research projects that are upheld as milestones or that could modify current trends.

Once you join the dots between various past research works, it will be easier for you to draw a conclusion and identify your contribution to the existing knowledge base.

5. Structure planning to write a good literature review

There are different ways to plan and execute the structure of a literature review. The format varies and depends upon the length of the research.

Like any other research paper, the literature review format must contain three sections: introduction, body, and conclusion. The goals and objectives of the research question determine what goes inside these three sections.

Nevertheless, a good literature review can be structured according to the chronological, thematic, methodological, or theoretical framework approach.

Literature review samples

1. Standalone


2. As a section of a research paper


How SciSpace Discover makes literature reviews a breeze

SciSpace Discover is a one-stop solution for an effective literature search with barrier-free access to scientific knowledge. It is an excellent repository where you can find millions of peer-reviewed articles and full-text PDF files. Here’s more on how you can use it:

Find the right information


Find what you want quickly and easily with comprehensive search filters that let you narrow down papers according to PDF availability, year of publishing, document type, and affiliated institution. Moreover, you can sort the results based on the publishing date, citation count, and relevance.

Assess credibility of papers quickly


When doing the literature review, it is critical to establish the quality of your sources. They form the foundation of your research. SciSpace Discover helps you assess the quality of a source by providing an overview of its references, citations, and performance metrics.

Get the complete picture in no time


SciSpace Discover’s personalized suggestion engine helps you stay on course and get the complete picture of the topic from one place. Every time you visit an article page, it provides links to related papers. Besides that, it helps you understand what’s trending, who the top authors are, and who the leading publishers are on a topic.

Make referring sources super easy


To ensure you don't lose track of your sources, start noting down your references as soon as you begin the literature review. SciSpace Discover makes this step effortless. Click the 'cite' button on an article page, and you will receive preloaded citation text in multiple styles; all you have to do is copy and paste it into your manuscript.

Final tips on how to write a literature review

A massive chunk of time and effort is required to write a good literature review. But, if you go about it systematically, you'll be able to save a ton of time and build a solid foundation for your research.

We hope this guide has helped you answer several key questions you have about writing literature reviews.

Would you like to explore SciSpace Discover and kick off your literature search right away? You can get started here .

Frequently Asked Questions (FAQs)

1. How to start a literature review?

Start by asking yourself the following questions:

• What questions do you want to answer?

• What sources do you need to answer these questions?

• What information do these sources contain?

• How can you use this information to answer your questions?

2. What to include in a literature review?

• A brief background of the problem or issue

• What has previously been done to address the problem or issue

• A description of what you will do in your project

• How this study will contribute to research on the subject

3. Why literature review is important?

The literature review is an important part of any research project because it allows the writer to look at previous studies on a topic and determine existing gaps in the literature, as well as what has already been done. It will also help them to choose the most appropriate method for their own study.

4. How to cite a literature review in APA format?

To cite a literature review in APA style, you need to provide the author's name, the title of the article, and the year of publication. For example: Patel, A. B., & Stokes, G. S. (2012). The relationship between personality and intelligence: A meta-analysis of longitudinal research. Personality and Individual Differences, 53(1), 16-21

5. What are the components of a literature review?

• A brief introduction to the topic, including its background and context. The introduction should also include a rationale for why the study is being conducted and what it will accomplish.

• A description of the methodologies used in the study. This can include information about data collection methods, sample size, and statistical analyses.

• A presentation of the findings in an organized format that helps readers follow along with the author's conclusions.

6. What are common errors in writing literature review?

• Not spending enough time to critically evaluate the relevance of resources, observations and conclusions.

• Totally relying on secondary data while ignoring primary data.

• Letting your personal bias seep into your interpretation of existing literature.

• Not explaining the procedure used to discover and identify appropriate literature.

7. What are the 5 C's of writing literature review?

• Cite - the sources you utilized and referenced in your research.

• Compare - existing arguments, hypotheses, methodologies, and conclusions found in the knowledge base.

• Contrast - the arguments, topics, methodologies, approaches, and disputes that may be found in the literature.

• Critique - the literature and describe the ideas and opinions you find more convincing and why.

• Connect - the various studies you reviewed in your research.

8. How many sources should a literature review have?

When it is just a chapter, sources should equal the total number of pages in your article's body. If it is a self-contained paper, you need at least three times as many sources as there are pages in your work.

9. Can literature review have diagrams?

Yes, diagrams can be used in a literature review:

• To represent an abstract idea or concept

• To explain the steps of a process or procedure

• To help readers understand the relationships between different concepts

10. How old should sources be in a literature review?

Sources for a literature review should be as current as possible or not older than ten years. The only exception to this rule is if you are reviewing a historical topic and need to use older sources.

11. What are the types of literature review?

• Argumentative review

• Integrative review

• Methodological review

• Systematic review

• Meta-analysis review

• Historical review

• Theoretical review

• Scoping review

• State-of-the-Art review

12. Is a literature review mandatory?

Yes. A literature review is a mandatory part of any research project. It is a critical step in the process that allows you to establish the scope of your research and provide a background for the rest of your work.

But before you go,

  • Six Online Tools for Easy Literature Review
  • Evaluating literature review: systematic vs. scoping reviews
  • Systematic Approaches to a Successful Literature Review
  • Writing Integrative Literature Reviews: Guidelines and Examples


Harvey Cushing/John Hay Whitney Medical Library


YSN Doctoral Programs: Steps in Conducting a Literature Review


What is a literature review?

A literature review is an integrated analysis -- not just a summary -- of scholarly writings and other relevant evidence related directly to your research question.  That is, it represents a synthesis of the evidence that provides background information on your topic and shows an association between the evidence and your research question.

A literature review may be a stand-alone work or the introduction to a larger research paper, depending on the assignment.  Rely heavily on the guidelines your instructor has given you.

Why is it important?

A literature review is important because it:

  • Explains the background of research on a topic.
  • Demonstrates why a topic is significant to a subject area.
  • Discovers relationships between research studies/ideas.
  • Identifies major themes, concepts, and researchers on a topic.
  • Identifies critical gaps and points of disagreement.
  • Discusses further research questions that logically come out of the previous studies.


1. Choose a topic. Define your research question.

Your literature review should be guided by your central research question.  The literature represents background and research developments related to a specific research question, interpreted and analyzed by you in a synthesized way.

  • Make sure your research question is not too broad or too narrow.  Is it manageable?
  • Begin writing down terms that are related to your question. These will be useful for searches later.
  • If you have the opportunity, discuss your topic with your professor and your classmates.

2. Decide on the scope of your review

How many studies do you need to look at? How comprehensive should it be? How many years should it cover? 

  • This may depend on your assignment.  How many sources does the assignment require?

3. Select the databases you will use to conduct your searches.

Make a list of the databases you will search. 

Where to find databases:

  • use the tabs on this guide
  • Find other databases in the Nursing Information Resources web page
  • More on the Medical Library web page
  • ... and more on the Yale University Library web page

4. Conduct your searches to find the evidence. Keep track of your searches.

  • Use the key words in your question, as well as synonyms for those words, as terms in your search. Use the database tutorials for help.
  • Save the searches in the databases. This saves time when you want to redo, or modify, the searches. Saved searches are also helpful to use as a guide if the searches are not finding any useful results.
  • Review the abstracts of research studies carefully. This will save you time.
  • Use the bibliographies and references of research studies you find to locate others.
  • Check with your professor, or a subject expert in the field, if you are missing any key works in the field.
  • Ask your librarian for help at any time.
  • Use a citation manager, such as EndNote as the repository for your citations. See the EndNote tutorials for help.
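Combining the keywords in your question with their synonyms, as suggested above, follows a simple pattern in most databases: OR within a concept, AND between concepts. A minimal sketch of that pattern in Python -- the search terms below are illustrative placeholders, not a recommended search strategy:

```python
# Build a Boolean search string: OR joins synonyms within a concept,
# AND joins the concepts themselves.

def build_query(concept_groups):
    """Each inner list is one concept and its synonyms."""
    clauses = ["(" + " OR ".join(synonyms) + ")" for synonyms in concept_groups]
    return " AND ".join(clauses)

groups = [
    ["telehealth", "telemedicine", "eHealth"],  # concept 1 and synonyms
    ["adherence", "compliance"],                # concept 2 and synonyms
]
print(build_query(groups))
# (telehealth OR telemedicine OR eHealth) AND (adherence OR compliance)
```

The same grouped structure also makes it easy to add or drop a synonym and rerun the search when results are too narrow or too broad.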

Review the literature

Some questions to help you analyze the research:

  • What was the research question of the study you are reviewing? What were the authors trying to discover?
  • Was the research funded by a source that could influence the findings?
  • What were the research methodologies? Analyze the study's literature review, the samples and variables used, the results, and the conclusions.
  • Does the research seem to be complete? Could it have been conducted more soundly? What further questions does it raise?
  • If there are conflicting studies, why do you think that is?
  • How are the authors viewed in the field? Has this study been cited? If so, how has it been analyzed?

Tips: 

  • Review the abstracts carefully.  
  • Keep careful notes so that you may track your thought processes during the research process.
  • Create a matrix of the studies for easy analysis, and synthesis, across all of the studies.
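The matrix in the last tip can be as simple as one record per study with a column for each dimension you want to compare. A minimal sketch in Python -- the study names, designs, and findings are hypothetical:

```python
# A synthesis matrix: one row per study, one column per comparison dimension.
matrix = [
    {"study": "Author A (2019)", "design": "RCT",    "sample": 120, "finding": "positive"},
    {"study": "Author B (2021)", "design": "cohort", "sample": 85,  "finding": "mixed"},
    {"study": "Author C (2022)", "design": "RCT",    "sample": 200, "finding": "positive"},
]

# Synthesis across studies then becomes filtering and grouping:
rcts = [row["study"] for row in matrix if row["design"] == "RCT"]
print(rcts)
# ['Author A (2019)', 'Author C (2022)']
```

The same table works equally well in a spreadsheet; the point is that a fixed set of columns forces you to extract comparable information from every study.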
  • Last Updated: Jan 4, 2024 10:52 AM
  • URL: https://guides.library.yale.edu/YSNDoctoral

The Sheridan Libraries

Write a Literature Review

Introduction

Literature reviews take time. Here is some general information to know before you start.

  • VIDEO -- This video is a great overview of the entire process (2020; North Carolina State University Libraries). The transcript is included. This is for everyone; ignore the mention of "graduate students." It runs 9.5 minutes, and every second is important.
  • OVERVIEW -- Read this page from Purdue's OWL. It's not long, and it gives some tips to fill in what you just learned from the video.
  • NOT A RESEARCH ARTICLE -- A literature review follows a different style, format, and structure from a research article.

Steps to Completing a Literature Review


  • Last Updated: Sep 26, 2023 10:25 AM
  • URL: https://guides.library.jhu.edu/lit-review


What is a Literature Review? How to Write It (with Examples)


A literature review is a critical analysis and synthesis of existing research on a particular topic. It provides an overview of the current state of knowledge, identifies gaps, and highlights key findings in the literature. 1 The purpose of a literature review is to situate your own research within the context of existing scholarship, demonstrating your understanding of the topic and showing how your work contributes to the ongoing conversation in the field. Learning how to write a literature review is a critical skill for successful research. Your ability to summarize and synthesize prior research on a topic demonstrates your grasp of the subject and supports the learning process. 

Table of Contents

  • What is the purpose of literature review? 
  • a. Habitat Loss and Species Extinction: 
  • b. Range Shifts and Phenological Changes: 
  • c. Ocean Acidification and Coral Reefs: 
  • d. Adaptive Strategies and Conservation Efforts: 
  • How to write a good literature review 
  • Choose a Topic and Define the Research Question: 
  • Decide on the Scope of Your Review: 
  • Select Databases for Searches: 
  • Conduct Searches and Keep Track: 
  • Review the Literature: 
  • Organize and Write Your Literature Review: 
  • Frequently asked questions 

What is a literature review?

A well-conducted literature review demonstrates the researcher’s familiarity with the existing literature, establishes the context for their own research, and contributes to scholarly conversations on the topic. One of the purposes of a literature review is also to help researchers avoid duplicating previous work and ensure that their research is informed by and builds upon the existing body of knowledge.


What is the purpose of literature review?

A literature review serves several important purposes within academic and research contexts. Here are some key objectives and functions of a literature review: 2  

  • Contextualizing the Research Problem: The literature review provides a background and context for the research problem under investigation. It helps to situate the study within the existing body of knowledge. 
  • Identifying Gaps in Knowledge: By identifying gaps, contradictions, or areas requiring further research, the researcher can shape the research question and justify the significance of the study. This is crucial for ensuring that the new research contributes something novel to the field. 
  • Understanding Theoretical and Conceptual Frameworks: Literature reviews help researchers gain an understanding of the theoretical and conceptual frameworks used in previous studies. This aids in the development of a theoretical framework for the current research. 
  • Providing Methodological Insights: Another purpose of literature reviews is to let researchers learn about the methodologies employed in previous studies. This can help in choosing appropriate research methods for the current study and in avoiding pitfalls that others may have encountered. 
  • Establishing Credibility: A well-conducted literature review demonstrates the researcher’s familiarity with existing scholarship, establishing their credibility and expertise in the field. It also helps in building a solid foundation for the new research. 
  • Informing Hypotheses or Research Questions: The literature review guides the formulation of hypotheses or research questions by highlighting relevant findings and areas of uncertainty in existing literature. 

Literature review example

Let’s delve deeper with a literature review example: Let’s say your literature review is about the impact of climate change on biodiversity. You might format your literature review into sections such as the effects of climate change on habitat loss and species extinction, phenological changes, and marine biodiversity. Each section would then summarize and analyze relevant studies in those areas, highlighting key findings and identifying gaps in the research. The review would conclude by emphasizing the need for further research on specific aspects of the relationship between climate change and biodiversity. The following literature review template provides a glimpse into the recommended literature review structure and content, demonstrating how research findings are organized around specific themes within a broader topic. 

Literature Review on Climate Change Impacts on Biodiversity:

Climate change is a global phenomenon with far-reaching consequences, including significant impacts on biodiversity. This literature review synthesizes key findings from various studies: 

a. Habitat Loss and Species Extinction:

Climate change-induced alterations in temperature and precipitation patterns contribute to habitat loss, affecting numerous species (Thomas et al., 2004). The review discusses how these changes increase the risk of extinction, particularly for species with specific habitat requirements. 

b. Range Shifts and Phenological Changes:

Observations of range shifts and changes in the timing of biological events (phenology) are documented in response to changing climatic conditions (Parmesan & Yohe, 2003). These shifts affect ecosystems and may lead to mismatches between species and their resources. 

c. Ocean Acidification and Coral Reefs:

The review explores the impact of climate change on marine biodiversity, emphasizing ocean acidification’s threat to coral reefs (Hoegh-Guldberg et al., 2007). Changes in pH levels negatively affect coral calcification, disrupting the delicate balance of marine ecosystems. 

d. Adaptive Strategies and Conservation Efforts:

Recognizing the urgency of the situation, the literature review discusses various adaptive strategies adopted by species and conservation efforts aimed at mitigating the impacts of climate change on biodiversity (Hannah et al., 2007). It emphasizes the importance of interdisciplinary approaches for effective conservation planning. 


How to write a good literature review

Writing a literature review involves summarizing and synthesizing existing research on a particular topic. A good literature review format should include the following elements. 

Introduction: The introduction sets the stage for your literature review, providing context and introducing the main focus of your review. 

  • Opening Statement: Begin with a general statement about the broader topic and its significance in the field. 
  • Scope and Purpose: Clearly define the scope of your literature review. Explain the specific research question or objective you aim to address. 
  • Organizational Framework: Briefly outline the structure of your literature review, indicating how you will categorize and discuss the existing research. 
  • Significance of the Study: Highlight why your literature review is important and how it contributes to the understanding of the chosen topic. 
  • Thesis Statement: Conclude the introduction with a concise thesis statement that outlines the main argument or perspective you will develop in the body of the literature review. 

Body: The body of the literature review is where you provide a comprehensive analysis of existing literature, grouping studies based on themes, methodologies, or other relevant criteria. 

  • Organize by Theme or Concept: Group studies that share common themes, concepts, or methodologies. Discuss each theme or concept in detail, summarizing key findings and identifying gaps or areas of disagreement. 
  • Critical Analysis: Evaluate the strengths and weaknesses of each study. Discuss the methodologies used, the quality of evidence, and the overall contribution of each work to the understanding of the topic. 
  • Synthesis of Findings: Synthesize the information from different studies to highlight trends, patterns, or areas of consensus in the literature. 
  • Identification of Gaps: Discuss any gaps or limitations in the existing research and explain how your review contributes to filling these gaps. 
  • Transition between Sections: Provide smooth transitions between different themes or concepts to maintain the flow of your literature review. 

Conclusion: The conclusion of your literature review should summarize the main findings, highlight the contributions of the review, and suggest avenues for future research. 

  • Summary of Key Findings: Recap the main findings from the literature and restate how they contribute to your research question or objective. 
  • Contributions to the Field: Discuss the overall contribution of your literature review to the existing knowledge in the field. 
  • Implications and Applications: Explore the practical implications of the findings and suggest how they might impact future research or practice. 
  • Recommendations for Future Research: Identify areas that require further investigation and propose potential directions for future research in the field. 
  • Final Thoughts: Conclude with a final reflection on the importance of your literature review and its relevance to the broader academic community. 


Conducting a literature review

Conducting a literature review is an essential step in research that involves reviewing and analyzing existing literature on a specific topic. It’s important to know how to do a literature review effectively, so here are the steps to follow: 1  

Choose a Topic and Define the Research Question:

  • Select a topic that is relevant to your field of study. 
  • Clearly define your research question or objective. Determine which specific aspect of the topic you want to explore. 

Decide on the Scope of Your Review:

  • Determine the timeframe for your literature review. Are you focusing on recent developments, or do you want a historical overview? 
  • Consider the geographical scope. Is your review global, or are you focusing on a specific region? 
  • Define the inclusion and exclusion criteria. What types of sources will you include? Are there specific types of studies or publications you will exclude? 

Select Databases for Searches:

  • Identify relevant databases for your field. Examples include PubMed, IEEE Xplore, Scopus, Web of Science, and Google Scholar. 
  • Consider searching in library catalogs, institutional repositories, and specialized databases related to your topic. 

Conduct Searches and Keep Track:

  • Develop a systematic search strategy using keywords, Boolean operators (AND, OR, NOT), and other search techniques. 
  • Record and document your search strategy for transparency and replicability. 
  • Keep track of the articles, including publication details, abstracts, and links. Use citation management tools like EndNote, Zotero, or Mendeley to organize your references. 
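Recording your search strategy for transparency and replicability, as the steps above recommend, can be as lightweight as a running log of database, query, date, and result count. A minimal sketch -- the field names, databases, and counts are hypothetical, not a prescribed schema:

```python
# A simple search log: one entry per search run, so the strategy
# can be reported and replicated later.
from datetime import date

search_log = []

def log_search(database, query, n_results):
    search_log.append({
        "date": date.today().isoformat(),  # when the search was run
        "database": database,              # e.g., PubMed, Scopus
        "query": query,                    # the exact search string used
        "results": n_results,              # hits returned on that date
    })

log_search("PubMed", "(telehealth) AND (adherence)", 143)
log_search("Scopus", "(telehealth) AND (adherence)", 98)
```

A log like this is also what PRISMA-style flow reporting expects as input: you can total the hits per database and document when each search was last rerun.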

Review the Literature:

  • Evaluate the relevance and quality of each source. Consider the methodology, sample size, and results of studies. 
  • Organize the literature by themes or key concepts. Identify patterns, trends, and gaps in the existing research. 
  • Summarize key findings and arguments from each source. Compare and contrast different perspectives. 
  • Identify areas where there is a consensus in the literature and where there are conflicting opinions. 
  • Provide critical analysis and synthesis of the literature. What are the strengths and weaknesses of existing research? 

Organize and Write Your Literature Review:

  • Base your literature review outline on themes, chronological order, or methodological approaches. 
  • Write a clear and coherent narrative that synthesizes the information gathered. 
  • Use proper citations for each source and ensure consistency in your citation style (APA, MLA, Chicago, etc.). 
  • Conclude your literature review by summarizing key findings, identifying gaps, and suggesting areas for future research. 

The literature review sample and detailed advice on writing and conducting a review will help you produce a well-structured report. But remember that a literature review is an ongoing process, and it may be necessary to revisit and update it as your research progresses. 

Frequently asked questions

A literature review is a critical and comprehensive analysis of existing literature (published and unpublished works) on a specific topic or research question and provides a synthesis of the current state of knowledge in a particular field. A well-conducted literature review is crucial for researchers to build upon existing knowledge, avoid duplication of efforts, and contribute to the advancement of their field. It also helps researchers situate their work within a broader context and facilitates the development of a sound theoretical and conceptual framework for their studies.

A literature review is a crucial component of research writing, providing a solid background for a research paper's investigation. The aim is to keep professionals up to date by providing an understanding of ongoing developments within a specific field, including the research methods and experimental techniques used in that field, and to present that knowledge in the form of a written report. The depth and breadth of the literature review also underscore the credibility of the scholar in his or her field. 

Before writing a literature review, it’s essential to undertake several preparatory steps to ensure that your review is well-researched, organized, and focused. This includes choosing a topic of general interest to you and doing exploratory research on that topic, writing an annotated bibliography, and noting major points, especially those that relate to the position you have taken on the topic. 

Literature reviews and academic research papers are essential components of scholarly work but serve different purposes within the academic realm. 3 A literature review aims to provide a foundation for understanding the current state of research on a particular topic, identify gaps or controversies, and lay the groundwork for future research. Therefore, it draws heavily from existing academic sources, including books, journal articles, and other scholarly publications. In contrast, an academic research paper aims to present new knowledge, contribute to the academic discourse, and advance the understanding of a specific research question. Therefore, it involves a mix of existing literature (in the introduction and literature review sections) and original data or findings obtained through research methods. 

Literature reviews are essential components of academic and research papers, and various strategies can be employed to conduct them effectively. If you want to know how to write a literature review for a research paper, here are four common approaches that are often used by researchers.

  • Chronological Review: This strategy involves organizing the literature based on the chronological order of publication. It helps to trace the development of a topic over time, showing how ideas, theories, and research have evolved.
  • Thematic Review: Thematic reviews focus on identifying and analyzing themes or topics that cut across different studies. Instead of organizing the literature chronologically, it is grouped by key themes or concepts, allowing for a comprehensive exploration of various aspects of the topic.
  • Methodological Review: This strategy involves organizing the literature based on the research methods employed in different studies. It helps to highlight the strengths and weaknesses of various methodologies and allows the reader to evaluate the reliability and validity of the research findings.
  • Theoretical Review: A theoretical review examines the literature based on the theoretical frameworks used in different studies. This approach helps to identify the key theories that have been applied to the topic and assess their contributions to the understanding of the subject.

It's important to note that these strategies are not mutually exclusive, and a literature review may combine elements of more than one approach. The choice of strategy depends on the research question, the nature of the literature available, and the goals of the review. Additionally, other strategies, such as integrative reviews or systematic reviews, may be employed depending on the specific requirements of the research.

The literature review format can vary depending on the specific publication guidelines. However, there are some common elements and structures that are often followed. Here is a general guideline for the format of a literature review:

  • Introduction: Provide an overview of the topic. Define the scope and purpose of the literature review. State the research question or objective.
  • Body: Organize the literature by themes, concepts, or chronology. Critically analyze and evaluate each source. Discuss the strengths and weaknesses of the studies. Highlight any methodological limitations or biases. Identify patterns, connections, or contradictions in the existing research.
  • Conclusion: Summarize the key points discussed in the literature review. Highlight the research gap. Address the research question or objective stated in the introduction. Highlight the contributions of the review and suggest directions for future research.

Both annotated bibliographies and literature reviews involve the examination of scholarly sources. While annotated bibliographies focus on individual sources with brief annotations, literature reviews provide a more in-depth, integrated, and comprehensive analysis of existing literature on a specific topic. 

References 

  • Denney, A. S., & Tewksbury, R. (2013). How to write a literature review. Journal of Criminal Justice Education, 24(2), 218–234. 
  • Pan, M. L. (2016).  Preparing literature reviews: Qualitative and quantitative approaches . Taylor & Francis. 
  • Cantero, C. (2019). How to write a literature review.  San José State University Writing Center . 


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.


Chapter 9. Methods for Literature Reviews

Guy Paré and Spyros Kitsiou .

9.1. Introduction

Literature reviews play a critical role in scholarship because science remains, first and foremost, a cumulative endeavour ( vom Brocke et al., 2009 ). As in any academic discipline, rigorous knowledge syntheses are becoming indispensable in keeping up with an exponentially growing eHealth literature, assisting practitioners, academics, and graduate students in finding, evaluating, and synthesizing the contents of many empirical and conceptual papers. Among other methods, literature reviews are essential for: (a) identifying what has been written on a subject or topic; (b) determining the extent to which a specific research area reveals any interpretable trends or patterns; (c) aggregating empirical findings related to a narrow research question to support evidence-based practice; (d) generating new frameworks and theories; and (e) identifying topics or questions requiring more investigation ( Paré, Trudel, Jaana, & Kitsiou, 2015 ).

Literature reviews can take two major forms. The most prevalent one is the “literature review” or “background” section within a journal paper or a chapter in a graduate thesis. This section synthesizes the extant literature and usually identifies the gaps in knowledge that the empirical study addresses ( Sylvester, Tate, & Johnstone, 2013 ). It may also provide a theoretical foundation for the proposed study, substantiate the presence of the research problem, justify the research as one that contributes something new to the cumulated knowledge, or validate the methods and approaches for the proposed study ( Hart, 1998 ; Levy & Ellis, 2006 ).

The second form of literature review, which is the focus of this chapter, constitutes an original and valuable work of research in and of itself ( Paré et al., 2015 ). Rather than providing a base for a researcher’s own work, it creates a solid starting point for all members of the community interested in a particular area or topic ( Mulrow, 1987 ). The so-called “review article” is a journal-length paper which has an overarching purpose to synthesize the literature in a field, without collecting or analyzing any primary data ( Green, Johnson, & Adams, 2006 ).

When appropriately conducted, review articles represent powerful information sources for practitioners looking for state-of-the art evidence to guide their decision-making and work practices ( Paré et al., 2015 ). Further, high-quality reviews become frequently cited pieces of work which researchers seek out as a first clear outline of the literature when undertaking empirical studies ( Cooper, 1988 ; Rowe, 2014 ). Scholars who track and gauge the impact of articles have found that review papers are cited and downloaded more often than any other type of published article ( Cronin, Ryan, & Coughlan, 2008 ; Montori, Wilczynski, Morgan, Haynes, & Hedges, 2003 ; Patsopoulos, Analatos, & Ioannidis, 2005 ). The reason for their popularity may be the fact that reading the review enables one to have an overview, if not a detailed knowledge of the area in question, as well as references to the most useful primary sources ( Cronin et al., 2008 ). Although they are not easy to conduct, the commitment to complete a review article provides a tremendous service to one’s academic community ( Paré et al., 2015 ; Petticrew & Roberts, 2006 ). Most, if not all, peer-reviewed journals in the fields of medical informatics publish review articles of some type.

The main objectives of this chapter are fourfold: (a) to provide an overview of the major steps and activities involved in conducting a stand-alone literature review; (b) to describe and contrast the different types of review articles that can contribute to the eHealth knowledge base; (c) to illustrate each review type with one or two examples from the eHealth literature; and (d) to provide a series of recommendations for prospective authors of review articles in this domain.

9.2. Overview of the Literature Review Process and Steps

As explained in Templier and Paré (2015) , there are six generic steps involved in conducting a review article:

  • formulating the research question(s) and objective(s),
  • searching the extant literature,
  • screening for inclusion,
  • assessing the quality of primary studies,
  • extracting data, and
  • analyzing data.

Although these steps are presented here in sequential order, one must keep in mind that the review process can be iterative and that many activities can be initiated during the planning stage and later refined during subsequent phases ( Finfgeld-Connett & Johnson, 2013 ; Kitchenham & Charters, 2007 ).

Formulating the research question(s) and objective(s): As a first step, members of the review team must appropriately justify the need for the review itself ( Petticrew & Roberts, 2006 ), identify the review’s main objective(s) ( Okoli & Schabram, 2010 ), and define the concepts or variables at the heart of their synthesis ( Cooper & Hedges, 2009 ; Webster & Watson, 2002 ). Importantly, they also need to articulate the research question(s) they propose to investigate ( Kitchenham & Charters, 2007 ). In this regard, we concur with Jesson, Matheson, and Lacey (2011) that clearly articulated research questions are key ingredients that guide the entire review methodology; they underscore the type of information that is needed, inform the search for and selection of relevant literature, and guide or orient the subsequent analysis.

Searching the extant literature: The next step consists of searching the literature and making decisions about the suitability of material to be considered in the review ( Cooper, 1988 ). There exist three main coverage strategies. First, exhaustive coverage means an effort is made to be as comprehensive as possible in order to ensure that all relevant studies, published and unpublished, are included in the review and, thus, that conclusions are based on this all-inclusive knowledge base. The second strategy consists of presenting materials that are representative of most other works in a given field or area. Often authors who adopt this strategy will search for relevant articles in a small number of top-tier journals in a field ( Paré et al., 2015 ). In the third strategy, the review team concentrates on prior works that have been central or pivotal to a particular topic. This may include empirical studies or conceptual papers that initiated a line of investigation, changed how problems or questions were framed, introduced new methods or concepts, or engendered important debate ( Cooper, 1988 ).

Screening for inclusion: The following step consists of evaluating the applicability of the material identified in the preceding step ( Levy & Ellis, 2006 ; vom Brocke et al., 2009 ). Once a group of potential studies has been identified, members of the review team must screen them to determine their relevance ( Petticrew & Roberts, 2006 ). A set of predetermined rules provides a basis for including or excluding certain studies. This exercise requires a significant investment on the part of researchers, who must maintain objectivity and avoid biases or mistakes. As discussed later in this chapter, for certain types of reviews there must be at least two independent reviewers involved in the screening process, and a procedure to resolve disagreements must also be in place ( Liberati et al., 2009 ; Shea et al., 2009 ).

Assessing the quality of primary studies: In addition to screening material for inclusion, members of the review team may need to assess the scientific quality of the selected studies, that is, appraise the rigour of the research design and methods. Such formal assessment, which is usually conducted independently by at least two coders, helps members of the review team refine which studies to include in the final sample, determine whether or not differences in quality may affect their conclusions, and guide how they analyze the data and interpret the findings ( Petticrew & Roberts, 2006 ). Ascribing quality scores to each primary study, or considering through domain-based evaluations which study components have or have not been designed and executed appropriately, makes it possible to reflect on the extent to which each selected study addresses possible biases and maximizes validity ( Shea et al., 2009 ).

Extracting data: The following step involves gathering or extracting applicable information from each primary study included in the sample and deciding what is relevant to the problem of interest ( Cooper & Hedges, 2009 ). Indeed, the type of data that should be recorded mainly depends on the initial research questions ( Okoli & Schabram, 2010 ). However, important information may also be gathered about how, when, where and by whom the primary study was conducted, the research design and methods, or the qualitative/quantitative results ( Cooper & Hedges, 2009 ).

Analyzing and synthesizing data: As a final step, members of the review team must collate, summarize, aggregate, organize, and compare the evidence extracted from the included studies. The extracted data must be presented in a meaningful way that suggests a new contribution to the extant literature ( Jesson et al., 2011 ). Webster and Watson (2002) warn researchers that literature reviews should be much more than lists of papers and should provide a coherent lens for making sense of extant knowledge on a given topic. There exist several methods and techniques for synthesizing quantitative (e.g., frequency analysis, meta-analysis) and qualitative (e.g., grounded theory, narrative analysis, meta-ethnography) evidence ( Dixon-Woods, Agarwal, Jones, Young, & Sutton, 2005 ; Thomas & Harden, 2008 ).
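The screening step described above lends itself to a simple illustration. The sketch below applies a set of predetermined inclusion rules to candidate records and flags disagreements between two independent screeners; every field name, criterion, and value here is hypothetical and purely illustrative, not drawn from the sources cited in this chapter.

```python
# Hypothetical sketch: applying predetermined inclusion rules to candidate
# records and flagging disagreements between two independent screeners.
# All field names, criteria, and decisions are illustrative only.

# Candidate studies retrieved by the literature search.
records = [
    {"id": 1, "year": 2012, "peer_reviewed": True,  "topic_match": True},
    {"id": 2, "year": 1998, "peer_reviewed": True,  "topic_match": True},
    {"id": 3, "year": 2014, "peer_reviewed": False, "topic_match": True},
    {"id": 4, "year": 2015, "peer_reviewed": True,  "topic_match": False},
]

# Predetermined inclusion rules (the "set of rules" guiding screening).
criteria = [
    lambda r: r["year"] >= 2000,     # e.g., restrict the time frame
    lambda r: r["peer_reviewed"],    # e.g., peer-reviewed sources only
    lambda r: r["topic_match"],      # e.g., addresses the review question
]

def include(record):
    """A record is retained only if it satisfies every criterion."""
    return all(rule(record) for rule in criteria)

included = [r["id"] for r in records if include(r)]
print(included)  # only record 1 satisfies all three rules

# Two independent screeners' decisions (True = include); records on which
# they disagree must go through a resolution procedure.
screener_a = {1: True, 2: False, 3: False, 4: True}
screener_b = {1: True, 2: False, 3: True, 4: True}
disagreements = [sid for sid in screener_a if screener_a[sid] != screener_b[sid]]
print(disagreements)  # records needing consensus resolution
```

In practice, disagreements flagged this way would be resolved by discussion or referred to a third reviewer, as the guidelines cited above recommend.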

9.3. Types of Review Articles and Brief Illustrations

eHealth researchers have at their disposal a number of approaches and methods for making sense out of existing literature, all with the purpose of casting current research findings into historical contexts or explaining contradictions that might exist among a set of primary research studies conducted on a particular topic. Our classification scheme is largely inspired by Paré and colleagues’ (2015) typology. Below we present and illustrate those review types that we feel are central to the growth and development of the eHealth domain.

9.3.1. Narrative Reviews

The narrative review is the “traditional” way of reviewing the extant literature and is skewed towards a qualitative interpretation of prior knowledge ( Sylvester et al., 2013 ). Put simply, a narrative review attempts to summarize or synthesize what has been written on a particular topic but does not seek generalization or cumulative knowledge from what is reviewed ( Davies, 2000 ; Green et al., 2006 ). Instead, the review team often undertakes the task of accumulating and synthesizing the literature to demonstrate the value of a particular point of view ( Baumeister & Leary, 1997 ). As such, reviewers may selectively ignore or limit the attention paid to certain studies in order to make a point. In this rather unsystematic approach, the selection of information from primary articles is subjective, lacks explicit criteria for inclusion, and can lead to biased interpretations or inferences ( Green et al., 2006 ). As in all fields, there are several narrative reviews in the eHealth domain that follow such an unstructured approach ( Silva et al., 2015 ; Paul et al., 2015 ).

Despite these criticisms, this type of review can be very useful in gathering together a volume of literature in a specific subject area and synthesizing it. As mentioned above, its primary purpose is to provide the reader with a comprehensive background for understanding current knowledge and highlighting the significance of new research ( Cronin et al., 2008 ). Faculty like to use narrative reviews in the classroom because they are often more up to date than textbooks, provide a single source for students to reference, and expose students to peer-reviewed literature ( Green et al., 2006 ). For researchers, narrative reviews can inspire research ideas by identifying gaps or inconsistencies in a body of knowledge, thus helping researchers to determine research questions or formulate hypotheses. Importantly, narrative reviews can also be used as educational articles to bring practitioners up to date with certain topics or issues ( Green et al., 2006 ).

Recently, there have been several efforts to introduce more rigour into narrative reviews by elucidating common pitfalls and improving their publication standards. Information systems researchers, among others, have contributed to advancing knowledge on how to structure a “traditional” review. For instance, Levy and Ellis (2006) proposed a generic framework for conducting such reviews. Their model follows a systematic data-processing approach comprising three steps, namely: (a) literature search and screening; (b) data extraction and analysis; and (c) writing the literature review. They provide detailed and very helpful instructions on how to conduct each step of the review process. As another methodological contribution, vom Brocke et al. (2009) offered a series of guidelines for conducting literature reviews, with a particular focus on how to search and extract the relevant body of knowledge. Last, Bandara, Miskon, and Fielt (2011) proposed a structured, predefined and tool-supported method to identify primary studies within a feasible scope, extract relevant content from identified articles, synthesize and analyze the findings, and effectively write and present the results of the literature review. We highly recommend that prospective authors of narrative reviews consult these useful sources before embarking on their work.

Darlow and Wen (2015) provide a good example of a highly structured narrative review in the eHealth field. These authors synthesized published articles that describe the development process of mobile health (m-health) interventions for patients’ cancer care self-management. As in most narrative reviews, the scope of the research questions being investigated is broad: (a) how the development of these systems is carried out; (b) which methods are used to investigate these systems; and (c) what conclusions can be drawn as a result of the development of these systems. To provide clear answers to these questions, a literature search was conducted on six electronic databases and Google Scholar. The search was performed using several terms and free-text words, combining them in an appropriate manner. Four inclusion and three exclusion criteria were utilized during the screening process. Both authors independently reviewed each of the identified articles to determine eligibility and extract study information. A flow diagram shows the number of studies identified, screened, and included or excluded at each stage of study selection. In terms of contributions, this review provides a series of practical recommendations for m-health intervention development.

9.3.2. Descriptive or Mapping Reviews

The primary goal of a descriptive review is to determine the extent to which a body of knowledge in a particular research topic reveals any interpretable pattern or trend with respect to pre-existing propositions, theories, methodologies or findings ( King & He, 2005 ; Paré et al., 2015 ). In contrast with narrative reviews, descriptive reviews follow a systematic and transparent procedure, including searching, screening and classifying studies ( Petersen, Vakkalanka, & Kuzniarz, 2015 ). Indeed, structured search methods are used to form a representative sample of a larger group of published works ( Paré et al., 2015 ). Further, authors of descriptive reviews extract from each study certain characteristics of interest, such as publication year, research methods, data collection techniques, and direction or strength of research outcomes (e.g., positive, negative, or non-significant) in the form of frequency analysis to produce quantitative results ( Sylvester et al., 2013 ). In essence, each study included in a descriptive review is treated as the unit of analysis and the published literature as a whole provides a database from which the authors attempt to identify any interpretable trends or draw overall conclusions about the merits of existing conceptualizations, propositions, methods or findings ( Paré et al., 2015 ). In doing so, a descriptive review may claim that its findings represent the state of the art in a particular domain ( King & He, 2005 ).
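The frequency analysis at the heart of a descriptive review can be made concrete with a minimal sketch. The coded study characteristics below (publication year, method, direction of outcome) are invented purely for illustration.

```python
from collections import Counter

# Hypothetical coded sample: each included study is the unit of analysis,
# with characteristics extracted during coding (all values are illustrative).
studies = [
    {"year": 2011, "method": "survey",     "outcome": "positive"},
    {"year": 2012, "method": "case study", "outcome": "non-significant"},
    {"year": 2012, "method": "survey",     "outcome": "positive"},
    {"year": 2013, "method": "experiment", "outcome": "negative"},
    {"year": 2013, "method": "survey",     "outcome": "positive"},
]

# Frequency analysis: tabulate each characteristic across the sample.
by_method = Counter(s["method"] for s in studies)
by_outcome = Counter(s["outcome"] for s in studies)
by_year = Counter(s["year"] for s in studies)

# Simple quantitative results drawn from the tabulation.
positive_share = by_outcome["positive"] / len(studies)
print(by_method.most_common())  # which research methods dominate the sample
print(positive_share)           # share of studies reporting positive outcomes
```

Tables of such frequencies, broken down by year or method, are the typical output of a descriptive review and support claims about publication patterns and trends.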

In the fields of health sciences and medical informatics, reviews that focus on examining the range, nature and evolution of a topic area are described by Anderson, Allen, Peckham, and Goodwin (2008) as mapping reviews . Like descriptive reviews, the research questions are generic and usually relate to publication patterns and trends. There is no preconceived plan to systematically review all of the literature, although this can be done. Instead, researchers often present studies that are representative of most works published in a particular area and they consider a specific time frame to be mapped.

An example of this approach in the eHealth domain is offered by DeShazo, Lavallie, and Wolf (2009). The purpose of this descriptive or mapping review was to characterize publication trends in the medical informatics literature over a 20-year period (1987 to 2006). To achieve this ambitious objective, the authors performed a bibliometric analysis of medical informatics citations indexed in MEDLINE using publication trends, journal frequencies, impact factors, Medical Subject Headings (MeSH) term frequencies, and characteristics of citations. Findings revealed that there were over 77,000 medical informatics articles published during the covered period in numerous journals and that the average annual growth rate was 12%. The MeSH term analysis also suggested a strong interdisciplinary trend. Finally, average impact scores increased over time with two notable growth periods. Overall, patterns in research outputs that seem to characterize the historic trends and current components of the field of medical informatics suggest it may be a maturing discipline (DeShazo et al., 2009).

9.3.3. Scoping Reviews

Scoping reviews attempt to provide an initial indication of the potential size and nature of the extant literature on an emergent topic (Arksey & O’Malley, 2005; Daudt, van Mossel, & Scott, 2013 ; Levac, Colquhoun, & O’Brien, 2010). A scoping review may be conducted to examine the extent, range and nature of research activities in a particular area, determine the value of undertaking a full systematic review (discussed next), or identify research gaps in the extant literature ( Paré et al., 2015 ). In line with their main objective, scoping reviews usually conclude with the presentation of a detailed research agenda for future works along with potential implications for both practice and research.

Unlike narrative and descriptive reviews, the whole point of scoping the field is to be as comprehensive as possible, including grey literature (Arksey & O’Malley, 2005). Inclusion and exclusion criteria must be established to help researchers eliminate studies that are not aligned with the research questions. It is also recommended that at least two independent coders review abstracts yielded from the search strategy and then the full articles for study selection ( Daudt et al., 2013 ). The synthesized evidence from content or thematic analysis is relatively easy to present in tabular form (Arksey & O’Malley, 2005; Thomas & Harden, 2008 ).

One of the most highly cited scoping reviews in the eHealth domain was published by Archer, Fevrier-Thomas, Lokker, McKibbon, and Straus (2011). These authors reviewed the existing literature on personal health record (PHR) systems including design, functionality, implementation, applications, outcomes, and benefits. Seven databases were searched from 1985 to March 2010. Several search terms relating to PHRs were used during this process. Two authors independently screened titles and abstracts to determine inclusion status. A second screen of full-text articles, again by two independent members of the research team, ensured that the studies described PHRs. All in all, 130 articles met the criteria and their data were extracted manually into a database. The authors concluded that although there is a large amount of survey, observational, cohort/panel, and anecdotal evidence of PHR benefits and satisfaction for patients, more research is needed to evaluate the results of PHR implementations. Their in-depth analysis of the literature signalled that there is little solid evidence from randomized controlled trials or other rigorous studies of PHR use. Hence, they suggested that more research is needed to address the current lack of understanding of the optimal functionality and usability of these systems, and of how they can play a beneficial role in supporting patient self-management ( Archer et al., 2011 ).

9.3.4. Forms of Aggregative Reviews

Healthcare providers, practitioners, and policy-makers are nowadays overwhelmed with large volumes of information, including research-based evidence from numerous clinical trials and evaluation studies assessing the effectiveness of health information technologies and interventions ( Ammenwerth & de Keizer, 2004 ; DeShazo et al., 2009 ). It is unrealistic to expect that all these disparate actors will have the time, skills, and necessary resources to identify the available evidence in the area of their expertise and consider it when making decisions. Systematic reviews that involve the rigorous application of scientific strategies aimed at limiting subjectivity and bias (i.e., systematic and random errors) can respond to this challenge.

Systematic reviews attempt to aggregate, appraise, and synthesize in a single source all empirical evidence that meets a set of previously specified eligibility criteria in order to answer a clearly formulated and often narrow research question on a particular topic of interest to support evidence-based practice ( Liberati et al., 2009 ). They adhere closely to explicit scientific principles ( Liberati et al., 2009 ) and rigorous methodological guidelines (Higgins & Green, 2008) aimed at reducing random and systematic errors that can lead to deviations from the truth in results or inferences. The use of explicit methods allows systematic reviews to aggregate a large body of research evidence, assess whether effects or relationships are in the same direction and of the same general magnitude, explain possible inconsistencies between study results, and determine the strength of the overall evidence for every outcome of interest based on the quality of included studies and the general consistency among them ( Cook, Mulrow, & Haynes, 1997 ). The main procedures of a systematic review involve:

  • Formulating a review question and developing a search strategy based on explicit inclusion criteria for the identification of eligible studies (usually described in the context of a detailed review protocol).
  • Searching for eligible studies using multiple databases and information sources, including grey literature sources, without any language restrictions.
  • Selecting studies, extracting data, and assessing risk of bias in a duplicate manner using two independent reviewers to avoid random or systematic errors in the process.
  • Analyzing data using quantitative or qualitative methods.
  • Presenting results in summary of findings tables.
  • Interpreting results and drawing conclusions.

Many systematic reviews, but not all, use statistical methods to combine the results of independent studies into a single quantitative estimate or summary effect size. Known as meta-analyses , these reviews use specific data extraction and statistical techniques (e.g., network, frequentist, or Bayesian meta-analyses) to calculate from each study by outcome of interest an effect size along with a confidence interval that reflects the degree of uncertainty behind the point estimate of effect ( Borenstein, Hedges, Higgins, & Rothstein, 2009 ; Deeks, Higgins, & Altman, 2008 ). Subsequently, they use fixed or random-effects analysis models to combine the results of the included studies, assess statistical heterogeneity, and calculate a weighted average of the effect estimates from the different studies, taking into account their sample sizes. The summary effect size is a value that reflects the average magnitude of the intervention effect for a particular outcome of interest or, more generally, the strength of a relationship between two variables across all studies included in the systematic review. By statistically combining data from multiple studies, meta-analyses can create more precise and reliable estimates of intervention effects than those derived from individual studies alone, when these are examined independently as discrete sources of information.
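The inverse-variance weighting described above can be illustrated with a small numerical sketch of a fixed-effect meta-analysis. The effect sizes and standard errors below are invented for illustration only; real meta-analyses would rely on the extracted study data and dedicated statistical software.

```python
import math

# Hypothetical per-study results: (effect size, standard error).
studies = [
    (0.30, 0.10),
    (0.10, 0.15),
    (0.25, 0.08),
]

# Fixed-effect model: weight each study by the inverse of its variance,
# so larger, more precise studies contribute more to the summary effect.
weights = [1 / se**2 for _, se in studies]
summary = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)

# Standard error and 95% confidence interval of the summary effect,
# reflecting the uncertainty behind the pooled point estimate.
se_summary = math.sqrt(1 / sum(weights))
ci = (summary - 1.96 * se_summary, summary + 1.96 * se_summary)

print(round(summary, 3), tuple(round(x, 3) for x in ci))
```

Note that the pooled confidence interval is narrower than that of any single study, which is the sense in which meta-analysis yields more precise estimates than individual studies examined in isolation. A random-effects model would add a between-study variance component to each study's variance before weighting.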

The review by Gurol-Urganci, de Jongh, Vodopivec-Jamsek, Atun, and Car (2013) on the effects of mobile phone messaging reminders for attendance at healthcare appointments is an illustrative example of a high-quality systematic review with meta-analysis. Missed appointments are a major cause of inefficiency in healthcare delivery, with substantial monetary costs to health systems. These authors sought to assess whether mobile phone-based appointment reminders delivered through Short Message Service (SMS) or Multimedia Messaging Service (MMS) are effective in improving rates of patient attendance and reducing overall costs. To this end, they conducted a comprehensive search on multiple databases using highly sensitive search strategies without language or publication-type restrictions to identify all RCTs eligible for inclusion. In order to minimize the risk of omitting eligible studies not captured by the original search, they supplemented all electronic searches with manual screening of trial registers and references contained in the included studies. Study selection, data extraction, and risk of bias assessments were performed independently by two coders using standardized methods to ensure consistency and to eliminate potential errors. Findings from eight RCTs involving 6,615 participants were pooled into meta-analyses to calculate the magnitude of effects that mobile text message reminders have on the rate of attendance at healthcare appointments compared to no reminders and phone call reminders.

Meta-analyses are regarded as powerful tools for deriving meaningful conclusions. However, there are situations in which it is neither reasonable nor appropriate to pool studies together using meta-analytic methods, simply because there is extensive clinical heterogeneity between the included studies or variation in measurement tools, comparisons, or outcomes of interest. In these cases, systematic reviews can use qualitative synthesis methods, such as vote counting, content analysis, classification schemes and tabulations, as an alternative approach to narratively synthesize the results of the independent studies included in the review. This form of review is known as a qualitative systematic review.
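Of the qualitative synthesis methods just mentioned, vote counting is the simplest to illustrate: each included study "votes" with the direction of its reported effect. The outcome labels below are hypothetical.

```python
from collections import Counter

# Hypothetical vote count: the direction of each included study's
# reported effect (labels are illustrative).
votes = ["positive", "positive", "non-significant", "negative",
         "positive", "non-significant"]

tally = Counter(votes)
print(tally)  # e.g., Counter({'positive': 3, 'non-significant': 2, 'negative': 1})

# A naive narrative conclusion: the modal direction across studies.
direction, count = tally.most_common(1)[0]
print(direction, f"({count}/{len(votes)} studies)")
```

Because vote counting ignores study size and precision, it is generally considered a much weaker form of evidence synthesis than meta-analysis, which is why it is reserved for cases where pooling is not feasible.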

A rigorous example of one such review in the eHealth domain is presented by Mickan, Atherton, Roberts, Heneghan, and Tilson (2014) on the use of handheld computers by healthcare professionals and their impact on access to information and clinical decision-making. In line with the methodological guidelines for systematic reviews, these authors: (a) developed and registered with PROSPERO ( www.crd.york.ac.uk/prospero/ ) an a priori review protocol; (b) conducted comprehensive searches for eligible studies using multiple databases and other supplementary strategies (e.g., forward searches); and (c) subsequently carried out study selection, data extraction, and risk of bias assessments in a duplicate manner to eliminate potential errors in the review process. Heterogeneity between the included studies in terms of reported outcomes and measures precluded the use of meta-analytic methods. The authors therefore resorted to narrative analysis and synthesis to describe the effectiveness of handheld computers on accessing information for clinical knowledge, adherence to safety and clinical quality guidelines, and diagnostic decision-making.

In recent years, the number of systematic reviews in the field of health informatics has increased considerably. Systematic reviews with discordant findings can cause great confusion and make it difficult for decision-makers to interpret the review-level evidence ( Moher, 2013 ). Therefore, there is a growing need for appraisal and synthesis of prior systematic reviews to ensure that decision-making is constantly informed by the best available accumulated evidence. Umbrella reviews , also known as overviews of systematic reviews, are tertiary types of evidence synthesis that aim to accomplish this; that is, they aim to compare and contrast findings from multiple systematic reviews and meta-analyses ( Becker & Oxman, 2008 ). Umbrella reviews generally adhere to the same principles and rigorous methodological guidelines used in systematic reviews. However, the unit of analysis in umbrella reviews is the systematic review rather than the primary study ( Becker & Oxman, 2008 ). Unlike systematic reviews that have a narrow focus of inquiry, umbrella reviews focus on broader research topics for which there are several potential interventions ( Smith, Devane, Begley, & Clarke, 2011 ). A recent umbrella review on the effects of home telemonitoring interventions for patients with heart failure critically appraised, compared, and synthesized evidence from 15 systematic reviews to investigate which types of home telemonitoring technologies and forms of interventions are more effective in reducing mortality and hospital admissions ( Kitsiou, Paré, & Jaana, 2015 ).

9.3.5. Realist Reviews

Realist reviews are theory-driven interpretative reviews developed to inform, enhance, or supplement conventional systematic reviews by making sense of heterogeneous evidence about complex interventions applied in diverse contexts in a way that informs policy decision-making ( Greenhalgh, Wong, Westhorp, & Pawson, 2011 ). They originated from criticisms of positivist systematic reviews which centre on their “simplistic” underlying assumptions ( Oates, 2011 ). As explained above, systematic reviews seek to identify causation. Such logic is appropriate for fields like medicine and education where findings of randomized controlled trials can be aggregated to see whether a new treatment or intervention does improve outcomes. However, many argue that it is not possible to establish such direct causal links between interventions and outcomes in fields such as social policy, management, and information systems where for any intervention there is unlikely to be a regular or consistent outcome ( Oates, 2011 ; Pawson, 2006 ; Rousseau, Manning, & Denyer, 2008 ).

To circumvent these limitations, Pawson, Greenhalgh, Harvey, and Walshe (2005) have proposed a new approach for synthesizing knowledge that seeks to unpack the mechanism of how “complex interventions” work in particular contexts. The basic research question — what works? — which is usually associated with systematic reviews changes to: what is it about this intervention that works, for whom, in what circumstances, in what respects and why? Realist reviews have no particular preference for either quantitative or qualitative evidence. As a theory-building approach, a realist review usually starts by articulating likely underlying mechanisms and then scrutinizes available evidence to find out whether and where these mechanisms are applicable ( Shepperd et al., 2009 ). Primary studies found in the extant literature are viewed as case studies which can test and modify the initial theories ( Rousseau et al., 2008 ).

The main objective pursued in the realist review conducted by Otte-Trojel, de Bont, Rundall, and van de Klundert (2014) was to examine how patient portals contribute to health service delivery and patient outcomes. The specific goals were to investigate how outcomes are produced and, most importantly, how variations in outcomes can be explained. The research team started with an exploratory review of background documents and research studies to identify ways in which patient portals may contribute to health service delivery and patient outcomes. The authors identified six main ways which represent “educated guesses” to be tested against the data in the evaluation studies. These studies were identified through a formal and systematic search in four databases between 2003 and 2013. Two members of the research team selected the articles using a pre-established list of inclusion and exclusion criteria and following a two-step procedure. The authors then extracted data from the selected articles and created several tables, one for each outcome category. They organized information to bring forward those mechanisms where patient portals contribute to outcomes and the variation in outcomes across different contexts.

9.3.6. Critical Reviews

Lastly, critical reviews aim to provide a critical evaluation and interpretive analysis of existing literature on a particular topic of interest to reveal strengths, weaknesses, contradictions, controversies, inconsistencies, and/or other important issues with respect to theories, hypotheses, research methods or results ( Baumeister & Leary, 1997 ; Kirkevold, 1997 ). Unlike other review types, critical reviews attempt to take a reflective account of the research that has been done in a particular area of interest, and assess its credibility by using appraisal instruments or critical interpretive methods. In this way, critical reviews attempt to constructively inform other scholars about the weaknesses of prior research and strengthen knowledge development by giving focus and direction to studies for further improvement ( Kirkevold, 1997 ).

Kitsiou, Paré, and Jaana (2013) provide an example of a critical review that assessed the methodological quality of prior systematic reviews of home telemonitoring studies for chronic patients. The authors conducted a comprehensive search on multiple databases to identify eligible reviews and subsequently used a validated instrument to conduct an in-depth quality appraisal. Results indicate that the majority of systematic reviews in this particular area suffer from important methodological flaws and biases that impair their internal validity and limit their usefulness for clinical and decision-making purposes. To this end, they provide a number of recommendations to strengthen knowledge development towards improving the design and execution of future reviews on home telemonitoring.

9.4. Summary

Table 9.1 outlines the main types of literature reviews that were described in the previous sub-sections and summarizes the main characteristics that distinguish one review type from another. It also includes key references to methodological guidelines and useful sources that can be used by eHealth scholars and researchers for planning and developing reviews.

Table 9.1. Typology of Literature Reviews (adapted from Paré et al., 2015).


As shown in Table 9.1 , each review type addresses different kinds of research questions or objectives, which subsequently define and dictate the methods and approaches that need to be used to achieve the overarching goal(s) of the review. For example, in the case of narrative reviews, there is greater flexibility in searching and synthesizing articles ( Green et al., 2006 ). Researchers are often relatively free to use a diversity of approaches to search, identify, and select relevant scientific articles, describe their operational characteristics, present how the individual studies fit together, and formulate conclusions. On the other hand, systematic reviews are characterized by their high level of systematicity, rigour, and use of explicit methods, based on an “a priori” review plan that aims to minimize bias in the analysis and synthesis process (Higgins & Green, 2008). Some reviews are exploratory in nature (e.g., scoping/mapping reviews), whereas others may be conducted to discover patterns (e.g., descriptive reviews) or involve a synthesis approach that may include the critical analysis of prior research ( Paré et al., 2015 ). Hence, in order to select the most appropriate type of review, it is critical to know, before embarking on a review project, why the research synthesis is being conducted and what type of methods are best aligned with the pursued goals.

9.5. Concluding Remarks

In light of the increased use of evidence-based practice and research generating stronger evidence ( Grady et al., 2011 ; Lyden et al., 2013 ), review articles have become essential tools for summarizing, synthesizing, integrating or critically appraising prior knowledge in the eHealth field. As mentioned earlier, when rigorously conducted, review articles represent powerful information sources for eHealth scholars and practitioners looking for state-of-the-art evidence. The typology of literature reviews we used herein will allow eHealth researchers, graduate students and practitioners to gain a better understanding of the similarities and differences between review types.

We must stress that this classification scheme does not privilege any specific type of review as being of higher quality than another ( Paré et al., 2015 ). As explained above, each type of review has its own strengths and limitations. Having said that, we realize that the methodological rigour of any review — be it qualitative, quantitative or mixed — is a critical aspect that should be considered seriously by prospective authors. In the present context, the notion of rigour refers to the reliability and validity of the review process described in section 9.2. For one thing, reliability is related to the reproducibility of the review process and steps, which is facilitated by a comprehensive documentation of the literature search process, extraction, coding and analysis performed in the review. Whether the search is comprehensive or not, whether it involves a methodical approach for data extraction and synthesis or not, it is important that the review documents in an explicit and transparent manner the steps and approach that were used in the process of its development. Next, validity characterizes the degree to which the review process was conducted appropriately. It goes beyond documentation and reflects decisions related to the selection of the sources, the search terms used, the period of time covered, the articles selected in the search, and the application of backward and forward searches ( vom Brocke et al., 2009 ). In short, the rigour of any review article is reflected by the explicitness of its methods (i.e., transparency) and the soundness of the approach used. We refer those interested in the concepts of rigour and quality to the work of Templier and Paré (2015) which offers a detailed set of methodological guidelines for conducting and evaluating various types of review articles.

To conclude, our main objective in this chapter was to demystify the various types of literature reviews that are central to the continuous development of the eHealth field. It is our hope that our descriptive account will serve as a valuable source for those conducting, evaluating or using reviews in this important and growing domain.

  • Ammenwerth E., de Keizer N. An inventory of evaluation studies of information technology in health care. Trends in evaluation research, 1982-2002. International Journal of Medical Informatics. 2004; 44 (1):44–56. [ PubMed : 15778794 ]
  • Anderson S., Allen P., Peckham S., Goodwin N. Asking the right questions: scoping studies in the commissioning of research on the organisation and delivery of health services. Health Research Policy and Systems. 2008; 6 (7):1–12. [ PMC free article : PMC2500008 ] [ PubMed : 18613961 ] [ CrossRef ]
  • Archer N., Fevrier-Thomas U., Lokker C., McKibbon K. A., Straus S.E. Personal health records: a scoping review. Journal of the American Medical Informatics Association. 2011; 18 (4):515–522. [ PMC free article : PMC3128401 ] [ PubMed : 21672914 ]
  • Arksey H., O’Malley L. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology. 2005; 8 (1):19–32.
  • A systematic, tool-supported method for conducting literature reviews in information systems. Paper presented at the Proceedings of the 19th European Conference on Information Systems (ECIS 2011); June 9 to 11; Helsinki, Finland. 2011.
  • Baumeister R. F., Leary M.R. Writing narrative literature reviews. Review of General Psychology. 1997; 1 (3):311–320.
  • Becker L. A., Oxman A.D. In: Cochrane handbook for systematic reviews of interventions. Higgins J. P. T., Green S., editors. Hoboken, NJ: John Wiley & Sons, Ltd; 2008. Overviews of reviews; pp. 607–631.
  • Borenstein M., Hedges L., Higgins J., Rothstein H. Introduction to meta-analysis. Hoboken, NJ: John Wiley & Sons Inc; 2009.
  • Cook D. J., Mulrow C. D., Haynes B. Systematic reviews: Synthesis of best evidence for clinical decisions. Annals of Internal Medicine. 1997; 126 (5):376–380. [ PubMed : 9054282 ]
  • Cooper H., Hedges L.V. In: The handbook of research synthesis and meta-analysis. 2nd ed. Cooper H., Hedges L. V., Valentine J. C., editors. New York: Russell Sage Foundation; 2009. Research synthesis as a scientific process; pp. 3–17.
  • Cooper H. M. Organizing knowledge syntheses: A taxonomy of literature reviews. Knowledge in Society. 1988; 1 (1):104–126.
  • Cronin P., Ryan F., Coughlan M. Undertaking a literature review: a step-by-step approach. British Journal of Nursing. 2008; 17 (1):38–43. [ PubMed : 18399395 ]
  • Darlow S., Wen K.Y. Development testing of mobile health interventions for cancer patient self-management: A review. Health Informatics Journal. 2015 (online before print). [ PubMed : 25916831 ] [ CrossRef ]
  • Daudt H. M., van Mossel C., Scott S.J. Enhancing the scoping study methodology: a large, inter-professional team’s experience with Arksey and O’Malley’s framework. BMC Medical Research Methodology. 2013; 13:48. [ PMC free article : PMC3614526 ] [ PubMed : 23522333 ] [ CrossRef ]
  • Davies P. The relevance of systematic reviews to educational policy and practice. Oxford Review of Education. 2000; 26 (3-4):365–378.
  • Deeks J. J., Higgins J. P. T., Altman D.G. In: Cochrane handbook for systematic reviews of interventions. Higgins J. P. T., Green S., editors. Hoboken, NJ: John Wiley & Sons, Ltd; 2008. Analysing data and undertaking meta-analyses; pp. 243–296.
  • Deshazo J. P., Lavallie D. L., Wolf F.M. Publication trends in the medical informatics literature: 20 years of “Medical Informatics” in MeSH. BMC Medical Informatics and Decision Making. 2009; 9:7. [ PMC free article : PMC2652453 ] [ PubMed : 19159472 ] [ CrossRef ]
  • Dixon-Woods M., Agarwal S., Jones D., Young B., Sutton A. Synthesising qualitative and quantitative evidence: a review of possible methods. Journal of Health Services Research and Policy. 2005; 10 (1):45–53. [ PubMed : 15667704 ]
  • Finfgeld-Connett D., Johnson E.D. Literature search strategies for conducting knowledge-building and theory-generating qualitative systematic reviews. Journal of Advanced Nursing. 2013; 69 (1):194–204. [ PMC free article : PMC3424349 ] [ PubMed : 22591030 ]
  • Grady B., Myers K. M., Nelson E. L., Belz N., Bennett L., Carnahan L. … Guidelines Working Group. Evidence-based practice for telemental health. Telemedicine Journal and E Health. 2011; 17 (2):131–148. [ PubMed : 21385026 ]
  • Green B. N., Johnson C. D., Adams A. Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. Journal of Chiropractic Medicine. 2006; 5 (3):101–117. [ PMC free article : PMC2647067 ] [ PubMed : 19674681 ]
  • Greenhalgh T., Wong G., Westhorp G., Pawson R. Protocol–realist and meta-narrative evidence synthesis: evolving standards ( rameses ). bmc Medical Research Methodology. 2011; 11 :115. [ PMC free article : PMC3173389 ] [ PubMed : 21843376 ]
  • Gurol-Urganci I., de Jongh T., Vodopivec-Jamsek V., Atun R., Car J. Mobile phone messaging reminders for attendance at healthcare appointments. Cochrane Database of Systematic Reviews. 2013; 12: CD007458. [ PMC free article : PMC6485985 ] [ PubMed : 24310741 ] [ CrossRef ]
  • Hart C. Doing a literature review: Releasing the social science research imagination. London: SAGE Publications; 1998.
  • Higgins J. P. T., Green S., editors. Cochrane handbook for systematic reviews of interventions: Cochrane book series. Hoboken, NJ: Wiley-Blackwell; 2008.
  • Jesson J., Matheson L., Lacey F.M. Doing your literature review: traditional and systematic techniques. Los Angeles & London: SAGE Publications; 2011.
  • King W. R., He J. Understanding the role and methods of meta-analysis in IS research. Communications of the Association for Information Systems. 2005; 16 :1.
  • Kirkevold M. Integrative nursing research — an important strategy to further the development of nursing science and nursing practice. Journal of Advanced Nursing. 1997; 25 (5):977–984. [ PubMed : 9147203 ]
  • Kitchenham B., Charters S. EBSE Technical Report Version 2.3. Keele & Durham, UK: Keele University & University of Durham; 2007. Guidelines for performing systematic literature reviews in software engineering.
  • Kitsiou S., Paré G., Jaana M. Systematic reviews and meta-analyses of home telemonitoring interventions for patients with chronic diseases: a critical assessment of their methodological quality. Journal of Medical Internet Research. 2013; 15 (7):e150. [ PMC free article : PMC3785977 ] [ PubMed : 23880072 ]
  • Kitsiou S., Paré G., Jaana M. Effects of home telemonitoring interventions on patients with chronic heart failure: an overview of systematic reviews. Journal of Medical Internet Research. 2015; 17 (3):e63. [ PMC free article : PMC4376138 ] [ PubMed : 25768664 ]
  • Levac D., Colquhoun H., O’Brien K. K. Scoping studies: advancing the methodology. Implementation Science. 2010; 5 (1):69. [ PMC free article : PMC2954944 ] [ PubMed : 20854677 ]
  • Levy Y., Ellis T.J. A systems approach to conduct an effective literature review in support of information systems research. Informing Science. 2006; 9 :181–211.
  • Liberati A., Altman D. G., Tetzlaff J., Mulrow C., Gøtzsche P. C., Ioannidis J. P. A. et al. Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. Annals of Internal Medicine. 2009; 151(4):W-65. [ PubMed : 19622512 ]
  • Lyden J. R., Zickmund S. L., Bhargava T. D., Bryce C. L., Conroy M. B., Fischer G. S. et al. McTigue K. M. Implementing health information technology in a patient-centered manner: Patient experiences with an online evidence-based lifestyle intervention. Journal for Healthcare Quality. 2013; 35 (5):47–57. [ PubMed : 24004039 ]
  • Mickan S., Atherton H., Roberts N. W., Heneghan C., Tilson J.K. Use of handheld computers in clinical practice: a systematic review. BMC Medical Informatics and Decision Making. 2014; 14:56. [ PMC free article : PMC4099138 ] [ PubMed : 24998515 ]
  • Moher D. The problem of duplicate systematic reviews. British Medical Journal. 2013; 347 (5040) [ PubMed : 23945367 ] [ CrossRef ]
  • Montori V. M., Wilczynski N. L., Morgan D., Haynes R. B., Hedges T. Systematic reviews: a cross-sectional study of location and citation counts. BMC Medicine. 2003; 1:2. [ PMC free article : PMC281591 ] [ PubMed : 14633274 ]
  • Mulrow C. D. The medical review article: state of the science. Annals of Internal Medicine. 1987; 106 (3):485–488. [ PubMed : 3813259 ] [ CrossRef ]
  • Evidence-based information systems: A decade later. Proceedings of the European Conference on Information Systems ; 2011. Retrieved from http://aisel ​.aisnet.org/cgi/viewcontent ​.cgi?article ​=1221&context ​=ecis2011 .
  • Okoli C., Schabram K. A guide to conducting a systematic literature review of information systems research. SSRN Electronic Journal. 2010.
  • Otte-Trojel T., de Bont A., Rundall T. G., van de Klundert J. How outcomes are achieved through patient portals: a realist review. Journal of American Medical Informatics Association. 2014; 21 (4):751–757. [ PMC free article : PMC4078283 ] [ PubMed : 24503882 ]
  • Paré G., Trudel M.-C., Jaana M., Kitsiou S. Synthesizing information systems knowledge: A typology of literature reviews. Information & Management. 2015; 52 (2):183–199.
  • Patsopoulos N. A., Analatos A. A., Ioannidis J.P. A. Relative citation impact of various study designs in the health sciences. Journal of the American Medical Association. 2005; 293 (19):2362–2366. [ PubMed : 15900006 ]
  • Paul M. M., Greene C. M., Newton-Dame R., Thorpe L. E., Perlman S. E., McVeigh K. H., Gourevitch M.N. The state of population health surveillance using electronic health records: A narrative review. Population Health Management. 2015; 18 (3):209–216. [ PubMed : 25608033 ]
  • Pawson R. Evidence-based policy: a realist perspective. London: SAGE Publications; 2006.
  • Pawson R., Greenhalgh T., Harvey G., Walshe K. Realist review—a new method of systematic review designed for complex policy interventions. Journal of Health Services Research & Policy. 2005; 10 (Suppl 1):21–34. [ PubMed : 16053581 ]
  • Petersen K., Vakkalanka S., Kuzniarz L. Guidelines for conducting systematic mapping studies in software engineering: An update. Information and Software Technology. 2015; 64 :1–18.
  • Petticrew M., Roberts H. Systematic reviews in the social sciences: A practical guide. Malden, MA: Blackwell Publishing Co; 2006.
  • Rousseau D. M., Manning J., Denyer D. Evidence in management and organizational science: Assembling the field’s full weight of scientific knowledge through syntheses. The Academy of Management Annals. 2008; 2 (1):475–515.
  • Rowe F. What literature review is not: diversity, boundaries and recommendations. European Journal of Information Systems. 2014; 23 (3):241–255.
  • Shea B. J., Hamel C., Wells G. A., Bouter L. M., Kristjansson E., Grimshaw J. et al. Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. Journal of Clinical Epidemiology. 2009; 62(10):1013–1020. [ PubMed : 19230606 ]
  • Shepperd S., Lewin S., Straus S., Clarke M., Eccles M. P., Fitzpatrick R. et al. Sheikh A. Can we systematically review studies that evaluate complex interventions? PLoS Medicine. 2009; 6 (8):e1000086. [ PMC free article : PMC2717209 ] [ PubMed : 19668360 ]
  • Silva B. M., Rodrigues J. J., de la Torre Díez I., López-Coronado M., Saleem K. Mobile-health: A review of current state in 2015. Journal of Biomedical Informatics. 2015; 56 :265–272. [ PubMed : 26071682 ]
  • Smith V., Devane D., Begley C., Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Medical Research Methodology. 2011; 11(1):15. [ PMC free article : PMC3039637 ] [ PubMed : 21291558 ]
  • Sylvester A., Tate M., Johnstone D. Beyond synthesis: re-presenting heterogeneous research literature. Behaviour & Information Technology. 2013; 32 (12):1199–1215.
  • Templier M., Paré G. A framework for guiding and evaluating literature reviews. Communications of the Association for Information Systems. 2015; 37 (6):112–137.
  • Thomas J., Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology. 2008; 8(1):45. [ PMC free article : PMC2478656 ] [ PubMed : 18616818 ]
  • Reconstructing the giant: on the importance of rigour in documenting the literature search process. Paper presented at the Proceedings of the 17th European Conference on Information Systems (ECIS 2009); Verona, Italy. 2009.
  • Webster J., Watson R.T. Analyzing the past to prepare for the future: Writing a literature review. Management Information Systems Quarterly. 2002; 26 (2):11.
  • Whitlock E. P., Lin J. S., Chou R., Shekelle P., Robinson K.A. Using existing systematic reviews in complex systematic reviews. Annals of Internal Medicine. 2008; 148 (10):776–782. [ PubMed : 18490690 ]

This publication is licensed under a Creative Commons License, Attribution-Noncommercial 4.0 International License (CC BY-NC 4.0): see https://creativecommons.org/licenses/by-nc/4.0/

Cite this page: Paré G, Kitsiou S. Chapter 9 Methods for Literature Reviews. In: Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.


Model Programs Guide

Literature Reviews


Model Programs Guide Literature Reviews provide practitioners and policymakers with relevant research and evaluations for several youth-related topics and programs.  

  • Afterschool Programs  (2010)
  • Alternatives to Detention and Confinement  (2014)
  • Age Boundaries of the Juvenile Justice System (2024)
  • Arts-Based Programs and Arts Therapies for At-Risk, Justice-Involved, and Traumatized Youths  (2021)
  • Bullying and Cyberbullying  (2023)
  • Child Labor Trafficking  (2016)
  • Children Exposed to Violence  (2022)
  • Cognitive Behavioral Treatment  (2010)
  • Commercial Sexual Exploitation of Children and Sex Trafficking  (2014)
  • Community- and Problem-Oriented Policing  (2023)
  • Community Awareness/Mobilization  (2009)
  • Conflict Resolution/Interpersonal Skills  (2011)
  • Day Treatment  (2011)
  • Diversion from Formal Juvenile Court Processing  (2017)
  • Drug Court  (2010)
  • Education for Youth Under Formal Supervision of the Juvenile Justice System  (2019) 
  • Family Drug Courts  (2016)
  • Family Engagement in Juvenile Justice  (2018)
  • Family Therapy  (2014)
  • Formal, Post-Adjudication Juvenile Probation Services  (2017)
  • Gang Prevention  (2014)
  • Girls in the Juvenile Justice System (2023)
  • Group Homes  (2008)
  • Gun Court  (2010)
  • Gun Violence and Youth/Young Adults (2024)
  • Hate Crimes and Youth  (2022) 
  • Home Confinement and Electronic Monitoring  (2014)
  • Implementation Science  (2015)
  • Indigent Defense for Juveniles  (2018)
  • Interactions between Youth and Law Enforcement  (2018) 
  • Intersection between Mental Health and the Juvenile Justice System  (2017)
  • Intersection of Juvenile Justice and Child Welfare Systems  (2021)
  • Juvenile Reentry  (2017)
  • LGBTQ Youths in the Juvenile Justice System  (2014)
  • Mental Health Court  (2010)
  • Parent Training  (2010)
  • Positive Youth Development  (2014)
  • Protective Factors Against Delinquency  (2015)
  • Racial and Ethnic Disparity in Juvenile Justice Processing  (2022)
  • Reentry Court  (2010)
  • Residential Programs  (2019)
  • Restorative Justice for Juveniles  (2021)
  • Risk Factors for Delinquency  (2015)
  • Risk/Needs Assessments for Youths  (2015)
  • School/Classroom Environment  (2000)
  • Status Offenders  (2016)
  • Substance Use Prevention Programs  (2022)
  • Substance Use Treatment Programs  (2023)
  • Teen Dating Violence  (2022)
  • Teen/Youth Court  (2010)
  • Tribal Youth in the Juvenile Justice System  (2016)
  • Truancy Prevention  (2010)
  • Vocational/Job Training  (2010)
  • Wraparound Process  (2014)
  • Youth in the Adult Criminal Justice System (2024)
  • Youth Mentoring and Delinquency Prevention  (2019)
  • Youths with Intellectual and Developmental Disabilities in the Juvenile Justice System  (2017)

Organizing Your Social Sciences Research Paper

The C.A.R.S. Model

Introduction

The Creating a Research Space [C.A.R.S.] Model was developed by John Swales based upon his analysis of journal articles representing a variety of discipline-based writing practices. His model attempts to explain and describe the organizational pattern of writing the introduction to scholarly research studies. Following the C.A.R.S. Model can be a useful approach because it can help you to: 1) begin the writing process [getting started is often the most difficult task]; 2) understand the way in which an introduction sets the stage for the rest of your paper; and 3) assess how the introduction fits within the larger scope of your study. The model assumes that writers follow a general organizational pattern in response to two types of challenges [“competitions”] relating to establishing a presence within a particular domain of research: 1) the competition to create a rhetorical space and 2) the competition to attract readers into that space. The model proposes three actions [Swales calls them “moves”], accompanied by specific steps, that reflect the development of an effective introduction for a research paper. These “moves” and steps can be used as a template for writing the introduction to your own social sciences research papers.

"Introductions." The Writing Lab and The OWL. Purdue University; Coffin, Caroline and Rupert Wegerif. “How to Write a Standard Research Article.” Inspiring Academic Practice at the University of Exeter; Kayfetz, Janet. "Academic Writing Workshop." University of California, Santa Barbara, Fall 2009; Pennington, Ken. "The Introduction Section: Creating a Research Space CARS Model." Language Centre, Helsinki University of Technology, 2005; Swales, John and Christine B. Feak. Academic Writing for Graduate Students: Essential Skills and Tasks. 2nd edition. Ann Arbor, MI: University of Michigan Press, 2004.

Creating a Research Space

Move 1: Establishing a Territory [the situation] This is generally accomplished in two ways: by demonstrating that a general area of research is important, critical, interesting, problematic, relevant, or otherwise worthy of investigation, and by introducing and reviewing key sources of prior research in that area to show where gaps exist or where prior research has been inadequate in addressing the research problem. The steps taken to achieve this would be:

  • Step 1 -- Claiming importance of, and/or  [writing action = describing the research problem and providing evidence to support why the topic is important to study]
  • Step 2 -- Making topic generalizations, and/or  [writing action = providing statements about the current state of knowledge, consensus, practice or description of phenomena]
  • Step 3 -- Reviewing items of previous research  [writing action = synthesize prior research that further supports the need to study the research problem; this is not a literature review but more a reflection of key studies that have touched upon but perhaps not fully addressed the topic]

Move 2: Establishing a Niche [the problem] This action refers to making a clear and cogent argument that your particular piece of research is important and possesses value. This can be done by indicating a specific gap in previous research, by challenging a broadly accepted assumption, by raising a question, a hypothesis, or need, or by extending previous knowledge in some way. The steps taken to achieve this would be:

  • Step 1a -- Counter-claiming, or  [writing action = introduce an opposing viewpoint or perspective or identify a gap in prior research that you believe has weakened or undermined the prevailing argument]
  • Step 1b -- Indicating a gap, or  [writing action = develop the research problem around a gap or understudied area of the literature]
  • Step 1c -- Question-raising, or  [writing action = similar to gap identification, this involves presenting key questions about the consequences of gaps in prior research that will be addressed by your study. For example, one could state, “Despite prior observations of voter behavior in local elections in urban Detroit, it remains unclear why some single mothers choose to avoid....”]
  • Step 1d -- Continuing a tradition  [writing action = extend prior research to expand upon or clarify a research problem. This is often signaled with logical connecting terminology, such as, “hence,” “therefore,” “consequently,” “thus,” or language that indicates a need. For example, one could state, “Consequently, these factors need to be examined in more detail....” or “Evidence suggests an interesting correlation, therefore, it is desirable to survey different respondents....”]

Move 3: Occupying the Niche [the solution] The final "move" is to announce the means by which your study will contribute new knowledge or new understanding in contrast to prior research on the topic. This is also where you describe the remaining organizational structure of the paper. The steps taken to achieve this would be:

  • Step 1a -- Outlining purposes, or  [writing action = answering the “So What?” question. Explain in clear language the objectives of your study]
  • Step 1b -- Announcing present research  [writing action = describe the purpose of your study in terms of what the research is going to do or accomplish. In the social sciences, the “So What?” question still needs to be addressed]
  • Step 2 -- Announcing principal findings  [writing action = present a brief, general summary of key findings, such as, “The findings indicate a need for...,” or “The research suggests four approaches to....”]
  • Step 3 -- Indicating article structure  [writing action = state how the remainder of your paper is organized]

"Introductions." The Writing Lab and The OWL. Purdue University; Atai, Mahmood Reza. “Exploring Subdisciplinary Variations and Generic Structure of Applied Linguistics Research Article Introductions Using CARS Model.” The Journal of Applied Linguistics 2 (Fall 2009): 26-51; Chanel, Dana. "Research Article Introductions in Cultural Studies: A Genre Analysis Exploration of Rhetorical Structure." The Journal of Teaching English for Specific and Academic Purposes 2 (2014): 1-20; Coffin, Caroline and Rupert Wegerif. “How to Write a Standard Research Article.” Inspiring Academic Practice at the University of Exeter; Kayfetz, Janet. "Academic Writing Workshop." University of California, Santa Barbara, Fall 2009; Pennington, Ken. "The Introduction Section: Creating a Research Space CARS Model." Language Centre, Helsinki University of Technology, 2005; Swales, John and Christine B. Feak. Academic Writing for Graduate Students: Essential Skills and Tasks. 2nd edition. Ann Arbor, MI: University of Michigan Press, 2004; Swales, John M. Genre Analysis: English in Academic and Research Settings. New York: Cambridge University Press, 1990; Chapter 5: Beginning Work. In Writing for Peer Reviewed Journals: Strategies for Getting Published. Pat Thomson and Barbara Kamler. (New York: Routledge, 2013), pp. 93-96.

Writing Tip

Swales showed that establishing a research niche [move 2] is often signaled by specific terminology that expresses a contrasting viewpoint, a critical evaluation of gaps in the literature, or a perceived weakness in prior research. The purpose of using these words is to draw a clear distinction between perceived deficiencies in previous studies and the research you are presenting that is intended to help resolve these deficiencies. Below is a table of common words used by authors.

NOTE : You may prefer not to adopt a negative stance in your writing when placing it within the context of prior research. In such cases, an alternative approach is to utilize a neutral, contrastive statement that expresses a new perspective without giving the appearance of trying to diminish the validity of other people's research. Examples of how to take a more neutral contrasting stance can be achieved in the following ways, with A representing the findings of prior research, B representing your research problem, and X representing one or more variables that have been investigated.

  • Prior research has focused primarily on A , rather than on B ...
  • Prior research into A can be beneficial but to rectify X , it is important to examine B ...
  • These studies have placed an emphasis in the areas of A as opposed to describing B ...
  • While prior studies have examined A , it may be preferable to contemplate the impact of B ...
  • After consideration of A , it is important to also distinguish B ...
  • The study of A has been thorough, but changing circumstances related to X support a need for examining [or revisiting] B ...
  • Although research has been devoted to A , less attention has been paid to B ...
  • Earlier research offers insights into the need for A , though consideration of B would be particularly helpful to...

In each of these example statements, what follows the ellipsis is the justification for designing a study that approaches the problem in the way that contrasts with prior research but which does not devalue its ongoing contributions to current knowledge and understanding.

Dretske, Fred I. “Contrastive Statements.” The Philosophical Review 81 (October 1972): 411-437; Kayfetz, Janet. "Academic Writing Workshop." University of California, Santa Barbara, Fall 2009; Pennington, Ken. "The Introduction Section: Creating a Research Space CARS Model." Language Centre, Helsinki University of Technology, 2005; Swales, John M. Genre Analysis: English in Academic and Research Settings . New York: Cambridge University Press, 1990
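The neutral contrastive templates above are fill-in-the-blank statements. As a throwaway sketch (the helper name is made up), the slots A, B and, where needed, X can be filled mechanically:

```python
def neutral_contrast(template: str, A: str, B: str, X: str = "") -> str:
    """Fill a neutral contrastive template with prior research A,
    your research focus B, and optionally a variable X."""
    return template.format(A=A, B=B, X=X)

sentence = neutral_contrast(
    "Although research has been devoted to {A}, less attention has been paid to {B}...",
    A="voter turnout in national elections",
    B="turnout in local elections",
)
print(sentence)
# Although research has been devoted to voter turnout in national elections, less attention has been paid to turnout in local elections...
```

What follows the ellipsis in the printed sentence would then be your justification for studying B, exactly as described above.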

  • Last Updated: Apr 11, 2024 1:27 PM
  • URL: https://libguides.usc.edu/writingguide


Repositioning Pedagogical Content Knowledge in Teachers’ Knowledge for Teaching Science, pp 3–76

Towards a Consensus Model: Literature Review of How Science Teachers’ Pedagogical Content Knowledge Is Investigated in Empirical Studies

Kennedy Kam Ho Chan & Anne Hume

First Online: 29 January 2019

This chapter presents a systematic review of the science education literature to identify how researchers investigate science teachers’ pedagogical content knowledge (PCK). Specifically, we focus on empirical studies of individual science teachers’ PCK published in peer-reviewed science education and teacher education journals since 2008. For each of the reviewed studies, we identify (1) the research context of the investigation; (2) the major purpose of the study; (3) the conceptualisation of PCK in the study; (4) the data sources used to investigate teachers’ PCK; and (5) the approaches used to determine teachers’ PCK. Using this collated information, we provide an overview of how the PCK concept is used, interpreted and investigated within the science education community. The review reveals that researchers conceptualise and operationalise PCK differently. Consequently, they investigate PCK in highly diverse ways and use a wide range of data sources and approaches to capture and determine teachers’ PCK, which in turn generates different kinds of qualitative and quantitative data. Collectively, our findings reveal gaps in the PCK literature and highlight several points of divergence in thinking around the PCK concept within the PCK research community in the field of science education. The findings also provide evidence from the literature supporting the need to build upon and further refine the Consensus Model  (CM) that emerged from the first (1st) PCK Summit in 2012 to further science education research.


Articles that were accepted and published online before 31 December 2017, even if not yet assigned to an issue, were included in this review.

In line with Shulman ( 1987 ), we use the term category to refer to a distinct domain of knowledge. PCK components refer to the knowledge components of PCK.

CoRe stands for content representation—a portrayal of PCK structured by big ideas related to a topic with responses to key prompts (Loughran, Mulhall, & Berry, 2004 ).

Despite the cyclical nature of the teaching process, it can be difficult to determine whether a particular activity belongs to the pre-active or the post-active phase of the teaching cycle. As such, analysis of student work, predicting student responses in assessment tasks, classifying assessment questions and reflecting on one’s own teaching actions were assigned to the post-active phase of teaching.

Because 9 studies involved both pre-service and in-service teachers, the total number of studies is 108, rather than 99.

Because 10 studies involved both primary and secondary school teachers, the total number of studies is 109, rather than 99.

The total number does not add up to 99 as some studies used both approaches.

The total number does not add up to 64 as two studies contained separate tasks that allowed the examination of the teaching artefacts/actions only in one task and the teachers’ decisions and reasoning around the other teaching task.

The total number does not add up to 60 as four studies used both simulated teaching tasks and investigated real life in situ.
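The counts in these notes can be confusing at first: per-category totals (108, 109) exceed the 99 reviewed studies because a dual-labelled study is counted once per category. A tiny sketch with an invented 50/40 split for the remaining studies (only the 99 total and the 9 dual-labelled studies come from the notes):

```python
# 9 studies involve both pre-service and in-service teachers; the remaining
# 90 are split 50/40 here purely for illustration.
studies = ([("pre-service", "in-service")] * 9
           + [("pre-service",)] * 50
           + [("in-service",)] * 40)

label_count = sum(len(labels) for labels in studies)
print(len(studies), label_count)  # 99 distinct studies, 108 category assignments
```

The same arithmetic explains each of the mismatched totals above: summing per-category counts double-counts every study that appears in two categories.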

Abbitt, J. T. (2011). Measuring technological pedagogical content knowledge in preservice teacher education. Journal of Research on Technology in Education, 43 (4), 281–300.


Abd-El-Khalick, F. (2006). Pre-service and experienced biology teachers’ global and specific subject matter structures: Implications for conceptions of pedagogical content knowledge. EURASIA Journal of Mathematics, Science & Technology Education, 2 (1), 1–29.

Abell, S. K. (2007). Research on science teacher knowledge. In S. K. Abell & N. Lederman (Eds.), Handbook of research on science education (pp. 1105–1149). Mahwah, NJ: Lawrence Erlbaum Associates.


Abell, S. K. (2008). Twenty years later: Does pedagogical content knowledge remain a useful idea? International Journal of Science Education, 30 (10), 1405–1416.

Abell, S. K., Rogers, M. A. P., Hanuscin, D. L., Lee, M. H., & Gagnon, M. J. (2008). Preparing the next generation of science teacher educators: A model for developing PCK for teaching science teachers. Journal of Science Teacher Education, 20 (1), 77–93.

Adadan, E., & Oner, D. (2014). Exploring the progression in preservice chemistry teachers’ pedagogical content knowledge representations: The case of “Behavior of gases”. Research in Science Education, 44 (6), 829–858.

Akerson, V. L., Pongsanon, K., Park Rogers, M. A., Carter, I., & Galindo, E. (2017). Exploring the use of lesson study to develop elementary preservice teachers’ pedagogical content knowledge for teaching nature of science. International Journal of Science and Mathematics Education, 15 (2), 293–312.

Akin, F. N., & Uzuntiryaki-Kondakçı, E. (2018). The nature of the interplay among components of pedagogical content knowledge in reaction rate and chemical equilibrium topics of novice and experienced chemistry teachers. Chemistry Education Research and Practice.

Alonzo, A. C., & Kim, J. (2016). Declarative and dynamic pedagogical content knowledge as elicited through two video-based interview methods. Journal of Research in Science Teaching, 53 (8), 1259–1286.

Alonzo, A. C., Kobarg, M., & Seidel, T. (2012). Pedagogical content knowledge as reflected in teacher–student interactions: Analysis of two video cases. Journal of Research in Science Teaching, 49 (10), 1211–1239.

Alvarado, C., Cañada, F., Garritz, A., & Mellado, V. (2015). Canonical pedagogical content knowledge by CoRes for teaching acid–base chemistry at high school. Chemistry Education Research and Practice, 16 (3), 603–618.

An, S., Kulm, G., & Wu, Z. (2004). The pedagogical content knowledge of middle school, mathematics teachers in China and the U.S. Journal of Mathematics Teacher Education, 7 (2), 145–172.

Avraamidou, L., & Zembal-Saul, C. (2005). Giving priority to evidence in science teaching: A first-year elementary teacher’s specialized practices and knowledge. Journal of Research in Science Teaching, 42 (9), 965–986.

Aydin, S., & Boz, Y. (2013). The nature of integration among PCK components: A case study of two experienced chemistry teachers. Chemistry Education Research and Practice, 14 (4), 615–624.

Aydin, S., Demirdöğen, B., Akin, F. N., Uzuntiryaki-Kondakçı, E., & Tarkin, A. (2015). The nature and development of interaction among components of pedagogical content knowledge in practicum. Teaching and Teacher Education, 46, 37–50.

Aydin, S., Demirdöğen, B., Tarkin, A., Kutucu, E. S., Ekiz, B., Akin, F. N., et al. (2013). Providing a set of research-based practices to support preservice teachers’ long-term professional development as learners of science teaching. Science Education, 97 (6), 903–935.

Aydin, S., Friedrichsen, P. J., Boz, Y., & Hanuscin, D. L. (2014). Examination of the topic-specific nature of pedagogical content knowledge in teaching electrochemical cells and nuclear reactions. Chemistry Education Research and Practice, 15 (4), 658–674.

Barendsen, E., & Henze, I. (2017). Relating teacher PCK and teacher practice using classroom observation. Research in Science Education.

Barnett, E., & Friedrichsen, P. J. (2015). Educative mentoring: How a mentor supported a preservice biology teacher’s pedagogical content knowledge development. Journal of Science Teacher Education, 26 (7), 647–668.

Baxter, J. A., & Lederman, N. G. (1999). Assessment and measurement of pedagogical content knowledge. In J. Gess-Newsome & N. G. Lederman (Eds.), Examining pedagogical content knowledge: The construct and its implications for science education (pp. 147–161). Dordrecht: Kluwer Academic.

Bektas, O., Ekiz, B., Tuysuz, M., Kutucu, E. S., Tarkin, A., & Uzuntiryaki-Kondakçı, E. (2013). Pre-service chemistry teachers’ pedagogical content knowledge of the nature of science in the particle nature of matter. Chemistry Education Research and Practice, 14 (2), 201–213.

Bennett, J., Lubben, F., & Hogarth, S. (2007). Bringing science to life: A synthesis of the research evidence on the effects of context-based and STS approaches to science teaching. Science Education, 91 (3), 347–370.

Bergqvist, A., & Chang Rundgren, S.-N. (2017). The influence of textbooks on teachers’ knowledge of chemical bonding representations relative to students’ difficulties understanding. Research in Science & Technological Education, 35 (2), 215–237.

Bergqvist, A., Drechsler, M., & Chang Rundgren, S.-N. (2016). Upper secondary teachers’ knowledge for teaching chemical bonding models. International Journal of Science Education, 38 (2), 298–318.

Berry, A., Depaepe, F., & van Driel, J. H. (2016). Pedagogical content knowledge in teacher education. In J. Loughran & M. L. Hamilton (Eds.), International handbook of teacher education (Vol. 1, pp. 347–386). Singapore: Springer Singapore.

Berry, A., Friedrichsen, P. J., & Loughran, J. (2015). Re-examining pedagogical content knowledge in science education. New York: Routledge.

Bindernagel, J. A., & Eilks, I. (2009). Evaluating roadmaps to portray and develop chemistry teachers’ PCK about curricular structures concerning sub-microscopic models. Chemistry Education Research and Practice, 10 (2), 77–85.

Blömeke, S., Suhl, U., & Kaiser, G. (2011). Teacher education effectiveness: Quality and equity of future primary teachers’ mathematics and mathematics pedagogical content knowledge. Journal of Teacher Education, 62 (2), 154–171.

Boesdorfer, S., & Lorsbach, A. (2014). PCK in action: Examining one chemistry teacher’s practice through the lens of her orientation toward science teaching. International Journal of Science Education, 36 (13), 2111–2132.

Bravo, P., & Cofré, H. (2016). Developing biology teachers’ pedagogical content knowledge through learning study: The case of teaching human evolution. International Journal of Science Education, 38 (16), 2500–2527.

Brown, P., Friedrichsen, P. J., & Abell, S. K. (2013). The development of prospective secondary biology teachers PCK. Journal of Science Teacher Education, 24 (1), 133–155.

Burton, E. P. (2013). Student work products as a teaching tool for nature of science pedagogical knowledge: A professional development project with in-service secondary science teachers. Teaching and Teacher Education, 29, 156–166.

Cetin-Dindar, A., Boz, Y., Yildiran Sonmez, D., & Demirci Celep, N. (2018). Development of pre-service chemistry teachers’ technological pedagogical content knowledge. Chemistry Education Research and Practice.

Chan, K. K. H., & Yung, B. H. W. (2015). On-site pedagogical content knowledge development. International Journal of Science Education, 37 (8), 1246–1278.

Chan, K. K. H., & Yung, B. H. W. (2017). Developing pedagogical content knowledge for teaching a new topic: More than teaching experience and subject matter knowledge. Research in Science Education, 1–33.

Charalambous, C. Y. (2016). Investigating the knowledge needed for teaching mathematics. Journal of Teacher Education, 67 (3), 220–237.

Chen, B., & Wei, B. (2015). Examining chemistry teachers’ use of curriculum materials: In view of teachers’ pedagogical content knowledge. Chemistry Education Research and Practice, 16 (2), 260–272.

Cochran, K. F., DeRuiter, J. A., & King, R. A. (1993). Pedagogical content knowing: An integrative model for teacher preparation. Journal of Teacher Education, 44 (4), 263–272.

Cochran, K. F., & Jones, L. L. (1998). The subject matter knowledge of preservice science teachers. In B. J. Fraser & K. G. Tobin (Eds.), International handbook of science education (Vol. 2, pp. 707–718). Dordrecht: Kluwer.

Cohen, R., & Yarden, A. (2009). Experienced junior-high-school teachers’ PCK in light of a curriculum change: “The cell is to be studied longitudinally”. Research in Science Education, 39 (1), 131–155.

Cook, S. D. N., & Brown, J. S. (1999). Bridging epistemologies: The generative dance between organizational knowledge and organizational knowing. Organization Science, 10 (4), 381–400.

Daehler, K. R., & Shinohara, M. (2001). A complete circuit is a complete circle: Exploring the potential of case materials and methods to develop teachers’ content knowledge and pedagogical content knowledge of science. Research in Science Education, 31 (2), 267–288.

Davidowitz, B., & Potgieter, M. (2016). Use of the Rasch measurement model to explore the relationship between content knowledge and topic-specific pedagogical content knowledge for organic chemistry. International Journal of Science Education, 38 (9), 1483–1503.

Davis, E., & Krajcik, J. (2005). Designing educative curriculum materials to promote teacher learning. Educational Researcher, 34 (3), 3–14.

Demirdöğen, B. (2016). Interaction between science teaching orientation and pedagogical content knowledge components. Journal of Science Teacher Education, 27 (5), 495–532.

Demirdöğen, B., Hanuscin, D. L., Uzuntiryaki-Kondakçı, E., & Köseoğlu, F. (2016). Development and nature of preservice chemistry teachers’ pedagogical content knowledge for nature of science. Research in Science Education, 46 (6), 831–855.

Demirdöğen, B., & Uzuntiryaki-Kondakçı, E. (2016). Closing the gap between beliefs and practice: Change of pre-service chemistry teachers’ orientations during a PCK-based NOS course. Chemistry Education Research and Practice, 17 (4), 818–841.

Denzin, N. K. (1989). The research act: A theoretical introduction to sociological methods (3rd ed.). Englewood Cliffs, NJ: Prentice Hall.

Depaepe, F., Verschaffel, L., & Kelchtermans, G. (2013). Pedagogical content knowledge: A systematic review of the way in which the concept has pervaded mathematics educational research. Teaching and Teacher Education, 34, 12–25.

Diezmann, C. M., & Watters, J. J. (2015). The knowledge base of subject matter experts in teaching: A case study of a professional scientist as a beginning teacher. International Journal of Science and Mathematics Education, 13 (6), 1517–1537.

Donnelly, D. F., & Hume, A. (2015). Using collaborative technology to enhance pre-service teachers’ pedagogical content knowledge in Science. Research in Science & Technological Education, 33 (1), 61–87.

Driver, R., Asoko, H., Leach, J., Scott, P., & Mortimer, E. (1994). Constructing scientific knowledge in the classroom. Educational Researcher, 23 (7), 5–12.

Faikhamta, C., & Clarke, A. (2012). A self-study of a Thai teacher educator developing a better understanding of PCK for teaching about teaching science. Research in Science Education, 43 (3), 955–979.

Falk, A. (2012). Teachers learning from professional development in elementary science: Reciprocal relations between formative assessment and pedagogical content knowledge. Science Education, 96 (2), 265–290.

Fernández-Balboa, J.-M., & Stiehl, J. (1995). The generic nature of pedagogical content knowledge among college professors. Teaching and Teacher Education, 11 (3), 293–306.

Findlay, M., & Bryce, T. G. K. (2012). From teaching physics to teaching children: Beginning teachers learning from pupils. International Journal of Science Education, 34 (17), 2727–2750.

Fischer, H. E., Borowski, A., & Tepner, O. (2012). Professional knowledge of science teachers. In B. J. Fraser, K. Tobin, & C. J. McRobbie (Eds.), Second international handbook of science education (pp. 435–448). Dordrecht: Springer Netherlands.

Förtsch, C., Werner, S., von Kotzebue, L., & Neuhaus, B. J. (2016). Effects of biology teachers’ professional knowledge and cognitive activation on students’ achievement. International Journal of Science Education, 38 (17), 2642–2666.

Fraser, S. P. (2016). Pedagogical content knowledge (PCK): Exploring its usefulness for science lecturers in higher education. Research in Science Education, 46 (1), 141–161.

Friedrichsen, P. J., Abell, S. K., Pareja, E. M., Brown, P. L., Lankford, D. M., & Volkmann, M. J. (2009). Does teaching experience matter? Examining biology teachers’ prior knowledge for teaching in an alternative certification program. Journal of Research in Science Teaching, 46 (4), 357–383.

Friedrichsen, P. J., van Driel, J. H., & Abell, S. K. (2011). Taking a closer look at science teaching orientations. Science Education, 95 (2), 358–376.

Geddis, A. N., Onslow, B., Beynon, C., & Oesch, J. (1993). Transforming content knowledge: Learning to teach about isotopes. Science Education, 77 (6), 575–591.

Gess-Newsome, J. (1999). Pedagogical content knowledge: An introduction and orientation. In J. Gess-Newsome & N. G. Lederman (Eds.), Examining pedagogical content knowledge: The construct and its implications for science education (pp. 3–20). Dordrecht: Kluwer Academic.

Gess-Newsome, J. (2015). A model of teacher professional knowledge and skill including PCK: Results of the thinking from the PCK Summit. In A. Berry, P. J. Friedrichsen, & J. Loughran (Eds.), Re-examining pedagogical content knowledge in science education (pp. 28–42). New York: Routledge.

Gess-Newsome, J., Taylor, J. A., Carlson, J., Gardner, A. L., Wilson, C. D., & Stuhlsatz, M. A. M. (2017). Teacher pedagogical content knowledge, practice, and student achievement. International Journal of Science Education, 1–20.

Grossman, P. L. (1990). The making of a teacher: Teacher knowledge and teacher education. New York: Teachers College Press.

Großschedl, J., Harms, U., Kleickmann, T., & Glowinski, I. (2015). Preservice biology teachers’ professional knowledge: Structure and learning opportunities. Journal of Science Teacher Education, 26 (3), 291–318.

Großschedl, J., Mahler, D., Kleickmann, T., & Harms, U. (2014). Content-related knowledge of biology teachers from secondary schools: Structure and learning opportunities. International Journal of Science Education, 36 (14), 2335–2366.

Guerriero, S. (2017). Pedagogical knowledge and the changing nature of the teaching profession. Paris: OECD Publishing.

Hallman-Thrasher, A., Connor, J., & Sturgill, D. (2017). Strong discipline knowledge cuts both ways for novice mathematics and science teachers. International Journal of Science and Mathematics Education.

Hanuscin, D. L. (2013). Critical incidents in the development of pedagogical content knowledge for teaching the nature of science: A prospective elementary teacher’s journey. Journal of Science Teacher Education, 24 (6), 933–956.

Hanuscin, D. L., Lee, M. H., & Akerson, V. L. (2011). Elementary teachers’ pedagogical content knowledge for teaching the nature of science. Science Education, 95 (1), 145–167.

Hashweh, M. Z. (2005). Teacher pedagogical constructions: A reconfiguration of pedagogical content knowledge. Teachers and Teaching: Theory and Practice, 11 (3), 273–292.

Heller, J. I., Daehler, K. R., Wong, N., Shinohara, M., & Miratrix, L. W. (2012). Differential effects of three professional development models on teacher knowledge and student achievement in elementary science. Journal of Research in Science Teaching, 49 (3), 333–362.

Henze, I., van Driel, J. H., & Verloop, N. (2008). Development of experienced science teachers’ pedagogical content knowledge of models of the solar system and the universe. International Journal of Science Education, 30 (10), 1321–1342.

Hill, H. C., Ball, D. L., & Schilling, S. G. (2008). Unpacking pedagogical content knowledge: Conceptualizing and measuring teachers’ topic-specific knowledge of students. Journal for Research in Mathematics Education, 39 (4), 372–400.

Jackson, P. W. (1986). Life in classrooms. New York: Holt, Rinehart & Winston.

Jin, H., Shin, H., Johnson, M. E., Kim, J., & Anderson, C. W. (2015). Developing learning progression-based teacher knowledge measures. Journal of Research in Science Teaching, 52 (9), 1269–1295.

Jüttner, M., & Neuhaus, B. J. (2012). Development of items for a pedagogical content knowledge test based on empirical analysis of pupils’ errors. International Journal of Science Education, 34 (7), 1125–1143.

Kaiser, G., Blömeke, S., König, J., Busse, A., Döhrmann, M., & Hoth, J. (2017). Professional competencies of (prospective) mathematics teachers—Cognitive versus situated approaches. Educational Studies in Mathematics, 94 (2), 161–182.

Kanter, D. E., & Konstantopoulos, S. (2010). The impact of a project-based science curriculum on minority student achievement, attitudes, and careers: The effects of teacher content and pedagogical content knowledge and inquiry-based practices. Science Education, 94 (5), 855–887.

Käpylä, M., Heikkinen, J. P., & Asunta, T. (2009). Influence of content knowledge on pedagogical content knowledge: The case of teaching photosynthesis and plant growth. International Journal of Science Education, 31 (10), 1395–1415.

Kaya, O. N. (2009). The nature of relationships among the components of pedagogical content knowledge of preservice science teachers: ‘Ozone layer depletion’ as an example. International Journal of Science Education, 31 (7), 961–988.

Keller, M. M., Neumann, K., & Fischer, H. E. (2017). The impact of physics teachers’ pedagogical content knowledge and motivation on students’ achievement and interest. Journal of Research in Science Teaching, 54 (5), 586–614.

Kellner, E., Gullberg, A., Attorps, I., Thorén, I., & Tärneberg, R. (2011). Prospective teachers’ initial conceptions about pupils’ difficulties in science and mathematics: A potential resource in teacher education. International Journal of Science and Mathematics Education, 9 (4), 843–866.

Kersting, N. B. (2008). Using video clips of mathematics classroom instruction as item prompts to measure teachers’ knowledge of teaching mathematics. Educational and Psychological Measurement, 68 (5), 845–861.

Kersting, N. B., Sutton, T., Kalinec-Craig, C., Stoehr, K. J., Heshmati, S., Lozano, G., et al. (2016). Further exploration of the classroom video analysis (CVA) instrument as a measure of usable knowledge for teaching mathematics: Taking a knowledge system perspective. ZDM Mathematics Education, 48 (1), 97–109.

Khourey-Bowers, C., & Fenk, C. (2009). Influence of constructivist professional development on chemistry content knowledge and scientific model development. Journal of Science Teacher Education, 20 (5), 437–457.

Kind, V. (2009). Pedagogical content knowledge in science education: Perspectives and potential for progress. Studies in Science Education, 45 (2), 169–204.

Kind, V. (2017). Development of evidence-based, student-learning-oriented rubrics for pre-service science teachers’ pedagogical content knowledge. International Journal of Science Education, 1–33.

Kirschner, S., Borowski, A., Fischer, H. E., Gess-Newsome, J., & von Aufschnaiter, C. (2016). Developing and evaluating a paper-and-pencil test to assess components of physics teachers’ pedagogical content knowledge. International Journal of Science Education, 38 (8), 1343–1372.

Knievel, I., Lindmeier, A. M., & Heinze, A. (2015). Beyond knowledge: Measuring primary teachers’ subject-specific competences in and for teaching Mathematics with items based on video vignettes. International Journal of Science and Mathematics Education, 13 (2), 309–329.

König, J., Blömeke, S., Klein, P., Suhl, U., Busse, A., & Kaiser, G. (2014). Is teachers’ general pedagogical knowledge a premise for noticing and interpreting classroom situations? A video-based assessment approach. Teaching and Teacher Education, 38, 76–88.

Krepf, M., Plöger, W., Scholl, D., & Seifert, A. (2017). Pedagogical content knowledge of experts and novices—What knowledge do they activate when analyzing science lessons? Journal of Research in Science Teaching, 1–23.

Lin, J.-W. (2016). Do skilled elementary teachers hold scientific conceptions and can they accurately predict the type and source of students’ preconceptions of electric circuits? International Journal of Science and Mathematics Education, 14 (2), 287–307.

Lin, J.-W. (2017). A comparison of experienced and preservice elementary school teachers’ content knowledge and pedagogical content knowledge about electric circuits. EURASIA Journal of Mathematics, Science and Technology Education, 13 (3).

Lin, J.-W., & Chiu, M. H. (2010). The mismatch between students’ mental models of acids/bases and their sources and their teacher’s anticipations thereof. International Journal of Science Education, 32 (12), 1617–1646.

Loughran, J., Milroy, P., Berry, A., Gunstone, R., & Mulhall, P. (2001). Documenting science teachers’ pedagogical content knowledge through PaP-eRs. Research in Science Education, 31 (2), 289–307.

Loughran, J., Mulhall, P., & Berry, A. (2004). In search of pedagogical content knowledge in science: Developing ways of articulating and documenting professional practice. Journal of Research in Science Teaching, 41 (4), 370–391.

Lucero, M. M., Petrosino, A. J., & Delgado, C. (2017). Exploring the relationship between secondary science teachers’ subject matter knowledge and knowledge of student conceptions while teaching evolution by natural selection. Journal of Research in Science Teaching, 54 (2), 219–246.

Luft, J. A. (2009). Beginning secondary science teachers in different induction programmes: The first year of teaching. International Journal of Science Education, 31 (17), 2355–2384.

Luft, J. A., Firestone, J. B., Wong, S. S., Ortega, I., Adams, K., & Bang, E. (2011). Beginning secondary science teacher induction: A two-year mixed methods study. Journal of Research in Science Teaching, 48 (10), 1199–1224.

Magnusson, S., Krajcik, J., & Borko, H. (1999). Nature, sources, and development of pedagogical content knowledge for science teaching. In J. Gess-Newsome & N. G. Lederman (Eds.), Examining pedagogical content knowledge: The construct and its implications for science education (pp. 95–132). Dordrecht: Kluwer Academic.

Mahler, D., Großschedl, J., & Harms, U. (2017). Using doubly latent multilevel analysis to elucidate relationships between science teachers’ professional knowledge and students’ performance. International Journal of Science Education, 39 (2), 213–237.

Marks, R. (1990). Pedagogical content knowledge: From a mathematical case to a modified conception. Journal of Teacher Education, 41 (3), 3–11.

Marshall, J. C., Smart, J., & Alston, D. M. (2016). Development and validation of Teacher Intentionality of Practice Scale (TIPS): A measure to evaluate and scaffold teacher effectiveness. Teaching and Teacher Education, 59, 159–168.

Koehler, M. J., Shin, T. S., & Mishra, P. (2012). How do we measure TPACK? Let me count the ways. In R. N. Ronau, C. R. Rakes, & M. L. Niess (Eds.), Educational technology, teacher knowledge, and classroom impact: A research handbook on frameworks and approaches (pp. 16–31). Hershey, PA: IGI Global.

Mavhunga, E. (2016). Transfer of the pedagogical transformation competence across chemistry topics. Chemistry Education Research and Practice, 17 (4), 1081–1097.

Mavhunga, E., & Rollnick, M. (2013). Improving PCK of chemical equilibrium in pre-service teachers. African Journal of Research in Mathematics, Science and Technology Education, 17 (1–2), 113–125.

Mavhunga, E., & Rollnick, M. (2016). Teacher- or learner-centred? Science teacher beliefs related to topic specific pedagogical content knowledge: A South African case study. Research in Science Education, 46 (6), 831–855.

McNeill, K. L., González-Howard, M., Katsh-Singer, R., & Loper, S. (2016). Pedagogical content knowledge of argumentation: Using classroom contexts to assess high-quality PCK rather than pseudoargumentation. Journal of Research in Science Teaching, 53 (2), 261–290.

Mellado, V., Bermejo, M. L., Blanco, L. J., & Ruiz, C. (2007). The classroom practice of a prospective secondary biology teacher and his conceptions of the nature of science and of teaching and learning science. International Journal of Science and Mathematics Education, 6 (1), 37–62.

Melo-Niño, L. V., Cañada, F., & Mellado, V. (2017a). Exploring the emotions in pedagogical content knowledge about the electric field. International Journal of Science Education, 39 (8), 1025–1044.

Melo-Niño, L. V., Cañada, F., & Mellado, V. (2017b). Initial characterization of Colombian high school physics teachers’ pedagogical content knowledge on electric fields. Research in Science Education, 47 (1), 25–48.

Meschede, N., Fiebranz, A., Möller, K., & Steffensky, M. (2017). Teachers’ professional vision, pedagogical content knowledge and beliefs: On its relation and differences between pre-service and in-service teachers. Teaching and Teacher Education, 66, 158–170.

Miller, M. (2007). Pedagogical content knowledge. In G. Bodner & M. Orgill (Eds.), Theoretical frameworks for research in chemistry/science education (pp. 86–106). Upper Saddle River, NJ: Pearson Prentice Hall.

Monet, J. A., & Etkina, E. (2008). Fostering self-reflection and meaningful learning: Earth science professional development for middle school science teachers. Journal of Science Teacher Education, 19 (5), 455–475.

Moodley, K., & Gaigher, E. (2017). Teaching electric circuits: Teachers’ perceptions and learners’ misconceptions. Research in Science Education.

Mthethwa-Kunene, E., Onwu, G. O., & de Villiers, R. (2015). Exploring biology teachers’ pedagogical content knowledge in the teaching of genetics in Swaziland science classrooms. International Journal of Science Education, 37 (7), 1140–1165.

National Council for Accreditation of Teacher Education. (2008). Professional standards for the accreditation of teacher preparation institutions. Washington, DC: National Council for Accreditation of Teacher Education.

National Research Council. (1996). The national science education standards. Washington, DC: National Academy Press.

Nilsson, P. (2014). When teaching makes a difference: Developing science teachers’ pedagogical content knowledge through learning study. International Journal of Science Education, 36 (11), 1794–1814.

Nilsson, P., & Elm, A. (2017). Capturing and developing early childhood teachers’ science pedagogical content knowledge through CoRes. Journal of Science Teacher Education, 1–19.

Nilsson, P., & Loughran, J. (2012). Exploring the development of pre-service science elementary teachers’ pedagogical content knowledge. Journal of Science Teacher Education, 23 (7), 699–721.

Nilsson, P., & van Driel, J. H. (2010). Teaching together and learning together—Primary science student teachers’ and their mentors’ joint teaching and learning in the primary classroom. Teaching and Teacher Education, 26 (6), 1309–1318.

Nilsson, P., & Vikström, A. (2015). Making PCK explicit—Capturing science teachers’ pedagogical content knowledge (PCK) in the science classroom. International Journal of Science Education, 37 (17), 2836–2857.

Oh, P. S., & Kim, K. S. (2013). Pedagogical transformations of science content knowledge in Korean elementary classrooms. International Journal of Science Education, 35 (9), 1590–1624.

Park, S., & Chen, Y.-C. (2012). Mapping out the integration of the components of pedagogical content knowledge (PCK): Examples from high school biology classrooms. Journal of Research in Science Teaching, 49 (7), 922–941.

Park, S., Jang, J.-Y., Chen, Y.-C., & Jung, J. (2011). Is pedagogical content knowledge (PCK) necessary for reformed science teaching?: Evidence from an empirical study. Research in Science Education, 41 (2), 245–260.

Park, S., & Oliver, J. S. (2008a). National Board Certification (NBC) as a catalyst for teachers’ learning about teaching: The effects of the NBC process on candidate teachers’ PCK development. Journal of Research in Science Teaching, 45 (7), 812–834.

Park, S., & Oliver, J. S. (2008b). Revisiting the conceptualisation of pedagogical content knowledge (PCK): PCK as a conceptual tool to understand teachers as professionals. Research in Science Education, 38 (3), 261–284.

Park, S., Suh, J., & Seo, K. (2017). Development and validation of measures of secondary science teachers’ PCK for teaching photosynthesis. Research in Science Education, 1–25.

Paulick, I., Großschedl, J., Harms, U., & Möller, J. (2016). Preservice teachers’ professional knowledge and its relation to academic self-concept. Journal of Teacher Education, 67 (3), 173–182.

Piliouras, P., Plakitsi, K., Seroglou, F., & Papantoniou, G. (2017). Teaching explicitly and reflecting on elements of nature of science: A discourse-focused professional development program with four fifth-grade teachers. Research in Science Education, 1–26.

Qhobela, M., & Kolitsoe Moru, E. (2014). Examining secondary school physics teachers’ beliefs about teaching and classroom practices in Lesotho as a foundation for professional development. International Journal of Science and Mathematics Education, 12 (6), 1367–1392.

Rollnick, M. (2017). Learning about semiconductors for teaching—The role played by content knowledge in Pedagogical Content Knowledge (PCK) development. Research in Science Education, 1–36.

Rollnick, M., Bennett, J., Rhemtula, M., Dharsey, N., & Ndlovu, T. (2008). The place of subject matter knowledge in pedagogical content knowledge: A case study of South African teachers teaching the amount of substance and chemical equilibrium. International Journal of Science Education, 30 (10), 1365–1387.

Rosenkränzer, F., Hörsch, C., Schuler, S., & Riess, W. (2017). Student teachers’ pedagogical content knowledge for teaching systems thinking: Effects of different interventions. International Journal of Science Education, 1–20.

Roth, K. J., Garnier, H. E., Chen, C., Lemmens, M., Schwille, K., & Wickler, N. I. Z. (2011). Videobased lesson analysis: Effective science PD for teacher and student learning. Journal of Research in Science Teaching, 48 (2), 117–148.

Sadler, P. M., Sonnert, G., Coyle, H. P., Cook-Smith, N., & Miller, J. L. (2013). The Influence of teachers’ knowledge on student learning in middle school physical science classrooms. American Educational Research Journal, 50 (5), 1020–1049.

Salloum, S. L., & BouJaoude, S. (2008). Careful! It is H2O? Teachers’ conceptions of chemicals. International Journal of Science Education, 30 (1), 33–64.

Scharfenberg, F.-J., & Bogner, F. X. (2016). A new role change approach in pre-service teacher education for developing pedagogical content knowledge in the context of a student outreach lab. Research in Science Education, 46 (5), 743–766.

Schmelzing, S., van Driel, J. H., Jüttner, M., Brandenbusch, S., Sandmann, A., & Neuhaus, B. J. (2013). Development, evaluation and validation of a paper-and-pencil test for measuring two components of biology teachers’ pedagogical content knowledge concerning the “cardiovascular system”. International Journal of Science and Mathematics Education, 11 (6), 1369–1390.

Schneider, R. M., & Plasman, K. (2011). Science teacher learning progressions. Review of Educational Research, 81 (4), 530–565.

Settlage, J. (2013). On acknowledging PCK’s shortcomings. Journal of Science Teacher Education, 24 (1), 1–12.

Seung, E. (2013). The process of physics teaching assistants’ pedagogical content knowledge development. International Journal of Science and Mathematics Education, 11 (6), 1303–1326.

Shavelson, R. J., Ruiz-Primo, M. A., & Wiley, E. W. (2005). Windows into the mind. Higher Education, 49 (4), 413–430.

Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15 (2), 4–14.

Shulman, L. S. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57 (1), 1–22.

Shulman, L. S. (2015). PCK: Its genesis and exodus. In A. Berry, P. J. Friedrichsen, & J. Loughran (Eds.), Re-examining pedagogical content knowledge in science education (pp. 3–13). New York: Routledge.

Sickel, A. J., & Friedrichsen, P. J. (2017). Using multiple lenses to examine the development of beginning biology teachers’ pedagogical content knowledge for teaching natural selection simulations. Research in Science Education, 1–42.

Smit, R., Rietz, F., & Kreis, A. (2017). What are the effects of science lesson planning in peers?—Analysis of attitudes and knowledge based on an actor–partner interdependence model. Research in Science Education, 1–18.

Smit, R., Weitzel, H., Blank, R., Rietz, F., Tardent, J., & Robin, N. (2017). Interplay of secondary pre-service teacher content knowledge (CK), pedagogical content knowledge (PCK) and attitudes regarding scientific inquiry teaching within teacher training. Research in Science & Technological Education, 1–23.

Smith, S. P., & Banilower, E. R. (2015). Assessing PCK: A new application of the uncertainty principle. In A. Berry, P. J. Friedrichsen, & J. Loughran (Eds.), Re-examining pedagogical content knowledge in science education (pp. 88–103). New York: Routledge.

Sorge, S., Kröger, J., Petersen, S., & Neumann, K. (2017). Structure and development of pre-service physics teachers’ professional knowledge. International Journal of Science Education, 1–28.

Stasinakis, P. K., & Athanasiou, K. (2016). Investigating Greek biology teachers’ attitudes towards evolution teaching with respect to their pedagogical content knowledge: Suggestions for their professional development. EURASIA Journal of Mathematics, Science & Technology Education, 13, 1605–1617.

Stender, A., Brückmann, M., & Neumann, K. (2017). Transformation of topic-specific professional knowledge into personal pedagogical content knowledge through lesson planning. International Journal of Science Education, 1–25.

Suh, J., & Park, S. (2017). Exploring the relationship between pedagogical content knowledge (PCK) and sustainability of an innovative science teaching approach. Teaching and Teacher Education, 64, 246–259.

Supprakob, S., Faikhamta, C., & Suwanruji, P. (2016). Using the lens of pedagogical content knowledge for teaching the nature of science to portray novice chemistry teachers’ transforming NOS in early years of teaching profession. Chemistry Education Research and Practice, 17 (4), 1067–1080.

Tamir, P. (1988). Subject matter and related pedagogical knowledge in teacher education. Teaching & Teacher Education, 4, 99–110.

Tay, S. L., & Yeo, J. (2017). Analysis of a physics teacher’s pedagogical ‘micro-actions’ that support 17-year-olds’ learning of free body diagrams via a modelling approach. International Journal of Science Education, 1–30.

Tepner, O., Borowski, A., Dollny, S., Fischer, H. E., Jüttner, M., Kirschner, S., et al. (2012). Modell zur Entwicklung von Testitems zur Erfassung des Professionswissens von Lehrkräften in den Naturwissenschaften [Item development model for assessing professional knowledge of science teachers]. Zeitschrift für Didaktik der Naturwissenschaften, 18, 7–28.

Uzuntiryaki-Kondakçı, E., Demirdöğen, B., Akin, F. N., Tarkin, A., & Aydin, S. (2017). Exploring the complexity of teaching: The interaction between teacher self-regulation and pedagogical content knowledge. Chemistry Education Research and Practice, 18 (1), 250–270.

van Dijk, E. M. (2009). Teachers’ views on understanding evolutionary theory: A PCK-study in the framework of the ERTE-model. Teaching and Teacher Education, 25 (2), 259–267.

van Dijk, E. M. (2014). Understanding the heterogeneous nature of science: A comprehensive notion of PCK for scientific literacy. Science Education, 98 (3), 397–411.

van Dijk, E. M., & Kattmann, U. (2007). A research model for the study of science teachers’ PCK and improving teacher education. Teaching and Teacher Education, 23 (6), 885–897.

van Driel, J. H., & Abell, S. K. (2010). Science teacher education. In P. Peterson, E. Baker, & B. McGaw (Eds.), International encyclopedia of education (3rd ed., pp. 712–718). Oxford: Elsevier.

van Driel, J. H., & Berry, A. (2010). The teacher education knowledge base: Pedagogical content knowledge. In P. Peterson, E. Baker, & B. McGaw (Eds.), International encyclopedia of education (3rd ed., pp. 656–661). Oxford: Elsevier.

van Driel, J. H., & Berry, A. (2012). Teacher professional development focusing on pedagogical content knowledge. Educational Researcher, 41 (1), 26–28.

van Driel, J. H., & Berry, A. (2017). Developing pre-service teachers’ pedagogical content knowledge. In D. J. Clandinin & J. Husu (Eds.), The SAGE handbook of research on teacher education (Vol. 2, pp. 561–576). Thousand Oaks, CA: SAGE Publications Ltd.

van Driel, J. H., Berry, A., & Meirink, J. A. (2014). Research on science teacher knowledge. In N. G. Lederman & S. K. Abell (Eds.), Handbook of research on science education (Vol. 2, pp. 848–870). New York, NY: Routledge.

van Driel, J. H., Verloop, N., & de Vos, W. (1998). Developing science teachers’ pedagogical content knowledge. Journal of Research in Science Teaching, 35 (6), 673–695.

Veal, W. R., & MaKinster, J. (1999). Pedagogical content knowledge taxonomies. Electronic Journal of Science Education, 3 (4).

Walan, S., Nilsson, P., & Ewen, B. M. (2017). Why inquiry? Primary teachers’ objectives in choosing inquiry- and context-based instructional strategies to stimulate students’ science learning. Research in Science Education, 1–20.

Wang, Y.-L., Tsai, C.-C., & Wei, S.-H. (2015). The sources of science teaching self-efficacy among elementary school teachers: A mediational model approach. International Journal of Science Education, 37 (14), 2264–2283.

Willermark, S. (2017). Technological pedagogical and content knowledge: A review of empirical studies published from 2011 to 2016. Journal of Educational Computing Research, 1–28.

Wilson, S. M., Shulman, L. S., & Richert, E. R. (1987). ‘150 different ways’ of knowing: Representations of knowledge in teaching. In J. Calderhead (Ed.), Exploring teachers’ thinking. London: Cassell.

Wongsopawiro, D. S., Zwart, R. C., & van Driel, J. H. (2017). Identifying pathways of teachers’ PCK development. Teachers and Teaching, 23 (2), 191–210.

Zembylas, M. (2007). Emotional ecology: The intersection of emotional knowledge and pedagogical content knowledge in teaching. Teaching and Teacher Education, 23 (4), 355–367.

Zhou, S., Wang, Y., & Zhang, C. (2016). Pre-service science teachers’ PCK: Inconsistency of pre-service teachers’ predictions and student learning difficulties in Newton’s Third Law. EURASIA Journal of Mathematics, Science and Technology Education, 12 (3), 373–385.


Acknowledgements

This research was supported by the Early Career Scheme of the Research Grants Council of Hong Kong [Project Number 27608717].

Author information

Authors and Affiliations

The University of Hong Kong, Hong Kong, Hong Kong

Kennedy Kam Ho Chan

University of Waikato, Hamilton, New Zealand


Corresponding author

Correspondence to Kennedy Kam Ho Chan.

Editor information

Editors and Affiliations

Monash University, Melbourne, VIC, Australia

Rebecca Cooper

University of Potsdam, Potsdam, Germany

Andreas Borowski

An overview of the studies reviewed. Note all reviewed articles can be found in the reference list for this chapter.

1. The number of years of teaching experience (in general) of in-service teachers is given in parentheses. ‘M’ denotes the mean number of teaching years of the teachers in the study.

2. Grade level refers to the grades that the teachers taught or for which they were certified. Teachers from grade 1 to 6 were labelled as primary teachers, whereas those from grade 7 to 12 were categorised as secondary teachers. Studies that involved middle school teachers were categorised as investigating both primary and secondary teachers. If the studies specified that the target participants were middle school teachers, middle school was also included in the above table.

3. If teachers from more than two subject domains were included, the subject was listed as ‘science’ or ‘science and non-science’.

4. The subject domain of primary teachers is not shown as primary teachers are often generalists rather than subject specialists.

5. If more than two science topics were investigated in the study, the topics were labelled as ‘multiple’.

6. If the topics under investigation were not clearly listed, the topic was categorised as ‘not specified’.

7. Only teachers, not scientists, were included in the determination of the sample size in Kirschner et al. (2016) and Schmelzing et al. (2013).

8. As Salloum and BouJaoude (2008) adopted purposeful sampling of chemistry teachers, the subject domain was considered as ‘chemistry’.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Chan, K.K.H., Hume, A. (2019). Towards a Consensus Model: Literature Review of How Science Teachers’ Pedagogical Content Knowledge Is Investigated in Empirical Studies. In: Hume, A., Cooper, R., Borowski, A. (eds) Repositioning Pedagogical Content Knowledge in Teachers’ Knowledge for Teaching Science. Springer, Singapore. https://doi.org/10.1007/978-981-13-5898-2_1


DOI: https://doi.org/10.1007/978-981-13-5898-2_1

Published: 29 January 2019

Publisher Name: Springer, Singapore

Print ISBN: 978-981-13-5897-5

Online ISBN: 978-981-13-5898-2



  • Open access
  • Published: 15 February 2023

Literature review of stroke assessment for upper-extremity physical function via EEG, EMG, kinematic, and kinetic measurements and their reliability

  • Rene M. Maura   ORCID: orcid.org/0000-0001-6023-9038 1 ,
  • Sebastian Rueda Parra 4 ,
  • Richard E. Stevens 2 ,
  • Douglas L. Weeks 3 ,
  • Eric T. Wolbrecht 1 &
  • Joel C. Perry 1  

Journal of NeuroEngineering and Rehabilitation, volume 20, Article number: 21 (2023)


Significant clinician training is required to mitigate the subjective nature of clinical motor assessments and to achieve useful reliability between measurement occasions and between therapists. Previous research supports that robotic instruments can improve quantitative biomechanical assessment of the upper limb, offering reliable and more sensitive measures. Furthermore, combining kinematic and kinetic measurements with electrophysiological measurements offers new insights for unlocking targeted, impairment-specific therapy. This review presents common methods for analyzing biomechanical and neuromuscular data, describing their validity and reporting their reliability measures.

This paper reviews literature (2000–2021) on sensor-based measures and metrics for upper-limb biomechanical and electrophysiological (neurological) assessment, which have been shown to correlate with clinical test outcomes for motor assessment. The search terms targeted robotic and passive devices developed for movement therapy. Journal and conference papers on stroke assessment metrics were selected using PRISMA guidelines. Intra-class correlation values of some of the metrics are recorded, along with model, type of agreement, and confidence intervals, when reported.

A total of 60 articles are identified. The sensor-based metrics assess various aspects of movement performance, such as smoothness, spasticity, efficiency, planning, efficacy, accuracy, coordination, range of motion, and strength. Additional metrics assess abnormal activation patterns of cortical activity and interconnections between brain regions and muscle groups, aiming to characterize differences between the population who had a stroke and the healthy population.

Range of motion, mean speed, mean distance, normal path length, spectral arc length, number of peaks, and task time metrics have all demonstrated good to excellent reliability and provide a finer resolution than discrete clinical assessment tests. EEG power features for multiple frequency bands of interest, specifically the bands relating to slow and fast frequencies comparing affected and non-affected hemispheres, demonstrate good to excellent reliability for populations at various stages of stroke recovery. Further investigation is needed to evaluate the metrics missing reliability information. In the few studies combining biomechanical measures with neuroelectric signals, the multi-domain approaches demonstrated agreement with clinical assessments and provide further information during the relearning phase. Combining the reliable sensor-based metrics in the clinical assessment process will provide a more objective approach that relies less on therapist expertise. This paper suggests future work on analyzing the reliability of metrics to prevent bias and on selecting the appropriate analysis methods.

Stroke is one of the leading causes of death and disability in developed countries. In the United States, a stroke occurs every 40 s, ranking stroke as the fifth leading cause of death and the leading cause of disability in the country [ 1 ]. The high prevalence of stroke, coupled with increasing stroke survival rates, puts a growing strain on already limited healthcare resources: therapy is costly [ 2 ] and mostly restricted to clinical settings [ 3 ], and 50% of survivors who reach the chronic stage experience severe upper-extremity motor disability [ 4 ]. This highlights the need for refined assessment methods that can help pair person-specific impairment with appropriately targeted therapeutic strategies.

Rehabilitation typically starts with a battery of standardized tests to assess impairment and function. This initial evaluation serves as a baseline of movement capabilities and usually includes assessment of function during activities of daily living (ADL). Because these clinical assessments rely on trained therapists as raters, the scoring scale is designed to be discrete and, in some cases, bounded. While this improves the reliability of the metric [ 5 ] (i.e., raters more likely to agree), it also reduces the sensitivity of the scale. Furthermore, those assessment scales that are bounded, such as the Fugl-Meyer Assessment (FMA) [ 6 ], Ashworth or Modified Ashworth (MA) Scale [ 7 ], and Barthel Index [ 8 ], suffer from floor/ceiling effects where the limits of the scales become insensitive to the extremes of impairment and function. It is therefore important to develop new clinical assessment methods that are objective, quantifiable, reliable, and sensitive to change over the full range of function and impairment.

Over the last several decades, robotic devices have been designed and studied for administering post-stroke movement therapy, and they are beginning to be adopted into clinical rehabilitation practice. More recently, researchers have proposed and studied the use of robotic devices to assess stroke-related impairments as an approach to overcome the previously discussed limitations of existing clinical measures [ 9 , 10 , 11 , 12 ]. Robots may be equipped with sensitive measurement devices that can be used to rate the person’s performance in a predefined task, measuring kinematic (position/velocity), kinetic (force/torque), and/or neuromuscular (electromyography/electroencephalography) output from the subject during the task. Common sensor-based robotic metrics for post-stroke assessment include speed of response, planning time, movement planning, smoothness, efficiency, range, and efficacy [ 13 , 14 ]. Figure 1 demonstrates an example method for comprehensive assessment of a person who has suffered a stroke with data acquired during robotically administered tests. Furthermore, there is potential for new and more comprehensive knowledge to be gained from a wider array of assessment methods and metrics that combine the benefits of biomechanical (e.g., kinematic and kinetic) and neurological (e.g., electromyographic and electroencephalographic) measures [ 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 ].

figure 1

Example of instrument for upper extremities bilateral biomechanical and neuromuscular assessment. From this data, a wide variety of measures and metrics for assessment of upper-extremity impairment and function may be reported

  • Biomechanical assessment

Many classical methods of assessing impairment or function involve manual and/or instrumented quantification of performance through measures of motion (i.e., kinematic) and force (i.e., kinetic) capabilities. These classical methods rely on the training of the therapist to evaluate the capabilities of the person through keen observation (e.g., FMA [ 6 ] and MA [ 7 ]). The quality of kinematic and kinetic measures can be improved with the use of electronic-based measurements [ 23 ]. Robotic devices equipped with electronic sensors have the potential to improve the objectivity, sensitivity, and reliability of the assessment process by providing a means for more quantitative, precise, and accurate information [ 9 , 10 , 11 , 12 , 24 , 25 , 26 , 27 , 28 ]. Usually, the electronic sensors on a rehabilitation robotic device are used for control purposes [ 29 , 30 , 31 ]. Robots can also measure movement outputs, such as force or joint velocities, which the clinician may not be able to otherwise measure as accurately (or simultaneously) using existing clinical assessment methods [ 23 ]. With accurate and repeatable measurement of forces and joint velocities, sensor-based assessments have the potential to assess the person’s movement in an objective and quantifiable way. This article reviews the validity and reliability of biomechanical metrics in relation to the assessment of upper-extremity motor function.

Electrophysiological features for assessment

Neural signals that originate from the body can be measured using non-invasive methods. Among others, electroencephalograms (EEG) measure cortical electrical activity, and electromyograms (EMG) measure muscle electrical activity. The relatively low cost and noninvasive nature of these technologies make them suitable for studying changes in cortical or muscle activation caused by conditions or injuries of the brain, such as those elicited by stroke lesions [ 32 ].

Initially, EMG/EEG were used strictly as clinical diagnostic tools [ 33 , 34 ]. Recent improvements in signal acquisition hardware and computational processing methods have increased their use as viable instruments for understanding and treating neuromuscular diseases and neural conditions [ 32 ]. Features extracted from these signals are being researched to assess their relationship to motor and cognitive deficits [ 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 ] and delayed ischemia [ 34 , 43 ], as well as to identify different uses of the signals that could aid rehabilitation [ 44 ]. Applications of these features in the context of stroke include: (1) commanding robotic prostheses [ 45 , 46 ], exoskeletons [ 21 , 47 , 48 ], and brain-machine interfaces [ 44 , 49 , 50 , 51 ]; and (2) bedside monitoring for sub-acute patients and thrombolytic therapy [ 52 , 53 , 54 ]. Here we review the validity and reliability of metrics derived from electrophysiological signals in relation to upper-extremity motor assessment after stroke.
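As a concrete illustration of such EEG power features, the sketch below estimates band power with Welch's method and forms a slow-to-fast (delta/alpha) power ratio; the function names and exact band limits are illustrative choices rather than the protocol of any specific reviewed study.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import welch

def band_power(signal, fs, band):
    """Absolute power of `signal` within `band` (Hz), via Welch's PSD."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), int(2 * fs)))
    mask = (freqs >= band[0]) & (freqs < band[1])
    return trapezoid(psd[mask], freqs[mask])

def delta_alpha_ratio(signal, fs):
    """Ratio of slow (delta, 1-4 Hz) to fast (alpha, 8-13 Hz) power;
    a shift toward slow-wave activity raises this ratio."""
    return band_power(signal, fs, (1.0, 4.0)) / band_power(signal, fs, (8.0, 13.0))
```

Features of this kind, comparing slow and fast bands across affected and non-affected hemispheres, are among those later reported in this review as showing good to excellent reliability.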

Reliability of metrics

Robotic or sensor-based assessment tools have not yet gained widespread clinical acceptance for stroke assessment. Numerous barriers to their clinical adoption remain, including demonstrating their reliability and providing sufficient validation of robotic metrics with respect to currently accepted assessment techniques [ 55 ]. In the assessment of motor function with sensor-based systems, several literature reviews reveal a wide spectrum of sensor-based metrics for use in stroke rehabilitation and demonstrate their validity [ 13 , 42 , 56 , 57 , 58 , 59 , 63 , 64 ]. However, in addition to demonstrating validity, new clinical assessments must also demonstrate good or excellent reliability to support their adoption in the clinical field. This is achieved by: (1) comparing multiple measurements on the same subject (test–retest reliability), and (2) checking agreement between multiple raters of the same subject (inter-rater reliability). Reliability quantifies an assessment’s ability to deliver scores that are free from measurement error [ 65 ]. Previous literature reviews have presented limited, if any, information on the reliability of biomechanical robotic metrics. Murphy and Häger [ 66 ], Wang et al. [ 56 ], and Shishov et al. [ 67 ] reviewed reliability but omitted some important aspects of the intra-class correlation methods used in the studies (e.g., the model type and/or the confidence interval), which are required when analyzing intra-class correlation methods for reliability [ 68 ]. If reliability is not properly analyzed and reported, a study runs the risk of producing a biased result. In 2015, Murphy and Häger [ 66 ] also found a lack of studies determining the reliability of metrics. Since electronic-based assessments require a therapist or an operator to administer the test, inter-observer reliability should also be investigated to quantify the effect of the test administrator on the assessment process. Therefore, both test–retest and inter-observer reliability of biomechanical and electrophysiological metrics are reviewed to provide updated information on current findings regarding the metrics’ reliability.
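To make these reporting requirements concrete, the sketch below computes ICC(2,1), the two-way random-effects, absolute-agreement, single-measurement form, directly from the ANOVA mean squares. It is a minimal illustration of why the model choice matters; a full analysis would also report the confidence interval.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    `ratings` is an (n_subjects, k_raters) array with no missing values."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)    # between subjects
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)    # between raters
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols  # residual
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

Because the absolute-agreement form penalizes systematic offsets between raters, a constant disagreement lowers ICC(2,1) even when the raters rank subjects identically, whereas a consistency-type ICC would not; this is exactly why reporting the model type matters.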

Integrated metrics

Over the past 50 years, numerous examples of integrated metrics have provided valuable insight into the inner workings of human arm function. In the 1970s, EMG was combined with kinematic data in patients with spasticity to understand muscle patterns during ballistic arm reach movements [ 69 ] and the effects of pharmacological intervention on spastic stretch reflexes during passive vs. voluntary movement [ 70 ], and in the 1990s EMG was combined with kinetic data to understand the effects of abnormal synergy patterns on reach workspace when lifting the arm against gravity [ 71 ]. This work dispelled long-standing theories that muscular weakness and spasticity alone were the major contributors to arm impairment. More recently, quantified aspects of processed EEG and EMG signals are being combined with kinematic data to investigate the compensatory role of the contralesional secondary sensorimotor cortex, and its relation to shoulder-related abnormal muscle synergies, in a group of chronic stroke survivors [ 72 ]. These and other works demonstrate convincingly the value of combined metrics and the insights they can uncover that isolated metrics cannot provide alone.

To provide further information on stroke severity and the relearning process during stroke therapy, researchers are investigating multi-modal approaches using biomechanical and neuromuscular features [ 15 , 16 , 18 , 19 , 21 , 22 ]. Combining neuromuscular and biomechanical metrics provides a comprehensive assessment of the person’s movement, from motor planning to the end of motor execution. Neuromuscular output provides valuable information on feedforward control and the movement-planning phase [ 22 ]. However, neuromuscular signals provide little information on movement quality, which is often investigated with movement function tests or biomechanical output [ 21 ]. Neuromuscular data also provide therapists with information on the neurological status and nervous-system reorganization of the person that biomechanical information cannot provide [ 73 ]. This additional information can assist in developing more personalized care for the person with stroke, as well as offer considerable insight into the changes that occur at the physiological level.

Paper overview

This paper reviews published sensor-based methods, for biomechanical and neuromuscular assessment of impairment and function after neurological damage, and how the metrics resulting from the assessments, both alone and in combination, may be able to provide further information on the recovery process. Specifically, methods and metrics utilizing digitized kinematic, kinetic, EEG, and EMG data were considered. The “Methods” section explains how the literature review was performed. In “Measures and methods based on biomechanical performance” section, prevailing robotic assessment metrics are identified and categorized including smoothness, resistance, efficiency, accuracy, efficacy, planning, range-of-motion, strength, inter-joint coordination, and intra-joint coordination. In “Measures and methods based on neural activity using EEG/EMG” section, EEG- and EMG-derived measures are discussed by the primary category of analysis performed to obtain them, including frequency power and coherence analyses. The relationship of each method and metric to stroke impairment and/or function is also discussed. Section “Reliability of measures” discusses the reliability of sensor-based metrics and some of the complications in demonstrating the effectiveness of the metrics. Section “Integrated metrics” reviews previous studies on combining biomechanical and neuromuscular data to provide further information on the changes occurring during assessment and training. Finally, Section “Discussions and conclusions” concludes the paper with a discussion on the advantages of combining multi-domain data, which of the metrics from the earlier sections should be considered in future robotic applications, as well as the ones that still require more investigation for either validity and/or reliability.

A literature review was performed following PRISMA guidelines [ 74 ] on biomechanical and neuromuscular assessment in upper-limb stroke rehabilitation. The review was composed of two independent searches on (1) biomechanical robotic devices and (2) electrophysiological digital signal processing. Figures 2 and 3 show the selection process for the electrophysiological and biomechanical papers, respectively. Each of these searches applied the following steps. In step 1, each researcher searched Google Scholar for papers published between 2000 and 2021 (see Table 1 for search terms and prompts). In step 2, the resulting titles and abstracts were screened to remove duplicates, articles in other languages, and articles not related to the literature review. In step 3, the researchers read the full texts of the articles screened in step 2, and papers qualifying for inclusion under the Literature Review Criteria in Table 1 were selected. Finally, in step 4, articles selected in each independent review process were read by the other researcher. Uncertainties in determining whether a paper should be included or excluded were discussed with the whole research group. Twenty-four papers focus on biomechanical measures (kinematic and kinetic), thirty-three focus on electrophysiological measures (EEG/EMG), and six focus on multimodal approaches combining biomechanical and neuromuscular measures to assess stroke. Three of the six multimodal papers are also reported in the biomechanical section, and three papers were hand-picked. A total of 60 papers are reviewed and reported.

figure 2

PRISMA flowchart on the selection for electrophysiological papers

figure 3

PRISMA flow chart for the selection for biomechanical papers

Measures and methods based on biomechanical performance

This review presents common robotic metrics which have been previously used to assess impairment and function after stroke. Twenty-five biomechanical papers are reviewed, which used both sensor-based and traditional clinical metrics to assess upper-extremity impairment and function. The five most common metrics in the reviewed studies were the number of velocity peaks (~ 9 studies), path-length ratio (~ 8 studies), the max speed of the arm (~ 7 studies), active range of motion (~ 7 studies), and movement time (~ 7 studies). The metrics are often compared to an established clinical assessment to determine the validity of the metric. The sensor-based metrics can be categorized by the aspect in which they evaluate movement quality, similar to De Los Reyes-Guzmán et al.: smoothness, efficiency, efficacy, accuracy, coordination, or range of motion [ 14 ]. Resistance, Movement Planning, Coordination, and Strength are included as additional categories since some of the reviewed sensor-based metrics best evaluate those movement aspects. Examples of common evaluation activities and specific metrics that have been computed to quantify movement quality are outlined in Table 2.

Lack of arm movement smoothness is a key indicator of underlying impairment [ 79 ]. Traditional therapist-administered assessments do not computationally measure smoothness, leaving therapists unable to determine the degree to which disruption to movement smoothness is compromising motor function and, therefore, ADL. Most metrics that have been developed to quantify smoothness are based on features of the velocity profile of an arm movement, such as speed [ 80 , 81 ], speed arc length [ 79 ], local minima of velocity [ 10 ], velocity peaks [ 75 , 76 , 81 ], tent [ 80 ], spectral [ 25 ], spectral arc length [ 25 , 81 ], modified spectral arc length [ 79 ], and mean arrest period ratio [ 76 ]. Table 3 summarizes the smoothness metrics and their corresponding equations, with equation numbers for reference. The speed metric is expressed as a ratio between the mean speed and the peak speed (Eq. 1). The speed arc length is the temporal length of the velocity profile (Eq. 2). The local minima of velocity and the velocity peaks metrics are measured by counting the number of minimum (Eq. 3) or maximum (Eq. 4) peaks in the velocity profile, respectively. The tent metric is a graphical approach that divides the area under the velocity curve by the area of a single-peak velocity curve (Eq. 5). The spectral metric is the summation of the maximal Fourier-transformed velocity vector (Eq. 6). The spectral arc-length metric is calculated from the frequency spectrum of the velocity profile by performing a fast Fourier transform operation and then computing the arc length (Eq. 7). The modified spectral arc length adapts the cutoff frequency according to a given threshold velocity and an upper-bound cutoff frequency (Eq. 8), making it independent of temporal movement scaling. The mean arrest period ratio is the portion of time during which movement speed exceeds a given percentage of peak speed (Eq. 9).
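As an illustration (not code from the reviewed papers), two of the velocity-profile metrics above can be computed from a sampled speed signal in a few lines; the peak-prominence threshold used to reject measurement noise is an assumed parameter.

```python
import numpy as np
from scipy.signal import find_peaks

def speed_metric(v):
    """Eq. 1: mean speed divided by peak speed (closer to 1 = smoother)."""
    v = np.asarray(v, dtype=float)
    return v.mean() / v.max()

def velocity_peaks(v, rel_prominence=0.05):
    """Eq. 4: number of local maxima in the speed profile (fewer = smoother).
    Peaks with prominence below `rel_prominence` * peak speed are treated
    as measurement noise."""
    v = np.asarray(v, dtype=float)
    peaks, _ = find_peaks(v, prominence=rel_prominence * v.max())
    return len(peaks)
```

For an ideal minimum-jerk reach, the speed metric is 8/15 (about 0.53) with a single velocity peak; merged sub-movements add peaks and pull the ratio down.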

Another commonly used approach is to analyze the jerk (i.e., the derivative of acceleration) profile. The common ways to assess smoothness using the jerk profile are root mean square jerk, mean rectified jerk, normalized jerk, and the logarithm of dimensionless jerk. The root mean square jerk takes the root-mean-square of the jerk that is then normalized by the movement duration [ 82 ] (Eq. 10). The mean rectified jerk (normalized mean absolute jerk) is the mean of the magnitude jerk normalized or divided by the peak velocity [ 80 , 82 ] (Eq. 11). The normalized jerk (dimensionless-squared jerk) is the square of the jerk times the duration of the movement to the fifth power over the length squared (Eq. 12). It is then integrated over the duration and square rooted. The normalized jerk can be normalized by mean speed, max speed, or mean jerk [ 80 ]. The logarithm of dimensionless jerk (Eq. 13) is the logarithm of normalized jerk defined in Eq. 12 [ 81 ].
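The jerk-based metrics can be sketched numerically as follows; the finite-difference derivatives, rectangle-rule integration, and the negated-logarithm convention for Eq. 13 are illustrative choices (sign conventions for the log metric vary across the literature).

```python
import numpy as np

def dimensionless_jerk(v, fs):
    """Eq. 12 sketch: integral of squared jerk over the movement, scaled by
    duration^5 / length^2 and square-rooted (lower = smoother)."""
    v = np.asarray(v, dtype=float)
    dt = 1.0 / fs
    duration = (len(v) - 1) * dt
    length = np.sum(v) * dt                     # distance travelled
    jerk = np.gradient(np.gradient(v, dt), dt)  # 2nd derivative of speed
    return np.sqrt(duration**5 / length**2 * np.sum(jerk**2) * dt)

def log_dimensionless_jerk(v, fs):
    """Eq. 13 sketch: negated log of Eq. 12, so larger = smoother."""
    return -np.log(dimensionless_jerk(v, fs))
```

As with all jerk-based measures, the values depend strongly on sampling rate and filtering, so consistent preprocessing matters when comparing sessions or subjects.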

It has yet to be determined which smoothness metric is more effective for characterizing recovery of smooth movement. According to Rohrer et al. [ 80 ], the metrics of speed, local minima of velocity, peaks, tent, and mean arrest period ratio showed increases in smoothness for inpatient recovery from stroke, but the mean rectified jerk metric seemed to show a decrease in smoothness as survivors of stroke recovered. Rohrer et al. warned that a low smoothness factor in jerk does not always mean the person is highly impaired. The spectral arc-length metric showed a consistent increase in smoothness as the number of sub-movements decreased [ 25 ], whereas the other metrics showed sudden changes in smoothness. For example, the mean arrest period ratio and the speed metric showed an increase in smoothness with two or more sub-movements, but when two sub-movements started to merge, the smoothness decreased. As a result, the spectral arc-length metric appears to capture change over a wider range of movement conditions in recovery in comparison to other metrics.

The presence of a velocity-dependent hyperactive stretch reflex is referred to as spasticity [ 83 ]. Spasticity results in a lack of smoothness during both passive and active movements and is more pronounced with activities that involve simultaneous shoulder abduction loading and extension of the elbow, wrist, or fingers [ 83 ], which are unfortunately quite common in ADL. A standard approach to assessing spasticity by a therapist involves moving a subject’s passive arm at different velocities and checking for the level of resistance. While this manual approach is subjective, electronic sensors have the potential to assess severity of spasticity in much more objective ways. Centen et al. report a method to assess the spasticity of the elbow using an upper-limb exoskeleton [ 84 ] involving the measurement of peak velocity, final angle, and creep. Sin et al. similarly performed a comparison study between a therapist moving the arm versus a robot moving the arm. An EMG sensor was used to detect the catch and compared with a torque sensor to detect catch angle for the robotic motion [ 85 ]. The robot moving the arm seemed to perform better with the inclusion of either an EMG or a torque sensor than with the therapist moving the arm and the robot simply recording the movement. A related measure that may be correlated with spasticity is the assessment of joint resistance torques during passive movement [ 76 ]. This can provide an assessment of the velocity-dependent resistance to movement that arises following stroke.

Efficiency measures movement fluency in terms of both task completion times and spatial trajectories. In point-to-point reaching, people who have suffered a stroke commonly display inefficient paths in comparison to their healthy side or to subjects who are unimpaired [ 10 ]. During the early phases of recovery after stroke, subjects may show slow overall movement speed resulting in longer task times. As recovery progresses, overall speed tends to increase and task times decrease, indicating more effective and efficient motor planning and path execution. Therapists usually observe the person’s efficiency in completing a task and then rate the person’s ability to complete the task in a timely manner. Therefore, both task time (or movement time) [ 10 , 76 , 77 , 86 , 87 ] and mean speed [ 25 , 75 , 77 , 81 , 86 ] are effective ways to assess temporal efficiency. Similar measures used by Wagner et al. include peak hand velocity and time to peak hand velocity [ 87 ]. To measure spatial efficiency of movement, Colombo et al. [ 75 ], Mostafavi [ 77 ], and Germanotta [ 86 ] all calculated the movement path length and divided it by the straight-line distance between the start and end points. This is known as the path-length ratio.
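
The path-length ratio described above is straightforward to compute from sampled hand positions; the sketch below assumes a list of 2-D (x, y) samples and is illustrative rather than any cited implementation.

```python
import math

def path_length_ratio(path):
    """Ratio of traveled path length to the straight-line distance between
    start and end points; 1.0 indicates a perfectly straight reach."""
    traveled = sum(math.dist(path[i], path[i + 1])
                   for i in range(len(path) - 1))
    straight = math.dist(path[0], path[-1])
    return traveled / straight
```

A straight reach scores exactly 1.0, while any detour inflates the ratio above 1.0.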

Movement planning

Movement planning is associated with feedforward sensorimotor control, elements that occur before the initial phase of movement. A common approach is to use reaction time to assess the duration of the planning phase. In a typical clinical assessment, a therapist can only observe/quantify whether movement can be initiated or not, but has no way to quantify the lag between the signal to initiate movement and initiation of movement. Keller et al., Frisoli et al., and Mostafavi et al. quantified the reaction time to assess movement planning [ 10 , 76 , 77 ] in subjects who have suffered a stroke. Mostafavi assessed movement planning in three additional ways based on characteristics of the actual movement: change in direction, movement distance ratio, and maximum speed ratio [ 77 ]. The change in direction is the angular deviation between the initial movement vector and the straight line between the start and end points. The first-movement-distance ratio is the ratio between the distance the hand traveled during the initial movement and the total distance between start and end points. The first-movement-maximum speed ratio is the ratio of the maximum hand speed during the initial phase of the movement divided by the global hand speed for the entire movement task.
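
As an illustration, the change-in-direction measure can be computed as the angle between the initial movement vector and the start-to-target line. This is a sketch of the idea behind Mostafavi's metrics [ 77 ]; the sub-movement segmentation that identifies the end of the initial movement is not shown, and the names are ours.

```python
import math

def change_in_direction(start, first_sub_end, target):
    """Angle (radians) between the initial movement vector (start to end of
    first sub-movement) and the straight start-to-target line."""
    v1 = (first_sub_end[0] - start[0], first_sub_end[1] - start[1])
    v2 = (target[0] - start[0], target[1] - start[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # clamp to [-1, 1] to guard against floating-point drift
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def first_movement_distance_ratio(d_initial, d_total):
    """Distance covered in the initial movement over the total
    start-to-target distance."""
    return d_initial / d_total
```

A well-planned reach has a small angular deviation and a distance ratio near 1.0.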

Movement efficacy 

Movement efficacy measures the person’s ability to achieve the desired task without assistance. While therapists can assess the number of completed repetitions, they have no means to kinetically quantify the amount of assistance required to perform a given task. Movement efficacy is quantified by robot sensor systems that can measure: (a) person-generated movement, and/or (b) the amount of work performed by the robot to complete the movement (e.g., when voluntary person-generated movement fails to achieve a target). Hence, movement efficacy can involve both kinematic and kinetic measures. A kinematic metric that can be used to represent movement efficacy is the active movement index, which is calculated by dividing the portion of the target distance covered by the person’s voluntary movement by the total target distance for the task [ 75 ]. An example metric based on kinetic data is the amount-of-assistance metric, proposed by Balasubramanian et al. [ 25 ]. It is calculated by estimating the work performed by the robot to assist voluntary movement and dividing it by the work the robot would perform if it completed the task with no voluntary contribution from the person. A similar metric obtained by Germanotta et al. calculates the total work by using the movement’s path length, but Germanotta et al. also calculate the work generated towards the target [ 86 ].

Movement accuracy

Movement accuracy has been characterized by the error in the end-effector trajectory compared to a theoretical trajectory. It measures the person’s ability to follow a prescribed path, whereas movement efficiency assesses the person’s ability to find the most direct path to reach a target. Colombo et al. measured movement accuracy in people after stroke by calculating the mean absolute value of the distance, i.e., the mean of the absolute distances between each point on the person’s path and the theoretical path [ 75 ]. Figure 4 demonstrates the difference between the path-length ratio and the mean absolute value of the distance: the latter computes the error between the desired trajectory and the actual one, while the former computes the total path length the person’s limb has traveled. Another similar metric is the average inter-quartile range, which quantifies the average “spread” among several trajectories [ 15 ]. Balasubramanian et al. characterized movement accuracy as a measure of the subject’s ability to achieve a target during active reaching. They refer to the metric as movement synergy [ 25 ] and calculate it by finding the distance between the end-effector’s final location and the target location.

figure 4

Difference between path-length ratio and mean absolute value of the distance. A Path-length ratio. \(d_{ref}\) is the theoretical distance the hand should travel between the start and end point. \(d_{total}\) is the total distance the hand travelled from Start to End. B Mean absolute value of the distance. \(d_{i}\) is the distance between the theoretical path and the actual hand path
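
The mean absolute value of the distance can be sketched by measuring each sampled point's distance to the theoretical straight path; this is an illustrative reconstruction, not the code from Colombo et al. [ 75 ].

```python
import math

def point_to_segment_distance(p, a, b):
    """Distance from 2-D point p to the segment from a to b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg2 = dx * dx + dy * dy
    # parameter of the closest point on the segment, clamped to [0, 1]
    t = 0.0 if seg2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def mean_absolute_distance(path, start, end):
    """Mean absolute deviation of sampled hand positions from the
    theoretical straight path between start and end."""
    return sum(point_to_segment_distance(p, start, end) for p in path) / len(path)
```

A path that hugs the reference line scores near zero; consistent lateral drift raises the score by the drift magnitude.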

Intra-limb coordination

Intra-limb (inter-joint) coordination is a measure of the level of coordination achieved by individual joints of a limb or between multiple joints of the same limb (i.e., joint synergy) when performing a task. Since the upper limb is kinematically redundant, the arm can achieve a desired outcome in multiple ways. For example, a person might move an atypical joint to compensate for a loss of mobility in another joint. Frisoli et al. and Bosecker et al. used the shoulder and elbow angles to find a linear correlation between the two in a movement task that required multi-joint movement [ 10 , 78 ]. In terms of clinical assessment, joint angle correlations can reveal typical or atypical contribution of a joint while performing a multi-joint task.

Inter-limb coordination

Inter-limb coordination refers to a person’s ability to appropriately perform bilateral movements with the affected and unaffected arms. Therapists often assess the affected limb by comparing it to the unaffected limb during a matching task, such as position matching. Matching can be accomplished with both limbs moving either simultaneously or sequentially, and typically without the use of vision. Dukelow et al. used position matching to obtain measures of inter-limb coordination [ 24 ], including trial-to-trial variability, spatial contraction/expansion, and systematic shifts. Trial-to-trial variability is the standard deviation of the matching hand’s position for each location in x (distal/proximal), in y (anterior/posterior), and in the combined x–y transverse plane. Spatial contraction/expansion is the ratio of the 2D work area of the target hand to the 2D work area of the matching hand during a matching task. Systematic shifts were found by calculating the mean absolute position error between the target and matching hand for each target location.
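
Trial-to-trial variability can be sketched as the standard deviation of the matching hand's repeated positions at one target location. The use of the population standard deviation here is our assumption; the cited work may use the sample standard deviation.

```python
import statistics

def trial_to_trial_variability(matched_positions):
    """Variability of the matching hand's position across repeated trials at
    one target location: standard deviation in x, in y, and combined x-y.
    Sketch of the measure in Dukelow et al. [24]; population std assumed."""
    xs = [p[0] for p in matched_positions]
    ys = [p[1] for p in matched_positions]
    var_x = statistics.pstdev(xs)
    var_y = statistics.pstdev(ys)
    var_xy = (var_x ** 2 + var_y ** 2) ** 0.5
    return var_x, var_y, var_xy
```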

Semrau et al. analyzed the performance of subjects in their ability to match their unaffected arm with the location of their affected arm [ 88 ]. In the experiment, a robot moved the affected arm to a position and the person then mirrored the position with the unaffected side. The researchers compared the data when the person was able to see the driven limb versus when they were unable to see the driven limb. The initial direction error, path length ratio, response latency, peak speed ratio, and their variabilities were calculated to assess the performance of the person’s ability to perform the task.

Range of motion

Range of motion is a measure of the extent of mobility in one or multiple joints. Traditionally, range of motion is measured with a goniometer [ 89 ], which assesses one joint at a time and takes considerable time. Range of motion can be expressed as a 1-DOF angular measure [ 76 , 89 ], a 2-DOF planar measure (i.e., work area) [ 82 ], or a 3-DOF spatial measure (i.e., workspace) [ 77 ]. Individual joints are commonly measured in joint space, whereas measures of area or volume are typically given in Cartesian space. In performing an assessment of work area or workspace with a robotic device, the measure can be estimated either by: (a) measuring individual joint angles with an exoskeleton device and then using these angles to compute the region swept out by the hand, or (b) directly measuring the hand or fingertips with a Cartesian (end-effector) device. The measurement of individual joint range of motion (ROM) as well as overall workspace has significant clinical importance in assessing both passive (pROM) and active (aROM) range of motion. To measure pROM, the robot drives arm movement while the person remains passive; the pROM is the maximum range of motion the person has with minimal or no pain. For aROM, a robot may place the arm in an initial position/orientation from which the person performs unassisted joint movements to determine the ROM of particular joints [ 76 ], or the area or volume swept by multiple joints. Lin et al. quantified the work area of the elbow and shoulder using potentiometers and assessed test–retest reliability [ 89 ]. The potentiometer measurements were then compared to therapist measurements to determine validity.
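
A planar work-area estimate from sampled hand positions can be sketched as the area of their convex hull. This is one plausible estimator under our own assumptions; published methods may trace the reachable boundary differently.

```python
def work_area(points):
    """Approximate 2-D work area as the convex-hull area of sampled (x, y)
    hand positions (monotone-chain hull plus shoelace formula)."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return 0.0

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    hull = []
    for seq in (pts, pts[::-1]):          # lower hull, then upper hull
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        hull.extend(chain[:-1])           # drop duplicated endpoints

    # shoelace formula for the hull polygon's area
    n = len(hull)
    twice_area = sum(hull[i][0] * hull[(i + 1) % n][1]
                     - hull[(i + 1) % n][0] * hull[i][1] for i in range(n))
    return abs(twice_area) / 2.0
```

Interior samples do not change the estimate; only the outer boundary of the explored region matters.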

Strength

Measures of strength evaluate a person’s ability to generate a force in a direction or a torque about a joint. Strength measurements may involve single or multiple joints. At the individual joint level, strength is typically measured from a predefined position of a person’s arm and/or hand. The person then applies a contraction to produce a torque at the assessed joint [ 76 , 78 ]. Multi-joint strength may also be measured by assessing strength and/or torque in various directions at distal locations along the arm, such as the hand. Lin et al. compared the grip strength obtained from load cells to a clinical method using precise weights, which showed excellent concurrent validity [ 89 ].

Measures and methods based on neural activity using EEG/EMG

Although much information can be captured and analyzed using the kinematic and kinetic measures listed above, their purview is limited. These measures provide insight into the functional outcomes of neurological system performance but offer limited perspective on the potential contributing sources of measured impairment [ 90 ]. For a deeper look into the neuromuscular system, measures based on neurological activation are often pursued. As a complement to biomechanical measures, methods based on quantification of neural activity, such as EEG and EMG, have been used to characterize the impact of stroke and its underlying mechanisms of impairment [ 91 , 92 ]. Over the past 20 years, numerous academic research studies have used these measures to explore the effects of stroke, therapeutic interventions, or time on the evolution of abnormal neural activity [ 91 ]. Studies commonly compare groups with different levels of neurological health (e.g., chronic/acute/subacute stroke vs. non-impaired, or different impairment levels) or with other specific experimental characteristics (e.g., different rehabilitation paradigms [ 93 , 94 ]). With this evidence, the validity of these metrics has been tested; however, studies of their reliability are still needed to complete the jump from academic to clinical settings.

Extracting biomarkers from non-invasive neural activity requires careful decomposition and processing of raw EEG and EMG recordings [ 32 ]. Various methods have been used, and the results have produced a growing body of evidence for the validity of these biomarkers in providing insight on the current and future state of motor, cognitive, and language skills in people after stroke [ 38 , 95 ]. Some of the biomarkers derived from EEG signals include: power-related band-specific information [ 34 , 35 , 43 , 47 , 53 , 54 , 96 , 97 , 98 , 99 , 100 , 101 ], band frequency event-related synchronization and desynchronization (ERS/ERD) [ 22 , 51 , 102 , 103 ], intra-cortical coherence or functional connectivity [ 39 , 59 , 73 , 94 , 104 , 105 , 106 , 107 , 108 , 109 ], corticomuscular coherence (CMC) [ 37 , 110 , 111 , 112 , 113 ], among others [ 114 , 115 ]. Biomarkers extracted from EEG can be used to assess residual functional ability [ 38 , 54 , 73 , 97 , 98 , 99 ], derive prognostic indicators [ 34 , 43 , 104 ], or categorize people into groups (e.g., to better match impairments with therapeutic strategies) [ 39 , 47 , 58 , 116 ].

In the following subsections, biomarkers derived mostly from EEG signal features, whose validity is supported by relationships with motor outcome after stroke, will be discussed and introduced theoretically. Distinctions will be made about the stage after stroke at which signals were recorded. Findings are reported from 33 studies that have examined the relationship between extracted neural features and motor function for different groups of people after stroke. These records are grouped by the quantification methods used, including approaches based on measures of frequency spectrum power (n = 9), inter-regional coherence (n = 10 for cortical coherence and n = 9 for CMC), and reliability (n = 5).

Frequency spectrum power

Power measures the amount of activity within a signal that occurs at a specific frequency or range of frequencies. Power can be computed in absolute or relative terms (i.e., with respect to other signals). It is often displayed as a power density spectrum where the magnitudes of signal power can be seen across a range of frequencies. In electro-cognitive research, the representation of power within specific frequency bands has been useful to explain brain activity and to characterize abnormal oscillatory activity due to regional neurological damage [ 32 , 117 ].

Frequency bands in EEG content

Electrical activity in the brain is dominated primarily by frequencies from 0–100 Hz where different frequency bands correspond with different states of activity: Delta (0–4 Hz) is associated with deep sleep, Theta (4–8 Hz) with drowsiness, Alpha (8–13 Hz) with relaxed alertness and important motor activity [ 117 ], and Beta (13–31 Hz) with focused alertness. Gamma waves (> 32 Hz) are also seen in EEG activity; however, their specific relationship to level of alertness or consciousness is still debated [ 32 , 117 ]. Important cognitive tasks have been found to trigger activity in these bands in different ways. Levels of both Alpha and Delta activity have also been shown to be affected by stroke and can therefore be examined as indicators of prognosis or impairment in sub-acute and chronic stroke [ 52 , 100 , 118 ].

Power in acute and sub-acute stroke

For individuals in the early post-stroke (i.e., sub-acute) phase, abnormal power levels can be an indicator of neurological damage [ 98 ]. Attenuation of activity in the Alpha and Beta bands has been observed in the first hours after stroke [ 100 ], preceding the appearance of abnormally high Delta activity. Tolonen et al. reported a high correlation between Delta power and regional Cerebral Blood Flow (rCBF). This relationship appears during the sub-acute stroke phase and has been used to predict clinical, cognitive, and functional outcomes [ 119 ]. Delta activity has also been shown to positively correlate with 1-month National Institutes of Health Stroke Scale (NIHSS) [ 52 ] and 3-month Rankin scale [ 36 ] assessments.

Based on these findings, several QEEG (Quantitative Electroencephalography) metrics involving ratios of abnormally slow (Delta) and abnormally fast (Alpha and Beta) activity have been developed. The Delta-Alpha Ratio (DAR), Delta-Theta Ratio (DTR), and (Delta + Theta)/(Alpha + Beta) Ratio (DTABR, also known as PRI for Power Ratio Index) relate the amount of abnormal slow activity to the activity in faster bands and have been shown to provide valuable insight into prognosis of stroke outcome and thrombolytic therapy monitoring [ 98 ]. Increased DAR and DTABR have repeatedly been found to be the QEEG indices that best predict worse outcomes on the following: the Functional Independence Measure and Functional Assessment Measure (FIM-FAM) at 105 days [ 53 ], the Montreal Cognitive Assessment (MoCA) at 90 days [ 54 ], NIHSS at 1 month [ 35 ], the modified Rankin Scale (mRS) at 6 months [ 105 ], NIHSS evolution at multiple times [ 120 ], and NIHSS at 12 months [ 96 ]. DAR was also used to discriminate people in the acute phase from healthy subjects with an accuracy of 100% [ 58 ].
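
The QEEG ratios above reduce to band-power arithmetic. The sketch below uses a naive O(n²) DFT for clarity; real pipelines use Welch-type spectral estimators, and the exact band edges vary between studies.

```python
import math

def band_power(x, fs, f_lo, f_hi):
    """Summed DFT power of samples x (sampling rate fs) over [f_lo, f_hi) Hz.
    Naive O(n^2) DFT for illustration only; DC (k = 0) is excluded."""
    n = len(x)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq < f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += re * re + im * im
    return power

def qeeg_ratios(x, fs):
    """DAR and DTABR from one channel, using the band edges given in the
    text (Delta 0-4, Theta 4-8, Alpha 8-13, Beta 13-31 Hz)."""
    delta = band_power(x, fs, 0, 4)
    theta = band_power(x, fs, 4, 8)
    alpha = band_power(x, fs, 8, 13)
    beta = band_power(x, fs, 13, 31)
    return delta / alpha, (delta + theta) / (alpha + beta)
```

For a synthetic signal mixing a 2 Hz component at twice the amplitude of a 10 Hz component, the power ratio (and hence DAR) is 4, since power scales with amplitude squared.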

The ability of basic EEG monitoring to derive useful metrics during the early stage of stroke has made EEG collection desirable for people who have suffered a stroke in intensive care settings. The derived QEEG indices have proven helpful for detecting Delayed Cerebral Ischemia (DCI), which is accompanied by increased DAR [ 43 ] and increased Delta power [ 34 , 118 ]. However, finding the electrode montage with the smallest number of electrodes that still reveals the information necessary for prognosis is one of the biggest challenges for this particular use of EEG. Comparing DAR from 19 electrodes on the scalp with 4 electrodes on the frontal cortex suggests that DAR from 4 frontal electrodes may be enough to detect early cognitive and functional deficits [ 53 ]. Studies have also explored the possibility of a single-electrode montage over the fronto-parietal area (FP1): the DAR and DTR from this electrode may be valid predictors of cognitive function after stroke, as shown by correlation with the MoCA [ 54 ], and the relative power in the Theta band correlated with mRS and modified Barthel Index (mBI) scores 30 and 90 days after stroke [ 121 ].

Power in chronic stroke

The role of power-related QEEG indices during chronic stroke and the progression of motor functional performance has been examined with respect to rehabilitation therapies, since participants have recovered their motion to a certain degree [ 4 ]. Studies have shown that therapy and functional activity improvements correlate with changes in the shape and delay of event-related desynchronization and synchronization (ERD-ERS) time–frequency power features when analyzing the Alpha and Beta bands over the primary motor cortex for the ipsilesional and contralesional hemispheres [ 21 , 22 , 122 ]. Therapies with better outcomes tend to show reduced Delta rhythms and increased Alpha rhythms [ 122 ].

Bertolucci [ 47 ] compared baseline power spectral density in different bands for both hemispheres with changes in WMFT and FMA over time. Increased global Alpha and Beta activity was shown to correlate with better WMFT evolution, while increased contralesional Beta activity was correlated with FMA evolution. Metrics combining slow and fast activity have also been tested in the chronic stage of stroke: a significant negative correlation was found between DTABR (PRI) at the start of therapy and FMA change during robotic therapy [ 99 ]. This finding suggests that DTABR may have promise as a prognostic indicator for all stages of stroke.

The Brain Symmetry Index (BSI) is a generalized measure of “left to right” (affected to non-affected) power symmetry, based on the mean spectral power per hemisphere. These inter-hemispheric relationships of power have been used as prognostic measures during all stages of stroke. Baseline BSI (during the sub-acute stage) was found to correlate with FMA at 2 months [ 73 ] and mRS at 6 months [ 123 ], and to predict FM-UE when using only the Theta-band BSI for patients in the chronic stage [ 124 ]. BSI can be modified to account for the direction of asymmetry; the directed BSI in the Delta and Theta bands proved meaningful for describing the evolution of upper limb impairment from the acute to the chronic stage, as measured by FM-UE [ 120 , 125 ]. Table 4 and Table 11 in Appendix 1 summarize the power-derived metrics across different stages of stroke documented in this section and their main reported relationships with motor function. Findings are often reported in terms of correlation with clinical tests of motor function.
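
A simplified BSI can be sketched as the mean normalized left-right power difference across frequency bins. Published definitions differ in detail (electrode pairing, band limits, normalization), so treat this as illustrative only.

```python
def brain_symmetry_index(power_left, power_right):
    """Simplified BSI sketch: mean over frequency bins of the normalized
    left-right spectral power difference. 0 = perfect symmetry,
    1 = activity entirely confined to one hemisphere."""
    terms = [abs(l - r) / (l + r) for l, r in zip(power_left, power_right)]
    return sum(terms) / len(terms)
```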

Brain connectivity (cortical coherence)

Brain connectivity is a measure of interaction and synchronization between distributed networks of the brain and allows for a clearer understanding of brain function. Although cortical damage from ischemic stroke is focal, cortical coherence can explain abnormalities in functionality of remote zones that share functional connections to the stroke-affected zone [ 59 ].

Several estimators of connectivity have been proposed in the literature. Coherency, partial coherence (pCoh) [ 125 ], multiple coherence (mCoh), the imaginary part of coherence (iCoh) [ 126 ], the Phase Lag Index (PLI), and the weighted Phase Lag Index (wPLI) [ 127 ], as well as simple ratios of power at certain frequency bands [ 73 ], describe synchronous symmetric activity between regions of interest (ROIs) and are referred to as non-directed or functional connectivity [ 128 ]. Estimators based on Granger prediction, such as partial directed coherence (PDC) [ 129 , 130 , 131 ] or the directed transfer function (DTF) [ 132 , 133 ] and any of their normalizations, describe causal relationships between variables and are referred to as directed or effective connectivity [ 134 ]. Connectivity also allows the analysis of brain activity as network topologies, borrowing methods from graph theory [ 32 , 134 ]. Network features such as complexity, linearity, efficiency, clustering, path length, node hubs, and more can be derived from graphs [ 128 ]. Comparisons of these network features between groups with impairment and healthy controls have proven to be useful tools for understanding and characterizing motor and functional deficits after stroke [ 108 ].
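
Among the non-directed estimators, the Phase Lag Index is particularly compact: it is the absolute mean sign of the instantaneous phase differences between two channels. The sketch below assumes the phase differences have already been extracted (e.g., via a Hilbert transform, not shown).

```python
import math

def phase_lag_index(phase_diffs):
    """PLI over a series of instantaneous phase differences (radians)
    between two channels: |mean sign(sin(delta_phi))|. 1 indicates a
    consistently lagged relationship; 0 indicates no consistent lag."""
    signs = [(s > 0) - (s < 0) for s in (math.sin(d) for d in phase_diffs)]
    return abs(sum(signs)) / len(signs)
```

Because only the sign of the lag enters, PLI is insensitive to zero-lag (volume-conduction) coupling, which is why it is favored for scalp EEG.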

Studies have used intra- and inter-cortical coherence to expand the clinical understanding of the neural reorganization process [ 59 , 106 , 107 , 108 , 109 ], as a clinical motor and cognitive predictor [ 38 , 94 , 104 , 135 , 136 ], and as a tool to predict the efficacy of rehabilitation therapy [ 94 ]. Table 5 and Table 12 in Appendix 2 briefly summarize the main metrics discussed in this section and their results related to motor function assessment. In general, studies have shown that motor deficits in stroke survivors are related to less connectivity to main sensorimotor areas [ 38 , 94 , 104 , 137 ], weak interhemispheric sensorimotor connectivity [ 109 , 138 ], less efficient networks [ 106 , 135 ], and fewer “small-world” network patterns [ 108 , 134 ] (small-world networks are optimized to integrate specialized processes across the whole network and are known to be an important feature of healthy brain networks).

Survivors of stroke tend to exhibit more modular (i.e., more clustered, less integrated) and less efficient networks than non-impaired controls, with the biggest difference occurring in the Beta and Gamma bands [ 106 ]; modular networks are less small-world [ 134 ]. Such a transition to a less small-world network was observed during the acute stage of stroke (first hours after stroke) and documented as a bilateral decrease in the Delta band and a bilateral increase in the high Alpha band (also known as Alpha2: 10.5–13 Hz) [ 108 ].

Global connectivity with the ipsilesional primary motor cortex (M1) is the most researched biomarker derived from connectivity and has been studied in longitudinal experiments as a plasticity indicator leading to future outcome improvement [ 38 ], as a predictor of motor and therapy gains [ 94 ] and of upper limb gains during the sub-acute stage [ 137 ], and as a feature that characterizes stroke survivors’ cognitive deficits [ 104 ]. Pietro [ 38 ] used iCoh to test the weighted node degree (WND), a measure that quantifies the importance of an ROI in the brain, for M1 and reported that Beta-band features are linearly related to motor improvement as measured by FM-UE and the Nine-Hole Peg Test. Beta-band connectivity to ipsilesional M1, as measured by spectral coherence, can be used as a therapy outcome predictor; moreover, results point strongly toward connectivity between M1 and the ipsilesional frontal premotor area (PM) as the most important variable for predicting therapy gains, and predictions can be further improved by using lesion-related information such as CST or MRI data [ 94 ]. Comparisons between groups of people with impairment and controls showed significant differences in Alpha connectivity involving ipsilesional M1; this value was related to FMA at 3 months for the group with impairment due to stroke [ 104 ].

The relationship between interhemispheric ROI connectivity and motor impairment has also been studied. The normalized interhemispheric strength (nIHS) from PDC was used to quantify the coupling between structures in the brain; Beta- and lower Gamma-band features of this quantity in sensorimotor areas exhibited linear relationships with the degree of motor impairment measured by CST [ 136 ]. A similar measure also derived from PDC, used to quantify ROI interhemispheric importance and named EEG-PDC, was used in [ 109 ]; there, the results show that Mu-band (10–12 Hz) and Beta-band features could be used to explain hand motor function results from FM-UE. In another study, the Beta debiased weighted phase lag index (dwPLI) correlated with outcome as measured by the Action Research Arm Test (ARAT) and FM-UE [ 138 ].

Global and local network efficiency in the Beta and Gamma bands appears to be significantly decreased in people who have suffered a stroke compared to healthy controls, as reported in [ 106 ]. More recent results [ 135 ] found statistically significant relationships between Beta network efficiency and network intra-density, derived using a non-parametric method (the Generalized Measure of Association), and functional recovery as measured by FM-UE. Global maximal coherence features in the Alpha band have recently been recognized as FM-UE predictors, where coherence was computed using PLI and related to motor outcome by means of linear regression [ 139 ].

Corticomuscular coherence

Corticomuscular coherence (CMC) is a measure of the amount of synchronous activity between signals in the brain (i.e., EEG or MEG) and the associated musculature (i.e., EMG) of the body [ 92 ]. Typically measured during voluntary contractions [ 110 ], the presence of coherence demonstrates a direct relationship between cortical rhythms in the efferent motor commands and the discharge of neurons in the motor cortex [ 140 ]. CMC is computed as the correlation between EEG and EMG signals at a given frequency. Early CMC research found synchronous (correlated) activity in the Beta and low Gamma bands [ 40 , 41 , 42 ]. CMC is strongest in the contralateral motor cortex [ 141 ]. This metric appears to be affected by stroke-related lesions, and thus provides an interesting tool to assess motor recovery [ 111 , 142 , 143 , 144 ]. The level of CMC is lower in the chronic stage of stroke than in healthy subjects [ 112 , 145 ], with chronic stroke survivors showing lower peak CMC frequency [ 146 ] and topographical patterns that are more widespread than in healthy people, highlighting a connection to muscle synergies [ 142 , 147 , 148 ]. CMC has been shown to increase with training [ 37 , 112 , 144 ].
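
A minimal CMC-style computation is magnitude-squared coherence at one frequency bin, averaging cross- and auto-spectra across paired EEG/EMG trials (with a single trial, coherence is trivially 1, so averaging is essential). Real pipelines use windowed spectral estimators; this sketch and its names are ours.

```python
import cmath

def dft_bin(x, k):
    """DFT of a real sequence x at frequency bin k."""
    n = len(x)
    return sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))

def coherence_at_bin(eeg_trials, emg_trials, k):
    """Magnitude-squared coherence between paired trials at bin k:
    |sum Sxy|^2 / (sum Sxx * sum Syy), averaged across trials."""
    sxy = 0j
    sxx = syy = 0.0
    for x, y in zip(eeg_trials, emg_trials):
        fx, fy = dft_bin(x, k), dft_bin(y, k)
        sxy += fx * fy.conjugate()
        sxx += abs(fx) ** 2
        syy += abs(fy) ** 2
    return abs(sxy) ** 2 / (sxx * syy)
```

Trials with a constant EEG-to-EMG phase offset give coherence near 1; trials whose relative phase flips at random average out toward 0.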

Corticomuscular coherence has been proposed as a tool to: (a) identify the functional contribution of reorganized cortical areas to motor recovery [ 37 , 112 , 141 , 144 , 146 ]; (b) understand functional remapping [ 93 , 142 , 145 ]; and (c) study the mechanisms underlying synergies [ 147 , 148 ]. CMC has revealed increased abnormal correlation with deltoid EMG during elbow flexion in people with motor impairment [ 147 ] and has been used to identify the best muscles to target with rehabilitative interventions [ 148 ]. Changes in CMC have been shown to correlate with motor improvement at different stages of stroke, although follow-up scores based on CMC have not shown statistically significant correlations when compared to clinical metrics [ 37 , 93 ]. Results summarizing CMC in stroke can be found in Table 6 and Table 13 in Appendix 3.

Reliability of measures

Each of the aforementioned measures has the potential to be integrated into robotic devices for upper-limb assessment. However, to improve the clinical acceptability of robot-assisted assessment, the measurements and derived metrics must meet reliability standards in a clinical setting [ 55 ]. Reliability can be defined as the degree of consistency between measurements, or the degree to which a measurement is free of error. A common way to represent the relative reliability of a measurement process is the intraclass correlation coefficient (ICC) [ 150 ]. Koo and Li suggest a guideline for reporting ICC values that includes the ICC value, the analysis model (one-way random effects, two-way random effects, two-way fixed effects, or two-way mixed effects), the model type per Shrout and Fleiss (individual trials or mean of k trials), the model definition (absolute agreement or consistency), and the confidence interval [ 68 ]. Koo and Li also provide a flowchart for selecting the appropriate ICC based on the type of reliability and rater information. An ICC value below 0.5 indicates poor reliability, 0.5 to 0.75 moderate reliability, 0.75 to 0.9 good reliability, and above 0.9 excellent reliability. The reviewed papers are evaluated based on these guidelines. For reporting the ICC, the Shrout and Fleiss convention is used [ 68 ]. Reliability studies are included in the tables if the chosen ICC model, type, definition, and confidence interval are identifiable and the metrics have previously been used in sensor-based assessment. For studies that report multiple ICC scores due to assessment of test–retest reliability for multiple raters, the lowest reported ICC is included to avoid bias in the reported results.

In the assessment of reliability of data from robotic sensors, common approaches are to correlate multiple measurements within a single session (intra-session) and to correlate measurements between different sessions (inter-session), i.e., test–retest reliability [ 151 ]. Checking for test–retest reliability determines the repeatability of the robotic metric: the ability to reproduce the same measurements under the same conditions. Table 7 shows the test–retest reliability of several robotic metrics. For test–retest reliability, a two-way mixed-effects model with either single or multiple measurements may be used [ 68 ]; since the same set of sensors is used to assess subjects, the two-way mixed model applies. Test–retest reliability should also be checked for absolute agreement. Checking for absolute agreement (y = x) rather than consistency (y = x + b) determines the reliability without a bias or systematic error. For example, in Fig. 5 , for y = x + 1, a two-way random-effects model with a single measurement checking for agreement gives a score of 0.18, while checking for consistency gives 1.00. In other words, the bias has no effect on the ICC score when checking for consistency. Therefore, when performing test–retest reliability analyses, it is important to check for absolute agreement to prevent bias in the result.

Figure 5

Checking agreement versus consistency among ratings. For y = x, the agreement ICC score is 1.00 and the consistency ICC score is 1.00. For y = x + 1, the agreement ICC score is 0.18 and the consistency ICC score is 1.00. For y = 3x, the agreement ICC score is 0.32 and the consistency ICC score is 0.60. For y = 3x + 1, the agreement ICC score is 0.13 and the consistency ICC score is 0.60

Not only should a robotic metric demonstrate repeatability, it should also be reproducible when different operators use the same device. Reproducibility evaluates the change in measurements when conditions have changed. Inter-rater reliability tests determine the effect raters have on the measurements when two or more raters perform the same experimental protocol [ 68 ]. To prevent a biased result, raters should have no knowledge of the evaluations given by other raters, ensuring that raters’ measurements are independent of one another. Table 8 shows the reproducibility of several robotic biomechanical metrics. All the included studies used two raters to check for reproducibility, performing a two-way random-effects analysis with either a single measurement or multiple measurements to check for agreement.

Measurement reliability of robotic biomechanical assessment

Of the 24 papers reviewed for biomechanical metrics, 13 reported on reliability: six on reproducibility and nine on repeatability. Overall, the metrics demonstrate moderate to good reliability for both repeatability and reproducibility. However, caution should be exercised in judging which robotic metric best assesses movement quality on the basis of reliability studies. The quality of measurements is highly dependent on the quality of the robotic device and sensors [ 85 ]; a fully transparent robot with sensitive and accurate sensors would further improve the assessment of reliability. Moreover, the researchers used different versions of the ICC, as seen in Tables 7 and 8 , which complicates direct comparisons of the metrics.

Reliability of electrophysiological signal features

Of the 33 papers reviewed for electrophysiological metrics, five reported on reliability and six reported on repeatability. The ability to acquire electrophysiological signals conveniently and non-invasively is relatively new. Metrics derived from these signals for the assessment of upper-limb motor impairment in stroke have been shown to be valid in academic settings, but most of these valid metrics have yet to be tested for intra- and inter-session reliability before use in clinical and rehabilitation settings. Few studies found by our systematic search examined the test–retest reliability of these metrics. Therefore, we manually added records reporting on intra- and inter-session reliability of metrics based on the electrophysiological features described in section “Measures and methods based on neural activity using EEG/EMG”, even if reliability was not assessed in people with stroke. Relevant results are illustrated in Table 9 .

Spectral power features of EEG signals have been tested during rest [ 153 , 154 ] and task (cognitive and motor) conditions in different cohorts of subjects [ 102 , 103 ]. Some of the spectral features observed during these experiments relate to the timed behavior of oscillatory activity in cued experiments, such as event-related desynchronization of the Beta band (ERD and Beta rebound) [ 102 ] and topographical patterns of Alpha activity (R = 0.9302, p < 0.001) [ 103 ].

Test–retest reliability for resting-state EEG functional connectivity has been explored for a few of the estimators listed in section “Measures and methods based on neural activity using EEG/EMG”: (1) in a cohort of people with Alzheimer’s disease, by means of the amplitude envelope correlation (AEC), phase lag index (PLI), and weighted phase lag index (wPLI) [ 155 ]; (2) in healthy subjects, using iCoh and PLI [ 156 ]; and (3) in infants, by studying inter-session differences in PLI graph metrics such as path length, cluster coefficient, and network “small-worldness” [ 60 ]. Reliability of upper-limb CMC has, to our knowledge, not yet been documented; however, an experiment testing the reliability of CMC during gait reported low CMC reliability in groups of different ages [ 61 ].

EEG and EMG measurements could be combined with kinematic and kinetic measurements to provide additional information about the severity of impairment and decrease the number of false positives from individual measurements [ 21 ]. This could further be used to explain abnormal relationships between brain activation, muscle activation and movement kinematics, as well as provide insight about subject motor performance during therapy [ 15 ]. The availability of EEG and EMG measures can also enhance aspects of biofeedback given during tests or be used to complement other assessments to provide a more holistic picture of an individual’s neurological function.

It has been shown that combining EEG, EMG, and kinematic data in a multi-domain approach can produce correlations with traditional clinical assessments; a summary of some of the reviewed studies is presented in Table 10 . Belfatto et al. assessed ROM for shoulder and elbow flexion, task time, and jerk-based smoothness, while EMG was used to measure muscle synergies and EEG to detect ERD and a lateralization coefficient [ 21 ]. Comani et al. used task time, path length, normalized jerk, and speed to measure motor performance while observing ERD and ERS during motor training [ 22 ]. Pierella et al. gathered kinematic data from an upper-limb exoskeleton to assess mean tangential velocity, path-length ratio, number of speed peaks, spectral arc length, amount of assistance, task time, and percentage of workspace, while observing EEG and EMG activity [ 18 ]. Mazzoleni et al. used the InMotion2 robot system to capture movement accuracy, movement efficiency, mean speed, and number of velocity peaks, while measuring brain activity with EEG [ 16 ]. However, further research is necessary to determine the effectiveness of the chosen metrics and methods compared to other, more promising methods of assessing function, and greater consensus in the literature is needed to support the clinical use of the more reliable metrics. For example, newer algorithms for estimating smoothness, such as spectral arc length, have been shown to provide greater validity and reliability than the commonly used normalized jerk metric; despite this evidence, normalized jerk remains a widely accepted measure of movement smoothness.
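To illustrate why the two smoothness measures can behave differently, below is a simplified spectral arc length next to one common dimensionless-jerk formulation; the published spectral arc length metric adds an adaptive cutoff and amplitude threshold that are omitted here, and all names and defaults are our assumptions:

```python
import numpy as np

def spectral_arc_length(speed, fs, fc=10.0, pad=4):
    """Simplified spectral arc length of a speed profile: arc length of
    its normalized magnitude spectrum up to fc. Closer to -1 = smoother."""
    n = int(2 ** np.ceil(np.log2(len(speed)))) * pad  # zero-padded FFT size
    freq = np.fft.rfftfreq(n, 1.0 / fs)
    mag = np.abs(np.fft.rfft(speed, n))
    sel = freq <= fc
    mag = mag[sel] / mag[0]             # normalize by the DC component
    df = (freq[1] - freq[0]) / fc       # frequency step on a [0, 1] axis
    return float(-np.sum(np.sqrt(df ** 2 + np.diff(mag) ** 2)))

def dimensionless_jerk(speed, dt):
    """Integrated squared jerk scaled by duration and distance
    (one common normalization); larger = less smooth."""
    jerk = np.gradient(np.gradient(speed, dt), dt)
    duration = (len(speed) - 1) * dt
    distance = np.sum(speed) * dt
    return float(np.sum(jerk ** 2) * dt * duration ** 5 / distance ** 2)

# Toy comparison: a bell-shaped speed profile versus the same profile
# with a superimposed 7 Hz oscillation (1 s sampled at 200 Hz).
t = np.linspace(0.0, 1.0, 201)
smooth = np.sin(np.pi * t)
rough = smooth + 0.1 * np.sin(2 * np.pi * 7 * t)
sal_smooth = spectral_arc_length(smooth, 200.0)
sal_rough = spectral_arc_length(rough, 200.0)   # more negative (less smooth)
dj_smooth = dimensionless_jerk(smooth, 0.005)
dj_rough = dimensionless_jerk(rough, 0.005)     # larger (less smooth)
```

Both measures flag the oscillating profile as less smooth, but the jerk-based measure is typically more sensitive to measurement noise because of the double differentiation, which is one reason the spectral approach tends to show better reliability.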

Discussions and conclusions

In this paper we reviewed studies that used different sensor-acquired biomechanical and electrophysiological signals to derive metrics related to neuromuscular impairment in stroke survivors; such metrics are of interest for robotic therapy and assessment applications. To assess the ability of a given measure to relate to impairment or motor outcome, we looked for metrics whose results have been demonstrated to correlate with or predict scores from established clinical assessments of impairment and function (validity). Knowing that a metric has some relationship with impairment and function (i.e., that it is valid) is not enough for clinical use if its results are not repeatable (reliable). Thus, we also reviewed the reliability of metrics and related signal features, looking for metrics that produce similar results for the same subject across test sessions and raters. With this information, researchers can choose metrics that not only relate to stroke impairment but can also be trusted, with less bias and simpler interpretation. The main conclusions of this review are presented as answers to the following research questions.

Which biomechanical-based metrics show promise for valid assessment of function and impairment?

Metrics derived from kinematic (e.g., position and velocity) and kinetic (e.g., force and torque) sensors affixed to robotic and passive mechanical devices have successfully been used to measure biomechanical aspects of upper-extremity function and impairment in people after stroke. The five most common metrics in the reviewed studies were the number of velocity peaks (~ 9 studies), path-length ratio (~ 8 studies), maximum arm speed (~ 7 studies), active range of motion (~ 7 studies), and movement time (~ 7 studies). These metrics are often compared to an established clinical assessment to determine their validity. According to the review by Murphy and Häger, the Fugl-Meyer Assessment for Upper Extremity correlated significantly with movement time, movement smoothness, peak velocity, elbow extension, and shoulder flexion [ 66 ]. Movement time and smoothness showed strong correlation with the Action Research Arm Test, whereas speed, path-length ratio, and end-point error showed moderate correlation. Tran et al. specifically reviewed the validation of robotic metrics against clinical assessments [ 57 ] and found mean speed, number of velocity peaks, movement accuracy, and movement duration to be the most promising metrics. However, that review noted that some studies conflict on the correlation between robotic metrics and clinical measures, which could be due to the assessment task, subject characteristics, type of intervention, and robotic device. For further information about the validation of sensor-based metrics, please refer to the aforementioned literature reviews [ 57 , 66 ].
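These metrics can be computed directly from a sampled end-point trajectory. A minimal sketch (the function name is ours; real implementations typically low-pass filter the signals and ignore small peaks below a threshold, and active range of motion would come from joint-angle channels instead):

```python
import numpy as np

def kinematic_metrics(xy, dt):
    """Common robotic reaching metrics from an (N, 2) end-point trajectory."""
    xy = np.asarray(xy, dtype=float)
    vel = np.gradient(xy, dt, axis=0)               # end-point velocity
    speed = np.linalg.norm(vel, axis=1)
    # Number of velocity peaks: strict local maxima of the speed profile.
    peaks = np.sum((speed[1:-1] > speed[:-2]) & (speed[1:-1] > speed[2:]))
    path_len = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))
    straight = np.linalg.norm(xy[-1] - xy[0])
    return {
        "movement_time": (len(xy) - 1) * dt,
        "max_speed": float(speed.max()),
        "path_length_ratio": float(path_len / straight),  # 1.0 = straight reach
        "n_velocity_peaks": int(peaks),
    }

# Toy trajectory: a smooth 0.2 m straight reach along x over 1 s.
t = np.linspace(0.0, 1.0, 101)
x = 0.1 * (1.0 - np.cos(np.pi * t))  # single bell-shaped speed profile
traj = np.column_stack([x, np.zeros_like(x)])
metrics = kinematic_metrics(traj, dt=0.01)
```

A healthy smooth reach gives a single velocity peak and a path-length ratio near 1.0; segmented, impaired movements raise both numbers.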

Which biomechanical-based metrics show promise for repeatable assessment?

Repeatable measures, in which measurements taken by a single instrument and/or person produce low variation within a single task, are a critical requirement for the assessment of impairment and function. The biomechanical metrics that show the most promise for repeatability are range of motion, mean speed, mean distance, normal path length, spectral arc length, number of peaks, and task time. Two or more studies used each of these metrics and demonstrated good to excellent reliability, implying that the metrics are robust against measurement noise and/or disturbances. Since the metrics have been used on different measuring instruments, the sensors’ resolution and signal-to-noise ratio appear to have minimal impact on reliability; however, more investigation is needed to confirm this robustness. In lieu of more evidence, it is recommended that investigators choose sensors similar or superior in quality to those used in the measuring devices presented in Tables 7 and 8 to achieve the same level of reliability.

What aspects of biomechanical-based metrics lack evidence or require more investigation?

Although many metrics (see previous section) demonstrate good or excellent repeatability across multiple studies, the evidence for reproducibility is limited to single studies. When developing a novel device capable of robotic assistance and assessment, researchers have typically focused their efforts on creating a device capable of repeatable and reliable measurements. However, since the person administering the test uses the device to measure the subject’s performance, the reproducibility of the metric must also be considered. Reproducibility is affected by the ease of use of the device: if the device is too complicated to set up and use, different operators are more likely to obtain different measurements. The operator’s instructions to the subject also affect reproducibility, especially in the initial sessions, where they may lead to different learning effects and thus different assessment results. More studies across multiple sites and operators are needed to determine the reproducibility of the biomechanical metrics reviewed in this paper.

Which neural activity-based metrics (EEG & EMG) show the most promise for reliable assessment?

Electrical neurological signals such as EEG and EMG have successfully been used to understand changes in motor performance and outcome variability across all stages of post-stroke recovery including the first few hours after onset. Experimental results have shown that metrics derived from slow frequency power (delta power, relative delta power, and theta power), and power ratio between slow and fast EEG frequency bands like DAR and DTABR convey useful information both about current and future motor capabilities, as presented in Table 4 and Table 11 in Appendix 1. Multimodal studies using robotic tools for assessment of motor performance have expanded the study of power signal features in people who suffered a stroke in the chronic recovery stage by studying not only rest EEG activity but also task-related activity [ 19 , 21 , 122 ]; ERD-ERS features like amplitude and latency along with biomechanical measures have been shown to correlate with clinical measures of motor performance and to predict a person’s response to movement therapies. EEG power features in general have been found to have good to excellent reliability for test–retest conditions among different populations, across all frequency bands of interest (see Table 9 ).
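The power ratios named above follow directly from band-integrated power spectral density estimates. A minimal sketch using Welch's method (band edges vary across studies; the ones below are a common convention, not taken from the reviewed papers, and the toy traces are ours):

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1.0, 4.0), "theta": (4.0, 8.0),
         "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def qeeg_ratios(eeg, fs, nperseg=512):
    """Delta-alpha ratio (DAR) and (Delta+Theta)/(Alpha+Beta) (DTABR)."""
    f, psd = welch(eeg, fs=fs, nperseg=nperseg)
    df = f[1] - f[0]
    p = {name: float(np.sum(psd[(f >= lo) & (f < hi)]) * df)
         for name, (lo, hi) in BANDS.items()}
    dar = p["delta"] / p["alpha"]
    dtabr = (p["delta"] + p["theta"]) / (p["alpha"] + p["beta"])
    return dar, dtabr

# Toy traces: an alpha-dominant signal versus a delta-dominant
# ("slowed") signal, 20 s sampled at 256 Hz.
fs = 256
t = np.arange(0, 20, 1 / fs)
alpha_dom = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 2 * t)
delta_dom = np.sin(2 * np.pi * 2 * t) + 0.2 * np.sin(2 * np.pi * 10 * t)
dar_alpha, dtabr_alpha = qeeg_ratios(alpha_dom, fs)
dar_delta, dtabr_delta = qeeg_ratios(delta_dom, fs)
```

Higher DAR and DTABR indicate relatively more slow-wave activity, which is the direction associated with worse outcome in the cited qEEG studies.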

Functional connectivity (i.e., non-directed connectivity) expands the investigative capacity of EEG measurements, enabling analysis of the brain as a network system by investigating the interactions between regions of interest (ROIs) while resting or during movement tasks. Inter-hemispheric interactions (between the same ROI in both hemispheres) and global interactions (between the entire brain and an ROI), reported as power or graph indices in the Beta and Gamma bands, have fruitfully been used to explain motor outcome scores. Although results seem promising, connectivity reliability is still debated, with results ranging mostly between moderate and good reliability for only a few connectivity estimators (PLI, wPLI, and iCoh).

Which neural activity-based metrics (EEG and EMG) lack evidence or require more investigation?

EEG and EMG provide useful non-invasive insight into the human neuromuscular system, allowing researchers to make conjectures about its function and structure; however, interpretations based solely on these measures must be analyzed carefully within the frame of the experimental conditions. Overall, the field needs more studies involving cohorts of stroke survivors to determine the test–retest reliability of metrics derived from EEG and EMG signal features that have already shown validity in academic studies.

Metrics calculated from the power imbalance between hemispheres, such as BSI, pwBSI, and PRI [ 62 , 73 , 124 ], are a promising means to measure how the brain relies on other regions to accomplish tasks related to affected areas. A battery of diverse connectivity estimators, especially those of effective (directed) connectivity, opens the door to investigations of the relationship between abnormal communication among regions of interest and impairment (see Table 5 and Table 12 in Appendix 2). These metrics, although valid, have yet to be tested for reliability in clinical use, and reliability reports for connectivity metrics should specify which estimator was used to derive the metric.
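As one concrete member of this family, the brain symmetry index averages a normalized left-right power difference over frequencies. A simplified sketch in the spirit of the BSI definition (function name, band limits, channel layout, and toy data are our assumptions):

```python
import numpy as np
from scipy.signal import welch

def brain_symmetry_index(left, right, fs, fmin=1.0, fmax=25.0, nperseg=256):
    """Mean over frequencies of the normalized right-left power difference.
    0 = symmetric hemispheres, 1 = maximally asymmetric. `left`/`right`
    are (channels, samples) arrays of homologous channel groups."""
    f, pl = welch(left, fs=fs, nperseg=nperseg)
    _, pr = welch(right, fs=fs, nperseg=nperseg)
    Pl, Pr = pl.mean(axis=0), pr.mean(axis=0)  # average over channels
    sel = (f >= fmin) & (f <= fmax)
    return float(np.mean(np.abs((Pr[sel] - Pl[sel]) / (Pr[sel] + Pl[sel]))))

# Toy data: symmetric noise versus a globally attenuated right hemisphere.
rng = np.random.default_rng(1)
n = 60 * 256                                # 60 s at 256 Hz
left = rng.standard_normal((2, n))          # two left-hemisphere channels
symmetric = rng.standard_normal((2, n))     # independent, equal-power right
suppressed = 0.2 * left                     # toy power-suppressed right
bsi_sym = brain_symmetry_index(left, symmetric, fs=256)    # near 0
bsi_asym = brain_symmetry_index(left, suppressed, fs=256)  # near 1
```

Real pipelines would select homologous electrode pairs and artifact-free epochs before the spectral step.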

CMC is another exciting neural-activity-based metric that still lacks sufficient evidence to support its significance. CMC considers and bridges two of the domains most affected during motor execution in the neuromuscular system, making it a good candidate for robotic-based therapy and assessment of stroke survivors [ 147 ]. Although features in the Beta and Gamma bands seem to be related to motor impairment, there is still no agreement on which is most closely related to motor outcomes. Studies reviewed in this paper considered cortical spatial patterns of maximum coherence, peak frequency shift relative to healthy controls, and latency of peak coherence, among others (see Table 6 and Table 13 in Appendix 3). However, when compared to motor outcomes, results are not always significant, and test–retest reliability for this metric has, to our knowledge, yet to be documented for the upper extremity (see [ 61 ] for a lower-extremity study).

What standards should be adopted for reporting biomechanical and neural activity-based metrics and their reliability?

For metrics to be accepted as reliable in the clinical field, researchers are asked to follow the guidelines presented in Koo and Li [ 68 ], which indicate which ICC model to use depending on the type of reliability study and what should be reported (e.g., the software used to compute the ICC and the confidence interval). Among the papers reviewed, some investigated the learning effects of the assessment task and checked for consistency rather than agreement (see Table 7 ). However, learning effects between sessions should be minimal in a clinical setting, and potential effects should be taken into consideration during protocol design; common practices to minimize learning effects are to allow practice runs by the patients [ 99 , 122 ] and to remove the first experimental runs [ 81 , 85 ]. By removing this information, signal analysis focuses on the performance of learned tasks with similar associated behaviors. Therefore, to demonstrate test–retest reliability (i.e., repeatability), researchers should check for absolute agreement. Also, as can be seen in Tables 7 and 8 , there does not seem to be a standard for reporting ICC values: some researchers report the confidence interval of the ICC while others do not, and in some studies it was difficult to determine which ICC model was used. A standard for reporting ICC values is therefore needed to help readers understand the ICC used and to prevent bias (see [ 68 ] for a suggested guideline on reporting ICC scores). Including the means of each individual session or rater would also provide additional information on the variation of the means between groups. This variation can be shown with a Bland–Altman plot, but readers are still unable to perform other forms of analysis; to help with this, data from studies should be made publicly available so that results can be verified and further analyzed in the future.
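The quantities behind a Bland–Altman comparison reduce to the mean paired difference (bias) and its 95% limits of agreement; a minimal sketch (the function name and toy sessions are ours):

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bias and 95% limits of agreement between paired measurements."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Toy sessions with a constant systematic offset of 1 unit.
session1 = np.array([10.0, 12.0, 14.0, 16.0])
session2 = session1 - 1.0
bias, (low, high) = bland_altman_limits(session1, session2)
```

Plotting each difference against the pair mean gives the usual Bland–Altman plot; reporting the bias and limits alongside the ICC exposes the systematic offsets that a consistency ICC hides.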

When is it advantageous to combine biomechanical and neural activity-based metrics for assessment?

Biomechanical and neural activity provide distinct but complementary information about the neuro-musculoskeletal system, potentially offering a more complete picture of impairment and function after stroke. Metrics derived from kinematic/kinetic information assess motor performance based on motor execution; however, compensatory strategies related to stroke may mask underlying neural deficits (i.e., muscle synergies line up to complete a given task) [ 18 , 21 , 69 , 70 , 71 , 72 , 122 ]. Information relevant to these compensatory strategies can be obtained when analyzing electrophysiological activity, as has been done using connectivity [ 59 , 107 ], CMC [ 147 , 148 ] and brain cortical power [ 91 ].

Combining signals from multiple domains, although beneficial in that it allows a deeper understanding of a subject’s motor ability, is still a subject of exploration. Experimental paradigms play an important role in feature selection; increasing the dimensionality of the signals may provide more useful information for analysis, but comes at the expense of experimental cost (e.g., hardware) and time (e.g., subject setup). With this in mind, merging information from different levels of the hierarchy of the neuro-musculoskeletal system may provide a more comprehensive quantitative profile of a person’s impairment and performance. Examples of robotic multidomain methods, such as those in [ 18 , 21 ], highlight the importance of this type of assessment for monitoring and understanding the impact of rehabilitation in chronic stroke survivors. In both cases, these methodologies allowed observed behavioral changes in task execution (i.e., biomechanical data) to be paired with corresponding functional recovery rather than with adopted compensation strategies.

What should be the focus of future investigations of biomechanical and/or neural activity-based metrics?

Determining the reliability and validity of sensor-based metrics requires carefully designed experiments. In future investigations, experiments should be conducted that calculate multiple metrics from multiple sensors and device combinations, allowing the effect of sensor type and quality on the measure’s reliability to be quantified. After the conclusion of such experiments, researchers are strongly encouraged to make their anonymized raw data public to allow other researchers to compute different ICCs. Performing comparison studies on the reliability of metrics will produce reliability data to expand Tables 7 , 8 , 9 and improve our ability to compare similar sensor-based metrics. Additional reliability studies should also be performed that include neural features of survivors of stroke, with increased focus on modeling the interactions between these domains (biomechanical and neural activity). It is also important to understand how to successfully combine data from multimodal experiments; many of the studies reviewed in this paper recorded multidimensional data, but performed analysis for each domain separately.

Availability of data and materials

Not applicable.

Abbreviations

ADL: Activities of daily living

AEC: Amplitude envelope correlation

ARAT: Action Research Arm Test

AROM: Active range of motion

ASD: Autism spectrum disorder

BBT: Box and Blocks Test

BSI: Brain Symmetry Index

CCA: Canonical correlation analysis

CST: Cortico-spinal tract

DAR: Delta-alpha ratio

DCI: Delayed cerebral ischemia

dDTF: Direct directed transfer function

DOF: Degree of freedom

DTABR: (Delta + Theta)/(Alpha + Beta)

DTF: Directed transfer function

DTR: Delta-theta ratio

EEG: Electroencephalography

EMG: Electromyography

ERD: Event-related desynchronization

ERS: Event-related synchronization

ffDTF: Full-frequency directed transfer function

FIM+FAM: Functional Independence Measure and Functional Assessment Measure

FMA-UE: Fugl-Meyer Assessment for Upper Extremity

GMA: Generalized Measure of Association

gPDC: Generalized partial directed coherence

ICC: Intra-class correlation

iCoh: Imaginary part of coherence

M1: Primary motor cortex

MAS: Modified Ashworth Scale

MBI: Modified Barthel Index

mCoh: Multiple coherence

MI: Motricity Index

MoCA: Montreal Cognitive Assessment

MRBD: Movement-related beta desynchronization

MRI: Magnetic resonance imaging

MRS: Modified Rankin Scale

NIS: Normalized interhemispheric strength

NIHSS: National Institutes of Health Stroke Scale

NNMF: Non-negative matrix factorization algorithm

PCA: Principal component analysis

pCoh: Partial coherence

PDC: Partial directed coherence

PLI, wPLI, dwPLI: Phase lag index, weighted phase lag index, debiased weighted phase lag index

PMA: Premotor area

PMBR: Post-movement beta rebound

PRI: Power Ratio Index

PROM: Passive range of motion

qEEG: Quantitative EEG

rCBF: Regional cerebral blood flow

ROI: Region of interest

rPDC: Renormalized partial directed coherence

SVD: Singular value decomposition

WMFT: Wolf Motor Function Test

wNDI: Weighted Node Degree Index

Stroke Facts. 2020. https://www.cdc.gov/stroke/facts.htm . Accessed 26 Mar 2020.

Ottenbacher KJ, Smith PM, Illig SB, Linn RT, Ostir GV, Granger CV. Trends in length of stay, living setting, functional outcome, and mortality following medical rehabilitation. JAMA. 2004;292(14):1687–95. https://doi.org/10.1001/jama.292.14.1687 .

Lang CE, MacDonald JR, Gnip C. Counting repetitions: an observational study of outpatient therapy for people with hemiparesis post-stroke. J Neurol Phys Ther. 2007;31(1). https://journals.lww.com/jnpt/Fulltext/2007/03000/Counting_Repetitions__An_Observational_Study_of.4.aspx .

Gresham GE, Phillips TF, Wolf PA, McNamara PM, Kannel WB, Dawber TR. Epidemiologic profile of long-term stroke disability: the Framingham study. Arch Phys Med Rehabil. 1979;60(11):487–91.

Duncan EA, Murray J. The barriers and facilitators to routine outcome measurement by allied health professionals in practice: a systematic review. BMC Health Serv Res. 2012;12(1):96.

Sullivan KJ, Tilson JK, Cen SY, Rose DK, Hershberg J, Correa A, et al. Fugl-meyer assessment of sensorimotor function after stroke: standardized training procedure for clinical practice and clinical trials. Stroke. 2011;42(2):427–32.

Ansari NN, Naghdi S, Arab TK, Jalaie S. The interrater and intrarater reliability of the Modified Ashworth Scale in the assessment of muscle spasticity: limb and muscle group effect. NeuroRehabilitation. 2008;23:231–7.

Wade DT, Collin C. The Barthel ADL Index: a standard measure of physical disability? Int Disabil Stud. 1988;10(2):64–7.

Maggioni S, Melendez-Calderon A, van Asseldonk E, Klamroth-Marganska V, Lünenburger L, Riener R, et al. Robot-aided assessment of lower extremity functions: a review. J Neuroeng Rehabil. 2016;13(1):72. https://doi.org/10.1186/s12984-016-0180-3 .

Frisoli A, Procopio C, Chisari C, Creatini I, Bonfiglio L, Bergamasco M, et al. Positive effects of robotic exoskeleton training of upper limb reaching movements after stroke. J Neuroeng Rehabil. 2012;9(1):36. https://doi.org/10.1186/1743-0003-9-36 .

Groothuis-Oudshoorn CGM, Prange GB, Hermens HJ, Ijzerman MJ, Jannink MJA. Systematic review of the effect of robot-aided therapy on recovery of the hemiparetic arm after stroke. J Rehabil Res Dev. 2006;43(2):171.

Harwin WS, Murgia A, Stokes EK. Assessing the effectiveness of robot facilitated neurorehabilitation for relearning motor skills following a stroke. Med Biol Eng Comput. 2011;49(10):1093–102.

Nordin N, Xie SQ, Wünsche B. Assessment of movement quality in robot- assisted upper limb rehabilitation after stroke: a review. J NeuroEngineering Rehabil. 2014;11:137. https://doi.org/10.1186/1743-0003-11-137 .

De Los Reyes-Guzman A, Dimbwadyo-Terrer I, Trincado-Alonso F, Monasterio-Huelin F, Torricelli D, Gil-Agudo A. Quantitative assessment based on kinematic measures of functional impairments during upper extremity movements: a review. Clin Biomech. 2014;29(7):719–27. https://doi.org/10.1016/j.clinbiomech.2014.06.013 .

Molteni E, Preatoni E, Cimolin V, Bianchi AM, Galli M, Rodano R. A methodological study for the multifactorial assessment of motor adaptation: integration of kinematic and neural factors. 2010 Annu Int Conf IEEE Eng Med Biol Soc EMBC’10. 2010;4910–3.

Mazzoleni S, Coscia M, Rossi G, Aliboni S, Posteraro F, Carrozza MC. Effects of an upper limb robot-mediated therapy on paretic upper limb in chronic hemiparetic subjects: a biomechanical and EEG-based approach for functional assessment. 2009 IEEE Int Conf Rehabil Robot ICORR 2009. 2009;92–7.

Úbeda A, Azorín JM, Chavarriaga R, Millán JdR. Classification of upper limb center-out reaching tasks by means of EEG-based continuous decoding techniques. J Neuroeng Rehabil. 2017;14(1):1–14.

Pierella C, Pirondini E, Kinany N, Coscia M, Giang C, Miehlbradt J, et al. A multimodal approach to capture post-stroke temporal dynamics of recovery. J Neural Eng. 2020;17(4): 045002.

Steinisch M, Tana MG, Comani S. A post-stroke rehabilitation system integrating robotics, VR and high-resolution EEG imaging. IEEE Trans Neural Syst Rehabil Eng. 2013;21(5):849–59. https://doi.org/10.1596/978-1-4648-1002-2_Module14 .

Úbeda A, Hortal E, Iáñez E, Perez-Vidal C, Azorín JM. Assessing movement factors in upper limb kinematics decoding from EEG signals. PLoS ONE. 2015;10(5):1–12.

Belfatto A, Scano A, Chiavenna A, Mastropietro A, Mrakic-Sposta S, Pittaccio S, et al. A multiparameter approach to evaluate post-stroke patients: an application on robotic rehabilitation. Appl Sci. 2018;8(11):2248.

Comani S, Schinaia L, Tamburro G, Velluto L, Sorbi S, Conforto S, et al. Assessing Neuromotor Recovery in a stroke survivor with high resolution EEG, robotics and virtual reality. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). 2015. p. 3925–8.

Kwon HM, Yang IH, Lee WS, Yu ARL, Oh SY, Park KK. Reliability of intraoperative knee range of motion measurements by goniometer compared with robot-assisted arthroplasty. J Knee Surg. 2019;32(3):233–8.

Dukelow SP, Herter TM, Moore KD, Demers MJ, Glasgow JI, Bagg SD, et al. Quantitative assessment of limb position sense following stroke. Neurorehabil Neural Repair. 2010;24(2):178–87.

Balasubramanian S, Wei R, Herman R, He J. Robot-measured performance metrics in stroke rehabilitation. In: 2009 ICME International Conference on Complex Medical Engineering, CME 2009. 2009.

Otaka E, Otaka Y, Kasuga S, Nishimoto A, Yamazaki K, Kawakami M, et al. Clinical usefulness and validity of robotic measures of reaching movement in hemiparetic stroke patients. J Neuroeng Rehabil. 2015;12(1):66.

Singh H, Unger J, Zariffa J, Pakosh M, Jaglal S, Craven BC, et al. Robot-assisted upper extremity rehabilitation for cervical spinal cord injuries: a systematic scoping review. Disabil Rehabil Assist Technol. 2018;13(7):704–15. https://doi.org/10.1080/17483107.2018.1425747 .

Molteni F, Gasperini G, Cannaviello G, Guanziroli E. Exoskeleton and end-effector robots for upper and lower limbs rehabilitation: narrative review. PMR. 2018;10(9):174–88.

Jutinico AL, Jaimes JC, Escalante FM, Perez-Ibarra JC, Terra MH, Siqueira AAG. Impedance control for robotic rehabilitation: a robust markovian approach. Front Neurorobot. 2017;11(AUG):1–16.

Li Z, Huang Z, He W, Su CY. Adaptive impedance control for an upper limb robotic exoskeleton using biological signals. IEEE Trans Ind Electron. 2017;64(2):1664–74.

Marchal-Crespo L, Reinkensmeyer DJ. Review of control strategies for robotic movement training after neurologic injury. J NeuroEngineering Rehabil. 2009;6:20. https://doi.org/10.1186/1743-0003-6-20 .

Cohen MX. Analyzing neural time series data: theory and practice. Cambridge: MIT Press; 2014.

Stafstrom CE, Carmant L. Seizures and epilepsy: an overview. Cold Spring Harb Perspect Med. 2015;5(6):65–77.

Machado C, Cuspineda E, Valdãs P, Virues T, Liopis F, Bosch J, et al. Assessing acute middle cerebral artery ischemic stroke by quantitative electric tomography. Clin EEG Neurosci. 2004;35(3):116–24.

Finnigan SP, Walsh M, Rose SE, Chalk JB. Quantitative EEG indices of sub-acute ischaemic stroke correlate with clinical outcomes. Clin Neurophysiol. 2007;118(11):2525–31.

Cuspineda E, Machado C, Galán L, Aubert E, Alvarez MA, Llopis F, et al. QEEG prognostic value in acute stroke. Clin EEG Neurosci. 2007;38(3):155–60.

Belardinelli P, Laer L, Ortiz E, Braun C, Gharabaghi A. Plasticity of premotor cortico-muscular coherence in severely impaired stroke patients with hand paralysis. NeuroImage Clin. 2017;14:726–33.

Di PM, Schnider A, Nicolo P, Rizk S, Guggisberg AG. Coherent neural oscillations predict future motor and language improvement after stroke. Brain. 2015;138(10):3048–60.

Chen CC, Lee SH, Wang WJ, Lin YC, Su MC. EEG-based motor network biomarkers for identifying target patients with stroke for upper limb rehabilitation and its construct validity. PLoS ONE. 2017;12(6):1–20. https://doi.org/10.1371/journal.pone.0178822 .

Article   CAS   Google Scholar  

Conway BA, Halliday DM, Farmer SF, Shahani U, Maas P, Weir AI, et al. Synchronization between motor cortex and spinal motoneuronal pool during the performance of a maintained motor task in man. J Physiol. 1995;489(3):917–24.

Article   CAS   PubMed   PubMed Central   Google Scholar  

Salenius S, Portin K, Kajola M, Salmelin R, Hari R. Cortical control of human motoneuron firing during isometric contraction. J Neurophysiol. 1997;77(6):3401–5.

Mima T, Hallett M. Electroencephalographic analysis of cortico-muscular coherence: reference effect, volume conduction and generator mechanism. Clin Neurophysiol. 1999;110(11):1892–9.

Claassen J, Hirsch LJ, Kreiter KT, Du EY, Sander Connolly E, Emerson RG, et al. Quantitative continuous EEG for detecting delayed cerebral ischemia in patients with poor-grade subarachnoid hemorrhage. Clin Neurophysiol. 2004;115(12):2699–710.

Sullivan JL, Bhagat NA, Yozbatiran N, Paranjape R, Losey CG, Grossman RG, et al. Improving robotic stroke rehabilitation by incorporating neural intent detection: preliminary results from a clinical trial. In: 2017 International Conference on Rehabilitation Robotics (ICORR). IEEE; 2017. p. 122–7.

Muralidharan A, Chae J, Taylor DM. Extracting attempted hand movements from EEGs in people with complete hand paralysis following stroke. Front Neurosci. 2011. https://doi.org/10.3389/fnins.2011.00039 .

Nam C, Rong W, Li W, Xie Y, Hu X, Zheng Y. The effects of upper-limb training assisted with an electromyography-driven neuromuscular electrical stimulation robotic hand on chronic stroke. Front Neurol. 2017. https://doi.org/10.3389/fneur.2017.00679 .

Bertolucci F, Lamola G, Fanciullacci C, Artoni F, Panarese A, Micera S, et al. EEG predicts upper limb motor improvement after robotic rehabilitation in chronic stroke patients. Ann Phys Rehabil Med. 2018;61:e200–1.

Cantillo-Negrete J, Carino-Escobar RI, Carrillo-Mora P, Elias-Vinas D, Gutierrez-Martinez J. Motor imagery-based brain-computer interface coupled to a robotic hand orthosis aimed for neurorehabilitation of stroke patients. J Healthc Eng. 2018;3(2018):1–10.

Bhagat NA, Venkatakrishnan A, Abibullaev B, Artz EJ, Yozbatiran N, Blank AA, et al. Design and optimization of an EEG-based brain machine interface (BMI) to an upper-limb exoskeleton for stroke survivors. Front Neurosci. 2016;10(MAR):122.

PubMed   PubMed Central   Google Scholar  

Biasiucci A, Leeb R, Iturrate I, Perdikis S, Al-Khodairy A, Corbet T, et al. Brain-actuated functional electrical stimulation elicits lasting arm motor recovery after stroke. Nat Commun. 2018;9(1):1–13. https://doi.org/10.1038/s41467-018-04673-z .

Ang KK, Guan C, Chua KSG, Ang BT, Kuah C, Wang C, et al. Clinical study of neurorehabilitation in stroke using EEG-based motor imagery brain-computer interface with robotic feedback. Annu Int Conf IEEE Eng Med Biol. 2010. pp. 5549–52.

Finnigan SP, Rose SE, Walsh M, Griffin M, Janke AL, Mcmahon KL, et al. Correlation of quantitative EEG in acute ischemic stroke with 30-day NIHSS score: comparison with diffusion and perfusion MRI. Stroke. 2004;35(4):899–903.

Schleiger E, Sheikh N, Rowland T, Wong A, Read S, Finnigan S. Frontal EEG delta / alpha ratio and screening for post-stroke cognitive de fi cits: the power of four electrodes. Int J Psychophysiol. 2014;94(1):19–24. https://doi.org/10.1016/j.ijpsycho.2014.06.012 .

Aminov A, Rogers JM, Johnstone SJ, Middleton S, Wilson PH. Acute single channel EEG predictors of cognitive function after stroke. PLoS ONE. 2017;12(10): e0185841.

Andresen EM. Criteria for assessing the tools of disability outcomes research. Arch Phys Med Rehabil. 2000. https://doi.org/10.1053/apmr.2000.20619 .

Wang Q, Markopoulos P, Yu B, Chen W, Timmermans A. Interactive wearable systems for upper body rehabilitation: a systematic review. J Neuroeng Rehabil. 2017;14(1):1–21.

Tran VD, Dario P, Mazzoleni S. Kinematic measures for upper limb robot-assisted therapy following stroke and correlations with clinical outcome measures: a review. Med Eng Phys. 2018;53:13–31. https://doi.org/10.1016/j.medengphy.2017.12.005 .

Finnigan S, Wong A, Read S. Defining abnormal slow EEG activity in acute ischaemic stroke: Delta/alpha ratio as an optimal QEEG index. Clin Neurophysiol. 2016;127(2):1452–9. https://doi.org/10.1016/j.clinph.2015.07.014 .

Carter AR, Shulman GL, Corbetta M. Why use a connectivity-based approach to study stroke and recovery of function? Neuroimage. 2012;62(4):2271–80.

van der Velde B, Haartsen R, Kemner C. Test-retest reliability of EEG network characteristics in infants. Brain Behav. 2019;9(5):1–10.

Gennaro F, de Bruin ED. A pilot study assessing reliability and age-related differences in corticomuscular and intramuscular coherence in ankle dorsiflexors during walking. Physiol Rep. 2020;8(4):1–12.

Brihmat N, Loubinoux I, Castel-Lacanal E, Marque P, Gasq D. Kinematic parameters obtained with the ArmeoSpring for upper-limb assessment after stroke: a reliability and learning effect study for guiding parameter use. J Neuroeng Rehabil. 2020;17(1):130. https://doi.org/10.1186/s12984-020-00759-2 .

Dewald JPA, Ellis MD, Acosta AM, McPherson JG, Stienen AHA. Implementation of impairment- based neurorehabilitation devices and technologies following brain injury. Neurorehabilitation technology, 2nd edn. 2016. 375–392 p.

Subramanian SK, Yamanaka J, Chilingaryan G, Levin MF. Validity of movement pattern kinematics as measures of arm motor impairment poststroke. Stroke. 2010;41(10):2303–8.

Fayers PM, Machin D. Quality of life: the assessment, analysis and reporting of patient‐reported outcomes . John Wiley & Sons, Incorporated. 2016;3:89-124.

Alt Murphy M, Häger CK. Kinematic analysis of the upper extremity after stroke—how far have we reached and what have we grasped? Phys Ther Rev. 2015;20(3):137–55.

Shishov N, Melzer I, Bar-Haim S. Parameters and measures in assessment of motor learning in neurorehabilitation; a systematic review of the literature. Front Hum Neurosci. 2017. https://doi.org/10.3389/fnhum.2017.00082 .

Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med. 2016;15(2):155–63. https://doi.org/10.1016/j.jcm.2016.02.012 .

Angel RW. Electromyographic patterns during ballistic movement of normal and spastic limbs. Brain Res. 1975;99(2):387–92.

McLellan DL. C0-contraction and stretch reflexes in spasticity during treatment with baclofen. J Neurol Neurosurg Psychiatry. 1977;40(1):30–8.

Dewald JPA, Pope PS, Given JD, Buchanan TS, Rymer WZ. Abnormal muscle coactivation patterns during isometric torque generation at the elbow and shoulder in hemiparetic subjects. Brain. 1995;118(2):495–510. https://doi.org/10.1093/brain/118.2.495 .

Wilkins KB, Yao J, Owen M, Karbasforoushan H, Carmona C, Dewald JPA. Limited capacity for ipsilateral secondary motor areas to support hand function post-stroke. J Physiol. 2020;598(11):2153–67. https://doi.org/10.1113/JP279377 .

Agius Anastasi A, Falzon O, Camilleri K, Vella M, Muscat R. Brain symmetry index in healthy and stroke patients for assessment and prognosis. Stroke Res Treat. 2017;30(2017):1–9.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339: b2700.

Colombo R, Pisano F, Micera S, Mazzone A, Delconte C, Carrozza MC, et al. Assessing mechanisms of recovery during robot-aided neurorehabilitation of the upper limb. Neurorehabil Neural Repair. 2008;22(1):50–63. https://doi.org/10.1177/1545968307303401 .

Keller U, Schölch S, Albisser U, Rudhe C, Curt A, Riener R, et al. Robot-assisted arm assessments in spinal cord injured patients: a consideration of concept study. PLoS One. 2015;10(5):e0126948. https://doi.org/10.1371/journal.pone.0126948 .

Mostafavi SM. Computational models for improved diagnosis and prognosis of stroke using robot-based biomarkers. 2016. http://hdl.handle.net/1974/14563 .

Bosecker C, Dipietro L, Volpe B, Krebs HI. Kinematic robot-based evaluation scales and clinical counterparts to measure upper limb motor performance in patients with chronic stroke. Neurorehabil Neural Repair. 2010;24(1):62–9.

Balasubramanian S, Melendez-Calderon A, Roby-Brami A, Burdet E. On the analysis of movement smoothness. J Neuroeng Rehabil. 2015;12(1):112. https://doi.org/10.1186/s12984-015-0090-9 .

Rohrer B, Fasoli S, Krebs HI, Hughes R, Volpe B, Frontera WR, et al. Movement smoothness changes during stroke recovery. J Neurosci. 2002;22(18):8297–304.

Mobini A, Behzadipour S, Saadat M. Test-retest reliability of Kinect’s measurements for the evaluation of upper body recovery of stroke patients. Biomed Eng Online. 2015;14(1):1–14.

Zariffa J, Myers M, Coahran M, Wang RH. Smallest real differences for robotic measures of upper extremity function after stroke: implications for tracking recovery. J Rehabil Assist Technol Eng. 2018;5:205566831878803. https://doi.org/10.1177/2055668318788036 .

Elovic E, Brashear A. Spasticity : diagnosis and management. New York: Demos Medical; 2011. http://ida.lib.uidaho.edu:2048/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=e000xna&AN=352265&site=ehost-live&scope=site .

Centen A, Lowrey CR, Scott SH, Yeh TT, Mochizuki G. KAPS (kinematic assessment of passive stretch): a tool to assess elbow flexor and extensor spasticity after stroke using a robotic exoskeleton. J Neuroeng Rehabil. 2017;14(1):1–13.

Sin M, Kim WS, Cho K, Cho S, Paik NJ. Improving the test-retest and inter-rater reliability for stretch reflex measurements using an isokinetic device in stroke patients with mild to moderate elbow spasticity. J Electromyogr Kinesiol. 2017;2018(39):120–7. https://doi.org/10.1016/j.jelekin.2018.01.012 .

Germanotta M, Cruciani A, Pecchioli C, Loreti S, Spedicato A, Meotti M, et al. Reliability, validity and discriminant ability of the instrumental indices provided by a novel planar robotic device for upper limb rehabilitation. J Neuroeng Rehabil. 2018;15(1):1–14.

Wagner JM, Rhodes JA, Patten C. Reproducibility and minimal detectable change of three-dimensional kinematic analysis of reaching tasks in people with hemiparaesis after stroke. 2008. https://doi.org/10.2522/ptj.20070255 .

Semrau JA, Herter TM, Scott SH, Dukelow SP. Inter-rater reliability of kinesthetic measurements with the KINARM robotic exoskeleton. J Neuroeng Rehabil. 2017;14(1):1–10.

Lin CH, Chou LW, Wei SH, Lieu FK, Chiang SL, Sung WH. Validity and reliability of a novel device for bilateral upper extremity functional measurements. Comput Methods Programs Biomed. 2014;114(3):315–23. https://doi.org/10.1016/j.cmpb.2014.02.012 .

Wolf S, Butler A, Alberts J, Kim M. Contemporary linkages between EMG, kinetics and stroke. J Electromyogr Kinesiol. 2005;15(3):229–39.

Iyer KK. Effective assessments of electroencephalography during stroke recovery : contemporary approaches and considerations. J Neurophysiol. 2017;118(5):2521–5.

Liu J, Sheng Y, Liu H. Corticomuscular coherence and its applications: a review. Front Hum Neurosci. 2019;13(March):1–16.

Pan LLH, Yang WW, Kao CL, Tsai MW, Wei SH, Fregni F, et al. Effects of 8-week sensory electrical stimulation combined with motor training on EEG-EMG coherence and motor function in individuals with stroke. Sci Rep. 2018;8(1):1–10.

Wu J, Quinlan EB, Dodakian L, McKenzie A, Kathuria N, Zhou RJ, et al. Connectivity measures are robust biomarkers of cortical function and plasticity after stroke. Brain. 2015;138(8):2359–69.

Mrachacz-Kersting N, Jiang N, Thomas Stevenson AJ, Niazi IK, Kostic V, Pavlovic A, et al. Efficient neuroplasticity induction in chronic stroke patients by an associative brain-computer interface. J Neurophysiol. 2016;115(3):1410–21.

Bentes C, Peralta AR, Viana P, Martins H, Morgado C, Casimiro C, et al. Quantitative EEG and functional outcome following acute ischemic stroke. Clin Neurophysiol. 2018;129(8):1680–7.

Leon-carrion J, Martin-rodriguez JF, Damas-lopez J, Manuel J, Dominguez-morales MR. Delta–alpha ratio correlates with level of recovery after neurorehabilitation in patients with acquired brain injury. Clin Neurophysiol. 2009;120(6):1039–45. https://doi.org/10.1016/j.clinph.2009.01.021 .

Finnigan S, van Putten MJAM. EEG in ischaemic stroke: qEEG can uniquely inform (sub-)acute prognoses and clinical management. Clin Neurophysiol. 2013;124(1):10–9.

Trujillo P, Mastropietro A, Scano A, Chiavenna A, Mrakic-Sposta S, Caimmi M, et al. Quantitative EEG for predicting upper limb motor recovery in chronic stroke robot-assisted rehabilitation. IEEE Trans Neural Syst Rehabil Eng. 2017;25(7):1058–67.

Jordan K. Emergency EEG and continuous EEG monitoring in acute ischemic stroke. Clin Neurophysiol. 2004;21(5):341–52.

Comani S, Velluto L, Schinaia L, Cerroni G, Serio A, Buzzelli S, et al. Monitoring neuro-motor recovery from stroke with high-resolution EEG, robotics and virtual reality: a proof of concept. IEEE Trans Neural Syst Rehabil Eng. 2015;23(6):1106–16.

Espenhahn S, de Berker AO, van Wijk BCM, Rossiter HE, Ward NS. Movement-related beta oscillations show high intra-individual reliability. Neuroimage. 2017;147:175–85. https://doi.org/10.1016/j.neuroimage.2016.12.025 .

Vázquez-Marrufo M, Galvao-Carmona A, Benítez Lugo ML, Ruíz-Peña JL, Borges Guerra M, Izquierdo AG. Retest reliability of individual alpha ERD topography assessed by human electroencephalography. PLoS ONE. 2017;12(10):1–16.

Dubovik S, Ptak R, Aboulafia T, Magnin C, Gillabert N, Allet L, et al. EEG alpha band synchrony predicts cognitive and motor performance in patients with ischemic stroke. In: Behavioural Neurology. Hindawi Limited; 2013. p. 187–9.

Sheorajpanday RVAA, Nagels G, Weeren AJTMTM, Putten MJAMV, Deyn PPD, van Putten MJAM, et al. Quantitative EEG in ischemic stroke: correlation with functional status after 6 months. Clin Neurophysiol. 2011;122(5):874–83. https://doi.org/10.1016/j.clinph.2010.07.028 .

De Vico Fallani F, Astolfi L, Cincotti F, Mattia D, La Rocca D, Maksuti E, et al. Evaluation of the brain network organization from EEG signals: a preliminary evidence in stroke patient. In: Anatomical Record. 2009. p. 2023–31.

Westlake KP, Nagarajan SS. Functional connectivity in relation to motor performance and recovery after stroke. Front Syst Neurosci. 2011;18(5):8.

Caliandro P, Vecchio F, Miraglia F, Reale G, Della Marca G, La Torre G, et al. Small-world characteristics of cortical connectivity changes in acute stroke. Neurorehabil Neural Repair. 2017;31(1):81–94.

Eldeeb S, Akcakaya M, Sybeldon M, Foldes S, Santarnecchi E, Pascual-Leone A, et al. EEG-based functional connectivity to analyze motor recovery after stroke: a pilot study. Biomed Signal Process Control. 2019;49:419–26.

Myers LJ, O’Malley M. The relationship between human cortico-muscular coherence and rectified EMG. In: International IEEE/EMBS Conference on Neural Engineering, NER. IEEE Computer Society; 2003. p. 289–92.

Braun C, Staudt M, Schmitt C, Preissl H, Birbaumer N, Gerloff C. Crossed cortico-spinal motor control after capsular stroke. Eur J Neurosci. 2007;25(9):2935–45.

Larsen LH, Zibrandtsen IC, Wienecke T, Kjaer TW, Christensen MS, Nielsen JB, et al. Corticomuscular coherence in the acute and subacute phase after stroke. Clin Neurophysiol. 2017;128(11):2217–26.

Ang KK, Chua KSG, Phua KS, Wang C, Chin ZY, Kuah CWK, et al. A randomized controlled trial of EEG-based motor imagery brain-computer interface robotic rehabilitation for stroke. Clin EEG Neurosci. 2015;46(4):310–20.

Liu S, Guo J, Meng J, Wang Z, Yao Y, Yang J, et al. Abnormal EEG complexity and functional connectivity of brain in patients with acute thalamic ischemic stroke. Comput Math Methods Med. 2016;14(2016):1–9.

CAS   Google Scholar  

Sun R, Wong W, Wang J, Tong RK. Changes in electroencephalography complexity using a brain computer interface-motor observation training in chronic stroke patients : a fuzzy approximate entropy analysis. Front Hum Neurosci. 2017;5(11):444.

Auriat AM, Neva JL, Peters S, Ferris JK, Boyd LA. A review of transcranial magnetic stimulation and multimodal neuroimaging to characterize post-stroke neuroplasticity. Front Neurol. 2015;6:1–20.

Niedermeyer E, Schomer DL, Lopes da Silva FH. Niedermeyer’s electroencephalography: basic principles, clinical applications, and related fields, 6th edn. Philadelphia: Lippincott Williams & Wilkins.; 2011.

Foreman B, Claasen J. Update in intensive care and emergency medicine. Update in intensive care and emergency medicine. Springer Berlin Heidelberg; 2012.

Tolonen U, Ahonen A, Kallanranta T, Hokkanen E. Non-invasive external regional measurement of cerebral circulation time changes in supratentorial infarctions using pertechnetate. Stroke. 1981;12(4):437–44.

Saes M, Zandvliet SB, Andringa AS, Daffertshofer A, Twisk JWR, Meskers CGM, et al. Is resting-state EEG longitudinally associated with recovery of clinical neurological impairments early poststroke? A prospective cohort study. Neurorehabil Neural Repair. 2020;34(5):389–402.

Rogers J, Middleton S, Wilson PH, Johnstone SJ. Predicting functional outcomes after stroke: an observational study of acute single-channel EEG. Top Stroke Rehabil. 2020;27(3):161–72. https://doi.org/10.1080/10749357.2019.1673576 .

Sale P, Infarinato F, Lizio R, Babiloni C. Electroencephalographic markers of robot-aided therapy in stroke patients for the evaluation of upper limb rehabilitation. Rehabil Res. 2015;38(4):294–305.

Sheorajpanday RVA, Nagels G, Weeren AJTM, De Surgeloose D, De Deyn PP, De DPP. Additional value of quantitative EEG in acute anterior circulation syndrome of presumed ischemic origin. Clin Neurophysiol. 2010;121(10):1719–25.

Saes M, Meskers CGM, Daffertshofer A, van Wegen EEH, Kwakkel G. Are early measured resting-state EEG parameters predictive for upper limb motor impairment six months poststroke? Clin Neurophysiol. 2021;132(1):56–62. https://doi.org/10.1016/j.clinph.2020.09.031 .

Saes M, Meskers CGM, Daffertshofer A, de Munck JC, Kwakkel G, van Wegen EEH. How does upper extremity Fugl-Meyer motor score relate to resting-state EEG in chronic stroke? A power spectral density analysis. Clin Neurophysiol. 2019;130(5):856–62. https://doi.org/10.1016/j.clinph.2019.01.007 .

Nolte G, Bai O, Mari Z, Vorbach S, Hallett M. Identifying true brain interaction from EEG data using the imaginary part of coherency. Clin Neurophysiol. 2004;115(10):2292–307.

Stam CJ, Nolte G, Daffertshofer A. Phase lag index: assessment of functional connectivity from multi channel EEG and MEG with diminished bias from common sources. Hum Brain Mapp. 2007;28(11):1178–93.

Bullmore E, Sporns O. Complex brain networks: graph theoretical analysis of structural and functional systems. Nat Rev Neurosci. 2009;10(3):186–98.

Baccalá LA, Sameshima K. Partial directed coherence: a new concept in neural structure determination. Biol Cybern. 2001;84(6):463–74.

Baccalá LA, Sameshima K, Takahashi D. Generalized partial directed coherence. Int Conf Digit Signal Process. 2007;3:163–6.

Schelter B, Timmer J, Eichler M. Assessing the strength of directed influences among neural signals using renormalized partial directed coherence. J Neurosci Methods. 2009;179(1):121–30.

Kamiński M, Ding M, Truccolo WA, Bressler SL. Evaluating causal relations in neural systems: Granger causality, directed transfer function and statistical assessment of significance. Biol Cybern. 2001;85(2):145–57.

Korzeniewska A, Mańczak M, Kamiński M, Blinowska KJ, Kasicki S. Determination of information flow direction among brain structures by a modified directed transfer function (dDTF) method. J Neurosci Methods. 2003;125(1–2):195–207.

Fornito A, Bullmore ET, Zalesky A. Fundamentals of brain network analysis. Cambridge: Academic Press; 2016.

Philips GR, Daly JJ, Príncipe JC. Topographical measures of functional connectivity as biomarkers for post-stroke motor recovery. J Neuroeng Rehabil. 2017;14(1):67.

Pichiorri F, Petti M, Caschera S, Astolfi L, Cincotti F, Mattia D. An EEG index of sensorimotor interhemispheric coupling after unilateral stroke: clinical and neurophysiological study. Eur J Neurosci. 2018;47(2):158–63.

Hoshino T, Oguchi K, Inoue K, Hoshino A, Hoshiyama M. Relationship between upper limb function and functional neural connectivity among motor related-areas during recovery stage after stroke. Top Stroke Rehabil. 2020;27(1):57–66. https://doi.org/10.1080/10749357.2019.1658429 .

Hordacre B, Goldsworthy MR, Welsby E, Graetz L, Ballinger S, Hillier S. Resting state functional connectivity is associated with motor pathway integrity and upper-limb behavior in chronic stroke. Neurorehabil Neural Repair. 2020;34(6):547–57.

Riahi N, Vakorin VA, Menon C. Estimating Fugl-Meyer upper extremity motor score from functional-connectivity measures. IEEE Trans Neural Syst Rehabil Eng. 2020;28(4):860–8.

Gwin JT, Ferris DP. Beta- and gamma-range human lower limb corticomuscular coherence. Front Hum Neurosci. 2012;11(6):258.

Zheng Y, Peng Y, Xu G, Li L, Wang J. Using corticomuscular coherence to reflect function recovery of paretic upper limb after stroke: a case study. Front Neurol. 2018;10(8):728.

Rossiter HE, Eaves C, Davis E, Boudrias MH, Park CH, Farmer S, et al. Changes in the location of cortico-muscular coherence following stroke. NeuroImage Clin. 2013;2(1):50–5.

Mima T, Toma K, Koshy B, Hallett M. Coherence between cortical and muscular activities after subcortical stroke. Stroke. 2001;32(11):2597–601.

Krauth R, Schwertner J, Vogt S, Lindquist S, Sailer M, Sickert A, et al. Cortico-muscular coherence is reduced acutely post-stroke and increases bilaterally during motor recovery: a pilot study. Front Neurol. 2019;20(10):126.

Bao SC, Wong WW, Leung TW, Tong KY. Low gamma band cortico-muscular coherence inter-hemisphere difference following chronic stroke. In: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS. Institute of Electrical and Electronics Engineers Inc.; 2018. p. 247–50.

von Carlowitz-Ghori K, Bayraktaroglu Z, Hohlefeld FU, Losch F, Curio G, Nikulin VV. Corticomuscular coherence in acute and chronic stroke. Clin Neurophysiol. 2014;125(6):1182–91.

Chen X, Xie P, Zhang Y, Chen Y, Cheng S, Zhang L. Abnormal functional corticomuscular coupling after stroke. NeuroImage Clin. 2018;19:147–59. https://doi.org/10.1016/j.nicl.2018.04.004 .

Curado MR, Cossio EG, Broetz D, Agostini M, Cho W, Brasil FL, et al. Residual upper arm motor function primes innervation of paretic forearm muscles in chronic stroke after brain-machine interface (BMI) training. PLoS ONE. 2015;10(10):1–18.

Guo Z, Qian Q, Wong K, Zhu H, Huang Y, Hu X, et al. Altered corticomuscular coherence (CMCoh) pattern in the upper limb during finger movements after stroke. Front Neurol. 2020. https://doi.org/10.3389/fneur.2020.00410 .

Bruton A, Conway JH, Holgate ST. Reliability: what is it, and how is it measured? Physiotherapy. 2000;86(2):94–9.

Colombo R, Cusmano I, Sterpi I, Mazzone A, Delconte C, Pisano F. Test-retest reliability of robotic assessment measures for the evaluation of upper limb recovery. IEEE Trans Neural Syst Rehabil Eng. 2014;22(5):1020–9.

Costa V, Ramírez Ó, Otero A, Muñoz-García D, Uribarri S, Raya R. Validity and reliability of inertial sensors for elbow and wrist range of motion assessment. PeerJ. 2020;8: e9687.

Gasser T, Bächer P, Steinberg H. Test-retest reliability of spectral parameters of the EEG. Electroencephalogr Clin Neurophysiol. 1985;60(4):312–9.

Levin AR, Naples AJ, Scheffler AW, Webb SJ, Shic F, Sugar CA, et al. Day-to-day test-retest reliability of EEG profiles in children with autism spectrum disorder and typical development. Front Integr Neurosci. 2020;14:1–12.

Briels CT, Briels CT, Schoonhoven DN, Schoonhoven DN, Stam CJ, De Waal H, et al. Reproducibility of EEG functional connectivity in Alzheimer’s disease. Alzheimer’s Res Ther. 2020;12(1):1–14.

Marquetand J, Vannoni S, Carboni M, Li Hegner Y, Stier C, Braun C, et al. Reliability of magnetoencephalography and high-density electroencephalography resting-state functional connectivity metrics. Brain Connect. 2019;9(7):539–53.

Lowrey CR, Blazevski B, Marnet J-L, Bretzke H, Dukelow SP, Scott SH. Robotic tests for position sense and movement discrimination in the upper limb reveal that they each are highly reproducible but not correlated in healthy individuals. J Neuroeng Rehabil. 2020;17(1):103. https://doi.org/10.1186/s12984-020-00721-2 .

Simmatis LER, Early S, Moore KD, Appaqaq S, Scott SH. Statistical measures of motor, sensory and cognitive performance across repeated robot-based testing. J Neuroeng Rehabil. 2020;17(1):86. https://doi.org/10.1186/s12984-020-00713-2 .

Download references

Acknowledgements

The authors would like to thank Stephen Goodwin and Aaron I. Feinstein for their contributions to the collection and organization of references on robotic systems, measurements, and metrics.

This work was funded by the National Science Foundation (Award #1532239) and the Eunice Kennedy Shriver National Institute of Child Health & Human Development of the National Institutes of Health (Award #K12HD073945). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Science Foundation or the National Institutes of Health.

Author information

Authors and Affiliations

Mechanical Engineering Department, University of Idaho, Moscow, ID, USA

Rene M. Maura, Eric T. Wolbrecht & Joel C. Perry

Engineering and Physics Department, Whitworth University, Spokane, WA, USA

Richard E. Stevens

College of Medicine, Washington State University, Spokane, WA, USA

Douglas L. Weeks

Electrical Engineering Department, University of Idaho, Moscow, ID, USA

Sebastian Rueda Parra


Contributions

RM and SRP drafted the manuscript and performed the literature search. EW, JP, RS, and DW provided concepts, edited, and revised the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Rene M. Maura.

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

See Table 11 .

See Table 12 .

See Table 13 .

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Maura, R.M., Rueda Parra, S., Stevens, R.E. et al. Literature review of stroke assessment for upper-extremity physical function via EEG, EMG, kinematic, and kinetic measurements and their reliability. J NeuroEngineering Rehabil 20, 21 (2023). https://doi.org/10.1186/s12984-023-01142-7


Received: 27 May 2021

Accepted: 19 January 2023

Published: 15 February 2023

DOI: https://doi.org/10.1186/s12984-023-01142-7


Keywords

  • Reliability
  • Robot-assisted therapy
  • Exoskeleton
  • Neurological assessment
  • Rehabilitation
  • Motor function

Journal of NeuroEngineering and Rehabilitation

ISSN: 1743-0003


  12. Literature Reviews, Theoretical Frameworks, and Conceptual Frameworks

    However, a model depicting the study design does not serve the same role as a conceptual framework. Researchers need to avoid conflating these constructs by differentiating the researchers' conceptual framework that guides the study from the research design, when applicable. ... Writing literature reviews: A guide for students of the social ...

  13. Technology acceptance model: a literature review from 1986 to 2013

    A review of prior relevant literature is an essential feature of any scientific study. The effective review creates a firm foundation for advancing knowledge; it facilitates theory development, closes areas where a plethora of research exists, and uncovers areas where research is needed [17, 89].To identify scientific publications that aim to investigate the TAM, a literature review that ...

  14. Recent advances in deep learning models: a systematic literature review

    In recent years, deep learning has evolved as a rapidly growing and stimulating field of machine learning and has redefined state-of-the-art performances in a variety of applications. There are multiple deep learning models that have distinct architectures and capabilities. Up to the present, a large number of novel variants of these baseline deep learning models is proposed to address the ...

  15. What is a Literature Review? How to Write It (with Examples)

    A literature review is a critical analysis and synthesis of existing research on a particular topic. It provides an overview of the current state of knowledge, identifies gaps, and highlights key findings in the literature. 1 The purpose of a literature review is to situate your own research within the context of existing scholarship ...

  16. PDF LITERATURE REVIEWS

    WRITING A TARGETED LITERATURE REVIEW a targeted literature review is NOT: ¡ a sophisticated evaluation of the entire literature or literatures related to your topic ¡ a set of thinly connected summaries of important related works haphazardly selected from many subfields a targeted literature review IS: ¡ a carefully curated set of sources from a small number of subfield literatures

  17. Writing the Review

    An example outline for a Literature Review might look like this: Introduction. Background information on the topic & definitions; ... As you conduct your research, you will likely read many sources that model the same kind of literature review that you are researching and writing. While your original intent in reading those sources is likely to ...

  18. 5. The Literature Review

    A literature review may consist of simply a summary of key sources, but in the social sciences, a literature review usually has an organizational pattern and combines both summary and synthesis, often within specific conceptual categories.A summary is a recap of the important information of the source, but a synthesis is a re-organization, or a reshuffling, of that information in a way that ...

  19. Chapter 9 Methods for Literature Reviews

    Literature reviews play a critical role in scholarship because science remains, first and foremost, a cumulative endeavour (vom Brocke et al., 2009). As in any academic discipline, rigorous knowledge syntheses are becoming indispensable in keeping up with an exponentially growing eHealth literature, assisting practitioners, academics, and graduate students in finding, evaluating, and ...

  20. The Constructs of a Business Model Redefined: A Half-Century Journey

    The "Literature Review" section provides a synthesized overview of the available literature, focusing on the emergence and popularity of the business model concept in academic literature as well as on business model definition. ... First, although more than half a century has passed since the first appearance of the term business model in ...

  21. How to Write a Thematic Literature Review: A Beginner's Guide

    When writing a thematic literature review, go through different literature review sections of published research work and understand the subtle nuances associated with this approach. Identify Themes: Analyze the literature to identify recurring themes or topics relevant to your research question. Categorize the bibliography by dividing them ...

  22. How to Analyze Literature Reviews with Four Models

    3. Theoretical model. Be the first to add your personal experience. 4. Methodological model. Be the first to add your personal experience. 5. Here's what else to consider. Be the first to add ...

  23. Model Programs Guide

    Model Programs Guide Literature Reviews provide practitioners and policymakers with relevant research and evaluations for several youth-related topics and programs. Afterschool Programs (2010) Alternatives to Detention and Confinement (2014) Age Boundaries of the Juvenile Justice System (2024) Arts-Based Programs and Arts Therapies for At-Risk ...

  24. The C.A.R.S. Model

    The Creating a Research Space [C.A.R.S.] Model was developed by John Swales based upon his analysis of journal articles representing a variety of discipline-based writing practices. His model attempts to explain and describe the organizational pattern of writing the introduction to scholarly research studies. ... this is not a literature review ...

  25. Towards a Consensus Model: Literature Review of How Science Teachers

    This chapter presents a systematic review of the science education literature to identify how researchers investigate science teachers' pedagogical content knowledge (PCK). Specifically, we focus on empirical studies of individual science teachers' PCK...

  26. PDF Large Language Model for Vulnerability Detection and Repair: Literature

    Large Language Model for Vulnerability Detection and Repair: Literature Review and Roadmap Xin Zhou†, Sicong Cao‡, Xiaobing Sun‡, and David Lo† †School of Computing and Information Systems, Singapore Management University Singapore ‡School of Information Engineering, Yangzhou University China [email protected],[email protected]

  27. Literature review of stroke assessment for upper-extremity physical

    This paper reviews literature (2000-2021) on sensor-based measures and metrics for upper-limb biomechanical and electrophysiological (neurological) assessment, which have been shown to correlate with clinical test outcomes for motor assessment. ... along with model, type of agreement, and confidence intervals, when reported. A total of 60 ...

  28. MTL-AraBERT: An Enhanced Multi-Task Learning Model for Arabic ...

    Aspect-based sentiment analysis (ABSA) is a fine-grained type of sentiment analysis; it works on an aspect level. It mainly focuses on extracting aspect terms from text or reviews, categorizing the aspect terms, and classifying the sentiment polarities toward each aspect term and aspect category. Aspect term extraction (ATE) and aspect category detection (ACD) are interdependent and closely ...