
How to Write Recommendations in Research | Examples & Tips

Published on September 15, 2022 by Tegan George. Revised on July 18, 2023.

Recommendations in research are a crucial component of your discussion section and the conclusion of your thesis, dissertation, or research paper.

As you conduct your research and analyze the data you collected, perhaps there are ideas or results that don’t quite fit the scope of your research topic. Or your results may suggest further implications, or causal relationships between previously studied variables, beyond what is covered in the extant research.



Recommendations for future research should be:

  • Concrete and specific
  • Supported with a clear rationale
  • Directly connected to your research

Overall, strive to highlight ways other researchers can reproduce or replicate your results to draw further conclusions, and suggest different directions that future research can take, if applicable.

Relatedly, when making these recommendations, avoid:

  • Undermining your own work; instead, offer suggestions for how future studies can build upon it
  • Suggesting recommendations that are actually needed to complete your argument; instead, ensure that your research stands on its own merits
  • Using recommendations as a place for self-criticism; treat them instead as a natural extension point for your work


There are many different ways to frame recommendations, but the easiest is perhaps to follow the formula of research question → conclusion → recommendation. Here’s an example.

Conclusion: An important condition for controlling many social skills is mastering language. If children have a better command of language, they can express themselves better and are better able to understand their peers. Opportunities to practice social skills are thus dependent on the development of language skills.

As a rule of thumb, try to limit yourself to only the most relevant future recommendations: ones that stem directly from your work. While you can have multiple recommendations for each research conclusion, it is also acceptable to have one recommendation that is connected to more than one conclusion.

These recommendations should be targeted at your audience, specifically toward peers or colleagues in your field who work on subjects similar to your paper or dissertation topic. They can flow directly from any limitations you found while conducting your work, offering concrete and actionable possibilities for how future research can build on anything that your own work was unable to address at the time of your writing.



While it may be tempting to present new arguments or evidence in your thesis or dissertation conclusion, especially if you have a particularly striking argument you’d like to finish your analysis with, you shouldn’t. Theses and dissertations follow a more formal structure than this.

All your findings and arguments should be presented in the body of the text (more specifically, in the discussion and results sections). The conclusion is meant to summarize and reflect on the evidence and arguments you have already presented, not introduce new ones.

The conclusion of your thesis or dissertation should include the following:

  • A restatement of your research question
  • A summary of your key arguments and/or results
  • A short discussion of the implications of your research

For a stronger dissertation conclusion, avoid including:

  • Important evidence or analysis that wasn’t mentioned in the discussion section and results section
  • Generic concluding phrases (e.g. “In conclusion …”)
  • Weak statements that undermine your argument (e.g., “There are good points on both sides of this issue.”)

Your conclusion should leave the reader with a strong, decisive impression of your work.

In a thesis or dissertation, the discussion is an in-depth exploration of the results, going into detail about the meaning of your findings and citing relevant sources to put them in context.

The conclusion is shorter and more general: it concisely answers your main research question and makes recommendations based on your overall findings.


Research Recommendations – Examples and Writing Guide


Definition:

Research recommendations refer to suggestions or advice given to someone who is looking to conduct research on a specific topic or area. These recommendations may include suggestions for research methods, data collection techniques, sources of information, and other factors that can help to ensure that the research is conducted in a rigorous and effective manner. Research recommendations may be provided by experts in the field, such as professors, researchers, or consultants, and are intended to help guide the researcher towards the most appropriate and effective approach to their research project.

Parts of Research Recommendations

Research recommendations can vary depending on the specific project or area of research, but typically they will include some or all of the following parts:

  • Research question or objective: This is the overarching goal or purpose of the research project.
  • Research methods: This includes the specific techniques and strategies that will be used to collect and analyze data. The methods will depend on the research question and the type of data being collected.
  • Data collection: This refers to the process of gathering information or data that will be used to answer the research question. This can involve a range of different methods, including surveys, interviews, observations, or experiments.
  • Data analysis: This involves the process of examining and interpreting the data that has been collected. This can involve statistical analysis, qualitative analysis, or a combination of both.
  • Results and conclusions: This section summarizes the findings of the research and presents any conclusions or recommendations based on those findings.
  • Limitations and future research: This section discusses any limitations of the study and suggests areas for future research that could build on the findings of the current project.

How to Write Research Recommendations

Writing research recommendations involves providing specific suggestions or advice to a researcher on how to conduct their study. Here are some steps to consider when writing research recommendations:

  • Understand the research question: Before writing research recommendations, it is important to have a clear understanding of the research question and the objectives of the study. This will help to ensure that the recommendations are relevant and appropriate.
  • Consider the research methods: Consider the most appropriate research methods that could be used to collect and analyze data that will address the research question. Identify the strengths and weaknesses of the different methods and how they might apply to the specific research question.
  • Provide specific recommendations: Provide specific and actionable recommendations that the researcher can implement in their study. This can include recommendations related to sample size, data collection techniques, research instruments, data analysis methods, or other relevant factors.
  • Justify recommendations: Justify why each recommendation is being made and how it will help to address the research question or objective. It is important to provide a clear rationale for each recommendation to help the researcher understand why it is important.
  • Consider limitations and ethical considerations: Consider any limitations or potential ethical considerations that may arise in conducting the research. Provide recommendations for addressing these issues or mitigating their impact.
  • Summarize recommendations: Provide a summary of the recommendations at the end of the report or document, highlighting the most important points and emphasizing how the recommendations will contribute to the overall success of the research project.

Example of Research Recommendations

Example research recommendations for students:

  • Further investigate the effects of X on Y by conducting a larger-scale randomized controlled trial with a diverse population.
  • Explore the relationship between A and B by conducting qualitative interviews with individuals who have experience with both.
  • Investigate the long-term effects of intervention C by conducting a follow-up study with participants one year after completion.
  • Examine the effectiveness of intervention D in a real-world setting by conducting a field study in a naturalistic environment.
  • Compare and contrast the results of this study with those of previous research on the same topic to identify any discrepancies or inconsistencies in the findings.
  • Expand upon the limitations of this study by addressing potential confounding variables and conducting further analyses to control for them.
  • Investigate the relationship between E and F by conducting a meta-analysis of existing literature on the topic.
  • Explore the potential moderating effects of variable G on the relationship between H and I by conducting subgroup analyses.
  • Identify potential areas for future research based on the gaps in current literature and the findings of this study.
  • Conduct a replication study to validate the results of this study and further establish the generalizability of the findings.

Applications of Research Recommendations

Research recommendations are important as they provide guidance on how to improve or solve a problem. The applications of research recommendations are numerous and can be used in various fields. Some of the applications of research recommendations include:

  • Policy-making: Research recommendations can be used to develop policies that address specific issues. For example, recommendations from research on climate change can be used to develop policies that reduce carbon emissions and promote sustainability.
  • Program development: Research recommendations can guide the development of programs that address specific issues. For example, recommendations from research on education can be used to develop programs that improve student achievement.
  • Product development: Research recommendations can guide the development of products that meet specific needs. For example, recommendations from research on consumer behavior can be used to develop products that appeal to consumers.
  • Marketing strategies: Research recommendations can be used to develop effective marketing strategies. For example, recommendations from research on target audiences can be used to develop marketing strategies that effectively reach specific demographic groups.
  • Medical practice: Research recommendations can guide medical practitioners in providing the best possible care to patients. For example, recommendations from research on treatments for specific conditions can be used to improve patient outcomes.
  • Scientific research: Research recommendations can guide future research in a specific field. For example, recommendations from research on a specific disease can be used to guide future research on treatments and cures for that disease.

Purpose of Research Recommendations

The purpose of research recommendations is to provide guidance on how to improve or solve a problem based on the findings of a study. They are typically made at the end of a research project and are grounded in the conclusions drawn from the data. Their aim is to offer actionable advice that helps individuals or organizations make informed decisions, develop effective strategies, or implement changes that address the issues identified in the research.

The main purpose of research recommendations is to facilitate the transfer of knowledge from researchers to practitioners, policymakers, or other stakeholders who can benefit from the research findings. Recommendations can help bridge the gap between research and practice by providing specific actions that can be taken based on the research results. By providing clear and actionable recommendations, researchers can help ensure that their findings are put into practice, leading to improvements in various fields, such as healthcare, education, business, and public policy.

Characteristics of Research Recommendations

Research recommendations are a key component of research studies and are intended to provide practical guidance on how to apply research findings to real-world problems. The following are some of the key characteristics of research recommendations:

  • Actionable: Research recommendations should be specific and actionable, providing clear guidance on what actions should be taken to address the problem identified in the research.
  • Evidence-based: Research recommendations should be based on the findings of the research study, supported by the data collected and analyzed.
  • Contextual: Research recommendations should be tailored to the specific context in which they will be implemented, taking into account the unique circumstances and constraints of the situation.
  • Feasible: Research recommendations should be realistic and feasible, taking into account the available resources, time constraints, and other factors that may impact their implementation.
  • Prioritized: Research recommendations should be prioritized based on their potential impact and feasibility, with the most important recommendations given the highest priority.
  • Communicated effectively: Research recommendations should be communicated clearly and effectively, using language that is understandable to the target audience.
  • Evaluated: Research recommendations should be evaluated to determine their effectiveness in addressing the problem identified in the research, and to identify opportunities for improvement.

Advantages of Research Recommendations

Research recommendations have several advantages, including:

  • Providing practical guidance: Research recommendations provide practical guidance on how to apply research findings to real-world problems, helping to bridge the gap between research and practice.
  • Improving decision-making: Research recommendations help decision-makers make informed decisions based on the findings of research, leading to better outcomes and improved performance.
  • Enhancing accountability: Research recommendations can help enhance accountability by providing clear guidance on what actions should be taken, and by providing a basis for evaluating progress and outcomes.
  • Informing policy development: Research recommendations can inform the development of policies that are evidence-based and tailored to the specific needs of a given situation.
  • Enhancing knowledge transfer: Research recommendations help facilitate the transfer of knowledge from researchers to practitioners, policymakers, or other stakeholders who can benefit from the research findings.
  • Encouraging further research: Research recommendations can help identify gaps in knowledge and areas for further research, encouraging continued exploration and discovery.
  • Promoting innovation: Research recommendations can help identify innovative solutions to complex problems, leading to new ideas and approaches.

Limitations of Research Recommendations

While research recommendations have several advantages, there are also some limitations to consider. These limitations include:

  • Context-specific: Research recommendations may be context-specific and may not be applicable in all situations. Recommendations developed in one context may not be suitable for another context, requiring adaptation or modification.
  • Implementation challenges: Implementation of research recommendations may face challenges, such as lack of resources, resistance to change, or lack of buy-in from stakeholders.
  • Limited scope: Research recommendations may be limited in scope, focusing only on a specific issue or aspect of a problem, while other important factors may be overlooked.
  • Uncertainty: Research recommendations may be uncertain, particularly when the research findings are inconclusive or when the recommendations are based on limited data.
  • Bias: Research recommendations may be influenced by researcher bias or conflicts of interest, leading to recommendations that are not in the best interests of stakeholders.
  • Timing: Research recommendations may be time-sensitive, requiring timely action to be effective. Delayed action may result in missed opportunities or reduced effectiveness.
  • Lack of evaluation: Research recommendations may not be evaluated to determine their effectiveness or impact, making it difficult to assess whether they are successful or not.


Research Recommendations – Guiding policy-makers for evidence-based decision making


Research recommendations play a crucial role in guiding scholars and researchers toward fruitful avenues of exploration. In an era marked by rapid technological advancements and an ever-expanding knowledge base, refining the process of generating research recommendations becomes imperative.

But, what is a research recommendation?

Research recommendations are suggestions or advice provided to researchers to guide their study on a specific topic. They are typically given by experts in the field. Research recommendations are action-oriented and provide specific guidance for decision-makers, unlike implications, which focus on the broader significance and consequences of the research findings. However, both are crucial components of a research study.

Difference Between Research Recommendations and Implication

Although research recommendations and implications are distinct components of a research study, they are closely related. In short, implications describe what the findings mean and why they matter, whereas recommendations state what should be done next on the basis of those findings.


Types of Research Recommendations

Recommendations in research can take various forms.

These recommendations aim to assist researchers in navigating the vast landscape of academic knowledge.

Let us dive deeper to know about its key components and the steps to write an impactful research recommendation.

Key Components of Research Recommendations

The key components of research recommendations include defining the research question or objective, specifying research methods, outlining data collection and analysis processes, presenting results and conclusions, addressing limitations, and suggesting areas for future research.


Research recommendations offer various advantages and play a crucial role in ensuring that research findings contribute to positive outcomes in various fields. However, they also have a few limitations, which underscores how important a well-crafted research recommendation is to delivering those advantages.


The importance of research recommendations extends across various fields, influencing policy-making, program development, product development, marketing strategies, medical practice, and scientific research. Their purpose is to transfer knowledge from researchers to practitioners, policymakers, or stakeholders, facilitating informed decision-making and improving outcomes in different domains.

How to Write Research Recommendations?

Research recommendations can be generated through various means, including algorithmic approaches, expert opinions, or collaborative filtering techniques (a minimal sketch of the algorithmic approach appears after the steps below). Here is a step-by-step guide to developing research recommendations.

1. Understand the Research Question:

Understand the research question and objectives before writing recommendations. Also, ensure that your recommendations are relevant and directly address the goals of the study.

2. Review Existing Literature:

Familiarize yourself with the relevant existing literature to help you identify gaps and offer informed recommendations that contribute to the existing body of research.

3. Consider Research Methods:

Evaluate the appropriateness of different research methods in addressing the research question. Also, consider the nature of the data, the study design, and the specific objectives.

4. Identify Data Collection Techniques:

Gather data from diverse, authentic sources. Include information such as keywords, abstracts, authors, publication dates, and citation metrics to provide a rich foundation for analysis.

5. Propose Data Analysis Methods:

Suggest appropriate data analysis methods based on the type of data collected. Consider whether statistical analysis, qualitative analysis, or a mixed-methods approach is most suitable.

6. Consider Limitations and Ethical Considerations:

Acknowledge any limitations and potential ethical considerations of the study. Furthermore, address these limitations or mitigate ethical concerns to ensure responsible research.

7. Justify Recommendations:

Explain how your recommendation contributes to addressing the research question or objective. Provide a strong rationale to help researchers understand the importance of following your suggestions.

8. Summarize Recommendations:

Provide a concise summary at the end of the report to emphasize how following these recommendations will contribute to the overall success of the research project.

By following these steps, you can create research recommendations that are actionable and contribute meaningfully to the success of the research project.
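To make the "algorithmic approaches" mentioned above concrete, here is a minimal, hypothetical sketch that ranks related papers by abstract similarity, a simple content-based variant of the recommendation systems referred to earlier. The paper titles and abstracts are placeholders, not real data, and a production system would also draw on citation networks, co-authorship, and user feedback.

```python
# Minimal sketch: suggesting related studies by abstract similarity.
# The titles/abstracts below are placeholders for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = {
    "Paper A": "Intensive blood pressure lowering after stroke in older adults",
    "Paper B": "Mindfulness-based interventions for anxiety in young adults",
    "Paper C": "Blood pressure management in primary care stroke survivors",
}
titles = list(papers.keys())
abstracts = list(papers.values())

# Represent each abstract as a TF-IDF vector and compare them pairwise.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
similarity = cosine_similarity(tfidf)

# For a query paper, rank the remaining papers as follow-up reading candidates.
query = 0  # index of "Paper A"
ranked = sorted(
    ((similarity[query, j], titles[j]) for j in range(len(titles)) if j != query),
    reverse=True,
)
for score, title in ranked:
    print(f"{title}: similarity {score:.2f}")
```

A full collaborative filtering approach would combine such content similarity with signals about who reads or cites which papers; the sketch only shows the content-based half.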


Example of a Research Recommendation

Here is an example of a research recommendation based on a hypothetical research to improve your understanding.

Research Recommendation: Enhancing Student Learning through Integrated Learning Platforms

Background:

The research study investigated the impact of an integrated learning platform on student learning outcomes in high school mathematics classes. The findings revealed a statistically significant improvement in student performance and engagement when compared to traditional teaching methods.

Recommendation:

In light of the research findings, it is recommended that educational institutions consider adopting and integrating the identified learning platform into their mathematics curriculum. The following specific recommendations are provided:

  • Implementation of the Integrated Learning Platform:

Schools are encouraged to adopt the integrated learning platform in mathematics classrooms, ensuring proper training for teachers on its effective utilization.

  • Professional Development for Educators:

Develop and implement professional development programs to train educators in the effective use of the integrated learning platform and to address any challenges teachers may face during the transition.

  • Monitoring and Evaluation:

Establish a monitoring and evaluation system to track the impact of the integrated learning platform on student performance over time.

  • Resource Allocation:

Allocate sufficient resources, both financial and technical, to support the widespread implementation of the integrated learning platform.

By implementing these recommendations, educational institutions can harness the potential of the integrated learning platform and enhance student learning experiences and academic achievements in mathematics.

This example covers the components of a research recommendation, providing specific actions based on the research findings, identifying the target audience, and outlining practical steps for implementation.

Using AI in Research Recommendation Writing

Enhancing research recommendations is an ongoing endeavor that requires the integration of cutting-edge technologies, collaborative efforts, and ethical considerations. By embracing data-driven approaches and leveraging advanced technologies, the research community can create more effective and personalized recommendation systems. However, it is accompanied by several limitations. Therefore, it is essential to approach the use of AI in research with a critical mindset, and complement its capabilities with human expertise and judgment.

Here are some limitations of integrating AI into research recommendation writing, along with ways to counter them.

1. Data Bias

AI systems rely heavily on data for training. If the training data is biased or incomplete, the AI model may produce biased results or recommendations.

How to tackle: Regularly audit the model’s performance to identify any discrepancies, and adjust the training data and algorithms accordingly.
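As an illustration of what such an audit might look like in practice, here is a minimal, hypothetical sketch that compares a model’s accuracy across subgroups; the column names and toy data are assumptions, not part of any real system, and a genuine audit would also examine precision, recall, calibration, and group sample sizes.

```python
# Minimal sketch: auditing model accuracy across subgroups to surface possible data bias.
# The columns ("group", "label", "pred") and the toy values are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "label": [1, 0, 1, 1, 0, 0],
    "pred":  [1, 0, 0, 0, 0, 1],
})

per_group_accuracy = (
    results.assign(correct=lambda d: d["label"] == d["pred"])
           .groupby("group")["correct"]
           .mean()
)
print(per_group_accuracy)
# Large gaps between groups suggest the training data or model needs adjustment.
```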

2. Lack of Understanding of Context:

AI models may struggle to understand the nuanced context of a particular research problem. They may misinterpret information, leading to inaccurate recommendations.

How to tackle: Use AI to characterize research articles and topics, employing it to extract features such as keywords, authorship patterns, and content-based details.

3. Ethical Considerations:

AI models might stereotype certain concepts or generate recommendations that could have negative consequences for certain individuals or groups.

How to tackle: Incorporate user feedback mechanisms to reduce redundancies. Establish an ethics review process for AI models in research recommendation writing.

4. Lack of Creativity and Intuition:

AI may struggle with tasks that require a deep understanding of the underlying principles or the ability to think outside the box.

How to tackle: Employ hybrid approaches, using AI for data analysis and pattern identification to accelerate interpretation while relying on human expertise for creative and intuitive judgments.

5. Interpretability:

Many AI models, especially complex deep learning models, offer little transparency about how they arrive at a particular recommendation.

How to tackle: Prefer interpretable models such as decision trees or linear models where possible, and provide a clear explanation of the model architecture, training process, and decision-making criteria.
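A brief, hypothetical sketch of the interpretable-model suggestion above: it fits a shallow decision tree on a stand-in dataset and prints the decision rules, the kind of human-readable explanation that can accompany an AI-assisted recommendation. The iris dataset is used purely as a placeholder for real study data.

```python
# Minimal sketch: an interpretable model whose decision rules can be printed and reviewed.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text produces a human-readable view of how the model reaches a decision,
# which can be attached to any AI-assisted recommendation as an explanation.
print(export_text(model, feature_names=load_iris().feature_names))
```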

6. Dynamic Nature of Research:

Research fields are dynamic, and new information is constantly emerging. AI models may struggle to keep up with the rapidly changing landscape and may not be able to adapt to new developments.

How to tackle: Establish a feedback loop for continuous improvement. Regularly update the recommendation system based on user feedback and emerging research trends.

The integration of AI in research recommendation writing holds great promise for advancing knowledge and streamlining the research process. However, navigating these concerns is pivotal in ensuring the responsible deployment of these technologies. Researchers need to understand the responsible use of AI in research and must be aware of the ethical considerations involved.

Exploring research recommendations plays a critical role in shaping the trajectory of scientific inquiry. It serves as a compass, guiding researchers toward more robust methodologies, collaborative endeavors, and innovative approaches. Embracing these suggestions not only enhances the quality of individual studies but also contributes to the collective advancement of human understanding.

Frequently Asked Questions

The purpose of recommendations in research is to provide practical and actionable suggestions based on the study's findings, guiding future actions, policies, or interventions in a specific field or context. Recommendations bridge the gap between research outcomes and their real-world application.

To make a research recommendation, analyze your findings, identify key insights, and propose specific, evidence-based actions. Explain how the recommendations relate to the study's objectives and provide practical steps for implementation.

Begin a recommendation by succinctly summarizing the key findings of the research. Clearly state the purpose of the recommendation and its intended impact. Use direct, actionable language to convey the suggested course of action.

Implications or Recommendations in Research: What's the Difference?


High-quality research articles that get many citations contain both implications and recommendations. Implications are the impact your research makes, whereas recommendations are specific actions that can then be taken based on your findings, such as for more research or for policymaking.

Updated on August 23, 2022


That seems clear enough, but the two are commonly confused.

This confusion is especially common if you come from a so-called high-context culture in which information is often implied based on the situation, as in many Asian cultures. High-context cultures are different from low-context cultures, where information is more direct and explicit (as in North America and many European cultures).

Let's set these two straight in a low-context way; i.e., we'll be specific and direct! This is the best way to be in English academic writing because you're writing for the world.

Implications and recommendations in a research article

The standard format of STEM research articles is what's called IMRaD:

  • Introduction
  • Methods
  • Results
  • Discussion/conclusions

Some journals call for a separate conclusions section, while others have the conclusions as the last part of the discussion. You'll write these four (or five) sections in the same sequence, though, no matter the journal.

The discussion section is typically where you restate your results and how well they confirmed your hypotheses. Give readers the answer to the questions for which they're looking to you for an answer.

At this point, many researchers assume their paper is finished. After all, aren't the results the most important part? As you might have guessed, no, you're not quite done yet.

The discussion/conclusions section is where to say what happened and what should now happen

The discussion/conclusions section of every good scientific article should contain the implications and recommendations.

The implications, first of all, are the impact your results have on your specific field. A high-impact, highly cited article will also broaden the scope here and provide implications to other fields. This is what makes research cross-disciplinary.

Recommendations, however, are suggestions to improve your field based on your results.

These two aspects help the reader understand your broader content: How and why your work is important to the world. They also tell the reader what can be changed in the future based on your results.

These aspects are what editors are looking for when selecting papers for peer review.


Implications and recommendations are, thus, written at the end of the discussion section, and before the concluding paragraph. They help to “wrap up” your paper. Once your reader understands what you found, the next logical step is what those results mean and what should come next.

Then they can take the baton, in the form of your work, and run with it. That gets you cited and extends your impact!

The order of implications and recommendations also matters. Both are written after you've summarized your main findings in the discussion section. Then, those results are interpreted based on ongoing work in the field. After this, the implications are stated, followed by the recommendations.

Writing an academic research paper is a bit like running a race. Finish strong, with your most important conclusion (recommendation) at the end. Leave readers with an understanding of your work's importance. Avoid generic, obvious phrases like "more research is needed to fully address this issue." Be specific.

The main differences between implications and recommendations (table)


Now let's dig a bit deeper into actually how to write these parts.

What are implications?

Research implications tell us how and why your results are important for the field at large. They help answer the question of “what does it mean?” Implications tell us how your work contributes to your field and what it adds to it. They're used when you want to tell your peers why your research is important for ongoing theory, practice, policymaking, and for future research.

Crucially, your implications must be evidence-based. This means they must be derived from the results in the paper.

Implications are written after you've summarized your main findings in the discussion section. They come before the recommendations and before the concluding paragraph. There is no specific section dedicated to implications. They must be integrated into your discussion so that the reader understands why the results are meaningful and what they add to the field.

A good strategy is to separate your implications into types. Implications can be social, political, technological, related to policies, or others, depending on your topic. The most frequently used types are theoretical and practical. Theoretical implications relate to how your findings connect to other theories or ideas in your field, while practical implications are related to what we can do with the results.

Key features of implications

  • State the impact your research makes
  • Helps us understand why your results are important
  • Must be evidence-based
  • Written in the discussion, before recommendations
  • Can be theoretical, practical, or other (social, political, etc.)

Examples of implications

Let's take a look at some examples of research results below with their implications.

The result: one study found that spreading learning out over time improves memory more than cramming all the material at once.

The implications: This result suggests memory is better when studying is spread out over time, which could be due to memory consolidation processes.

The result: an intervention study found that mindfulness helps improve mental health if you have anxiety.

The implications: This result has implications for the role of executive functions in anxiety.

The result: a study found that musical learning helps language learning in children.

The implications: These findings suggest that language and music may work together to aid development.

What are recommendations?

As noted above, explaining how your results contribute to the real world is an important part of a successful article.

Likewise, stating how your findings can be used to improve something in future research is equally important. This brings us to the recommendations.

Research recommendations are suggestions and solutions you give for certain situations based on your results. Once the reader understands what your results mean with the implications, the next question they need to know is "what's next?"

Recommendations are calls to action on ways certain things in the field can be improved in the future based on your results. Recommendations are used when you want to convey that something different should be done based on what your analyses revealed.

Similar to implications, recommendations are also evidence-based. This means that your recommendations to the field must be drawn directly from your results.

The goal of the recommendations is to make clear, specific, and realistic suggestions to future researchers before they conduct a similar experiment. No matter what area your research is in, there will always be further research to do. Try to think about what would be helpful for other researchers to know before starting their work.

Recommendations are also written in the discussion section. They come after the implications and before the concluding paragraphs. Similar to the implications, there is usually no specific section dedicated to the recommendations. However, depending on how many solutions you want to suggest to the field, they may be written as a subsection.

Key features of recommendations

  • Statements about what can be done differently in the field based on your findings
  • Must be realistic and specific
  • Written in the discussion, after implications and before conclusions
  • Related to both your field and, preferably, a wider context to the research

Examples of recommendations

Here are some research results and their recommendations.

A meta-analysis found that actively recalling material from your memory is better than simply re-reading it.

  • The recommendation: Based on these findings, teachers and other educators should encourage students to practice active recall strategies.

A medical intervention found that daily exercise helps prevent cardiovascular disease.

  • The recommendation: Based on these results, physicians are recommended to encourage patients to exercise and walk regularly. Public health offices could also promote more walking in their communities.

A study found that many research articles do not contain the sample sizes needed to statistically confirm their findings.

  • The recommendation: To improve the current state of the field, researchers should consider doing a power analysis based on their experiment's design.
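As a minimal sketch of what that last recommendation looks like in practice, the following uses statsmodels to run an a priori power analysis for a two-group comparison. The effect size, alpha, and power values are assumed for illustration; in a real study they would come from pilot data, prior literature, and field conventions.

```python
# Minimal sketch: a priori power analysis for a two-sample t-test.
# effect_size (Cohen's d), alpha, and power are assumed illustrative values.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Required sample size per group: {n_per_group:.0f}")
```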

What else is important about implications and recommendations?

When writing recommendations and implications, be careful not to overstate the impact of your results. It can be tempting for researchers to inflate the importance of their findings and make grandiose statements about what their work means.

Remember that implications and recommendations must come directly from your results. Therefore, they must be straightforward, realistic, and plausible.

Another good thing to remember is to make sure the implications and recommendations are stated clearly and separately. Do not attach them to the endings of other paragraphs just to add them in. Use similar example phrases as those listed in the table when starting your sentences to clearly indicate when it's an implication and when it's a recommendation.

When your peers, or brand-new readers, read your paper, they shouldn't have to hunt through your discussion to find the implications and recommendations. They should be clear, visible, and understandable on their own.

That'll get you cited more, and you'll make a greater contribution to your area of science while extending the life and impact of your work.


How to formulate research recommendations

  • Polly Brown ( pbrown{at}bmjgroup.com ) , publishing manager 1 ,
  • Klara Brunnhuber , clinical editor 1 ,
  • Kalipso Chalkidou , associate director, research and development 2 ,
  • Iain Chalmers , director 3 ,
  • Mike Clarke , director 4 ,
  • Mark Fenton , editor 3 ,
  • Carol Forbes , reviews manager 5 ,
  • Julie Glanville , associate director/information service manager 5 ,
  • Nicholas J Hicks , consultant in public health medicine 6 ,
  • Janet Moody , identification and prioritisation manager 6 ,
  • Sara Twaddle , director 7 ,
  • Hazim Timimi , systems developer 8 ,
  • Pamela Young , senior programme manager 6
  • 1 BMJ Publishing Group, London WC1H 9JR,
  • 2 National Institute for Health and Clinical Excellence, London WC1V 6NA,
  • 3 Database of Uncertainties about the Effects of Treatments, James Lind Alliance Secretariat, James Lind Initiative, Oxford OX2 7LG,
  • 4 UK Cochrane Centre, Oxford OX2 7LG,
  • 5 Centre for Reviews and Dissemination, University of York, York YO10 5DD,
  • 6 National Coordinating Centre for Health Technology Assessment, University of Southampton, Southampton SO16 7PX,
  • 7 Scottish Intercollegiate Guidelines Network, Edinburgh EH2 1EN,
  • 8 Update Software, Oxford OX2 7LG
  • Correspondence to: PBrown
  • Accepted 22 September 2006

“More research is needed” is a conclusion that fits most systematic reviews. But authors need to be more specific about what exactly is required

Long awaited reports of new research, systematic reviews, and clinical guidelines are too often a disappointing anticlimax for those wishing to use them to direct future research. After many months or years of effort and intellectual energy put into these projects, authors miss the opportunity to identify unanswered questions and outstanding gaps in the evidence. Most reports contain only a less than helpful, general research recommendation. This means that the potential value of these recommendations is lost.

Current recommendations

In 2005, representatives of organisations commissioning and summarising research, including the BMJ Publishing Group, the Centre for Reviews and Dissemination, the National Coordinating Centre for Health Technology Assessment, the National Institute for Health and Clinical Excellence, the Scottish Intercollegiate Guidelines Network, and the UK Cochrane Centre, met as members of the development group for the Database of Uncertainties about the Effects of Treatments (see bmj.com for details on all participating organisations). Our aim was to discuss the state of research recommendations within our organisations and to develop guidelines for improving the presentation of proposals for further research. All organisations had found weaknesses in the way researchers and authors of systematic reviews and clinical guidelines stated the need for further research. As part of the project, a member of the Centre for Reviews and Dissemination undertook a rapid literature search to identify information on research recommendation models, which found some individual methods but no group initiatives to attempt to standardise recommendations.

Suggested format for research recommendations on the effects of treatments

Core elements

  • E Evidence (What is the current state of the evidence?)
  • P Population (What is the population of interest?)
  • I Intervention (What are the interventions of interest?)
  • C Comparison (What are the comparisons of interest?)
  • O Outcome (What are the outcomes of interest?)
  • T Time stamp (Date of recommendation)

Optional elements

  • d Disease burden or relevance
  • t Time aspect of core elements of EPICOT
  • s Appropriate study type according to local need
In January 2006, the National Coordinating Centre for Health Technology Assessment presented the findings of an initial comparative analysis of how different organisations currently structure their research recommendations. The National Institute for Health and Clinical Excellence and the National Coordinating Centre for Health Technology Assessment request authors to present recommendations in a four component format for formulating well built clinical questions around treatments: population, intervention, comparison, and outcomes (PICO). 1 In addition, the research recommendation is dated and authors are asked to provide the current state of the evidence to support the proposal.

Clinical Evidence, although not directly standardising its sections for research recommendations, presents gaps in the evidence using a slightly extended version of the PICO format: evidence, population, intervention, comparison, outcomes, and time (EPICOT). Clinical Evidence has used this inherent structure to feed research recommendations on interventions categorised as “unknown effectiveness” back to the National Coordinating Centre for Health Technology Assessment and for inclusion in the Database of Uncertainties about the Effects of Treatments (http://www.duets.nhs.uk/).

We decided to propose the EPICOT format as the basis for our statement on formulating research recommendations and tested this proposal through discussion and example. We agreed that this set of components provided enough context for formulating research recommendations without limiting researchers. In order for the proposed framework to be flexible and more widely applicable, the group discussed using several optional components when they seemed relevant or were proposed by one or more of the group members. The final outcome of discussions resulted in the proposed EPICOT+ format (box).

A recent BMJ article highlighted how lack of research hinders the applicability of existing guidelines to patients in primary care who have had a stroke or transient ischaemic attack. 2 Most research in the area had been conducted in younger patients with a recent episode and in a hospital setting. The authors concluded that “further evidence should be collected on the efficacy and adverse effects of intensive blood pressure lowering in representative populations before we implement this guidance [from national and international guidelines] in primary care.” Table 1 outlines how their recommendations could be formulated using the EPICOT+ format. The decision on whether additional research is indeed clinically and ethically warranted will still lie with the organisation considering commissioning the research.
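To make the mapping concrete, the elements of that recommendation could be laid out in the EPICOT+ structure roughly as follows. This is an illustrative reading of the quoted conclusion expressed as a simple data structure, not the wording of the published table; the optional-element values in particular are placeholders.

```python
# Illustrative sketch: the stroke/TIA recommendation above expressed in EPICOT+.
# Values are a plain-language reading of the quoted conclusion, not the published table.
epicot_plus = {
    "E (Evidence)":     "Existing trials mostly in younger, recently affected, hospital-based patients",
    "P (Population)":   "Primary care patients who have had a stroke or transient ischaemic attack",
    "I (Intervention)": "Intensive blood pressure lowering",
    "C (Comparison)":   "Current, less intensive blood pressure management",
    "O (Outcome)":      "Efficacy and adverse effects",
    "T (Time stamp)":   "Date the recommendation was made",
    # Optional elements (placeholders)
    "d (Disease burden)": "Depends on local prevalence and severity",
    "s (Study type)":     "To be decided by the commissioning organisation",
}

for element, value in epicot_plus.items():
    print(f"{element}: {value}")
```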

Table 1: Research recommendation based on a gap in the evidence identified by a cross sectional study of clinical guidelines for management of patients who have had a stroke

Table 2 shows the use of EPICOT+ for an unanswered question on the effectiveness of compliance therapy in people with schizophrenia, identified by the Database of Uncertainties about the Effects of Treatments.

Table 2: Research recommendation based on a gap in the evidence on treatment of schizophrenia identified by the Database of Uncertainties about the Effects of Treatments

Discussions around optional elements

Although the group agreed that the PICO elements should be core requirements for a research recommendation, intense discussion centred on the inclusion of factors defining a more detailed context, such as current state of evidence (E), appropriate study type (s), disease burden and relevance (d), and timeliness (t).

Initially, group members interpreted E differently. Some viewed it as the supporting evidence for a research recommendation and others as the suggested study type for a research recommendation. After discussion, we agreed that E should be used to refer to the amount and quality of research supporting the recommendation. However, the issue remained contentious as some of us thought that if a systematic review was available, its reference would sufficiently identify the strength of the existing evidence. Others thought that adding evidence to the set of core elements was important as it provided a summary of the supporting evidence, particularly as the recommendation was likely to be abstracted and used separately from the review or research that led to its formulation. In contrast, the suggested study type (s) was left as an optional element.

A research recommendation will rarely have an absolute value in itself. Its relative priority will be influenced by the burden of ill health (d), which is itself dependent on factors such as local prevalence, disease severity, relevant risk factors, and the priorities of the organisation considering commissioning the research.

Similarly, the issue of time (t) could be seen to be relevant to each of the core elements in varying ways—for example, duration of treatment, length of follow-up. The group therefore agreed that time had a subsidiary role within each core item; however, T as the date of the recommendation served to define its shelf life and therefore retained individual importance.

Applicability and usability

The proposed statement on research recommendations applies to uncertainties of the effects of any form of health intervention or treatment and is intended for research in humans rather than basic scientific research. Further investigation is required to assess the applicability of the format for questions around diagnosis, signs and symptoms, prognosis, investigations, and patient preference.

When the proposed format is applied to a specific research recommendation, the emphasis placed on the relevant part(s) of the EPICOT+ format may vary by author, audience, and intended purpose. For example, a recommendation for research into treatments for transient ischaemic attack may or may not define valid outcome measures to assess quality of life or gather data on adverse effects. Among many other factors, its implementation will also depend on the strength of current findings—that is, strong evidence may support a tightly focused recommendation whereas a lack of evidence would result in a more general recommendation.

The controversy within the group, especially around the optional components, reflects the different perspectives of the participating organisations—whether they were involved in commissioning, undertaking, or summarising research. Further issues will arise during the implementation of the proposed format, and we welcome feedback and discussion.

Summary points

No common guidelines exist for the formulation of recommendations for research on the effects of treatments

Major organisations involved in commissioning or summarising research compared their approaches and agreed on core questions

The essential items can be summarised as EPICOT+ (evidence, population, intervention, comparison, outcome, and time)

Further details, such as disease burden and appropriate study type, should be considered as required

We thank Patricia Atkinson and Jeremy Wyatt.

Contributors and sources All authors contributed to manuscript preparation and approved the final draft. NJH is the guarantor.

Competing interests None declared.



Conducting Biosocial Surveys: Collecting, Storing, Accessing, and Protecting Biospecimens and Biodata (2010)

Chapter 5: Findings, Conclusions, and Recommendations

As the preceding chapters have made clear, incorporating biological specimens into social science surveys holds great scientific potential, but also adds a variety of complications to the tasks of both individual researchers and institutions. These complications arise in a number of areas, including collecting, storing, using, and distributing biospecimens; sharing data while protecting privacy; obtaining informed consent from participants; and engaging with Institutional Review Boards (IRBs). Any effort to make such research easier and more effective will need to address the issues in these areas.

In considering its recommendations, the panel found it useful to think of two categories: (1) recommendations that apply to individual investigators, and (2) recommendations that are addressed to the National Institute on Aging (NIA) or other institutions, particularly funding agencies. Researchers who wish to collect biological specimens with social science data will need to develop new skills in a variety of areas, such as the logistics of specimen storage and management, the development of more diverse informed consent forms, and ways of dealing with the disclosure risks associated with sharing biogenetic data. At the same time, NIA and other funding agencies must provide researchers the tools they need to succeed. These tools include such things as biorepositories for maintaining and distributing specimens, better guidance on informed consent policies, and better ways to share data without risking confidentiality.

TAKING ADVANTAGE OF EXISTING EXPERTISE

Although working with biological specimens will be new and unfamiliar to many social scientists, it is an area in which biomedical researchers have a great deal of expertise and experience. Many existing documents describe recommended procedures and laboratory practices for the handling of biospecimens. These documents provide an excellent starting point for any social scientist who is interested in adding biospecimens to survey research.

Recommendation 1: Social scientists who are planning to add biological specimens to their survey research should familiarize themselves with existing best practices for the collection, storage, use, and distribution of biospecimens. First and foremost, the design of the protocol for collection must ensure the safety of both participants and survey staff (data and specimen collectors and handlers).

Although existing best-practice documents were not developed with social science surveys in mind, their guidelines have been field-tested and approved by numerous IRBs and ethical oversight committees. The most useful best-practice documents are updated frequently to reflect growing knowledge and changing opinions about the best ways to collect, store, use, and distribute biological specimens. At the same time, however, many issues arising from the inclusion of biospecimens in social science surveys are not fully addressed in the best-practice documents intended for biomedical researchers. For guidance on these issues, it will be necessary to seek out information aimed more specifically at researchers at the intersection of social science and biomedicine.

COLLECTING, STORING, USING, AND DISTRIBUTING BIOSPECIMENS

As described in Chapter 2 , the collection, storage, use, and distribution of biospecimens and biodata are tasks that are likely to be unfamiliar to many social scientists and that raise a number of issues with which even specialists are still grappling. For example, which biospecimens in a repository should be shared, given that in most cases the amount of each specimen is limited? And given that the available technology for cost-efficient analysis of biospecimens, particularly genetic analysis, is rapidly improving, how much of any specimen should be used for immediate research and analysis, and how much should be stored for analysis at a later date? Collecting, storing, using, and distributing biological specimens also present significant practical and financial challenges for social scientists. Many of the questions they must address, such as exactly what should be held, where it should be held, and what should be shared or distributed, have not yet been resolved.

Developing Data Sharing Plans

An important decision concerns who has access to any leftover biospecimens. This is a problem more for biospecimens than for biodata because in most cases, biospecimens can be exhausted. Should access be determined according to the principle of first funded, first served? Should there be a formal application process for reviewing the scientific merits of a particular investigation? For studies that involve international collaboration, should foreign investigators have access? And how exactly should these decisions be made? Recognizing that some proposed analyses may lie beyond the competence of the original investigators, as well as the possibility that principal investigators may have a conflict of interest in deciding how to use any remaining biospecimens, one option is for a principal investigator to assemble a small scientific committee to judge the merits of each application, including the relevance of the proposed study to the parent study and the capacities of the investigators. Such committees should publish their review criteria to help prospective applicants. A potential problem with such an approach, however, is that many projects may not have adequate funding to carry out such tasks.

Recommendation 2: Early in the planning process, principal investigators who will be collecting biospecimens as part of a social science survey should develop a complete data sharing plan.

This plan should spell out the criteria for allowing other researchers to use (and therefore deplete) the available stock of biospecimens, as well as to gain access to any data derived therefrom. To avoid any appearance of self-interest, a project might empower an external advisory board to make decisions about access to its data. The data sharing plan should also include provisions for the storage and retrieval of biospecimens and clarify how the succession of responsibility for and control of the biospecimens will be handled at the conclusion of the project.

Recommendation 3: NIA (or preferably the National Institutes of Health [NIH]) should publish guidelines for principal investigators containing a list of points that need to be considered for an acceptable data sharing plan. In addition to staff review, Scientific Review Panels should read and comment on all proposed data sharing plans. In much the same way as an unacceptable human subjects plan, an inadequate data sharing plan should hold up an otherwise acceptable proposal.

Supporting Social Scientists in the Storage of Biospecimens

The panel believes that many social scientists who decide to add the collection of biospecimens to their surveys may be ill equipped to provide for the storage and distribution of the specimens.

Conclusion: The issues related to the storage and distribution of biospecimens are too complex and involve too many hidden costs to assume that social scientists without suitable knowledge, experience, and resources can handle them without assistance.

Investigators should therefore have the option of delegating the storage and distribution of biospecimens collected as part of social science surveys to a centralized biorepository. Depending on the circumstances, a project might choose to utilize such a facility for immediate use, long-term or archival storage, or not at all.

Recommendation 4: NIA and other relevant funding agencies should support at least one central facility for the storage and distribution of biospecimens collected as part of the research they support.

PROTECTING PRIVACY AND CONFIDENTIALITY: SHARING DIGITAL REPRESENTATIONS OF BIOLOGICAL AND SOCIAL DATA

Several different types of data must be kept confidential: survey data, data derived from biospecimens, and all administrative and operational data. In the discussion of protecting confidentiality and privacy, this report has focused on biodata, but the panel believes it is important to protect all the data collected from survey participants. For many participants, for example, data on wealth, earnings, or sexual behavior can be as sensitive as, or more sensitive than, genetic data.

Conclusion: Although biodata tend to receive more attention in discussions of privacy and confidentiality, social science and operational data can be sensitive in their own right and deserve similar attention in such discussions.

Protecting the participants in a social science survey that collects biospecimens requires securing the data, but data are most valuable when they are made available to researchers as widely as possible. Thus there is an inherent tension between the desire to protect the privacy of the participants and the desire to derive as much scientific value from the data as possible, particularly since the costs of data collection and analysis are so high. The following recommendations regarding confidentiality are made in the spirit of balancing these equally important needs.

Genomic data present a particular challenge. Several researchers have demonstrated that it is possible to identify individuals with even modest amounts of such data. When combined with social science data, genomic data may pose an even greater risk to confidentiality. It is difficult to know how much or which genomic data, when combined with social science data, could become critical identifiers in the future. Although the problem is most significant with genomic data, similar challenges can arise with other kinds of data derived from biospecimens.

Conclusion: Unrestricted distribution of genetic and other biodata risks violating promises of confidentiality made to research participants.

There are two basic approaches to protecting confidentiality: restricting data and restricting access. Restricting data—for example, by stripping individual and spatial identifiers and modifying the data to make it difficult or impossible to trace them back to their source—usually makes it possible to release social science data widely. In the case of biodata, however, there is no answer to how little data is required to make a participant uniquely identifiable. Consequently, any release of biodata must be carefully managed to protect confidentiality.

Recommendation 5: No individual-level data containing uniquely identifying variables, such as genomic data, should be publicly released without explicit informed consent.

Recommendation 6: Genomic data and other individual-level data containing uniquely identifying variables that are stored or in active use by investigators on their institutional or personal computers should be encrypted at all times.
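As a rough, purely illustrative sketch of what encryption at rest could look like in practice (not a procedure prescribed by the panel), the snippet below uses symmetric encryption from the third-party Python cryptography package; the file names and key handling are placeholders, and a real project would manage keys under its own security policy.

```python
# Illustrative sketch only: symmetric encryption of a derived-biodata file at rest.
# Assumes the third-party 'cryptography' package; file names are placeholders.
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # in practice, store and manage this key separately
fernet = Fernet(key)

with open("derived_biodata.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("derived_biodata.csv.enc", "wb") as f:
    f.write(ciphertext)

# An authorised analyst holding the key can later recover the plaintext:
plaintext = fernet.decrypt(ciphertext)
```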

Even if specific identifying variables, such as names and addresses, are stripped from data, it is still often possible to identify the individuals associated with the data by other means, such as using the variables that remain (age, sex, marital status, family income, etc.) to zero in on possible candidates. In the case of biodata that do not uniquely identify individuals and can change with time, such as blood pressure and physical measurements, it may be possible to share the data with no more protection than stripping identifying variables. Even these data, however, if known to intruders, can increase identification disclosure risk when combined with enough other data. With sufficient characteristics to match, intruders can uniquely identify individuals in shared data if given access to another data source that contains the same information plus identifiers.

Conclusion: Even nonunique biodata, if combined with social science data, may pose a serious risk of reidentification.
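A toy sketch of the matching attack described above follows; all records and attribute names are invented, and the point is only that a handful of shared characteristics can single out one record in a de-identified release.

```python
# Toy illustration of re-identification by matching quasi-identifiers.
# All records are invented. 'released' mimics a de-identified research file;
# 'external' mimics an outside source that also carries names.
released = [
    {"age": 47, "sex": "F", "zip3": "770", "marital": "divorced", "systolic_bp": 142},
    {"age": 47, "sex": "F", "zip3": "770", "marital": "married",  "systolic_bp": 118},
    {"age": 62, "sex": "M", "zip3": "021", "marital": "married",  "systolic_bp": 131},
]
external = [
    {"name": "J. Doe", "age": 47, "sex": "F", "zip3": "770", "marital": "divorced"},
]

quasi_identifiers = ("age", "sex", "zip3", "marital")

for person in external:
    matches = [record for record in released
               if all(record[q] == person[q] for q in quasi_identifiers)]
    if len(matches) == 1:  # a unique match re-identifies the record
        print(person["name"], "links to a released record with systolic_bp =",
              matches[0]["systolic_bp"])
```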

In the case of high-dimensional genomic data, standard disclosure limitation techniques, such as data perturbation, are not effective with respect to preserving the utility of the data because they involve such extreme alterations that they would severely distort analyses aimed at determining gene–gene and gene–environment interactions. Standard disclosure limitation methods could be used to generate public-use data sets that would enable low-dimensional analyses involving genes, for example, one gene at a time. However, with several such public releases, it may be possible for a key match to be used to construct a data set with higher-dimensional genomic data.

Conclusion: At present, no data restriction strategy has been demonstrated to protect confidentiality while preserving the usefulness of the data for drawing inferences involving high-dimensional interactions among genomic and social science variables, which are increasingly the target of research. Providing public-use genomic data requires such intense data masking to protect confidentiality that it would distort the high-dimensional analyses that could result in ground-breaking research progress.

Recommendation 7: Both rich genomic data acquired for research and sensitive and potentially identifiable social science data that do not change (or change very little) with time should be shared only under restricted circumstances, such as licensing and (actual or virtual) data enclaves.

As discussed in Chapter 3 , the four basic ways to restrict access to data are licensing, remote execution centers, data enclaves, and virtual data enclaves. Each has its advantages and disadvantages. 1 Licensing, for example, is the least restrictive for a researcher in terms of access to the data, but the licensing process itself can be lengthy and burdensome. Thus it would be useful if the licensing process could be facilitated.

Recommendation 8: NIA (or preferably NIH) should develop new standards and procedures for licensing confidential data in ways that will maximize timely access while maintaining security and that can be used by data repositories and by projects that distribute data.

Ways to improve the other approaches to restricted access are needed as well. For example, improving the convenience and availability of virtual data enclaves could increase the use of combined social science and biodata without a significant increase in risk to confidentiality. The panel notes that much of the discussion of the confidentiality risk posed by the various approaches is theoretical; no one has a clear idea of just what disclosure risks are associated with the various ways of sharing data. It is important to learn more about these disclosure risks for a variety of reasons—determining how to minimize the risks, for instance, or knowing which approaches to sharing data pose the least risk. It would also be useful to be able to describe disclosure risks more accurately to survey participants.

Recommendation 9: NIA and other funding agencies should assess the strength of confidentiality protections through periodic expert audits of confidentiality and computer security. Willingness to participate in such audits should be a condition for receipt of NIA support. Beyond enforcement, the purpose of such audits would be to identify challenges and solutions.

Evaluating risks and applying protection methods, whether they involve restricted access or restricted data, is a complex process requiring expertise in disclosure protection methods that exceeds what individual principal investigators and their institutions usually possess. Currently, not enough is known to be able to represent these risks either fully or accurately. The NIH requirement for data sharing necessitates a large investment of resources to anticipate which variables are potentially available to intruders and to alter data in ways that reduce disclosure risks while maintaining the utility of the data. Such resources are better spent by principal investigators on collecting and analyzing the data.

Recommendation 10: NIH should consider funding Centers of Excellence to explore new ways of protecting digital representations of data and to assist principal investigators wishing to share data with others. NIH should also support research on disclosure risks and limitations.

Principal investigators could send digital data to these centers, which would organize and manage any restricted access or restricted data policies or provide advisory services to investigators. NIH would maintain the authority to penalize those who violated any confidentiality agreements, for example, by denying them or their home institution NIH funding. Models for these centers include the Inter-university Consortium for Political and Social Research (ICPSR) and its projects supported by NIH and the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) and the UK data sharing archive. The centers would alleviate the burden of data sharing as mandated of principal investigators by NIH and place it in expert hands. However, excellence in the design of data access and control systems is likely to require intimate knowledge of each specific data resource, so data producers should be involved in the systems’ development.

INFORMED CONSENT

As described in Chapter 4 , informed consent is a complex subject involving many issues that are still being debated; the growing power of genetic analysis techniques and bioinformatics has only added to this complexity. Given the rapid pace of advances in scientific knowledge and in the technology used to analyze biological materials, it is impossible to predict what information might be gleaned from biological specimens just a few years hence; accordingly, it is impossible, even in theory, to talk about perfectly informed consent. The best one can hope for is relatively well-informed consent from a study’s participants, but knowing precisely what that means is difficult. Determining the scope of informed consent adds another layer of complexity. Will new analyses be covered under the existing consent, for example? There are no clear guidelines on such questions, yet specific details on the scope of consent will likely affect an IRB’s reaction to a study proposal.

What Individual Researchers Need to Know and Do Regarding Informed Consent

To be sure, there is a wide range of views about the practicality of providing adequate protection to participants while proceeding with the scientific enterprise, from assertions that it is simply not possible to provide adequate protection to offers of numerous procedural safeguards but no iron-clad guarantees. This report takes the latter position—that investigators should do their best to communicate adequately and accurately with participants, to provide procedural safeguards to the extent possible, and not to promise what is not possible. 2 Social science researchers need to know that adding the collection of biospecimens to social science surveys changes the nature of informed consent. Informed consent for a traditional social science survey may entail little more than reading a short script over the phone and asking whether the participant is willing to continue; obtaining informed consent for the collection and use of biospecimens and biodata is generally a much more involved process.

Conclusion: Social scientists should be made aware that the process of obtaining informed consent for the use of biospecimens and biodata typically differs from social science norms.

If participants are to provide truly informed consent to taking part in any study, they must be given a certain minimum amount of information. They should be told, for example, what the purpose of the study is, how it is to be carried out, and what participants’ roles are. In addition, because of the unique risks associated with providing biospecimens, participants in a social science survey that involves the collection of such specimens should be provided with other types of information as well. In particular, they should be given detail on the storage and use of the specimens that relates to those risks and can assist them in determining whether to take part in the study.

Recommendation 11: In designing a consent form for the collection of biospecimens, in addition to those elements that are common to social science and biomedical research, investigators should ensure that certain other information is provided to participants:

how long researchers intend to retain their biospecimens and the genomic and other biodata that may be derived from them;

both the risks associated with genomic data and the limits of what they can reveal;

which other researchers will have access to their specimens, to the data derived therefrom, and to information collected in a survey questionnaire;

the limits on researchers’ ability to maintain confidentiality;

any potential limits on participants’ ability to withdraw their specimens or data from the research;

the penalties 3 that may be imposed on researchers for various types of breaches of confidentiality; and

what plans have been put in place to return to them any medically relevant findings.

Researchers who fail to properly plan for and handle all of these issues before proceeding with a study are in essence compromising assurances under informed consent. The literature on informed consent emphasizes the importance of ensuring that participants understand reasonably well what they are consenting to. This understanding cannot be taken for granted, particularly as it pertains to the use of biological specimens and the data derived therefrom.

While it is not possible to guarantee that participants have a complete understanding of the scientific uses of their specimens or all the possible risks of their participation, they should be able to make a relatively well-informed decision about whether to take part in the study. Thus the ability of various participants to understand the research and the informed consent process must be considered. Even impaired individuals may be able to participate in research if their interests are protected and they can do so only through proxy consent. 4

Recommendation 12: NIA should locate and publicize positive examples of the documentation of consent processes for the collection of biospecimens. In particular, these examples should take into account the special needs of certain individuals, such as those with sensory problems and the cognitively impaired.

Participants in a biosocial survey are likely to have different levels of comfort concerning how their biospecimens and data will be used. Some may be willing to provide only answers to questions, for example, while others may both answer questions and provide specimens. Among those who provide specimens, some may be willing for the specimens to be used only for the current study, while others may consent to their use in future studies. One effective way to deal with these different comfort levels is to offer a tiered approach to consent that allows participants to determine just how their specimens and data will be used. Tiers might include participating in the survey, providing specimens for genetic and/or nongenetic analysis in a particular study, and allowing the specimens and data to be stored for future uses (genetic and/or nongenetic). For those participants who are willing to have their specimens and data used in future studies, researchers should tell them what sort of approval will be obtained for such use. For example, an IRB may demand reconsent, in which case participants may have to be contacted again before their specimens and data can be used. Ideally, researchers should design their consent forms to avoid the possibility that an IRB will demand a costly or infeasible reconsent process.

Recommendation 13: Researchers should consider adopting a tiered approach to obtaining consent. Participants who are willing to have their specimens and data used in future studies should be informed about the process that will be used to obtain approval for such uses.

What Institutions Should Do Regarding Informed Consent

Because the details of informed consent vary from study to study, individual investigators must bear ultimate responsibility for determining the details of informed consent for any particular study. Thus researchers must understand the various issues and concerns surrounding informed consent and be prepared to make decisions about the appropriate approach for their research in consultation with staff of survey organizations. These decisions should be addressed in the training of survey interviewers. As noted above, however, the issues surrounding informed consent are complex and not completely resolved, and researchers have few options for learning about informed consent as it applies to social science studies that collect biospecimens. Thus it makes sense for agencies funding this research, the Office for Human Research Protection (OHRP), or other appropriate organizations (for example, Public Responsibility in Medicine and Research [PRIM&R]) to provide opportunities for such learning, taking into account the fact that the issues arising in biosocial research do not arise in the standard informed consent situations encountered in social science research. It should also be made clear that the researchers’ institution is usually deemed (e.g., in the courts) to bear much of the responsibility for informed consent.

Recommendation 14: NIA, OHRP, and other appropriate organizations should sponsor training programs, create training modules, and hold informational workshops on informed consent for investigators, staff of survey organizations, including field staff, administrators, and members of IRBs who oversee surveys that collect social science data and biospecimens.

The Return of Medically Relevant Information

An issue related to informed consent is how much information to provide to survey participants once their biological specimens have been analyzed and in particular, how to deal with medically relevant information that may arise from the analysis. What, for example, should a researcher do if a survey participant is found to have a genetic disease that does not appear until later in life? Should the participant be notified? Should participants be asked as part of the initial interview whether they wish to be notified about such a discovery? At this time, there are no generally agreed-upon answers to such questions, but researchers should expect to have to deal with these issues as they analyze the data derived from biological specimens.

Recommendation 15: NIH should direct investigators to formulate a plan in advance concerning the return of any medically relevant findings to survey participants and to implement that plan in the design and conduct of their informed consent procedures.

INSTITUTIONAL REVIEW BOARDS

Investigators seeking IRB approval for biosocial research face a number of challenges. Few IRBs are familiar with both social and biological science; thus, investigators may find themselves trying to justify standard social science protocols to a biologically oriented IRB or explaining standard biological protocols to an IRB that is used to dealing with social science—or sometimes both. Researchers can expect these obstacles, which arise from the interdisciplinary nature of their work, to be exacerbated by a number of other factors that are characteristic of IRBs in general (see Chapter 4 ).

Recommendation 16: In institutions that have separate biomedical and social science IRBs, mechanisms should be created for sharing expertise during the review of biosocial protocols. 5

What Individual Researchers Need to Do Regarding IRBs

Because the collection of biospecimens as part of social science surveys is still relatively unfamiliar to many IRBs, researchers planning such a study can expect their interactions with the IRB overseeing the research to involve a certain learning curve. The IRB may need extra time to become familiar and comfortable with the proposed practices of the survey, and conversely, the researchers will need time to learn what the IRB will require. Thus it will be advantageous if researchers conducting such studies plan from the beginning to devote additional time to working with their IRBs.

Recommendation 17: Investigators considering collecting biospecimens as part of a social science survey should consult with their IRBs early and often.

What Research Agencies Should Do Regarding IRBs

One way to improve the IRB process would be to give members of IRBs an opportunity to learn more about biosocial research and the risks it entails. This could be done by individual institutions, but it would be more effective if a national funding agency took the lead (see Recommendation 14).

It is the panel’s hope that its recommendations will support the incorporation of social science and biological data into empirical models, allowing researchers to better document the linkages among social, behavioral, and biological processes that affect health and other measures of well-being while avoiding or minimizing many of the challenges that may arise. Implementing these recommendations will require the combined efforts of both individual investigators and the agencies that support them.


Recent years have seen a growing tendency for social scientists to collect biological specimens such as blood, urine, and saliva as part of large-scale household surveys. By combining biological and social data, scientists are opening up new fields of inquiry and are able for the first time to address many new questions and connections. But including biospecimens in social surveys also adds a great deal of complexity and cost to the investigator's task. Along with the usual concerns about informed consent, privacy issues, and the best ways to collect, store, and share data, researchers now face a variety of issues that are much less familiar or that appear in a new light.

In particular, collecting and storing human biological materials for use in social science research raises additional legal, ethical, and social issues, as well as practical issues related to the storage, retrieval, and sharing of data. For example, acquiring biological data and linking them to social science databases requires a more complex informed consent process, the development of a biorepository, the establishment of data sharing policies, and the creation of a process for deciding how the data are going to be shared and used for secondary analysis--all of which add cost to a survey and require additional time and attention from the investigators. These issues also are likely to be unfamiliar to social scientists who have not worked with biological specimens in the past. Adding to the attraction of collecting biospecimens but also to the complexity of sharing and protecting the data is the fact that this is an era of incredibly rapid gains in our understanding of complex biological and physiological phenomena. Thus the tradeoffs between the risks and opportunities of expanding access to research data are constantly changing.

Conducting Biosocial Surveys offers findings and recommendations concerning the best approaches to the collection, storage, use, and sharing of biospecimens gathered in social science surveys and the digital representations of biological data derived therefrom. It is aimed at researchers interested in carrying out such surveys, their institutions, and their funding agencies.


Research Recommendations Process and Methods Guide [Internet]

  • PMID: 27466642
  • Bookshelf ID: NBK310373

The foundation of NICE guidance is the synthesis of evidence primarily through the process of systematic reviewing and, if appropriate, modelling and cost effectiveness decision analysis. The results of these analyses are then discussed by independent committees. These committees include NHS staff, healthcare professionals, social care practitioners, commissioners and providers of care, patients, service users and carers, industry and academics. Stakeholders have the opportunity to comment on draft recommendations before they are finalised. Not only does this process explicitly describe the evidence base, it also identifies where there are gaps, uncertainties or conflicts in the existing evidence.

Many of these uncertainties, although interesting to resolve, are unlikely to affect people’s care or NICE’s ability to produce guidance. However, if these uncertainties may have an effect on NICE’s recommendations it is important for NICE to liaise with the research community to ensure they are addressed. NICE does this by making recommendations for research, which are communicated to researchers and funders. At the time guidance is issued, NICE’s staff and committees have a thorough understanding of the current evidence and valuable insights into uncertainties that need to be resolved. It is important that these are capitalised on.

To undertake its national role effectively, NICE needs to ensure that:

the process of developing the research recommendations is robust, transparent and involves stakeholders

we identify research priorities

we make all research recommendations clearly identifiable in the guidance

the research recommendations provide the information necessary to support research commissioning

the research recommendations are available to researchers and funders by promoting them (for example through the research recommendations database)

the research recommendations are relevant to current practice

we communicate well with the research community.

This process and methods guide has been developed to help guidance-producing centres make research recommendations. It describes a step-by-step approach to identifying uncertainties, formulating research recommendations and research questions, prioritising them and communicating them to the NICE Science Policy and Research (SP&R) team, researchers and funders. It has been developed based on the SP&R team’s interactions with research funders and researchers, as well as with guidance developers.

Keywords: research gaps; uncertainties; research recommendations; NICE Process and Methods Guides.


Scholarly recommendation systems: a literature survey

Published: 04 June 2023 | Volume 65, pages 4433–4478 (2023) | Open access

Zitong Zhang, Braja Gopal Patra, Ashraf Yaseen, Jie Zhu, Rachit Sabharwal, Kirk Roberts, Tru Cao, and Hulin Wu


A scholarly recommendation system is an important tool for identifying prior and related resources such as literature, datasets, grants, and collaborators. A well-designed scholarly recommender significantly saves the time of researchers and can provide information that would not otherwise be considered. The usefulness of scholarly recommendations, especially literature recommendations, has been established by the widespread acceptance of web search engines such as CiteSeerX, Google Scholar, and Semantic Scholar. This article discusses different aspects and developments of scholarly recommendation systems. We searched the ACM Digital Library, DBLP, IEEE Explorer, and Scopus for publications in the domain of scholarly recommendations for literature, collaborators, reviewers, conferences and journals, datasets, and grant funding. In total, 225 publications were identified in these areas. We discuss methodologies used to develop scholarly recommender systems. Content-based filtering is the most commonly applied technique, whereas collaborative filtering is more popular among conference recommenders. The implementation of deep learning algorithms in scholarly recommendation systems is rare among the screened publications. We found fewer publications on dataset and grant funding recommenders than in other areas. Furthermore, studies analyzing users’ feedback to improve scholarly recommendation systems are rare. This survey provides background knowledge regarding existing research on scholarly recommenders and aids in developing future recommendation systems in this domain.


1 Introduction

A recommendation or recommender system is a type of information filtering system that employs data mining and analytics of user behaviors, including preferences and activities, to filter required information from a large information source. In the era of big data, recommendation systems have become important applications in our daily lives by recommending music, videos, movies, books, news, and more. In academia, there has been a substantial increase in the amount of information (literature, collaborators, conferences, datasets, and more) available online, and it has become increasingly taxing for researchers to stay up to date with relevant information. Several recommendation tools and search engines in academia (Google Scholar, ResearchGate, Semantic Scholar, and others) are available to recommend relevant publications, collaborators, funding opportunities, and more. Recommendation systems are evolving rapidly. The first scholarly recommender system targeted literature, recommending publications using content-based similarity methods [ 1 ]. Currently, several recommendation systems are available for researchers and are widely used in different scholarly areas.

1.1 Motivation and research questions

In this article, we focus on the different scholarly recommenders used to improve the quality of research. To the best of our knowledge, no existing article covers all types of scholarly recommendation systems together. Previous surveys on recommendation systems were conducted separately for each type of recommender, and most of these studies focused on literature or collaborator recommendation systems [ 2 ]. Currently, there is no comprehensive review describing the different types of scholarly recommendation systems, particularly for academic use.

Therefore, a survey is needed as a guide and reference for researchers interested in this area, and a systematic review of scholarly recommendation systems serves this purpose. Such a review explores research achievements in scholarly recommendation, provides researchers with an overview of systems for allocating academic resources, and identifies opportunities for improvement.

This article describes the different scholarly recommendation systems that researchers use in their daily activities. We take a closer look at the methodologies used to develop such systems. The research questions of our study are as follows:

RQ1 What different problems are addressed by scholarly recommendation systems?

RQ2 What datasets or repositories were used for developing these recommendation systems?

RQ3 What types of methodologies were implemented in these recommendation systems?

RQ4 What further research can be performed to overcome the drawbacks of the current research and develop new recommenders to enhance the field of scholarly recommendation?

To answer our first research question, we collected over 500 publications on scholarly recommenders from the ACM Digital Library, DBLP, IEEE Explorer, and Scopus. Literature and collaborator recommendation systems are the most studied recommenders in the literature, with many publications in each. Websites for searching publications host literature recommendation as a key function, and almost all of them are free for researchers. By contrast, only a few collaborator recommendation systems have been implemented online, and they are not free for all users. One reason is the large amount of personal information and preference data these recommenders require.

Furthermore, we studied journal and conference recommendation systems for publishing papers and articles. Although many publishing houses have implemented their own online journal recommender systems, conference recommender systems are not available online. Next, we studied reviewer recommendation problems, in which reviewers are recommended for conferences, journals, and grants. Finally, we identified dataset and grant recommendation systems, which are the least studied scholarly recommendation systems. Figure 1 shows all currently available scholarly recommenders.

Figure 1: Scholarly recommenders studied in this article

1.2 Materials and methods

An initial literature survey was conducted to identify keywords related to individual recommendation systems that could be used to search for relevant publications. A total of 26 keywords were identified (see Supplementary 17).

At the end of the full-text review process, 225 publications were included in this study. The number of publications on individual recommendation systems is shown in Fig. 2. To be eligible for the review, publications had to describe, evaluate, or use natural language processing algorithms. During the full-text review process, we excluded studies that were not peer-reviewed, such as abstracts and commentary, perspective, or opinion pieces. Finally, we performed data extraction and analysis on the 225 articles and summarize their data, methodology, evaluation metrics, and detailed categorization in the following sections. The PRISMA flowchart for our publication collection, with example search keywords, is shown in Fig. 3.

Figure 2: Number of papers/articles collected for studying different recommenders

Figure 3: PRISMA flowchart for including publications in scholarly recommendation

The remainder of this paper is organized as follows. Section 2 describes different literature recommendation systems based on their methodologies and corresponding datasets. Section 3 describes different approaches for developing collaborator recommendation systems. Section 4 reviews journal and conference venue recommendation systems. Section 5 describes reviewer recommendation systems. In Sect. 6, we review all other scholarly recommendation systems available in the literature, such as dataset and grant recommendation systems. Finally, Sect. 7 discusses future work and concludes the article.

2 Literature recommendation

Literature recommendation is one of the most well-studied scholarly recommendation problems with several research articles published in the past decade. Recommender systems for scholarly literature have been widely used by researchers to locate papers, keep up with their research fields, and find relevant citations for drafts. To summarize the literature recommendation systems, we collected 82 publications for scholarly papers and citations.

The first research paper recommendation system was introduced as part of the CiteSeer project [ 1 ]. In total, 11 out of 82 publications (approximately 13%) used applications or methodologies based on citation recommendation. As one of the largest subsets of scholarly literature recommendation, citation recommendation aims to suggest citations to researchers while they author a paper and look for work related to their ideas, basing the suggestions on the content of the researchers’ work. Among the 11 citation recommender papers, content-based filtering (CBF) applied to fragments of the citation context is the most widely used methodology, and some papers applied collaborative filtering (CF) to develop a potential citation recommendation system based on users’ research interests and citation networks [ 3 ].

In this section, we describe the datasets used to develop literature recommendation systems. A total of 75 reviewed publications evaluated their methodologies on datasets. The authors of 45 publications constructed their own datasets from manually collected information or from paid datasets that are rarely used elsewhere. Several open-source published datasets are commonly used to develop literature recommenders.

Owing to the rapid development of modern websites for literature search, datasets for literature recommendation are readily available. There were 28 publications that used public databases for the testing and evaluation of the methods. The sources of these datasets are listed in Table  1 . These websites collected publications from several scientific publishers and indexed them with their references and keywords. Using the information extracted from these public resources, researchers created datasets to perform recommendation methodologies and obtain the ground truth for offline evaluation.

DBLP was used in 12 reviewed publications and ACM was used in 11 reviewed publications to construct datasets for evaluation. DBLP hosts more than 5.2 million publications and obtains its database entries from a limited number of volunteers who manually enter the tables of contents of journals and conference proceedings. The CiteSeer dataset was used in 9 reviewed publications to conduct an offline evaluation. It currently contains over 6 million publications and continuously crawls the web to find new content, using user submissions, conferences, and journals as data entries. Petricek et al. [ 4 ] showed that CiteSeer’s autonomous acquisition through web crawling introduces a significant bias against papers with a low number of authors. Most of the reviewed papers constructed their own evaluation datasets by combining information from multiple databases; these self-constructed datasets were used to avoid the bias that results from relying on a single source.

The CiteULike dataset was used in 7 reviewed publications. CiteULike is a web service that contains social tags added to research articles by users. The dataset was not originally intended for literature recommendation system research, but is still frequently used for this purpose.

2.2 Methods

Three main approaches were used to develop literature recommenders: CBF (N = 37 papers), CF (N = 16 papers), and hybrid (N = 29 papers). Next, we introduce the promising and popular approaches used in each recommendation class. We also provide an overview of the most important aspects and techniques used for literature recommendation.

2.2.1 Content-based filtering (CBF)

CBF is one of the most popular methods for recommending literature and is used in 37 of the 82 publications. Based on a user-item model that treats textual content as ‘items,’ CBF usually uses topic-based methods to measure the similarity between the topics a user is interested in and the topics of candidate publications. These methods perform well in terms of topic and content matching. A summary of CBF approaches used for literature recommendation can be found in Table 2.

CBF recommenders use keywords or topics as key features because they are used to describe a publication. A content-based user profile usually centres on the user’s preference model and the user’s interaction log with the recommendation system, converted into a weighted vector of item features. For example, Hong et al. [ 9 ] constructed a paper recommendation methodology based on a user profile built from extracted keywords, and calculated the similarity between a given topic and the collected papers using cosine similarity to recommend initial publications for each topic.

Most of the reviewed publications used the term frequency-inverse document frequency (TF-IDF) representation to evaluate the similarities between text objects. TF-IDF dampens the effect of high-frequency words when determining the importance of a term for an item. Magara et al. [ 38 ] constructed methodologies for recommending serendipitous research papers from two large, normally mismatched information spaces or domains using Bisociative Information Networks (BisoNets), with TF-IDF measures as weighting and filtering terms. Lofty et al. [ 11 ] combined TF-IDF with a cosine similarity measure to construct a methodology for paper recommendation using an ontology. To address higher relevancy and serendipity, Sugiyama and Kan [ 25 ] also constructed feature vectors using the TF-IDF measure and user profiles utilizing the Co-Author Network (CAN), computed cosine similarity, and recommended the papers with the highest similarity.
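A minimal sketch of this TF-IDF-plus-cosine-similarity style of matching is shown below; the paper texts and the user-profile string are invented, and the use of scikit-learn is an assumption for illustration rather than the setup of any reviewed paper.

```python
# Minimal content-based filtering sketch: TF-IDF vectors + cosine similarity.
# The paper texts and the user-profile text are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = [
    "deep learning methods for citation recommendation",
    "a survey of collaborative filtering in recommender systems",
    "graph neural networks for molecular property prediction",
]
user_profile = "recommender systems using collaborative filtering"

vectorizer = TfidfVectorizer(stop_words="english")
paper_vectors = vectorizer.fit_transform(papers)        # one TF-IDF vector per paper
profile_vector = vectorizer.transform([user_profile])   # profile in the same vector space

scores = cosine_similarity(profile_vector, paper_vectors).ravel()
for idx in scores.argsort()[::-1]:                      # most similar first
    print(f"{scores[idx]:.3f}  {papers[idx]}")
```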

In summary, the reviewed papers argue that content-based recommender systems build an independent profile for each user, so the most suitable recommendations can be made for different users. In addition, because user models are generated automatically, recommendation systems using CBF spend less time and computation on up-front classification.

The reviewed papers also reveal the limitations of CBF, and the improvements they propose are mostly aimed at overcoming these limitations. CBF requires considerable computation and resources to analyze each item for its features and to build each user model individually. For example, to mark passages for citation recommendations, users are typically required to provide a representative bibliography. By examining the relevance between segments in a query manuscript and representative segments extracted from a document corpus, He et al. [ 36 ] formulated a dependency feature model based on a language model, contextual similarity, and topic relevance to produce a citation recommendation approach without author supervision. Neethukrishnan et al. [ 8 ] proposed a paper recommender methodology that uses an SVM classifier and the similarity of users’ personal ontologies to specify the conceptualization. Nascimento et al. [ 35 ] proposed a source-independent framework for research paper recommendation to reduce the resources required. Their framework takes only a single research paper as input, generates several weighted candidate queries from the terms in that paper, and then applies a cosine similarity metric to rank the candidates and recommend those most related to the input paper.

In addition, traditional CBF methods cannot take the popularity or ratings of items into account; that is, it is difficult to differentiate between two research papers if they have similar terms in the user model. To overcome this limitation, Ollagnier et al. [ 21 ] formulated a centrality indicator for their software, which is dedicated to the analysis of bibliographical references extracted from scientific collections of papers. This approach determines the impact and inner representativeness of each bibliographical reference according to its occurrences. Pera and Ng [ 30 ] adopted CombMNZ, a linear combination strategy that merges similarity degree and popularity score into a joint ranking, to build a paper recommender system that considers both the context similarity and the popularity of a paper among users. Liu et al. [ 23 ] constructed a publication ranking approach with pseudo relevance feedback (PRF) by leveraging a number of meta-paths on a heterogeneous bibliographic graph.

2.2.2 Collaborative filtering

We collected 16 studies that used the collaborative filtering (CF) method. CF methods find users whose past ratings are similar to those of the target user and then recommend the items those similar users preferred. These methods are suitable for extending the range of recommendations. A summary of literature recommendation papers using CF methods is presented in Table 3.

Common collaborative filtering methodologies can be categorized into two groups: model-based and memory-based. The main difference between the two is that the model-based approach uses matrix factorization algorithms, in which users’ preferences are computed from embedded factors, whereas the memory-based approach calculates users’ preferences for items from arithmetic operations (correlation coefficients or cosine similarity). Memory-based CF approaches are widely used in scholarly literature recommendation systems and include several techniques, such as k-nearest neighbors (kNN), Latent Semantic Indexing (LSI), and Singular Value Decomposition (SVD). Pan and Li [ 48 ] used the LDA (Latent Dirichlet Allocation) model to construct a paper recommendation system with a thematic similarity measurement, transforming a topic-based recommendation into a modified version of the item-based recommendation approach. Ha et al. [ 46 ] proposed a novel method using SVD for matrix factorization and rating prediction to recommend newly published papers that have not yet been cited, by predicting the interests of the target researchers.
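The sketch below illustrates the memory-based flavour on invented data: a small user-paper interaction matrix, cosine similarity between users, and similarity-weighted scoring of unseen papers. It is a toy example, not a reimplementation of any reviewed method.

```python
# Toy memory-based collaborative filtering sketch on invented data.
# Rows are users, columns are papers; 1 means the user saved or cited the paper.
import numpy as np

interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
], dtype=float)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

target = 0                                   # recommend for the first user
similarities = np.array([cosine(interactions[target], interactions[u])
                         for u in range(len(interactions))])
similarities[target] = 0.0                   # ignore self-similarity

# Predicted score per paper: similarity-weighted sum of the other users' interactions.
scores = similarities @ interactions
scores[interactions[target] > 0] = -np.inf   # do not re-recommend papers already seen
print("recommended paper index:", int(scores.argmax()))
```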

Compared with CBF, methods and applications based on CF show the following advantages. First, because CF approaches are independent of content, the resource costs of error-prone item processing are reduced. In addition, popularity and quality assessments, often considered limitations of CBF, can be achieved easily with CF. Sugiyama and Kan [ 43 ] used the PageRank approach to rank the popularity factor and measure the importance of research papers, enhancing the user profile derived directly from the researchers’ past works with information from their referenced papers as well as papers that cite the work. CF approaches are also used for serendipitous recommendations because they are usually based on user similarity rather than item similarity. Tang and McCalla [ 44 ] constructed user profiles via a co-author network to build a serendipitous paper recommendation system based on a scholarly social network.

The reviewed papers also show the limitations of CF. To make precise recommendations, a CF system requires a large volume of existing data before it can start recommending; this problem is called cold start. Loh et al. [ 55 ] used scientific papers written by users to compose user profiles representing users’ interests or expertise, in order to alleviate the cold start problem. Data sparsity is another problem: even active users rate only a small subset of the available items. Keshavarz and Honarvar [ 47 ] presented a paper recommendation approach based on locality-sensitive hashing, converting the citations of papers into signatures and comparing these signatures to detect papers that are similar according to their citations. Sugiyama and Kan [ 3 ] also applied CF to discover potential citation papers that help represent the target papers to recommend, in order to alleviate sparsity. The authors also attempted to improve the scalability of the approaches to reduce the amount of computation and resources required for recommendations.

2.2.3 Hybrid

The approaches introduced above may also be combined into hybrid approaches. We reviewed 29 studies that applied hybrid recommendation approaches. Table 4 summarizes the papers we collected in which literature recommendation was developed using hybrid approaches.

As combinations of CBF and CF, hybrid recommendation approaches can be categorized into four main groups. The first group implements CBF and CF methods separately and then combines their recommendation results. Liu et al. [ 70 ] constructed a citation recommendation method that employed an association mining technique to obtain a representation of each citing paper from the citation context. These paper representations were then compared pairwise to compute similarities between the cited papers for CF. Zarrinkalam and Kahani [ 62 ] used multiple linked data sources to create a rich background data layer and combined multiple-criteria CF and CBF to develop a citation recommender. Zhang et al. [ 65 ] constructed a paper recommendation method based on the semantic concept similarity computed from collaborative tags.

The second and third groups incorporate CBF characteristics into a CF method or incorporate some CF characteristics into a CBF method. West et al. [ 63 ] formulated a citation-based method for making scholarly recommendations. The method uses a hierarchical structure of scientific knowledge, making possible multiple scales of relevance for different users. Nart et al. [ 82 ] built a method that simplifies CF paper recommendations by extracting concepts from papers to generate and explain the recommendations. Zhou et al. [ 57 ] used the concepts and methods of community partitioning and introduced a model to recommend authoritative papers based on the specific community. Magalhaes et al. [ 67 ] constructed a user paper-based recommendation approach by considering the user’s academic curriculum vitae.

The fourth group constructs a general unifying model that incorporates both content-based and collaborative characteristics. Meng et al. [ 58 ] built a unified graph model with multiple types of information (e.g., content, authorship, citation, and collaboration networks) for efficient recommendation. Pohl et al. [ 64 ] treated access data as a bipartite graph of users and documents, analogous to item-to-item recommendation systems, to build a paper recommender method using digital access records (e.g., http-server logs) as indicators. Gipp et al. [ 41 ] developed a paper recommender system that combined keyword-based search with citation analysis, author analysis, source analysis, implicit ratings, explicit ratings, and, in addition, innovative and yet unused methods such as the 'Distance Similarity Index' (DSI) and the 'In-text Impact Factor' (ItIF).

2.3 Evaluation

The evaluation metrics used for different recommendation methods vary, making it difficult to compare them. To objectively compare the performance of these approaches, the 75 publications relied on two main evaluation criteria.

First, accuracy is the most widely used criterion for evaluating a recommendation system; it refers to the capability to recommend the most relevant items based on the given information. Among the reviewed papers, many offline evaluation metrics were applied to measure accuracy. The second criterion is the recommendation system's ability to satisfy users, for example by considering serendipity and user requirements rather than only the accuracy of the recommendations. Some of the reviewed papers designed questionnaires to collect user feedback, or applied their methods to real-world systems to evaluate user satisfaction. To quantify and compare the accuracy and user satisfaction of recommendation systems, evaluation methods can be divided into two groups: online and offline.

2.3.1 Online evaluation

A total of 17 publications evaluated their methods with a user study or a real-world system, i.e., an online evaluation. They created a rating scheme for users to rate the recommendation results, and these manual ratings were then used to analyze and judge a method. In addition, 6 of these 17 publications deployed their recommendation methods in real-world systems and collected user feedback for evaluation. Besides analyzing a method based on manually rated results, online evaluation is typically based on user acceptance. Acceptance is commonly measured by the Click-Through Rate (CTR), that is, the ratio of recommendations clicked by users.

2.3.2 Offline evaluation

A total of 59 publications applied offline evaluations to analyze the recommendation algorithms based on the prepared offline datasets. Offline evaluations typically measure the accuracy of recommendation methods based on the ground truth, normally obtained from the information provided by the database, or obtained by manual tests.

To measure accuracy, precision at position n (P@n) is often used to express how many items of the ground truth are recommended within the top n recommendations. Other decision-support metrics, including Recall and F-measure, were also commonly used, often together with Precision as a reference. To evaluate ranking quality, rank-aware evaluation metrics such as mean reciprocal rank (MRR) and normalized discounted cumulative gain (nDCG) were also widely used to test whether highly relevant items were ranked at the top of a recommendation list. The different evaluation metrics used are illustrated in Fig.  4 .
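
To make these metrics concrete, the following minimal sketch (with hypothetical recommendation and ground-truth lists) shows one common way P@n, reciprocal rank, and binary-relevance nDCG are computed for a single recommendation list; implementations in the reviewed papers may differ in detail, and MRR averages the reciprocal rank over all test users.

```python
import math

def precision_at_n(recommended, relevant, n):
    """Fraction of the top-n recommendations that appear in the ground truth."""
    return sum(1 for item in recommended[:n] if item in relevant) / n

def reciprocal_rank(recommended, relevant):
    """1 / rank of the first relevant item (0 if none is recommended)."""
    for rank, item in enumerate(recommended, start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0

def ndcg_at_n(recommended, relevant, n):
    """Binary-relevance nDCG: discounts relevant items ranked lower in the list."""
    dcg = sum(1.0 / math.log2(rank + 1)
              for rank, item in enumerate(recommended[:n], start=1)
              if item in relevant)
    ideal_hits = min(len(relevant), n)
    idcg = sum(1.0 / math.log2(rank + 1) for rank in range(1, ideal_hits + 1))
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical example: papers p2 and p5 are in the ground truth.
recommended = ["p1", "p2", "p3", "p4", "p5"]
relevant = {"p2", "p5"}
print(precision_at_n(recommended, relevant, 5))  # 0.4
print(reciprocal_rank(recommended, relevant))    # 0.5
print(ndcg_at_n(recommended, relevant, 5))       # ~0.62
```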

Fig. 4 Distribution of evaluation metrics used in literature recommendation

3 Collaborator recommendation

Research in many areas has expanded beyond its own field into other research fields in the form of collaborative research. Collaboration is essential in academia for obtaining strong publications and grants, but identifying a suitable potential collaborator is challenging. Hence, a recommendation system for collaborators would be very helpful, and many publications on recommending collaborators are available.

A total of 59 publications were identified using databases to develop, test and evaluate recommender systems. In 20 publications, the authors constructed their own datasets based on manually collected information, unique social platforms, or paid databases that are rarely used. In 39 out of the 59 publications, the authors used open-source databases. Of these 39 publications, 17 used data from the DBLP library to evaluate the developed collaborator recommendation systems.

The datasets needed for developing collaborator recommendations usually include two major types of content: (1) contexts and keywords based on researchers' information; and (2) information networks based on academic relationships. Owing to the rapid development of online libraries and academic social networks, extracting such information networks has become feasible. These datasets gather relevant information from different online sources to (i) construct profiles for researchers, (ii) retrieve keywords for structuring specific domains and concepts, and (iii) extract weighted co-author graphs. In addition, data mining and social network analysis tools may also be used for clustering analysis and for identifying representatives of expert communities. The sources of the datasets used in the 59 publications are listed in Table  5 .

Among the reviewed studies, most researchers extracted information from these databases to construct training and evaluation datasets for their recommendations.

The DBLP dataset was used in 17 publications to evaluate the performance of collaborator recommendation approaches. The DBLP computer science bibliography provides an open bibliographic list of information on major computer science fields and is widely used to construct co-authorship networks. In the co-authorship network graphs built from the DBLP bibliography, the nodes represent computer scientists and the edges represent co-authorship relationships.

ScholarMate, a social research management tool launched in 2007, was used in 4 publications. It has more than 70,000 research groups created by researchers for their own projects, collaboration, and communication. As a platform for presenting research outputs, ScholarMate automatically collects scholarly information about researchers' output from multiple online resources. These resources include multiple online databases such as Scopus, one of the largest abstract and citation databases for peer-reviewed literature, including scientific journals, books, and conference proceedings. ScholarMate uses the aggregated data to provide researchers with recommendations on relevant opportunities based on their profiles.

3.2 Methods

Similar to other scholarly recommendation areas, research on methodologies to develop collaborator recommendations can be classified into the following categories: CBF, CF, and hybrid approaches. In this section, we introduce the approaches that are widely used in each recommendation class. In addition, we provide an overview of the most important aspects and techniques used in these fields.

3.2.1 Content-based filtering (CBF)

23 publications presented CBF methods for collaborator recommendation. CBF focuses on the semantic similarity between researchers' personal features, such as their personal profiles, professional fields, and research interests. Natural language processing (NLP) techniques were used to extract keywords from the associated documents to define researchers' professional fields and interests. A summary of publications on collaborator recommendation using CBF approaches is presented in Table  6 .

The Vector Space Model (VSM) is widely used in content-based recommendation methodologies. Queries and documents are expressed as vectors in a multidimensional space, and these vectors are used to calculate relevance or similarity. Yukawa et al. [ 84 ] proposed an expert recommendation system employing an extended vector space model that calculates document vectors for every target document for authors or organizations. It provides a list ordered by the relevance between academic topics and researchers.
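
As an illustration of the vector space model, the sketch below (assuming scikit-learn is available; the researcher documents and query are hypothetical) represents documents as TF-IDF vectors and ranks them by cosine similarity against a query topic.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical researcher documents (e.g., concatenated titles/abstracts).
documents = [
    "graph neural networks for citation recommendation",
    "deep learning for protein structure prediction",
    "random walk models on co-authorship networks",
]
query = "recommendation systems based on citation networks"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)   # one TF-IDF vector per document
query_vector = vectorizer.transform([query])        # query mapped into the same space

# Cosine similarity between the query and every researcher document.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.3f}  {documents[idx]}")
```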

Topic clustering models using VSM have been widely used to profile fields of researchers using a list of keywords with a weighting schema. Using a keyword weighting model, Afzal and Maurer [ 85 ] implemented an automated approach for measuring expertise profiles in academia that incorporates multiple metrics for measuring the overall expertise level. Gollapalli et al. [ 86 ] proposed a scholarly content-based recommendation system by computing the similarity between researchers based on their personal profiles extracted from their publications and academic homepages.

Topic-based models have also been widely applied for document processing. The topic-based model introduces a topic layer between the researchers and extracted documents. For example, in a popular topic modeling approach, based on the latent Dirichlet allocation (LDA) method, each document is considered as a mixture of topics and each word in a document is considered randomly drawn from the document’s topics. Yang et al. [ 87 ] proposed a complementary collaborator recommendation approach to retrieve experts for research collaboration using an enhanced heuristic greedy algorithm with symmetric Kullback–Leibler divergence based on a probabilistic topic model. Kong et al. [ 88 ] applied a collaborator recommendation system by generating a recommendation list based on scholar vectors learned from researchers’ research interests extracted from documents based on topic modeling.
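A minimal sketch of how researcher interests might be profiled with LDA is shown below, using scikit-learn on a small hypothetical corpus; the reviewed papers use a variety of LDA implementations, corpora, and topic counts.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical corpus: one "document" per researcher (concatenated abstracts).
corpus = [
    "recommender systems collaborative filtering matrix factorization",
    "topic models latent dirichlet allocation text mining",
    "social network analysis link prediction random walk",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus)

# Fit a small LDA model; each researcher is then a mixture over the learned topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_mix = lda.fit_transform(X)   # shape: (n_researchers, n_topics), rows sum to ~1

# Researchers with similar topic mixtures can be recommended as potential collaborators.
print(topic_mix)
```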

As mentioned previously in the literature recommendation section, content-based methods usually suffer from a high computation cost because of the large number of analyzed documents and the size of the vector space. To minimize this cost and maximize preference, Kong et al. [ 100 ] presented a scholarly collaborator recommendation method based on matching theory, which adopts multiple indicators extracted from associated documents to integrate the preference matrix among researchers. Some researchers have also modified weighted features and hybrid topic extraction methods with other factors to obtain higher accuracy. For example, Sun et al. [ 92 ] designed a career age-aware academic collaborator recommendation model consisting of authorship extraction from digital libraries, topic extraction based on published abstracts, and a career age-aware random walk for measuring scholar similarity.

3.2.2 Collaborative filtering

Six publications presented a methodology based purely on collaborative filtering. Traditional CF-based recommendations aim to find the nearest neighbors in a social context similar to that of the targeted user, selecting the nearest neighbors based on the users' rating similarities. When users rate a set of items in a manner similar to that of a target user, the recommendation system defines these nearest neighbors as groups with similar interests and recommends items that are favored by these groups but not yet discovered by the target user. To apply this method to collaborator recommendation, the system recommends persons who have worked with a target author's colleagues but not with the target author. Analogously, the system considers each author as an item to be rated and scholarly activities, such as writing a paper together, as rating activities, following the methodology of traditional CF-based recommendations. Researchers' publication activities are transformed into rating actions, and the frequency of co-authored papers is considered a rating value. Using this criterion, a graph based on a scholarly social network can be built. A summary of the collaborator recommendation papers using CF approaches is presented in Table  7 .
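
The sketch below illustrates this transformation on a small hypothetical publication list: co-authorship counts play the role of ratings, and simple user-user cosine similarity identifies "neighbors" whose collaborators can then be recommended. The author names and papers are invented for illustration.

```python
from itertools import combinations
import numpy as np

# Hypothetical publication records: each paper is a list of its authors.
papers = [
    ["alice", "bob"],
    ["alice", "bob", "carol"],
    ["bob", "dave"],
    ["carol", "dave", "eve"],
]

# "Rating" matrix: how often each pair of researchers has co-authored.
authors = sorted({a for p in papers for a in p})
index = {a: i for i, a in enumerate(authors)}
ratings = np.zeros((len(authors), len(authors)))
for paper in papers:
    for a, b in combinations(paper, 2):
        ratings[index[a], index[b]] += 1
        ratings[index[b], index[a]] += 1

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

# Recommend for "alice": collaborators of her most similar neighbor
# who have not yet collaborated with her.
target = index["alice"]
similarities = [(cosine(ratings[target], ratings[i]), i)
                for i in range(len(authors)) if i != target]
best_sim, neighbor = max(similarities)
candidates = [authors[j] for j in np.nonzero(ratings[neighbor])[0]
              if ratings[target, j] == 0 and j != target]
print(f"nearest neighbor: {authors[neighbor]}, candidates: {candidates}")
```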

Based on this co-authorship network transformed from researchers’ publication activities, several methods for link prediction and edge weighting have been utilized. Benchettara et al. [ 108 ] solved the problem of link prediction in co-authoring networks by using a topological dyadic supervised machine learning approach. Koh and Dobbie [ 110 ] proposed an academic collaborator recommendation approach that uses a co-authorship network with a weighted association rule approach using a weighting mechanism called sociability. Recommendation approaches based on this co-authorship network transformed from publication activities, where all nodes have the same functions, are called homogeneous network-based recommendation approaches.

The random walk model, which can define and measure the confidence of a recommendation, is popular in co-authorship network-based collaborator recommendations. Tong et al. [ 113 ] proposed Random Walk with Restart (RWR), a well-known random walk model that provides a good way to measure how closely related two nodes are in a graph. Applications and improvements based on the RWR model are widely used for link prediction in co-authorship networks. Li et al. [ 109 ] proposed a collaboration recommendation approach based on a random walk model using three academic metrics derived from co-authorship relationships in a scholarly social network. Yang et al. [ 112 ] combined the RWR model with the PageRank method to propose a nearest-neighbor-based random walk algorithm for recommending collaborators.
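
A minimal sketch of the RWR idea on a co-authorship adjacency matrix is given below (the graph is hypothetical; real systems weight the edges with the academic factors discussed in the cited papers). Starting from a target researcher, the walk repeatedly moves to neighbors or restarts at the source, and the stationary probabilities serve as relatedness scores.

```python
import numpy as np

# Hypothetical symmetric co-authorship adjacency matrix
# (entry [i, j]: number of co-authored papers between researchers i and j).
A = np.array([
    [0, 2, 1, 0],
    [2, 0, 1, 1],
    [1, 1, 0, 3],
    [0, 1, 3, 0],
], dtype=float)

# Column-normalize so each column is a transition distribution.
P = A / A.sum(axis=0, keepdims=True)

def random_walk_with_restart(P, source, restart=0.15, tol=1e-10):
    """Iterate r = (1 - c) * P @ r + c * e_source until convergence."""
    n = P.shape[0]
    e = np.zeros(n)
    e[source] = 1.0
    r = e.copy()
    while True:
        r_next = (1 - restart) * P @ r + restart * e
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

scores = random_walk_with_restart(P, source=0)
# Higher score = more closely related to researcher 0; existing co-authors can be filtered out.
print(scores)
```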

Compared with content-based recommendation approaches, which involve only the published profiles of researchers without considering scholarly social networks, homogeneous network-based approaches apply CF methods based on social network technology to recommend collaborators. Lee et al. [ 111 ] compared academic social network (ASN)-based collaborator recommendation with metadata-based and hybrid recommendation methodologies, and suggested the ASN-based approach as the best method. However, homogeneous network-based collaboration recommendations do not consider the contextual features of researchers. As a combination of these two methods, a hybrid collaboration recommendation system based on a heterogeneous network is popular in current collaboration recommendation approaches and applications.

3.2.3 Hybrid

The previously introduced recommendation approaches may also be combined into hybrid approaches; 37 of the reviewed papers applied approaches with hybrid characteristics. In particular, heterogeneous network-based recommendations overcome the limitations of homogeneous network-based methods noted above. Table  8 summarizes all the collected collaborator recommendation papers that use hybrid approaches.

Heterogeneous networks are networks in which two or more node classes are categorized by their functions. Based on the co-authorship network used in most homogeneous network-based approaches, heterogeneous network-based approaches incorporate more information into the network, such as the profiles of researchers, the results of topic modeling or clustering, and the citation relationship between researchers and their published papers. Xia et al. [ 52 ] presented MVCWalker, an innovative method based on RWR for recommending collaborators to academic researchers. Based on academic social networks, other factors such as co-author order, latest collaboration time, and times of collaboration were used to define link importance. Kong et al. [ 114 ] proposed a collaboration recommendation model that combines the features extracted from researchers’ publications using a topic clustering model and a scholar collaboration network using the RWR model to improve the recommendation quality. Kong et al. [ 115 ] proposed a collaboration recommendation model that considers scholars’ dynamic research interests and collaborators’ academic levels. By using the LDA model for topic clustering and fitting the dynamic transformation of interest, they combined the similarity and weighting factors in a co-authorship network to recommend collaborators with high prevalence. Xu et al. [ 116 ] designed a recommendation system to provide serendipitous scholarly collaborators that could learn the serendipity-biased vector representation of each node in the co-authorship network.

4 Venue recommendation

In this section, we describe recommendation systems that can help researchers identify scientific research publishing opportunities. Recently, there has been an exponential increase in the number of journals and conferences to which researchers can submit their research. Recommendation systems can alleviate some of the cognitive burden that arises when choosing the right conference or journal for publishing a work. In the following sections, we describe academic venue recommendation systems for conferences and journals.

4.1 Conference recommendation

The dramatic rise in the number of conferences and journals has made it nearly impossible for researchers to keep track of academic conferences. While there is an argument to be made that researchers are familiar with the top conferences in their field, publishing in those conferences is also becoming increasingly difficult due to the growing number of submissions. A conference recommendation system is helpful in reducing the time and effort required to find a conference that meets the needs of a given researcher. Conference recommendation is thus a well-studied problem in the domain of data analysis, with many studies conducted using a variety of methods such as citation analysis, social networks, and contextual information.

All reviewed publications used databases to test their methodology. Two publications chose to construct a custom dataset based on the manual collection of information, and one publication used a rarely used paid dataset. The remaining 20 studies used published open-source databases to create the datasets used in their testing and evaluation environments. Table  9 provides a summary of the frequencies with which published open-source databases were used.

DBLP was the most used database with 12 occurrences, followed by the ACM Digital Library and WikiCFP, both with 5 occurrences. The other databases utilized in conference recommendation systems are Microsoft Academic Search, the CORE Conference Portal, Epinions, the IEEE Digital Library, and SciGraph.

Microsoft Academic Search hosts over 27 million publications from over 16 million authors and is primarily used to extract metadata on authors, their publications, and their co-authors. The CORE Conference Portal provides rankings for conferences, primarily in Computer Science and related disciplines, along with metadata on conference publishers. Epinions is a general review website, founded in 1999, that was utilized to create networks of 'trusted' users. The IEEE Digital Library is a database used to access journal articles, conference proceedings, and other publications in computer science, electrical engineering, and electronics. SciGraph is a knowledge graph aggregating metadata from publications in Springer Nature and other sources. WikiCFP is a website that collates and publishes calls for papers.

4.1.2 Methods

There are three main subtypes of conference recommendation systems: content-based, collaborative, and hybrid systems. The following section provides an overview of the most popular methods used by each subtype.

Content-based filtering (CBF)

Only 1 of the 23 publications in conference recommendations utilized pure CBF. Using data from Microsoft Academic Search, Medvet et al. [ 146 ] created three disparate CBF systems seeking to reduce the input data required for accurate recommendations: (a) utilizing Cavnar-Trenkle text classification, (b) utilizing two-step latent Dirichlet allocation (LDA), and (c) utilizing LDA alongside topic clustering.

Cavnar-Trenkle classification is an n-gram-based text classification method. Given a set of conferences \(C = \{c_1, c_2, c_3, \ldots \}\), the method defines for each conference \(c \in C\) the set of papers \(P = \{p_1, p_2, p_3, \ldots \}\) published in \(c\). It then creates an n-gram profile for each conference \(c \in C\), using the n-grams generated from each paper \(p \in P\) in that conference. Finally, it computes the distance between the n-gram profile of each conference \(c \in C\) and that of a publication of interest \(p_i\), and recommends the conferences whose profiles are closest to \(p_i\).
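
A rough sketch of this idea is shown below, using character n-gram profiles and a simple out-of-place rank distance on hypothetical toy texts; Medvet et al.'s actual implementation and parameter choices may differ.

```python
from collections import Counter

def ngram_profile(text, n=3, top_k=300):
    """Ranked list of the most frequent character n-grams in a text."""
    text = text.lower()
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    return [g for g, _ in grams.most_common(top_k)]

def out_of_place_distance(profile_a, profile_b):
    """Cavnar-Trenkle style distance: sum of rank displacements of shared n-grams."""
    ranks_b = {g: r for r, g in enumerate(profile_b)}
    max_penalty = len(profile_b)
    return sum(abs(r - ranks_b.get(g, max_penalty)) for r, g in enumerate(profile_a))

# Hypothetical conference profiles built from their accepted papers, and a new paper.
conference_texts = {
    "RecSysConf": "collaborative filtering recommender systems user preference modeling",
    "VisionConf": "convolutional networks image segmentation object detection benchmarks",
}
paper = "matrix factorization for recommender systems with implicit user feedback"

paper_profile = ngram_profile(paper)
distances = {c: out_of_place_distance(paper_profile, ngram_profile(t))
             for c, t in conference_texts.items()}
print(sorted(distances.items(), key=lambda kv: kv[1]))  # smallest distance first
```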

Collaborative filtering

Of the 23 collected publications, 18 employed collaborative filtering strategies. The most popular approach was based on generating and analyzing a variety of networks built from different types of metadata, including citations, co-authorship, references, and social proximity.

Asabere and Acakpovi [ 147 , 148 ] generated a user-based social context aware filter with breadth-first search (BFS) and depth-first search (DFS) on a knowledge graph created by computing the Social Ties between users, and added geographical, computing, social, and time contexts. Social Ties were generated by computing the network centrality based on the number of links between users and presenters at a given conference.

Other types of network-based collaborative filters include a co-author-based network that assigns weights with regard to venues where one’s collaborators have published previously [ 149 , 150 ], a broader metadata-based network that utilizes one or more distinct characteristics to assign weights to conferences (i.e., citations, co-authors, co-activity, co-interests, colleagues, interests, location, references, etc.) [ 146 , 151 , 152 , 153 , 154 ], and RWR-based methods [ 155 , 156 ].

Kucuktunc et al. [ 155 ] extended the traditional RWR model by adding a directionality parameter \((\kappa )\), which is used to chronologically calibrate the recommendations as either recent or traditional. The list of publications that used CF for conference recommendations is presented in Table  10 .

Hybrid

A total of 6 of the 23 publications used hybrid filtering strategies. The most common hybrid strategy is to amalgamate standard topic-based content filtering with network-based collaborative filters. Table  11 summarizes the publications that used hybrid filtering methods for conference recommendations.

4.2 Journal recommendation

As of April 14, 2020, the Master Journal List of the Web of Science Group contained 24,748 peer-reviewed journals from different publishing houses. Authors may face difficulties in finding suitable journals for their manuscripts; in many cases, a manuscript submitted to a journal is rejected because it is not within the journal's scope. Finding a suitable journal is thus a crucial step in publishing an article. A journal recommendation system may reduce the burden on authors by selecting appropriate journals for publication, as well as reduce the burden on editors of rejecting manuscripts that do not align with the scope of their journals. Many publishing companies have their own journal finders that can help authors find suitable journals for their manuscripts.

In this section, we review all available journal recommendation systems by analyzing the methods used and their journal coverage. We found a total of ten journal recommendation systems, but only four papers describing the details of their recommendation procedures. A detailed list of journal recommenders with their methods and datasets is provided in Table  12 . Most journal recommenders were developed for different publishing houses, and most contain journals from multiple domains, except eTBLAST, Jane, and SJFinder, whose journals are from the biomedical and life science domains.

TF-IDF, kNN, and BM25 were used to find similar journals based on the provided keywords. Kang et al. [ 172 ] used a classification model (based on kNN and SVM) to identify suitable journals. Errami et al. [ 169 ] used the similarity between the provided keywords and journal keywords.
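
For illustration, the sketch below implements a bare-bones BM25 scorer over hypothetical journal scope descriptions; production journal finders combine such keyword matching with additional signals and much larger corpora.

```python
import math
from collections import Counter

# Hypothetical journal scope descriptions.
journals = {
    "J. Biomedical Informatics": "clinical data mining electronic health records text mining",
    "J. Machine Learning": "statistical learning theory neural networks optimization",
    "Scientometrics": "bibliometrics citation analysis research evaluation science mapping",
}

docs = {name: text.lower().split() for name, text in journals.items()}
N = len(docs)
avgdl = sum(len(tokens) for tokens in docs.values()) / N
df = Counter(term for tokens in docs.values() for term in set(tokens))

def bm25_score(query_terms, tokens, k1=1.5, b=0.75):
    """Standard Okapi BM25 score of one document for a bag of query terms."""
    tf = Counter(tokens)
    score = 0.0
    for term in query_terms:
        if term not in tf:
            continue
        idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
        denom = tf[term] + k1 * (1 - b + b * len(tokens) / avgdl)
        score += idf * tf[term] * (k1 + 1) / denom
    return score

query = "citation analysis and bibliometrics".lower().split()
ranked = sorted(docs, key=lambda j: bm25_score(query, docs[j]), reverse=True)
print(ranked[0])  # expected: "Scientometrics"
```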

Rollins et al. [ 39 ] evaluated a journal recommender by using feedback from real users. Kang et al. [ 172 ] evaluated a system based on previously published articles. If the top three or top ten recommended journals contained the journal in which the input paper was published, then this would be counted as a correct recommendation; otherwise, it would be counted as a false recommendation. Similarly, eTBLAST [ 169 ] and Jane [ 170 ] were evaluated using previously published articles.
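
This evaluation protocol amounts to a top-k hit rate over previously published articles; a minimal sketch with hypothetical data is shown below.

```python
def top_k_accuracy(recommendations, published_in, k):
    """Fraction of test papers whose true publication venue appears in the top-k list."""
    hits = sum(1 for paper, ranked in recommendations.items()
               if published_in[paper] in ranked[:k])
    return hits / len(recommendations)

# Hypothetical ranked journal lists and the journals the papers actually appeared in.
recommendations = {
    "paper1": ["J. A", "J. B", "J. C"],
    "paper2": ["J. D", "J. A", "J. E"],
    "paper3": ["J. F", "J. G", "J. H"],
}
published_in = {"paper1": "J. B", "paper2": "J. E", "paper3": "J. Z"}

print(top_k_accuracy(recommendations, published_in, k=3))  # 2/3 ≈ 0.67
```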

Deep learning-based recommenders generally perform better than traditional matching-based NLP or machine learning algorithms; however, none of the existing journal recommendation systems uses deep learning algorithms. One future goal may therefore be the implementation of different deep learning algorithms. In addition to these publishing houses, developing journal recommenders for other publication repositories (DBLP, arXiv, etc.) may be another future task.

5 Reviewer recommendation

In this section, we describe paper, journal, and grant reviewer recommendation systems that are available in the literature. With the rapid increase in publishable research material, the pressure to find reviewers is overwhelming for conference organizers and journal editors; similarly, program directors are overwhelmed when seeking appropriate reviewers for grants.

In the case of conferences, authors normally select research fields during submission. The organizing committee of a conference typically has a set of reviewers assigned to the same set of fields, and reviewers are assigned papers based on the matching of fields. However, research fields are broad and may not exactly match a reviewer's expertise. In the case of journals, authors may need to suggest reviewers, or editors need to find reviewers for the manuscript. For grant proposals, program directors are responsible for finding suitable reviewers.

The problem of finding reviewers can be addressed by a reviewer recommendation system, which recommends reviewers based on the similarity of contents or past experience. The reviewer recommendation problem is also known as the reviewer assignment problem, so we searched for publications related to both reviewer recommendation and reviewer assignment.

A total of 67 reviewed publications were retrieved using Google searches, and 36 publications were included in the final analysis after title, abstract, and full-text screening. Among these 36 publications, 23 conducted experiments to supplement the theoretical contents, and the sources of the datasets used are listed in Table  13 .

5.2 Methods

Broadly, there are three major categories of techniques: the first is based on information retrieval (IR); the second is based on optimization, where the recommendation is viewed as an enhanced version of the generalized assignment problem (GAP); and the third comprises hybrid techniques that fall between the first two categories.

5.2.1 Information retrieval (IR)-based

IR-based studies generally focus on calculating matching degrees between reviewers and submissions.

Hettich and Pazzani [ 178 ] discussed a prototype application in the U.S. National Science Foundation (NSF) to assist program directors in identifying reviewers for proposals, named Revaid, which uses TF-IDF vectors for calculating proposal topics and reviewer expertise, and defined a measure called the Sum of Residual Term Weight (SRTW) for the assignment of reviewers. Yang et al. [ 179 ] constructed a knowledge base of expert domains extracted from the web and used a probability model for domain classification to compute the relatedness between experts and proposals for ranking expertise. Ferilli et al. [ 180 ] used Latent Semantic Indexing (LSI) to extract the paper topic and expertise of reviewers from publications available online, followed by Global Review Assignment Processing Engine (GRAPE), a rule-based expert system for the actual assignment of reviewers.

Serdyukov et al. [ 181 ] formulated the search for an expert as an absorbing random walk on a document-candidate graph. Recommendations were made for reviewer candidate nodes with high probabilities after an infinite number of transitions in the graph, under the assumption that expertise is proportional to this probability. Yunhong et al. [ 182 ] used LDA for proposal and expertise topic extraction, and defined a weighted sum of varied index scores for ranking reviewers for each proposal. Peng et al. [ 183 ] built a time-aware personal profile for each reviewer using LDA to represent the reviewer's expertise; a weighted average of the matching degrees from topic vectors and TF-IDF between reviewers and submitted papers was then used for recommendation. Medakene et al. [ 184 ] used pedagogical expertise in addition to the research expertise of the reviewers with LDA in building reviewers' profiles and used a weighted sum of the topic similarity and the reference similarity for assigning reviewers to papers. Rosen-Zvi et al. [ 185 ] proposed an Author-Topic Model (ATM) that extends LDA to include authorship information. Later, Jin et al. [ 186 ] proposed an Author-Subject-Topic (AST) model, with the addition of a 'subject' layer that supervises the generation of hierarchical topics and the sharing of subjects among authors for reviewer recommendations. Alkazemi [ 187 ] developed PRATO (Proposals Reviewers Automated Taxonomy-based Organization), which first sorted proposals and reviewers into categorized tracks as defined by a tree of hierarchical research domains, and then assigned the reviewers based on the matching of tracks using Jaccard similarity scores. Cagliero et al. [ 188 ] proposed an association rule-based methodology (Weighted Association Rules, WAR) to recommend additional external reviewers.
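
The Jaccard-based track matching mentioned for PRATO can be illustrated in a few lines; the proposal and reviewer keyword sets below are hypothetical, and the real system operates on a taxonomy of research domains rather than flat keyword sets.

```python
def jaccard(a, b):
    """Jaccard similarity between two keyword sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical proposal and reviewer track keywords.
proposal_tracks = {"machine learning", "natural language processing"}
reviewers = {
    "r1": {"machine learning", "computer vision"},
    "r2": {"natural language processing", "machine learning", "information retrieval"},
}
ranked = sorted(reviewers, key=lambda r: jaccard(proposal_tracks, reviewers[r]), reverse=True)
print(ranked)  # r2 first: higher overlap with the proposal's tracks
```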

Ishag et al. [ 189 ] modeled the citation data of published papers as a heterogeneous academic network, integrating authors' h-index and papers' citation counts, proposed a quantification to account for author diversity, and formulated two types of target patterns, namely researcher-general topic patterns and researcher-specific topic patterns, for searching for reviewers.

Recently, deep learning techniques have been incorporated into feature representation. Zhao et al. [ 190 ] used word embeddings to represent the contents of both papers and reviewers. The Word Mover's Distance (WMD) method was then used to measure the minimum distances between paper and reviewer vectors, and the Constructive Covering Algorithm (CCA) was used to classify reviewer labels for recommending reviewers. Anjum et al. [ 191 ] proposed a common topic model (PaRe) that jointly models the topics of a submission and a reviewer profile based on word embeddings. Zhang et al. [ 192 ] proposed a two-level bidirectional gated recurrent unit with an attention mechanism (Hiepar-MLC) to represent the semantic information of reviewers and papers, and used a simple multilabel-based reviewer assignment strategy (MLBRA) to match the most similar multilabeled reviewer to a particular multilabeled paper.
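
As a simplified stand-in for this embedding-based matching (averaged word vectors with cosine similarity rather than the full Word Mover's Distance), the sketch below compares a paper and a reviewer profile; the embedding table is hypothetical and would in practice be loaded from a pre-trained model such as word2vec or GloVe.

```python
import numpy as np

# Hypothetical pre-trained word embeddings (tiny, for illustration only).
embeddings = {
    "reviewer":  np.array([0.1, 0.9, 0.0]),
    "expertise": np.array([0.2, 0.8, 0.1]),
    "graph":     np.array([0.9, 0.1, 0.3]),
    "neural":    np.array([0.8, 0.2, 0.4]),
}

def document_vector(tokens):
    """Average the embeddings of in-vocabulary tokens."""
    vectors = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vectors, axis=0) if vectors else np.zeros(3)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

paper = document_vector("graph neural networks".split())
reviewer_profile = document_vector("reviewer expertise graph".split())
print(cosine(paper, reviewer_profile))  # higher = better content match
```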

Co-authorship and reviewer preferences have also been incorporated into collaborative filtering applications. Li and Watanabe [ 193 ] designed a scale-free network combining a preference-based and a topic-based approach, considering both reviewer preferences and the relevance between reviewers and submitted papers to measure the final matching degrees. Xu and Du [ 194 ] designed a three-layer network that combines a social network, semantic concept analysis, and citation analysis, and proposed a particle swarm algorithm to recommend reviewers for submissions. Maleszka et al. [ 195 ] used a modular approach to determine a grouping of reviewers, consisting of a keyword-based module, a social graph module, and a linguistic module. A summary of all IR-based reviewer recommendations can be found in Table  14 .

5.2.2 Optimization-based

Optimization-based reviewer recommendations focus more on theory, modeling the assignment as an optimization problem under multiple constraints such as reviewer workload, authority, diversity, and conflict of interest (COI).

Sun et al. [ 196 ] proposed a hybrid of knowledge and decision models to solve the proposal-reviewer assignment problem under constraints. Kolasa and Krol [ 197 ] compared artificial intelligence methods for reviewer-paper assignment problems, namely genetic algorithms (GA), ant colony optimization (ACO), tabu search (TS), and hybrid ACO-GA and GA-TS, in terms of time efficiency and accuracy. Chen et al. [ 198 ] employed a two-stage genetic algorithm to solve the project-reviewer assignment problem: in the first stage, reviewers were assigned taking their respective preferences into consideration; in the second stage, review venues were arranged to minimize the number of venue changes for reviewers.

Das and Gocken [ 199 ] used fuzzy linear programming to solve the reviewer assignment problem by maximizing the matching degree between expert sets and grouped proposals, under crisp constraints. Tayal et al. [ 200 ] used type-2 fuzzy sets to represent reviewers’ expertise in different domains, and proposed using the fuzzy equality operator to calculate equality between the set representing the expertise levels of a reviewer and the set representing the keywords of a submitted proposal, and optimized the assignment under various constraints.

Wang et al. [ 201 ] formulated the problem as a multiobjective mixed integer programming model that considers the Direct Matching Score (DMS) between manuscripts and reviewers, Manuscript Diversity (MD), and Reviewer Diversity (RD), and proposed a two-phased stochastic-biased greedy algorithm (TPGA) to solve it. Long et al. [ 202 ] studied the paper-reviewer assignment problem from the perspective of goodness and fairness, proposing to maximize topic coverage while avoiding conflicts of interest (COI) as the optimization objectives. They also designed an approximation method with a 1/3 approximation guarantee.

Kou et al. [ 203 ] modeled reviewers’ published papers as a set of topics and performed weighted-coverage group-based assignments of reviewers to papers. They also proposed a greedy algorithm that achieves a 1/2 approximation ratio compared with the exact solution. Kou et al. [ 204 ] developed a system that automatically extracts the profiles of reviewers and submissions in the form of topic vectors using the author-topic model (ATM) and assigns reviewers to papers based on the weighted coverage of paper topics.
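
The greedy weighted-coverage idea can be sketched as follows (the topic weights and reviewer topic sets are hypothetical; the published algorithm additionally handles reviewer workload and group constraints). Each step picks the reviewer that adds the most uncovered topic weight for a paper.

```python
def greedy_reviewer_group(paper_topics, reviewer_topics, group_size):
    """Greedily pick reviewers that add the most uncovered topic weight for a paper."""
    chosen, covered = [], set()
    for _ in range(group_size):
        best, best_gain = None, 0.0
        for reviewer, topics in reviewer_topics.items():
            if reviewer in chosen:
                continue
            gain = sum(paper_topics[t] for t in topics
                       if t in paper_topics and t not in covered)
            if gain > best_gain:
                best, best_gain = reviewer, gain
        if best is None:
            break
        chosen.append(best)
        covered |= reviewer_topics[best]
    return chosen

# Hypothetical topic weights for one submission and topic sets for candidate reviewers.
paper_topics = {"nlp": 0.5, "recsys": 0.3, "graphs": 0.2}
reviewer_topics = {
    "r1": {"nlp"},
    "r2": {"recsys", "graphs"},
    "r3": {"nlp", "recsys"},
}
print(greedy_reviewer_group(paper_topics, reviewer_topics, group_size=2))  # ['r3', 'r2']
```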

Stelmakh et al. [ 205 ] designed an algorithm, PeerReview4All, which is based on an incremental max-flow procedure to maximize the review quality of the most disadvantaged papers (fairness objective) and to ensure the correct recovery of the papers that should be accepted (accuracy objective). Yesilcimen and Yildirim [ 206 ] proposed an alternative mixed integer programming formulation for the reviewer assignment problem whose size grows polynomially as a function of the input size. A summary of all the optimization-based reviewer recommendation papers is presented in Table  15 .

5.2.3 Hybrid

Finally, other studies use hybrids of both types of methods. Conry et al. [ 207 ] modeled reviewer-paper preferences using CF of ratings, latent factors, paper-to-paper content similarity, and reviewer-to-reviewer content similarity, and optimized the paper assignment under global conference constraints, transforming the assignment into a linear programming problem. Tang et al. [ 208 ] formulated expertise matching as a convex cost flow problem, turning the recommendation into an optimization problem under constraints, and also used online matching algorithms to incorporate user feedback into the system.

In one of the most popular systems for conference reviewer assignment, Charlin and Zemel [ 209 ] addressed the assignment by first using a language model and LDA to learn reviewer expertise and submission topics, followed by linear regression for initial predictions of reviewers' preferences, combined with reviewers' elicitation scores (stated disinterest or interest) in specific papers for the final recommendation, and optimized the objective functions under constraints. Liu et al. [ 210 ] constructed a graph network of reviewers and query papers using LDA to establish edge weights, and used the Random Walk with Restart (RWR) model on the graph with sparsity constraints to recommend the reviewers with the highest probabilities, incorporating aspects of expertise, authority, and diversity. Liu et al. [ 211 ] combined the heuristic knowledge of expert assignment with techniques from operations research, involving aspects such as reviewer expertise, title, and project experience; a multiobjective optimization problem was formulated to maximize the total expertise level of the recommended experts and avoid conflicts between reviewers and authors. Ogunleye et al. [ 212 ] used a mixture of TF-IDF, LSI, LDA, and word2vec to represent the semantic similarity between submissions and reviewers' publications, and then used integer linear programming to match submissions with the most appropriate reviewers. Jin et al. [ 213 ] extracted topic distributions of reviewers' publications and submissions using the Author-Topic Model (ATM) and Expectation Maximization (EM), then formulated reviewer assignment as an integer linear programming problem that takes into consideration topic relevance, the interest trend of a reviewer candidate, and the authority of candidates. A summary of the reviewer recommendation papers is presented in Table  16 .
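
To make the optimization step concrete, the sketch below solves a toy one-reviewer-per-paper assignment that maximizes the total matching score using SciPy's Hungarian-algorithm solver; the matching scores are hypothetical, and the surveyed systems solve richer (integer) programs with workload, COI, and diversity constraints.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical matching scores: rows = papers, columns = reviewers.
scores = np.array([
    [0.9, 0.2, 0.4],
    [0.3, 0.8, 0.5],
    [0.1, 0.6, 0.7],
])

# linear_sum_assignment minimizes cost, so negate the scores to maximize them.
paper_idx, reviewer_idx = linear_sum_assignment(-scores)
for p, r in zip(paper_idx, reviewer_idx):
    print(f"paper {p} -> reviewer {r} (score {scores[p, r]:.1f})")
print("total score:", scores[paper_idx, reviewer_idx].sum())
```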

6 Other scholarly recommendation

6.1 Dataset recommendation

In the Big Data era, extensive data have been generated for scientific discovery. However, storing, accessing, analyzing, and sharing vast amounts of data is becoming a major challenge and bottleneck for scientific research. Furthermore, making large amounts of public scientific data findable, accessible, interoperable, and reusable (FAIR) is challenging. Many repositories and knowledge bases have been established to facilitate data sharing. Most of these repositories are domain-specific, and none of them recommend datasets to researchers or users. Moreover, over the past two decades, there has been an exponential increase in the number of datasets added to these repositories, and researchers must visit each repository to find suitable datasets for their research. A dataset recommender would therefore be helpful to researchers: it can save time and increase the visibility of datasets.

Dataset recommenders are not yet common; dataset retrieval, however, is a popular information retrieval task, and many dataset retrieval systems exist for general as well as biomedical datasets. Google's Dataset Search (footnote 2) is a popular search engine for datasets from different domains. DataMed (footnote 3) is another dataset search engine, specific to biomedical datasets, that combines biomedical repositories and enhances query searching using advanced natural language processing (NLP) techniques [ 214 , 215 ]. DataMed indexes and provides the functionality to search diverse categories of biomedical datasets [ 215 ]. The research focus of DataMed is to retrieve datasets using a focused query. Search engines such as DataMed or Google Dataset Search are helpful when the user knows the type of dataset to search for, but determining the user intent of web searches is a difficult problem because of the sparse data available concerning the searcher [ 216 ].

A few experiments have been performed on data linking, in which similar datasets are clustered together using different semantic features. Data linking, or identifying and clustering similar datasets, has received relatively little attention in research on recommendation systems; only a few papers [ 217 , 218 , 219 ] have been published on this topic. Ellefi et al. [ 218 ] defined dataset recommendation as the problem of computing a rank score for each of a set of target datasets ( \(D_T\) ) such that the rank score indicates the relatedness of \(D_T\) to a given source dataset ( \(D_S\) ). The rank scores provide information on the likelihood of a \(D_T\) containing linking candidates for \(D_S\) . Similarly, Srivastava [ 219 ] proposed a dataset recommendation system that first creates similarity-based dataset networks and then recommends connected datasets to users for each searched dataset. This recommendation approach is difficult to implement because of the cold start problem: here, the cold start problem refers to the user's initial dataset selection, where the user has no idea which dataset to select or search for. If the user lands on an incorrect dataset, the system will keep recommending the wrong datasets to the user.

Patra et al. [ 220 , 221 ] and Zhu et al. [ 222 ] proposed a dataset recommendation system for the Gene Expression Omnibus (GEO) based on the publications of researchers. This system recommends GEO datasets using classification and similarity-based approaches. Initially, they identified the research areas from the publications of researchers using the Dirichlet Process Mixture Model (DPMM) and recommended datasets for each cluster. The classification-based approach uses several machine and deep learning algorithms, whereas the similarity-based approach uses cosine similarity between publications and datasets. This is the first study on dataset recommendations.

6.2 Grants/funding recommendation

Obtaining grants or funding for research is essential in academic settings, and grants help researchers in many ways during their careers. Finding appropriate funding opportunities is an important step in this process, and there may be multiple opportunities available that a researcher is not aware of. No universal repository of funding announcements is available worldwide; however, a few repositories are available for funding announcements in the United States, such as grants.gov, NIH, and SPIN. These websites host many funding opportunities in various areas, and multiple new opportunities appear daily, making it difficult for researchers to find suitable opportunities. A recommendation system for funding announcements would help researchers find appropriate research funding opportunities. Recently, Zhu et al. [ 223 ] developed a grant recommendation system for NIH grants based on researchers' publications. They formulated the recommendation as a classification task using Bidirectional Encoder Representations from Transformers (BERT) to capture intrinsic, nonlinear relationships between researchers' publications and grant announcements. Internal and external evaluations were performed to assess the usefulness of the system. Two publications are available on developing a search engine to find Japanese research announcements [ 224 , 225 ]. The titles of these papers suggest recommendation systems; however, the full text reveals that they describe keyword-based search engines for funding announcements in Japan, using TF-IDF and association rules.

7 Conclusion and future directions

Numerous recommendation systems have been developed since the beginning of the twenty-first century. In this comprehensive survey, we discussed all common types of scholarly recommendation systems, outlining the data resources, applied methodologies, and evaluation metrics.

Literature recommendation remains the most studied area of scholarly recommendation. With the increasing need to collaborate with other researchers and publish research results, recommenders for collaborators and reviewers are becoming popular. Compared with these popular research targets, published recommendation systems for conferences/journals, datasets, and grants are relatively less common.

To develop recommendation systems and evaluate their results, researchers commonly construct datasets using information extracted from multiple resources. Published open-source databases, such as DBLP and the ACM and IEEE Digital Libraries, are the most commonly used sources for multiple types of recommendation systems. Some web services that contain scholarly information about their users or social tags added by researchers, such as ScholarMate and CiteULike, were also used to develop recommendation systems.

Content-based filtering (CBF) is the most commonly used approach for recommendation systems. Because scholarly recommendation requires processing contextual information, measuring keywords, and identifying topics of academic resources, most recommendation systems were built on CBF. However, traditional CBF struggles to account for the popularity and ratings of items. To overcome these limitations, CF has been used, especially when recommending items based on researchers' interests and profiles. With the rapid development of recommendation systems and the need to reduce high computation costs, hybrid methods combining CBF and CF have been adopted by several recommenders to achieve better performance.

Based on the information gathered for this survey, we provide the following suggestions for developing better recommendation systems:

To improve system performance and avoid the limitations of existing methodologies, a combination of different methods, or incorporating the characteristics of one method into another, may be helpful.

Evaluating the efficiency of the recommendation system with both decision-support metrics, such as precision and recall, and rank-aware evaluation metrics, including MRR and nDCG, will make offline evaluation more applicable.

For future directions of scholarly recommendation research, we suggest that researchers apply recommendation methodologies in less-studied areas, such as dataset and grant recommendations. We believe that researchers would benefit significantly from these areas from a practical perspective.

Based on extensive research, our literature review provides a comprehensive summary of scholarly recommendation systems from various perspectives. For researchers interested in developing future recommendation systems, it can serve as an efficient overview and guide.

https://dblp.org, accessed on October 16, 2020.

https://datasetsearch.research.google.com .

https://datamed.org .

Bollacker KD, Lawrence S, Giles CL (1998) Citeseer: an autonomous web agent for automatic retrieval and identification of interesting publications. Springer, Berlin, pp 116–123


Das D, Sahoo L, Datta S (2017) A survey on recommendation system. Int J Comput Appl 7:160

Sugiyama K, Kan M-Y (2013) Exploiting potential citation papers in scholarly paper recommendation. In: Proceedings of the 13th ACM/IEEE-CS joint conference on digital libraries, pp 153–162

Petricek V, Cox IJ, Han H, Councill IG, Giles CL (2005) Modeling the author bias between two on-line computer science citation databases. In: Special interest tracks and posters of the 14th international conference on World Wide Web, pp 1062–1063

Haruna K, Akmar Ismail M, Damiasih D, Sutopo J, Herawan T (2017) A collaborative approach for research paper recommender system. PLoS ONE 12(10):0184516

Philip S, Shola P, Ovye A (2014) Application of content-based approach in research paper recommendation system for a digital library. Int J Adv Comput Sci Appl 10:5

Peis E, del Castillo JM, Delgado-López JA (2008) Semantic recommender systems. Analysis of the state of the topic. Hipertext Net 6(2008):1–5

Neethukrishnan K, Swaraj K (2017) Ontology based research paper recommendation using personal ontology similarity method. In: 2017 second international conference on electrical, computer and communication technologies (ICECCT), pp 1–4. IEEE

Hong K, Jeon H, Jeon C (2012) Userprofile-based personalized research paper recommendation system. In: 2012 8th international conference on computing and networking technology (INC, ICCIS and ICMIC), pp 134–138 . IEEE

Ghosal T, Chakraborty A, Sonam R, Ekbal A, Saha S, Bhattacharyya P (2019) Incorporating full text and bibliographic features to improve scholarly journal recommendation. In: 2019 ACM/IEEE joint conference on digital libraries (JCDL), pp 374–375 . IEEE

Lofty M, Salama A, El-Ghareeb H, El-dosuky M (2014) Subject recommendation using ontology for computer science ACM curricula. Int J Inf Sci Intell Syst 1:3

Le Anh V, Hai VH, Tran HN, Jung JJ (2014) Scirecsys: a recommendation system for scientific publication by discovering keyword relationships. In: International conference on computational collective intelligence, pp 72–82 . Springer

Maake BM, Ojo SO, Zuva T (2019) Information processing in research paper recommender system classes. In: Research data access and management in modern libraries, pp 90–118 . IGI Global

Shimbo M, Ito T, Matsumoto Y (2007) Evaluation of kernel-based link analysis measures on research paper recommendation. In: Proceedings of the 7th ACM/IEEE-CS joint conference on digital libraries, pp 354–355

Achakulvisut T, Acuna DE, Ruangrong T, Kording K (2016) Science concierge: a fast content-based recommendation system for scientific publications. PLoS ONE 11(7):0158423

Habib R, Afzal MT (2017) Paper recommendation using citation proximity in bibliographic coupling. Turkish J Electr Eng Comput Sci 25(4):2708–2718

Beel J, Langer S, Genzmehr M, Nürnberger A (2013) Introducing docear’s research paper recommender system. In: Proceedings of the 13th ACM/IEEE-CS joint conference on digital libraries, pp 459–460

Uchiyama K, Nanba H, Aizawa A, Sagara T (2011) Osusume: cross-lingual recommender system for research papers. In: Proceedings of the 2011 workshop on context-awareness in retrieval and recommendation, pp 39–42

Tang T (2006) Active, context-dependent, data-centered techniques for e-learning: a case study of a research paper recommender system. Data Min E-Learn 4:97–111

Hong K, Jeon H, Jeon C (2013) Personalized research paper recommendation system using keyword extraction based on userprofile. J Converg Inf Technol 8(16):106

Ollagnier A, Fournier S, Bellot P (2018) Biblme recsys: harnessing bibliometric measures for a scholarly paper recommender system. In: BIR 2018 Workshop on Bibliometric-enhanced Information Retrieval, pp 34–45

Strohman T, Croft WB, Jensen D (2007) Recommending citations for academic papers. In: Proceedings of the 30th annual international ACM SIGIR conference on research and development in information retrieval, pp 705–706

Liu X, Yu Y, Guo C, Sun Y, Gao L (2014) Full-text based context-rich heterogeneous network mining approach for citation recommendation. In: IEEE/ACM joint conference on digital libraries, pp 361–370 . IEEE

Manrique R, Marino O (2018) Knowledge graph-based weighting strategies for a scholarly paper recommendation scenario. In: KaRS@ RecSys, pp 5–8

Sugiyama K, Kan M-Y (2015) A comprehensive evaluation of scholarly paper recommendation using potential citation papers. Int J Digit Libr 16(2):91–109

Zhang Z, Li L (2010) A research paper recommender system based on spreading activation model. In: The 2nd international conference on information science and engineering, pp 928–931 . IEEE

Jiang Y, Jia A, Feng Y, Zhao D (2012) Recommending academic papers via users’ reading purposes. In: Proceedings of the sixth ACM conference on recommender systems, pp 241–244

Hagen M, Beyer A, Gollub T, Komlossy K, Stein B (2016) Supporting scholarly search with keyqueries. In: European conference on information retrieval, pp 507–520. Springer

Ohta M, Hachiki T, Takasu A (2011) Related paper recommendation to support online-browsing of research papers. In: Fourth international conference on the applications of digital information and web technologies (ICADIWT 2011), pp 130–136. IEEE

Pera MS, Ng Y-K (2011) A personalized recommendation system on scholarly publications. In: Proceedings of the 20th ACM international conference on information and knowledge management, pp 2133–2136

Huang W, Kataria S, Caragea C, Mitra P, Giles CL, Rokach L (2012) Recommending citations: translating papers into references. In: Proceedings of the 21st ACM international conference on information and knowledge management, pp 1910–1914

Pera MS, Ng Y-K (2014) Exploiting the wisdom of social connections to make personalized recommendations on scholarly articles. J Intell Inf Syst 42(3):371–391

Beel J, Langer S, Gipp B, Nürnberger A (2014) The architecture and datasets of docear’s research paper recommender system. D-Lib Mag 20(11/12)

Chakraborty T, Modani N, Narayanam R, Nagar S (2015) Discern: a diversified citation recommendation system for scientific queries. In: 2015 IEEE 31st international conference on data engineering, pp 555–566. IEEE

Nascimento C, Laender AH, da Silva AS, Gonçalves MA (2011) A source independent framework for research paper recommendation. In: Proceedings of the 11th annual international ACM/IEEE joint conference on digital libraries, pp 297–306

He Q, Kifer D, Pei J, Mitra P, Giles CL (2011) Citation recommendation without author supervision. In: Proceedings of the fourth ACM international conference on web search and data mining, pp 755–764

Sesagiri Raamkumar A, Foo S, Pang N (2015) Rec4lrw–scientific paper recommender system for literature review and writing. In: Proceedings of the 6th international conference on applications of digital information and web technologies, pp 106–120

Magara MB, Ojo SO, Zuva T (2018) Towards a serendipitous research paper recommender system using bisociative information networks (bisonets). In: 2018 international conference on advances in big data, computing and data communication systems (icABCD), pp 1–6. IEEE

Rollins J, McCusker M, Carlson J, Stroll J (2017) Manuscript matcher: a content and bibliometrics-based scholarly journal recommendation system. In: BIR@ ECIR, pp 18–29

De Nart D, Tasso C (2014) A personalized concept-driven recommender system for scientific libraries. Procedia Comput Sci 38:84–91

Gipp B, Beel J, Hentschel, C (2009) Scienstein: a research paper recommender system. In: Proceedings of the international conference on emerging trends in computing (ICETiC’09), pp 309–315

Alzoghbi A, Ayala VAA, Fischer PM, Lausen G (2016) Learning-to-rank in research paper cbf recommendation: leveraging irrelevant papers. In: CBRecSys@ RecSys, pp 43–46

Sugiyama K, Kan M-Y (2010) Scholarly paper recommendation via user’s recent research interests. In: Proceedings of the 10th annual joint conference on digital libraries, pp 29–38

Sugiyama K, Kan M-Y (2011) Serendipitous recommendation for scholarly papers considering relations among researchers. In: Proceedings of the 11th annual international ACM/IEEE joint conference on digital libraries, pp 307–310

Tang TY, McCalla G (2009) The pedagogical value of papers: a collaborative-filtering based paper recommender. J Dig Inf 10(2):458

Ha J, Kim S-W, Faloutsos C, Park S (2015) An analysis on information diffusion through blogcast in a blogosphere. Inf Sci 290:45–62

Keshavarz S, Honarvar AR (2015) A parallel paper recommender system in big data scholarly. In: International conference on electrical engineering and computer, pp 80–85

Pan C, Li W (2010) Research paper recommendation with topic analysis. In: 2010 International conference on computer design and applications, vol 4, pp 4–264. IEEE

Choochaiwattana W (2010) Usage of tagging for research paper recommendation. In: 2010 3rd international conference on advanced computer theory and engineering (ICACTE), vol 2, pp 2–439. IEEE

Doerfel S, Jäschke R, Hotho A, Stumme G (2012) Leveraging publication metadata and social data into folkrank for scientific publication recommendation. In: Proceedings of the 4th ACM RecSys workshop on recommender systems and the social Web, pp 9–16

Igbe T, Ojokoh B et al (2016) Incorporating user’s preferences into scholarly publications recommendation. Intell Inf Manag 8(02):27

Xia F, Chen Z, Wang W, Li J, Yang LT (2014) Mvcwalker: random walk-based most valuable collaborators recommendation exploiting academic factors. IEEE Trans Emerg Top Comput 2(3):364–375

Agarwal N, Haque E, Liu H, Parsons L (2005) Research paper recommender systems: a subspace clustering approach. In: International conference on web-age information management, pp 475–491. Springer

Farooq U, Song Y, Carroll JM, Giles CL (2007) Social bookmarking for scholarly digital libraries. IEEE Int Comput 11(6):29–35

Loh S, Lorenzi F, Granada R, Lichtnow D, Wives LK, de Oliveira JPM (2009) Identifying similar users by their scientific publications to reduce cold start in recommender systems. In: Proceedings of the fifth international conference on web information systems and technologies (WEBIST 2009), vol 9, pp 593–600

Hassan HAM (2017) Personalized research paper recommendation using deep learning. In: Proceedings of the 25th conference on user modeling, adaptation and personalization, pp 327–330

Zhou Q, Chen X, Chen C (2014) Authoritative scholarly paper recommendation based on paper communities. In: 2014 IEEE 17th international conference on computational science and engineering, pp 1536–1540. IEEE

Meng F, Gao, D, Li, W, Sun X, Hou Y (2013) A unified graph model for personalized query-oriented reference paper recommendation. In: Proceedings of the 22nd ACM international conference on information and knowledge management, pp 1509–1512

Al Alshaikh M, Uchyigit G, Evans R (2017) A research paper recommender system using a dynamic normalized tree of concepts model for user modelling. In: 2017 11th international conference on research challenges in information science (RCIS), pp 200–210. IEEE

Tang TY, McCalla G (2009) A multidimensional paper recommender: experiments and evaluations. IEEE Int Comput 13(4):34–41

Gori M, Pucci A (2006) Research paper recommender systems: a random-walk based approach. In: 2006 IEEE/WIC/ACM international conference on web intelligence (WI 2006 Main Conference Proceedings) (WI’06), pp 778–781. IEEE

Zarrinkalam F, Kahani M (2012) A multi-criteria hybrid citation recommendation system based on linked data. In: 2012 2nd international econference on computer and knowledge engineering (ICCKE), pp 283–288. IEEE

West JD, Wesley-Smith I, Bergstrom CT (2016) A recommendation system based on hierarchical clustering of an article-level citation network. IEEE Trans Big Data 2(2):113–123

Pohl S, Radlinski F, Joachims T (2007) Recommending related papers based on digital library access records. In: Proceedings of the 7th ACM/IEEE-CS joint conference on digital libraries, pp 417–418

Zhang M, Wang W, Li X (2008) A paper recommender for scientific literatures based on semantic concept similarity. In: International conference on asian digital libraries, pp 359–362. Springer

Jomsri P, Sanguansintukul S, Choochaiwattana W (2010) A framework for tag-based research paper recommender system: an ir approach. In: 2010 IEEE 24th international conference on advanced information networking and applications workshops, pp 103–108. IEEE

Magalhaes J, Souza C, Costa E, Fechine J (2015) Recommending scientific papers: Investigating the user curriculum. In: The twenty-eighth international flairs conference, pp 489–494

Xue H, Guo J, Lan Y, Cao L (2014) Personalized paper recommendation in online social scholar system. In: 2014 IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM 2014), pp 612–619. IEEE

Wu C-J, Chung J-M, Lu C-Y, Lee H-M, Ho J-M (2011) Using web-mining for academic measurement and scholar recommendation in expert finding system. In: 2011 IEEE/WIC/ACM international conferences on web intelligence and intelligent agent technology, vol 1, pp 288–291. IEEE

Liu H, Kong X, Bai X, Wang W, Bekele TM, Xia F (2015) Context-based collaborative filtering for citation recommendation. IEEE Access 3:1695–1703

Liu X-Y, Chien B-C (2017) Applying citation network analysis on recommendation of research paper collection. In: Proceedings of the 4th multidisciplinary international social networks conference, pp 1–6

Hristakeva M, Kershaw D, Rossetti M, Knoth P, Pettit B, Vargas S, Jack K (2017) Building recommender systems for scholarly information. In: Proceedings of the 1st workshop on scholarly web mining, pp 25–32

Lee J, Lee K, Kim JG (2013) Personalized academic research paper recommendation system. arXiv preprint arXiv:1304.5457

Feyer S, Siebert S, Gipp B, Aizawa A, Beel J (2017) Integration of the scientific recommender system mr. dlib into the reference manager jabref. In: European conference on information retrieval, pp 770–774. Springer

Collins A, Beel J (2019) Meta-learned per-instance algorithm selection in scholarly recommender systems. arXiv preprint arXiv:1912.08694

Watanabe S, Ito T, Ozono T, Shintani T (2005) A paper recommendation mechanism for the research support system papits. In: International workshop on data engineering issues in E-commerce, pp 71–80. IEEE

Cosley D, Lawrence S, Pennock DM (2002) Referee: an open framework for practical testing of recommender systems using researchindex. In: VLDB’02: Proceedings of the 28th international conference on very large databases, pp 35–46. Elsevier

Zhao W, Wu R, Dai W, Dai Y (2015) Research paper recommendation based on the knowledge gap. In: 2015 IEEE international conference on data mining workshop (ICDMW), pp 373–380. IEEE

Matsatsinis NF, Lakiotaki K, Delias P (2007) A system based on multiple criteria analysis for scientific paper recommendation. In: Proceedings of the 11th panhellenic conference on informatics, pp 135–149. Citeseer

Vellino A (2010) A comparison between usage-based and citation-based methods for recommending scholarly research articles. Proc Am Soc Inf Sci Technol 47(1):1–2

Huang Z, Chung W, Ong T-H, Chen H (2002) A graph-based recommender system for digital library. In: Proceedings of the 2nd ACM/IEEE-CS joint conference on digital libraries, pp 65–73

De Nart D, Ferrara F, Tasso C (2013) Personalized access to scientific publications: from recommendation to explanation. In: International conference on user modeling, adaptation, and personalization, pp 296–301. Springer

Middleton SE, De Roure DC, Shadbolt NR (2001) Capturing knowledge of user preferences: ontologies in recommender systems. In: Proceedings of the 1st international conference on knowledge capture, pp 100–107

Yukawa T, Kasahara K, Kato T, Kita T (2001) An expert recommendation system using concept-based relevance discernment. In: Proceedings 13th IEEE international conference on tools with artificial intelligence. ICTAI 2001, pp 257–264. IEEE

Afzal MT, Maurer HA (2011) Expertise recommender system for scientific community. J Univers Comput Sci 17(11):1529–1549

Gollapalli SD, Mitra P, Giles CL (2012) Similar researcher search in academic environments. In: Proceedings of the 12th ACM/IEEE-CS joint conference on digital libraries, pp 167–170

Yang C, Ma J, Liu X, Sun J, Silva T, Hua Z (2014) A weighted topic model enhanced approach for complementary collaborator recommendation. In: 18th Pacific Asia conference on information systems, PACIS 2014. Pacific Asia Conference on Information Systems

Kong X, Mao M, Liu J, Xu B, Huang R, Jin Q (2018) Tnerec: topic-aware network embedding for scientific collaborator recommendation. In: 2018 IEEE smartworld, ubiquitous intelligence and computing, advanced and trusted computing, scalable computing and communications, cloud and big data computing, internet of people and smart city innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), pp 1007–1014. IEEE

Guerrero-Sosas JD, Chicharro FPR, Serrano-Guerrero J, Menendez-Dominguez V, Castellanos-Bolaños ME (2019) A proposal for a recommender system of scientific relevance. Procedia Comput Sci 162:199–206

Porcel C, López-Herrera AG, Herrera-Viedma E (2009) A recommender system for research resources based on fuzzy linguistic modeling. Expert Syst Appl 36(3):5173–5183

Silva ATP (2014) A research analytics framework for expert recommendation in research social networks. Ph.D. thesis, City University of Hong Kong

Sun N, Lu Y, Cao Y (2019) Career age-aware scientific collaborator recommendation in scholarly big data. IEEE Access 7:136036–136045

Xu W, Lu Y, Zhao J, Qian M (2016) Complementarity: a novel collaborator recommendation method for smes. In: 2016 IEEE first international conference on data science in cyberspace (DSC), pp 520–525. IEEE

Vazhkudai SS, Harney J, Gunasekaran R, Stansberry D, Lim S-H, Barron T, Nash A, Ramanathan A (2016) Constellation: a science graph network for scalable data and knowledge discovery in extreme-scale scientific collaborations. In: 2016 IEEE international conference on big data (Big Data), pp 3052–3061. IEEE

Chen H-H, Treeratpituk P, Mitra P, Giles CL (2013) Csseer: an expert recommendation system based on citeseerx. In: Proceedings of the 13th ACM/IEEE-CS joint conference on digital libraries, pp 381–382

Chicaiza J, Piedra N, Lopez-Vargas J, Tovar-Caro E (2018) Discovery of potential collaboration networks from open knowledge sources. In: 2018 IEEE global engineering education conference (EDUCON), pp 1320–1325. IEEE

Petry H, Tedesco P, Vieira V, Salgado AC (2008) Icare. A context-sensitive expert recommendation system. In: ECAI’08, pp 53–58

Hristovski D, Kastrin A, Rindflesch TC (2016) Implementing semantics-based cross-domain collaboration recommendation in biomedicine with a graph database. DBKDA 2016:104

Araki M, Katsurai M, Ohmukai I, Takeda H (2017) Interdisciplinary collaborator recommendation based on research content similarity. IEICE Trans Inf Syst 100(4):785–792

Kong X, Shi Y, Yu S, Liu J, Xia F (2019) Academic social networks: modeling, analysis, mining and applications. J Netw Comput Appl 132:86–103

dos Santos CK, Evsukoff AG, de Lima BS, Ebecken NFF (2009) Potential collaboration discovery using document clustering and community structure detection. In: Proceedings of the 1st ACM international workshop on complex networks meet information and knowledge management, pp 39–46

Zhou J, Rafi MA (2019) Recommendation of research collaborator based on semantic link network. In: 2019 15th international conference on semantics, knowledge and grids (SKG), pp 16–20. IEEE

Cohen S, Ebel L (2013) Recommending collaborators using keywords. In: Proceedings of the 22nd international conference on World Wide Web, pp 959–962

Hristovski D, Kastrin A, Rindflesch TC (2015) Semantics-based cross-domain collaboration recommendation in the life sciences: preliminary results. In: Proceedings of the 2015 IEEE/ACM international conference on advances in social networks analysis and mining 2015, pp 805–806

Li S, Abel M-H, Negre E (2019) Using user contextual profile for recommendation in collaborations. In: The international research and innovation forum, pp 199–209. Springer

Alinani K, Wang G, Alinani A, Narejo DH (2017) Who should be my co-author? recommender system to suggest a list of collaborators. In: 2017 IEEE international symposium on parallel and distributed processing with applications and 2017 IEEE international conference on ubiquitous computing and communications (ISPA/IUCC), pp 1427–1433. IEEE

Alinani K, Alinani A, Narejo DH, Wang G (2018) Aggregating author profiles from multiple publisher networks to build a list of potential collaborators. IEEE Access 6:20298–20308

Benchettara N, Kanawati R, Rouveirol C (2010) A supervised machine learning link prediction approach for academic collaboration recommendation. In: Proceedings of the fourth ACM conference on recommender systems, pp 253–256

Li J, Xia F, Wang W, Chen Z, Asabere NY, Jiang H (2014) Acrec: a co-authorship based random walk model for academic collaboration recommendation. In: Proceedings of the 23rd international conference on World Wide Web, pp 1209–1214

Koh YS, Dobbie G (2012) Indirect weighted association rules mining for academic network collaboration recommendations. In: Proceedings of the tenth Australasian data mining conference, vol 134, pp 167–173

Lee DH, Brusilovsky P, Schleyer T (2011) Recommending collaborators using social features and mesh terms. Proc Am Soc Inf Sci Technol 48(1):1–10

Yang C, Liu T, Liu L, Chen X (2018) A nearest neighbor based personal rank algorithm for collaborator recommendation. In: 2018 15th international conference on service systems and service management (ICSSSM), pp 1–5. IEEE

Tong H, Faloutsos C, Pan J-Y (2008) Random walk with restart: fast solutions and applications. Knowl Inf Syst 14(3):327–346

MATH   Google Scholar  

Kong X, Jiang H, Yang Z, Xu Z, Xia F, Tolba A (2016) Exploiting publication contents and collaboration networks for collaborator recommendation. PLoS ONE 11(2):0148492

Kong X, Jiang H, Bekele TM, Wang W, Xu Z (2017) Random walk-based beneficial collaborators recommendation exploiting dynamic research interests and academic influence. In: Proceedings of the 26th international conference on World Wide Web companion, pp 1371–1377

Xu Z, Yuan Y, Wei H, Wan L (2019) A serendipity-biased deepwalk for collaborators recommendation. PeerJ Comput Sci 5:178

Wang Q, Ma J, Liao X, Du W (2017) A context-aware researcher recommendation system for university-industry collaboration on r &d projects. Decis Support Syst 103:46–57

Davoodi E, Afsharchi M, Kianmehr K (2012) A social network-based approach to expert recommendation system. In: International conference on hybrid artificial intelligence systems, pp 91–102. Springer

Brandao MA, Moro MM (2012) Affiliation influence on recommendation in academic social networks. In: AMW, pp 230–234

Lopes GR, Moro MM, Wives LK, De Oliveira JPM (2010) Collaboration recommendation on academic social networks. In: International conference on conceptual modeling, pp 190–199. Springer

Payton DW (2004) Collaborator discovery method and system. Google Patents. US Patent 6,681,247

Huynh T, Takasu A, Masada T, Hoang K (2014) Collaborator recommendation for isolated researchers. In: 2014 28th international conference on advanced information networking and applications workshops, pp 639–644. IEEE

Zhou X, Ding L, Li Z, Wan R (2017) Collaborator recommendation in heterogeneous bibliographic networks using random walks. Inf Retr J 20(4):317–337

Chen H-H, Gou L, Zhang X, Giles CL (2011) Collabseer: a search engine for collaboration discovery. In: Proceedings of the 11th annual international ACM/IEEE joint conference on digital libraries, pp 231–240

Ben Yahia N, Bellamine Ben Saoud N, Ben Ghezala H (2014) Community-based collaboration recommendation to support mixed decision-making support. J Decis Syst 23(3):350–371

Chen J, Tang Y, Li J, Mao C, Xiao J (2013) Community-based scholar recommendation modeling in academic social network sites. In: International conference on web information systems engineering, pp 325–334. Springer

Gunawardena CN, Hermans MB, Sanchez D, Richmond C, Bohley M, Tuttle R (2009) A theoretical framework for building online communities of practice with social networking tools. Educ Media Int 46(1):3–16

Zhang Y, Zhang C, Liu X (2017) Dynamic scholarly collaborator recommendation via competitive multi-agent reinforcement learning. In: Proceedings of the eleventh ACM conference on recommender systems, pp 331–335

Brandão MA, Moro MM, Almeida JM (2014) Experimental evaluation of academic collaboration recommendation using factorial design. J Inf Data Manag 5(1):52–52

Fazel-Zarandi M, Devlin HJ, Huang Y, Contractor N (2011) Expert recommendation based on social drivers, social network analysis, and semantic data representation. In: Proceedings of the 2nd international workshop on information heterogeneity and fusion in recommender systems, pp 41–48

Zhang J, Tang J, Ma C, Tong H, Jing Y, Li J, Luyten W, Moens M-F (2017) Fast and flexible top-k similarity search on large networks. ACM Trans Inf Syst 36(2):1–30

Sun J, Ma J, Cheng X, Liu Z, Cao X (2013) Finding an expert: a model recommendation system. In: Thirty fourth international conference on information systems, pp 1–10

Bukowski M, Valdez AC, Ziefle M, Schmitz-Rode T, Farkas R (2017) Hybrid collaboration recommendation from bibliometric data. In: Proceedings of 2nd international workshop on health recommender systems co-located with the 11th ACM conference recommender systems, pp 36–38

Rebhi W, Yahia NB, Saoud NBB (2016) Hybrid community detection approach in multilayer social network: scientific collaboration recommendation case study. In: 2016 IEEE/ACS 13th international conference of computer systems and applications (AICCSA), pp 1–8 D. IEEE

Huynh T, Hoang K (2012) Modeling collaborative knowledge of publishing activities for research recommendation. In: International conference on computational collective intelligence, pp 41–50. Springer

Wu S, Sun J, Tang J (2013) Patent partner recommendation in enterprise social networks. In: Proceedings of the sixth ACM international conference on web search and data mining, pp 43–52

Liang W, Zhou X, Huang S, Hu C, Jin Q (2017) Recommendation for cross-disciplinary collaboration based on potential research field discovery. In: 2017 fifth international conference on advanced cloud and big data (CBD), pp 349–354. IEEE

Olshannikova E, Olsson T, Huhtamäki J, Yao P (2019) Scholars’ perceptions of relevance in bibliography-based people recommender system. Comput Supp Coop Work 28(3):357–389

Yang C, Sun J, Ma J, Zhang S, Wang G, Hua Z (2015) Scientific collaborator recommendation in heterogeneous bibliographic networks. In: 2015 48th Hawaii international conference on system sciences, pp 552–561. IEEE

Du G, Liu Y, Yu J (2018) Scientific users’ interest detection and collaborators recommendation. In: 2018 IEEE fourth international conference on big data computing service and applications (BigDataService), pp 72–79. IEEE

Guerra J, Quan W, Li K, Ahumada L, Winston F, Desai B (2018) Scosy: a biomedical collaboration recommendation system. In: 2018 40th annual international conference of the IEEE engineering in medicine and biology society (EMBC), pp 3987–3990. IEEE

Wang W, Liu J, Yang Z, Kong X, Xia F (2019) Sustainable collaborator recommendation based on conference closure. IEEE Trans Comput Soc Syst 6(2):311–322

Datta A, Tan Teck Yong J, Ventresque A (2011) T-recs: team recommendation system through expertise and cohesiveness. In: Proceedings of the 20th international conference companion on World Wide Web, pp 201–204

Huynh T, Hoang K, Lam D (2013) Trend based vertex similarity for academic collaboration recommendation. In: International conference on computational collective intelligence, pp 11–20. Springer

Al-Ballaa H, Al-Dossari H, Chikh A (2019) Using an exponential random graph model to recommend academic collaborators. Information 10(6):220

Medvet E, Bartoli A, Piccinin G (2014) Publication venue recommendation based on paper abstract. In: 2014 IEEE 26th international conference on tools with artificial intelligence, pp 1004–1010. IEEE

Asabere N, Acakpovi A (2019) Rovets: search based socially-aware recommendation of smart conference sessions. Int J Decis Supp Syst Technol 11(3):30–46. https://doi.org/10.4018/IJDSST.2019070103

Article   Google Scholar  

Asabere NY, Xu B, Acakpovi A, Deonauth N (2021) Sarve-2: exploiting social venue recommendation in the context of smart conferences. IEEE Trans Emerg Top Comput 9(1):342–353. https://doi.org/10.1109/TETC.2018.2854718

García GM, Nunes BP, Lopes GR, Casanova MA, Paes Leme LAP (2017) Techniques for comparing and recommending conferences. J Braz Comput Soc 23(1):1–14

Luong H, Huynh T, Gauch S, Do L, Hoang K (2012) Publication venue recommendation using author network’s publication history. In: Intelligent information and database systems, pp 426–435

Zawali A, Boukhris I (2018) A group recommender system for academic venue personalization. In: International conference on intelligent systems design and applications, pp 597–606. Springer

Beierle F, Tan J, Grunert K (2016) Analyzing social relations for recommending academic conferences. In: Proceedings of the 8th ACM international workshop on hot topics in planet-scale mObile computing and online social neTworking, pp 37–42

Alshareef AM, Alhamid MF, Saddik AE (2019) Academic venue recommendations based on similarity learning of an extended nearby citation network. IEEE Access 7:38813–38825

Hiep L, Huynj T, Guach S, Hoang K (2012) Exploiting social networks for publication venue recommendations. In: International conference on knowledge discovery and information retrieval, pp 239–245. SciTePress, Spain

Küçüktunç O, Saule E, Kaya K, Çatalyürek UV (2013) Theadvisor: A webservice for academic recommendation. In: Proceedings of the 13th ACM/IEEE-CS joint conference on digital libraries. JCDL ’13, pp 433–434. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/2467696.2467752

Chen Z, Xia F, Jiang H, Liu H, Zhang J (2015) Aver: Random walk based academic venue recommendation. In: Proceedings of the 24th international conference on World Wide Web. WWW ’15 companion, pp 579–584. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/2740908.2741738

Alhoori H, Furuta R (2017) Recommendation of scholarly venues based on dynamic user interests. J Informet 11(2):553–563. https://doi.org/10.1016/j.joi.2017.03.006

Mhirsi N, Boukhris I (2018) Exploring location and ranking for academic venue recommendation. In: International conference on intelligent systems design and applications, pp 83–91

Pham MC, Cao Y, Klamma R (2010) Clustering technique for collaborative filtering and the application to venue recommendation

Yu S, Liu J, Yang Z, Chen Z, Jiang H, Tolba A, Xia F (2018) Pave: personalized academic venue recommendation exploiting co-publication networks. J Netw Comput Appl 104:38–47

Asabere NY, Xia F, Wang W, Rodrigues JJPC, Basso F, Ma J (2014) Improving smart conference participation through socially aware recommendation. IEEE Trans Hum-Mach Syst 44(5):689–700. https://doi.org/10.1109/THMS.2014.2325837

Pham MC, Cao Y, Klamma R, Jarke M (2011) A clustering approach for collaborative filtering recommendation using social network analysis. J Univ Comput Sci 17(4):583–604

Pradhan T, Pal S (2020) Cnaver: a content and network-based academic venue recommender system. Knowl-Based Syst 189:105092

Boukhris I, Ayachi R (2014) A novel personalized academic venue hybrid recommender. In: 2014 IEEE 15th international symposium on computational intelligence and informatics (CINTI), pp 465–470. IEEE

Yang Z, Davison BD (2012) Venue recommendation: submitting your paper with style. In: 2012 11th international conference on machine learning and applications, pp 681–686. IEEE

Iana A, Jung S, Naeser P, Birukou A, Hertling S, Paulheim H (2019) Building a conference recommender system based on scigraph and wikicfp. In: Semantic Systems. The power of AI and knowledge graphs, vol 11702, pp 117–123. Springer

Hoang DT, Hwang D, Tran VC, Nguyen VD, Nguyen NT (2016) Academic event recommendation based on research similarity and exploring interaction between authors. In: 2016 IEEE international conference on systems, man, and cybernetics (SMC), pp 004411–004416. IEEE

Hoang DT, Tran VC, Nguyen VD, Nguyen NT, Hwang D (2017) Improving academic event recommendation using research similarity and interaction strength between authors. Cybern Syst 48(3):210–230

Errami M, Wren JD, Hicks JM, Garner HR (2007) etblast: a web server to identify expert reviewers, appropriate journals and similar publications. Nucleic Acids Res 35(2):12–15

Schuemie MJ, Kors JA (2008) Jane: suggesting journals, finding experts. Bioinformatics 24(5):727–728

SJFinder: SJFinder Recommend Journals. http://www.sjfinder.com/journals/recommend

Kang N, Doornenbal MA, Schijvenaars RJ (2015) Elsevier journal finder: recommending journals for your paper. In: Proceedings of the 9th ACM conference on recommender systems, pp 261–264

IEEE: IEEE Publication Recommender. https://publication-recommender.ieee.org/home

Springer: Springer Nature Journal Suggester. https://journalsuggester.springer.com

Wiley: Wiley Journal Finder. https://journalfinder.wiley.com/

edanz innovative scientific solutions: Edanz Journal Selector. https://en-author-services.edanzgroup.com/journal-selector

Guide J Journal Guide. https://www.journalguide.com/bollacker1998citeseer

Hettich S, Pazzani MJ (2006) Mining for proposal reviewers: lessons learned at the national science foundation. In: Proceedings of the 12th ACM SIGKDD international conference on knowledge discovery and data mining, pp 862–871

Yang K-H, Kuo T-L, Lee H-M, Ho J-M (2009) A reviewer recommendation system based on collaborative intelligence. In: 2009 IEEE/WIC/ACM international joint conference on web intelligence and intelligent agent technology, vol 1, pp 564–567. IEEE

Ferilli S, Di Mauro N, Basile TMA, Esposito F, Biba M (2006) Automatic topics identification for reviewer assignment. In: International conference on industrial, engineering and other applications of applied intelligent systems, pp 721–730. Springer

Serdyukov P, Rode H, Hiemstra D (2008) Modeling expert finding as an absorbing random walk. In: Proceedings of the 31st annual international ACM SIGIR conference on research and development in information retrieval, pp 797–798

Yunhong X, Xianli Z (2016) A lda model based text-mining method to recommend reviewer for proposal of research project selection. In: 2016 13th international conference on service systems and service management (ICSSSM), pp 1–5. IEEE

Peng H, Hu H, Wang K, Wang X (2017) Time-aware and topic-based reviewer assignment. In: International conference on database systems for advanced applications, pp 145–157. Springer

Medakene AN, Bouanane K, Eddoud MA (2019) A new approach for computing the matching degree in the paper-to-reviewer assignment problem. In: 2019 international conference on theoretical and applicative aspects of computer science (ICTAACS), vol 1, pp 1–8. IEEE

Rosen-Zvi M, Griffiths T, Steyvers M, Smyth P (2012) The author-topic model for authors and documents. arXiv preprint arXiv:1207.4169

Jin J, Geng Q, Mou H, Chen C (2019) Author-subject-topic model for reviewer recommendation. J Inf Sci 45(4):554–570

Alkazemi BY (2018) Prato: an automated taxonomy-based reviewer-proposal assignment system. Interdiscip J Inf Knowl Manag 13:383–396

Cagliero L, Garza P, Pasini A, Baralis EM (2018) Additional reviewer assignment by means of weighted association rules. IEEE Trans Emerg Top Comput 2:558

Ishag MIM, Park KH, Lee JY, Ryu KH (2019) A pattern-based academic reviewer recommendation combining author-paper and diversity metrics. IEEE Access 7:16460–16475

Zhao S, Zhang D, Duan Z, Chen J, Zhang Y-P, Tang J (2018) A novel classification method for paper-reviewer recommendation. Scientometrics 115(3):1293–1313

Anjum O, Gong H, Bhat S, Hwu W-M, Xiong J (2019) Pare: A paper-reviewer matching approach using a common topic space. arXiv preprint arXiv:1909.11258

Zhang, D., Zhao, S., Duan, Z., Chen, J., Zhang, Y., Tang, J.: A multi-label classification method using a hierarchical and transparent representation for paper-reviewer recommendation. arXiv preprint arXiv:1912.08976 (2019)

Li X, Watanabe T (2013) Automatic paper-to-reviewer assignment, based on the matching degree of the reviewers. Procedia Comput Sci 22:633–642

Xu Y, Du Y (2013) A three-layer network model for reviewer recommendation. In: 2013 sixth international conference on business intelligence and financial engineering, pp 552–556. IEEE

Maleszka M, Maleszka B, Król D, Hernes M, Martins DML, Homann L, Vossen G (2020) A modular diversity based reviewer recommendation system. In: Asian conference on intelligent information and database systems, pp 550–561. Springer

Sun Y-H, Ma J, Fan Z-P, Wang J (2007) A hybrid knowledge and model approach for reviewer assignment. In: 2007 40th annual Hawaii international conference on system sciences (HICSS’07), pp 47–47. IEEE

Kolasa T, Krol D (2011) A survey of algorithms for paper-reviewer assignment problem. IETE Tech Rev 28(2):123–134

Chen RC, Shang PH, Chen MC (2012) A two-stage approach for project reviewer assignment problem. In: Advanced materials research, vol 452, pp 369–373. Trans Tech Publ

Daş GS, Göçken T (2014) A fuzzy approach for the reviewer assignment problem. Comput Ind Eng 72:50–57

Tayal DK, Saxena P, Sharma A, Khanna G, Gupta S (2014) New method for solving reviewer assignment problem using type-2 fuzzy sets and fuzzy functions. Appl Intell 40(1):54–73

Wang F, Zhou S, Shi N (2013) Group-to-group reviewer assignment problem. Comput Oper Res 40(5):1351–1362

MathSciNet   MATH   Google Scholar  

Long C, Wong RC-W, Peng Y, Ye L (2013) On good and fair paper-reviewer assignment. In: 2013 IEEE 13th international conference on data mining, pp 1145–1150. IEEE

Kou NM, U LH, Mamoulis N, Gong Z (2015) Weighted coverage based reviewer assignment. In: Proceedings of the 2015 ACM SIGMOD international conference on management of data, pp 2031–2046

Kou NM, U LH, Mamoulis N, Li Y, Li Y, Gong Z, (2015) A topic-based reviewer assignment system. Proc VLDB Endow 8(12):1852–1855

Stelmakh I, Shah NB, Singh A (2018) Peerreview4all: Fair and accurate reviewer assignment in peer review. arXiv preprint arXiv:1806.06237

Yeşilçimen A, Yıldırım EA (2019) An alternative polynomial-sized formulation and an optimization based heuristic for the reviewer assignment problem. Eur J Oper Res 276(2):436–450

Conry D, Koren Y, Ramakrishnan N (2009) Recommender systems for the conference paper assignment problem. In: Proceedings of the third ACM conference on recommender systems, pp 357–360

Tang W, Tang J, Lei T, Tan C, Gao B, Li T (2012) On optimization of expertise matching with various constraints. Neurocomputing 76(1):71–83

Charlin L, Zemel R (2013) The toronto paper matching system: an automated paper-reviewer assignment system

Liu X, Suel T, Memon N (2014) A robust model for paper reviewer assignment. In: Proceedings of the 8th ACM conference on recommender systems, pp 25–32

Liu O, Wang J, Ma J, Sun Y (2016) An intelligent decision support approach for reviewer assignment in r &d project selection. Comput Ind 76:1–10

Ogunleye O, Ifebanjo T, Abiodun T, Adebiyi A (2017) Proposed framework for a paper-reviewer assignment system using word2vec. In: 4th Covenant University conference on E-Governance in Nigeria (CUCEN2016)

Jin J, Geng Q, Zhao Q, Zhang L (2017) Integrating the trend of research interest for reviewer assignment. In: Proceedings of the 26th international conference on World Wide Web Companion, pp 1233–1241

Roberts K, Gururaj AE, Chen X, Pournejati S, Hersh WR, Demner-Fushman D, Ohno-Machado L, Cohen T, Xu H (2017) Information retrieval for biomedical datasets: the 2016 biocaddie dataset retrieval challenge. Database 2017:1–9

Chen X, Gururaj AE, Ozyurt B, Liu R, Soysal E, Cohen T, Tiryaki F, Li Y, Zong N, Jiang M (2018) Datamed-an open source discovery index for finding biomedical datasets. J Am Med Inform Assoc 25(3):300–308

Jansen BJ, Booth DL, Spink A (2007) Determining the user intent of web search engine queries. In: Proceedings of the 16th international conference on World Wide Web, pp 1149–1150. ACM

Nunes BP, Dietze S, Casanova MA, Kawase R, Fetahu B, Nejdl W (2013) Combining a co-occurrence-based and a semantic measure for entity linking. In: Extended semantic web conference, pp 548–562. Springer

Ellefi MB, Bellahsene Z, Dietze S, Todorov K (2016) Dataset recommendation for data linking: an intensional approach. In: European semantic Web conference, pp 36–51. Springer

Srivastava KS (2018) Predicting and recommending relevant datasets in complex environments. Google Patents. US Patent App. 15/721,122

Patra BG, Roberts K, Wu H (2020) A content-based dataset recommendation system for researchers-a case study on gene expression omnibus (geo) repository. Database 2020:1–14

Patra BG, Soltanalizadeh B, Deng N, Wu L, Maroufy V, Wu C, Zheng WJ, Roberts K, Wu H, Yaseen A (2020) An informatics research platform to make public gene expression time-course datasets reusable for more scientific discoveries. Database 2020:1–15

Zhu J, Patra BG, Yaseen A (2021) Recommender system of scholarly papers using public datasets. In: AMIA summits on translational science proceedings, pp 672–679. American Medical Informatics Association

Zhu J, Patra BG, Wu H, Yaseen A (2023) A novel nih research grant recommender using bert. PLoS ONE 18(1):0278636

Kamada S, Ichimura T, Watanabe T (2015) Recommendation system of grants-in-aid for researchers by using jsps keyword. In: 2015 IEEE 8th international workshop on computational intelligence and applications (IWCIA), pp143–148. IEEE

Kamada S, Ichimura T, Watanabe T (2016) A recommendation system of grants to acquire external funds. In: 2016 IEEE 9th international workshop on computational intelligence and applications (IWCIA), pp 125–130. IEEE

Download references

Author information

Z. Zhang and B.G. Patra contributed equally to this work.

Authors and Affiliations

Department of Biostatistics and Data Science, School of Public Health, The University of Texas Health Science Center at Houston, Houston, TX, 77030, USA

Zitong Zhang, Ashraf Yaseen, Jie Zhu, Rachit Sabharwal, Tru Cao & Hulin Wu

Department of Population Health Sciences, Weill Cornell Medicine, Cornell University, New York, NY, 10065, USA

Braja Gopal Patra

School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, 77030, USA

Kirk Roberts


Corresponding author

Correspondence to Ashraf Yaseen.




About this article

Zhang, Z., Patra, B.G., Yaseen, A. et al. Scholarly recommendation systems: a literature survey. Knowl Inf Syst 65, 4433–4478 (2023). https://doi.org/10.1007/s10115-023-01901-x


Received: 03 June 2022

Revised: 28 April 2023

Accepted: 09 May 2023

Published: 04 June 2023

Issue Date: November 2023

DOI: https://doi.org/10.1007/s10115-023-01901-x


Keywords

  • Scholarly recommendation systems
  • Literature recommendation
  • Collaborator recommendation
  • Conference recommendation
  • Journal recommendation
  • Reviewer recommendation


  • Open access
  • Published: 01 December 2023

Compassion fatigue in healthcare providers: a scoping review

  • Anna Garnett 1 ,
  • Lucy Hui 2 ,
  • Christina Oleynikov 1 &
  • Sheila Boamah 3  

BMC Health Services Research, volume 23, Article number: 1336 (2023)


Abstract

The detrimental impacts of COVID-19 on healthcare providers’ psychological health and well-being continue to affect their professional roles and activities, leading to compassion fatigue. The purpose of this review was to identify and summarize published literature on compassion fatigue among healthcare providers and its impact on patient care. Six databases were searched: MEDLINE (Ovid), PsycINFO (Ovid), Embase (Ovid), CINAHL, Scopus, and Web of Science, for studies on compassion fatigue in healthcare providers published in English from the peak of the pandemic in 2020 to 2023. To expand the search, reference lists of included studies were hand searched to locate additional relevant studies. The studies primarily focused on nurses, physicians, and other allied health professionals. This scoping review was registered on the Open Science Framework (OSF) and follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for scoping reviews (PRISMA-ScR). From 11,715 search results, 24 studies met the inclusion criteria. Findings are presented under four themes: prevalence of compassion fatigue; antecedents of compassion fatigue; consequences of compassion fatigue; and interventions to address compassion fatigue. The potential antecedents of compassion fatigue are grouped under individual-, organization-, and systems-level factors. Our findings suggest that healthcare providers’ risk of developing compassion fatigue differs in a country-dependent manner. Interventions such as increasing available personnel helped to minimize the occurrence of compassion fatigue. This scoping review offers important insight into the common causes of and potential risks for compassion fatigue among healthcare providers and identifies potential strategies to support healthcare providers’ psychological health and well-being.

• What do we already know about this topic? The elevated and persistent mental stress associated with the COVID-19 pandemic predisposed healthcare providers (HCPs) to various psychological conditions such as compassion fatigue. Declines in healthcare providers’ mental health have been observed to negatively impact their professional performance and the quality of patient care.

• How does your research contribute to the field? This review provides an overview of the prevalence of compassion fatigue among HCPs across the globe during the COVID-19 pandemic. The main risk factors for compassion fatigue include younger age, female sex, being either a physician or a nurse, high workload, extensive work hours, and limited access to personal protective equipment (PPE). Negative behavioral intention towards patients has been identified as a consequence of compassion fatigue. Interventions such as the provision of emotional support, increased monitoring for conditions such as stress and burnout, and increasing available personnel helped to minimize the occurrence of compassion fatigue.

• What are your research’s implications towards theory, practice, or policy? While the public health emergency associated with the COVID-19 pandemic has ended, the impact on health human resources persists. The findings of this review can inform policy decisions and the implementation of evidence-based strategies to prevent, manage, and lessen the negative effects of compassion fatigue on HCPs and its subsequent impacts on patient care.


Introduction

The 2019 novel coronavirus disease (COVID-19) outbreak spread rapidly and, by January 30, 2020, was formally proclaimed a global health emergency despite having been first identified just over a month prior [ 1 ]. Although there have been five other global health emergencies associated with disease outbreaks since 2009, none has matched the scale and scope of the COVID-19 pandemic [ 2 ]. In the short term, the rapid increase in patients requiring acute care services presented unprecedented challenges for health systems. Care provision and infection control strategies were hampered by capacity limitations, staffing shortfalls and supply chain challenges [ 3 ]. As a result, healthcare providers (HCPs) encountered mounting levels of strain, which continued with little reprieve for the duration of and beyond the global COVID-19 pandemic. Limited access to personal protective equipment (PPE) exacerbated transmission of the virus, compounding healthcare providers’ fears of contracting and spreading COVID-19 among their peers, patients and families [ 4 , 5 , 6 , 7 ]. HCPs also contracted COVID-19, became seriously ill and died, with global estimates of HCP deaths between January 2020 and May 2021 exceeding 100,000. Over time, the number of absences, extended sick leaves and staff turnovers increased [ 7 , 8 ]. The combination of short staffing, frequent changes to workflow and continuous care provision to gravely ill patients with high mortality amplified the toll on healthcare providers [ 8 , 9 ]. While no longer a global health emergency, COVID-19 continues to cause cases and deaths: as of July 14, 2023, there had been 767,972,961 COVID-19 cases and 6,950,655 deaths globally [ 10 ].

HCPs around the globe who treated severe COVID-19 cases, work that necessitated in-depth compassionate engagement, became vulnerable to developing compassion fatigue as a result of their continued involvement in the care of these severely ill patients and their families [ 11 ]. Compassion fatigue is defined as a composite of two measurements: burnout (sustained employment-related stress that compromises an individual’s desire to work) and secondary trauma (the development of traumatic symptoms resulting from protracted exposure to the suffering of others) [ 12 , 13 ]. An individual experiencing compassion fatigue has a reduced ability to show compassion to others, resulting from prolonged exposure to witnessing the suffering of others without being able to relieve their anguish despite having the desire to do so [ 9 ]. Individuals experiencing compassion fatigue may express a range of behaviors, such as increased work absences or declines in the ability to engage in work-related tasks such as decision-making. Burnout and secondary trauma are suggested to be mediated by compassion satisfaction, the pleasure that comes from helping behavior [ 11 , 12 ].
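To make the two-component operationalization above concrete, here is a minimal, purely illustrative scoring sketch in Python. It assumes a ProQoL-5-style structure of 10 items per subscale rated on a 1–5 scale, which is how compassion fatigue is commonly measured in the studies reviewed below; the actual item assignments, any reverse scoring, and interpretation cut-offs are not reproduced here and should be taken from the instrument's manual.

```python
# Illustrative sketch only: scoring compassion fatigue as a composite of
# burnout and secondary traumatic stress, in the style of ProQoL 5.
# Assumptions: 10 items per subscale, each rated 1-5; real item assignments,
# reverse scoring, and interpretation cut-offs must come from the manual.

def subscale_score(item_responses):
    """Sum a participant's responses for one subscale."""
    if not all(1 <= r <= 5 for r in item_responses):
        raise ValueError("Each item should be rated on a 1-5 scale.")
    return sum(item_responses)

def compassion_fatigue_profile(burnout_items, secondary_trauma_items):
    """Return the two subscale scores that together describe compassion fatigue."""
    return {
        "burnout": subscale_score(burnout_items),
        "secondary_traumatic_stress": subscale_score(secondary_trauma_items),
    }

# Example: one respondent's hypothetical answers to ten items per subscale.
profile = compassion_fatigue_profile(
    burnout_items=[3, 4, 2, 5, 3, 4, 3, 2, 4, 3],
    secondary_trauma_items=[2, 3, 3, 4, 2, 3, 4, 3, 2, 3],
)
print(profile)  # {'burnout': 33, 'secondary_traumatic_stress': 29}
```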

As the pandemic shifts from being a global health emergency to an endemic disease, there continues to be concern for HCP health and well-being [ 14 , 15 , 16 ]. The increased and chronic nature of the stress experienced during and beyond the COVID-19 pandemic has heightened HCPs’ risk for a range of negative psychological impacts such as depression, fearfulness, grief and post-traumatic stress disorder (PTSD) [ 17 ]. Prior infectious disease outbreaks (SARS-CoV-1, H1N1, MERS-CoV, Ebola) are also associated with an increased prevalence of declining mental health in HCPs [ 18 ]. A growing body of research on the COVID-19 pandemic highlights the range of psychological symptoms HCPs developed following their sustained exposure to COVID-19, including burnout, feelings of isolation, insomnia, grief, emotional exhaustion, depression, post-traumatic stress and depersonalization, some of which have persisted over time [ 14 , 17 , 19 , 20 , 21 , 22 ]. The consequences of HCPs’ declining psychological health and well-being have had impacts on the quality of patient care and, indirectly, on patient outcomes through inadequate staffing [ 18 ]. Compromises in HCPs’ ability to provide optimal clinical care can have serious consequences, including the worsening of patient conditions and the increased transmission of the infection from patients to others in the hospital [ 18 ]. In addition, compassion fatigue may be exacerbated by the COVID-19 pandemic, potentially leading to moral injury, decreased productivity, increased turnover, and reduced quality of care [ 23 ]. Moreover, a growing body of literature suggests that challenges across health systems will persist although COVID-19 is no longer a global health emergency [ 24 , 25 ]. As such, it is important to have a full understanding of COVID-19’s toll on HCPs and tailor health system strategies accordingly.

As healthcare systems continue to experience a health human resources crisis, it is important to identify and understand the prevalence of compassion fatigue, identify contributing factors, and increase understanding of the consequences and of actions that can be taken to address compassion fatigue among HCPs. While there has been an increase in the body of published literature on the health and well-being of HCPs since the onset of the COVID-19 pandemic, there continues to be a knowledge gap in mapping the incidence of compassion fatigue, its resultant impact on HCP well-being, and its potential influence on patient care provision [ 11 , 17 ]. A comprehensive review of the literature on compassion fatigue among HCPs can inform policy and practice initiatives to address the current health human resources crisis experienced by many health systems. It may also aid in identifying prospective research foci.

The purpose of this scoping review was to synthesize and provide a synopsis of the literature on compassion fatigue among HCPs during the COVID-19 pandemic and to understand its broader impact. The review was guided by the following question: What is the current state of knowledge on compassion fatigue among HCPs over the course of COVID-19?

Project registration

This scoping review was registered under Open Science Framework. A project outline was submitted including the study hypotheses, design, and data collection procedures. The DOI for the registered project is as follows: https://doi.org/10.17605/OSF.IO/F4T7N . In addition, a scoping review protocol for this review has been published in a peer-reviewed journal ( https://doi.org/10.1136/bmjopen-2022-069843 ).

Study design

A systematic scoping review strategy was chosen to explore the existing body of literature pertaining to the research topic. The objective of a scoping review is to identify relevant literature on a given topic, without focusing on evaluating research quality or conducting a thorough analysis of selected studies, as systematic reviews typically do. Current gaps in research and directions for future research can be identified by means of summarizing emerging literature on compassion fatigue in HCPs.

The current scoping review used two methodological tools, namely the Arksey and O’Malley scoping review framework as well as the Joanna Briggs Institute Critical Appraisal Tools. The Arksey and O’Malley framework comprises five stages, which include: (1) formulating the research question; (2) identifying relevant studies; (3) selecting studies for inclusion; (4) extracting and organizing the data; and (5) collating, summarizing, and reporting the findings [ 26 ]. While scoping reviews typically do not require article appraisal, all articles were evaluated by one author (CO) using the methodology established by the Joanna Briggs Institute (JBI) to enhance the overall quality of the review [ 27 ]. No articles were excluded based on their quality, in accord with the Arksey and O’Malley framework [ 26 ].

Stage I: Identifying the research question(s)

The research objective and question were drafted by the authors (AG, LH, CO, SB) and can be found in the previous section under “Research aim”.

Stage II: Identifying relevant studies

As outlined by the JBI methodology, a three-step approach was used to identify relevant studies. These steps include: (1) conducting a preliminary search of at least two suitable databases; (2) identifying relevant keywords and index terms to perform a secondary search across all chosen databases; and (3) manually examining the reference lists of the included articles to discover additional relevant studies [ 28 ] (p11).

Preliminary literature search

To establish the criteria for inclusion and exclusion, an initial, restricted search was conducted on the subject of interest. The preliminary literature exploration encompassed three scholarly electronic databases: MEDLINE (Ovid), Scopus, and Web of Science. The search employed the keyword “compassion fatigue” and incorporated the timeframe March 1, 2020, to June 15, 2022, so that the most impactful waves of the COVID-19 pandemic were represented in the included literature, resulting in 1519, 2489, and 2246 studies from the respective databases. These three databases were selected due to their likelihood of yielding results relevant to the research topic. To construct a comprehensive search strategy, a collection of keywords and index terms was identified from the titles and abstracts of relevant articles. The search strategy was further refined in collaboration with a social science librarian.

Structured search strategy

A systematic search was conducted across six scholarly electronic databases: MEDLINE (Ovid), PsycINFO (Ovid), Embase (Ovid), CINAHL, Scopus, and Web of Science. These databases were deliberately chosen to encompass a broad range of relevant findings within the current knowledge landscape regarding the research topic. The systematic search of the literature commenced once the scoping review protocol was peer reviewed and revisions were addressed by the authors. Using the selected vocabulary and Boolean connectors shown in Table 1, a string of relevant search terms was developed. The search strategy was adapted accordingly for each individual database (e.g., Medical Subject Headings [MeSH] terms for MEDLINE [Ovid]). In the final stage of the search strategy, the reference lists of all included studies were manually examined to identify additional relevant studies.
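For illustration only, the sketch below shows how keyword blocks and Boolean connectors can be combined into a single search string of the kind described above. The terms used here are placeholders rather than the review's actual vocabulary, which is listed in Table 1 and was adapted for each database's syntax.

```python
# Hypothetical sketch: assembling a Boolean search string from keyword blocks.
# The blocks below are illustrative placeholders; the review's actual terms and
# index headings are listed in its Table 1 and adapted per database.

concept_blocks = {
    "compassion_fatigue": ['"compassion fatigue"', '"secondary traumatic stress"'],
    "population": ['"healthcare provider*"', "nurse*", "physician*"],
}

def or_block(terms):
    """Join synonyms for one concept with OR and wrap them in parentheses."""
    return "(" + " OR ".join(terms) + ")"

def build_query(blocks):
    """Combine concept blocks with AND to form the full search string."""
    return " AND ".join(or_block(terms) for terms in blocks.values())

query = build_query(concept_blocks)
print(query)
# ("compassion fatigue" OR "secondary traumatic stress") AND
# ("healthcare provider*" OR nurse* OR physician*)
```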

Inclusion criteria

The inclusion criteria for this review were formulated using the PCC (Population, Concept, Context) mnemonic developed by JBI (Table 1). The participants included in this review were HCPs who were employed across healthcare systems during the COVID-19 pandemic (e.g., physicians, registered nurses, nurse practitioners, physician assistants, and licensed clinical social workers). The concept explored in this review focused on compassion fatigue among HCPs working in healthcare systems during the COVID-19 pandemic. The context encompassed the various care settings where HCPs carried out their professional activities across different clinical specialties (e.g., surgery, critical care, palliative care), as well as clinical settings (e.g., inpatient and outpatient). For the purposes of this scoping review, formal healthcare settings were broadly classified as those that provided health services and were situated within and administered by healthcare institutions.

This scoping review only included articles published in English. A time filter was applied to encompass studies conducted between 2020 and 2023, spanning the period from the onset of the COVID-19 pandemic to the present. A range of study designs were included in the review (i.e., experiments, quasi-experimental studies, analytical observational studies, descriptive observational studies, mixed-methods studies, and qualitative studies).

Exclusion criteria

Over the past two decades, compassion fatigue has been defined in different ways, sometimes being considered synonymous with burnout and secondary traumatic stress, and sometimes as an outcome resulting from both components [ 12 , 13 ]. More recently, it has been suggested that compassion fatigue is a focal concept related to the management of traumatic situations, whereas burnout is a general concept that may have multiple contributors [ 26 ]. Due to the conceptual ambiguity surrounding compassion fatigue, articles that solely examine the components of compassion fatigue, such as burnout and secondary trauma, without directly addressing compassion fatigue itself were excluded from consideration.

Studies that failed to meet the inclusion criteria or lacked full-text availability were excluded from the review. Additionally, editorials, letters to the editor, commentaries, and reviews were also excluded as they did not offer sufficient information for addressing the research questions.

Stage III: Study selection

After the full database searches were conducted, all identified citations were compiled and uploaded into Covidence. Any duplicate citations were automatically excluded.

Three reviewers (LH, CO, AG) independently screened the titles and abstracts of the identified studies to assess their eligibility according to the pre-established inclusion and exclusion criteria. Subsequently, the full texts of 736 selected studies were evaluated to arrive at the final list of articles for data extraction. The reasons for excluding specific studies were documented. Throughout the process, any disagreements that arose at each stage of study selection were resolved through discussions with a third reviewer (AG, SB).

The outcomes of the study selection process were presented in a flow diagram adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for scoping reviews (PRISMA-ScR) guidelines (Fig. 1) [ 29 ]. Additionally, all the included studies underwent an assessment of their risk of bias (quality) using established critical appraisal tools from the Joanna Briggs Institute (JBI) for Evidence Synthesis [ 30 ]. Although not mandatory for scoping reviews, appraisals of study quality contribute to the subsequent implications and future steps stemming from this scoping review [ 31 ]. The JBI provides critical appraisal checklists for various study designs, encompassing experimental, quasi-experimental, randomized controlled trials, observational, and qualitative study designs. One reviewer (CO) conducted the assessments of all the included studies, and a second reviewer (AG) verified the evaluations. Any discrepancies that arose were discussed and resolved in consultation with both reviewers. In line with the methodology of scoping reviews, no studies were excluded based on their quality assessments, ensuring a comprehensive understanding of the current state of the literature on compassion fatigue among HCPs during the COVID-19 pandemic. A summary of the quality assessments is presented in the results section of the review, while the full appraisals can be found in Additional file 1.

Figure 1. PRISMA flow chart [ 28 ]

Stage IV: Data extraction

To facilitate data extraction aligned with the research objectives, a data-extraction template was developed by one reviewer (LH). This template encompassed various aspects of the included studies (i.e., authors, publication year, study populations, country, study design, aims, sample size, assessment instruments, risk factors, protective factors, consequences of compassion fatigue, and measures to prevent/manage/reduce compassion fatigue). Utilizing Covidence, two independent reviewers (LH, CO) extracted the relevant data from the studies included in the final list of citations.
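As a rough sketch of what such a template can look like in structured form, the record below lists the same fields named above; it is an illustrative stand-in, not the authors' actual Covidence extraction form.

```python
# Sketch of the data-extraction template as a structured record.
# Field names mirror those listed in the text; the authors' actual
# Covidence form may differ in layout and wording.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExtractionRecord:
    authors: str
    publication_year: int
    study_population: str
    country: str
    study_design: str
    aims: str
    sample_size: int
    assessment_instruments: List[str] = field(default_factory=list)
    risk_factors: List[str] = field(default_factory=list)
    protective_factors: List[str] = field(default_factory=list)
    consequences: List[str] = field(default_factory=list)
    interventions: List[str] = field(default_factory=list)  # prevent/manage/reduce CF

# Example record; values are placeholders, not data from an included study.
record = ExtractionRecord(
    authors="Example et al.",
    publication_year=2021,
    study_population="Frontline nurses",
    country="Example country",
    study_design="Cross-sectional survey",
    aims="Measure compassion fatigue prevalence",
    sample_size=300,
    assessment_instruments=["ProQoL 5"],
)
print(record.assessment_instruments)
```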

Stage V: Risk of bias

Standardized tools developed by the Joanna Briggs Institute for respective study types were used to assess risk of bias (quality) for all studies included in the review [ 27 ]. The study appraisals were conducted by one reviewer (CO) and reviewed by another reviewer (AG). Any discrepancies were discussed and resolved together. While no studies were excluded based on the appraisal scores to ensure a comprehensive presentation of the available literature on compassion fatigue among healthcare providers, the findings for the risk of bias assessments are summarized in the results section and the full appraisals are presented in Additional file 1 .

Stage VI: Collating, summarizing, and reporting the results

To summarize and synthesize the findings, the study followed a three-step approach proposed by Levac et al. [ 32 ]: (1) collating and analyzing the collected data; (2) reporting the results and outcomes to address the study objectives; and (3) discussing the potential implications that findings hold for future research and policy considerations [ 31 ]. The review process adhered to the PRISMA Extension for Scoping Reviews checklist, which provided guidance for conducting the review and reporting the findings [ 26 ].

Search results

Figure 1 displays the PRISMA-ScR flowchart of the scoping review search strategy. The database searches and reference list screening initially yielded 11,715 studies. Of these, 5769 were excluded as duplicates. Following title and abstract screening of the remaining studies, 5179 studies were excluded as they met the exclusion criteria. Finally, the full texts of the remaining 736 studies were screened, and 712 were excluded as they did not meet the inclusion criteria. In total, 24 eligible studies were included in the review for further analysis.

Risk of bias of included studies

The complete assessment of risk of bias for all 24 included studies is available in Additional file 1. Within the two mixed-methods studies, risk of bias primarily stemmed from the quantitative strand, with a lack of clarity about study inclusion criteria, study setting, and identification of confounding factors [ 29 ]. Other sources of bias in the quantitative studies were vagueness around the criteria used for outcome measurement [ 30 ], and only one study identified potential confounding factors along with strategies to manage them [ 31 ]. Further shortcomings related to a lack of transparency around the use of valid and reliable outcome measures [ 23 , 31 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 ]. Within the qualitative studies, not all provided information about the researchers’ theoretical stance [ 29 , 41 , 43 ], and two studies did not provide documentation of ethics approval for the conducted research [ 43 , 44 ]. One included case report met most assessment criteria for risk of bias, although more description of the assessment, post-assessment condition and adverse events was warranted [ 45 ].

Characteristics of studies

Study characteristics are presented in Table 2 . Of the 24 eligible studies, 18 used quantitative methods [ 23 , 30 , 31 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 46 , 47 , 48 , 49 , 50 , 51 ], three used qualitative methods [ 43 , 44 , 45 ], and the remaining three used mixed-methods approaches [ 29 , 41 , 52 ]. Additionally, 13 studies focused on the antecedents of compassion fatigue [ 23 , 29 , 33 , 34 , 35 , 36 , 40 , 41 , 42 , 45 , 46 , 47 , 48 ] and five examined the consequences of compassion fatigue [ 30 , 37 , 43 , 44 , 49 ]. Six studies were conducted in the United States, with the others conducted in a range of countries including Ecuador, Spain, the United Kingdom, Italy, Greece, Turkey, Iran, Uganda, Taiwan, Japan, the Philippines, China, and India. These studies primarily focused on nurses, physicians, and other allied health professionals. The study samples included both male and female HCPs; only one study focused exclusively on female HCPs [ 43 ].

A variety of assessment tools were used to measure compassion fatigue across the included studies. Common tools included the Compassion Fatigue Short Scale (CFSS) [ 33 , 47 , 48 ], the Compassion Fatigue Scale (CFS) [ 30 , 49 ], the Professional Quality of Life Scale Version 5 (ProQoL 5) [ 23 , 29 , 31 , 35 , 36 , 38 , 39 , 40 , 41 , 42 , 50 , 51 ], the Work-Related Quality of Life Scale (WRQoL) [ 46 ], and the Compassion Fatigue and Satisfaction Self-Test (CFST) [ 37 , 52 ] (Table 3 ).

The study periods show that most of the studies were conducted in the first six months of 2020, coinciding with the World Health Organization’s declaration of the COVID-19 outbreak as a pandemic [ 54 ]. No studies included in the review were conducted between March 2021 and May 2023 (Fig.  2 ).

Figure 2. The time trend of study periods on compassion fatigue in HCPs during the COVID-19 pandemic

Findings were synthesized and presented using the following four themes: (1) prevalence of compassion fatigue, (2) antecedents of compassion fatigue (individual-level, organizational-level, and systems-level factors), (3) consequences of compassion fatigue, and (4) interventions for compassion fatigue.

Theme 1: Prevalence of compassion fatigue

Of the studies reviewed, five measured the prevalence of compassion fatigue among HCPs during the COVID-19 pandemic [ 23 , 30 , 31 , 36 , 41 ]. In a study conducted in Spain, 306 out of 506 (60.4%) HCPs reported high levels of compassion fatigue while 170 (33.6%) showed moderate levels of compassion fatigue (ProQoL 5: M = 19.9, SD = 7.6) [ 36 ]. In a sample composed of 395 Ugandan frontline nurses, 49.11% of the nurses reported high levels of compassion fatigue, while 29.6% experienced moderate levels of compassion fatigue [ 23 ]. Over half of the nurses in the study (54.94%) reported direct exposure to COVID-19 cases. A study conducted in Greece found that in a sample of 105 nurses, the majority of nurses (51.4%) experienced moderate levels of compassion fatigue (ProQoL 5: M = 22.26, SD = 6.76) [ 41 ]. In a Taiwanese study of 503 HCPs, the majority of the participants (63.2%) experienced low levels of compassion fatigue (ProQoL 5: M = 20.9, SD = 7.6) [ 31 ]. Finally, in a Filipino sample composed of 270 frontline nurses, 61.4% of the nurses reported low levels of compassion fatigue (CFS: M = 2.213, SD = 0.979) [ 30 ].
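
As a small worked check, the percentages reported for the Spanish sample follow directly from the quoted counts; the sketch below simply converts counts to percentages and is not a reanalysis of study data.

```python
def prevalence_pct(cases: int, total: int) -> float:
    """Return a prevalence estimate as a percentage, rounded to one decimal place."""
    return round(100 * cases / total, 1)

# Spanish sample reported above: 306 of 506 HCPs with high compassion fatigue,
# 170 of 506 with moderate compassion fatigue.
print(prevalence_pct(306, 506))  # 60.5 (reported as 60.4% in the study)
print(prevalence_pct(170, 506))  # 33.6
```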

Theme 2: Antecedents of compassion fatigue

Individual-level factors

Age and sex were key factors associated with compassion fatigue among participant HCPs. Younger HCPs with less experience were more likely to experience mental health issues and conflicting feelings about providing care to COVID-19 patients [ 23 , 29 , 44 , 46 ]. Seven studies included in the review determined that female HCPs were more likely than male HCPs to experience compassion fatigue [ 23 , 35 , 36 , 38 , 40 , 50 , 52 ]. Physicians were also reported to have higher levels of compassion fatigue than nurses in three studies [ 36 , 38 , 39 ], while nursing assistants had higher levels of compassion fatigue than nurses in one study (ProQoL 5: nursing assistants = 29.15 ± 6.94; nurses = 25.68 ± 5.87) [ 29 ]. Furthermore, the risk was higher in permanent workers than in temporary workers (ProQoL 5: permanent = 2.48 ± 1.29; temporary = 2.11 ± 1.15; P -value < 0.05) [ 35 ]. One included study determined that marital status and education level were not correlated with compassion fatigue [ 23 ]. Psychiatric comorbidities such as past trauma, burnout, stress, anxiety, and depression worsened HCPs’ psychological well-being across a number of included studies [ 31 , 33 , 36 , 38 , 39 , 41 , 49 , 50 ]. Other psychological factors, such as excessive empathetic engagement, heightened sensory processing sensitivity, and overidentification from frequently witnessing patient suffering and deaths, were found to aggravate the development of compassion fatigue [ 34 , 39 , 45 ]. The inability to cope with the rapidly evolving landscape of healthcare provision and a lack of self-care contributed to increased burden and a blurring of role boundaries between professional and private lives [ 29 , 41 , 43 , 44 , 51 , 52 ]. One study that used the Compassion Fatigue and Satisfaction Self-Test and a questionnaire of personal and professional characteristics found that feelings of underappreciation, insufficient compensation, and social isolation imposed a psychological burden on pediatric subspecialists [ 52 ]. Additionally, a decrease in occupational hardiness, as measured by the Occupational Hardiness Questionnaire, increased the risk of compassion fatigue among HCPs in two studies [ 42 , 50 ]. Negative outcomes for HCPs’ families and concerns revolving around their patients’ families also predicted a higher risk of experiencing compassion fatigue [ 45 , 48 , 52 ]. Finally, HCPs’ fear of COVID-19 infection and transmission was identified as a predictor of compassion fatigue [ 29 , 40 , 43 , 44 , 47 ].

Two studies identified social support from family, friends, peers, and hospital leadership as a crucial protective factor against compassion fatigue [ 43 , 52 ]. Coping mechanisms such as venting and exercising were found to help alleviate stress among HCPs [ 44 ]. Psychological qualities such as compassion satisfaction, professional satisfaction, resilience, vigor, and hardiness were found to help protect the psychological health of HCPs, as well as to reduce turnover intention and increase perceived quality of care [ 30 , 34 , 36 , 37 , 39 , 40 , 42 , 46 , 50 ]. Self-care, self-awareness of limitations, and self-regulation of emotions were crucial for reducing the risk of compassion fatigue in two studies of physicians and nurses [ 44 , 50 ]. Lastly, spirituality, religiosity, and meditation also served as protective factors in three studies on compassion fatigue in HCPs [ 41 , 44 , 51 ].

Organizational-level factors

In five of the articles reviewed, increased workload [ 23 , 29 , 44 , 45 ], long working hours [ 23 , 29 , 44 , 45 ], and an increased number of patients [ 50 ] were identified as common predictors of compassion fatigue. Furthermore, providing direct care to COVID-19 patients, often emotionally challenging cases, exacerbated the psychological risks to HCPs [ 23 , 36 , 46 , 48 , 50 ]. Chronic exposure to a dynamic work environment also increased the risk of compassion fatigue among HCPs [ 29 ]. A lack of access to suitable PPE and a lack of foresight from management and human resources teams regarding infection control guidelines contributed to HCPs’ distress [ 29 ]. Adjusting to the discomfort of wearing PPE also challenged the efficiency of work activities [ 29 ]. Lastly, in two studies, HCPs identified that while healthcare organizations provided plenty of wellness resources to support mindfulness, there was a lack of practical and pragmatic resources for social and emotional support, work-life balance, and remuneration [ 23 , 43 ].

Positive work conditions, such as a visible presence and engagement by leadership and management, as well as a positive work culture allowing HCPs to seek help without fear of judgment, were found to be important protective factors against the development of compassion fatigue [ 44 ]. The social aspects of teamwork facilitated the sharing of feelings of trauma, which in turn contributed to resilience and improved psychological well-being among HCPs in three studies [ 41 , 43 , 44 ]. One study observed that workplace wellness activities and a sense of being valued can prevent high levels of compassion fatigue [ 52 ]. Words of appreciation from supervisors boosted morale for some HCPs [ 44 ]. Attention to workplace safety in the form of PPE and early access to vaccines alleviated the fear of infection [ 44 ]. Finally, two studies determined that adequate preparation and education to handle COVID-19 cases, along with increased autonomy, decreased the risk of compassion fatigue and increased professional fulfillment [ 42 , 44 ].

Systems-level factors

Significant and frequently changing public health measures over the course of the pandemic presented a challenge, as they were disruptive to workflow and resulted in uncertainty, feelings of inadequacy, and distress among HCPs across a range of geographical contexts [ 29 , 41 , 43 , 49 ]. Increases in the incidence of COVID-19 cases also contributed to a rise in the number of hospital admissions, aggravating HCPs’ workload [ 35 ]. Social-distancing policies precluded informal team interactions, such as sharing meals together, which posed a risk to HCPs’ psychological well-being by decreasing social support [ 43 , 52 ]. Transitions to telehealth also increased social isolation [ 43 ]. A further theme was the negative impact of stigma directed at HCPs because of their proximity to contagion, which emerged as a possible risk factor [ 35 , 41 ]. Aggressive behaviors and verbal abuse from patients were sources of emotional stress for some HCPs [ 44 ]. Finally, negative peer pressure was identified as a barrier to HCPs engaging in self-care, as they felt pressure to conform to sociocultural norms of an expected level of dedication [ 44 ]. In contrast to the impacts of stigma, a positive perception of one’s own profession was related to increased commitment and decreased compassion fatigue [ 46 ].

Theme 3: Consequences of compassion fatigue

The findings of one study suggested that compassion fatigue associated with HCPs’ professional practice impacted their private lives, predicting greater parental burnout ( r  = 0.542), child abuse ( r  = 0.468), child neglect ( r  = 0.493), spouse conflict ( r  = 0.340), and substance abuse ( r  = 0.298) [ 48 ]. This study identified factors such as direct care of COVID-19 patients ( r  = 0.255), exposure to patient death and suffering due to COVID-19 ( r  = 0.281), and family income loss due to COVID-19 ( r  = 0.366) as risk factors for compassion fatigue [ 48 ]. Additionally, at the organizational level, two studies conducted in 2020 and 2021 observed that Turkish and Filipino HCPs who reported compassion fatigue also reported lower job satisfaction and reduced professional commitment [ 30 , 46 ]. Consequently, elevated compassion fatigue also increased organizational turnover intent among Filipino HCPs (β = 0.301, P -value = 0.001) [ 30 ]. A study conducted in China found that compassion fatigue predicted negative behavioral intentions towards treating COVID-19 patients, as measured by the Attitude, Subjective Norms, and Behavioral Intention of Nurses toward Mechanically Ventilated Patients (ASIMP) questionnaire, suggesting that quality of care may be adversely impacted [ 33 ]. Finally, an American study observed that compassion fatigue among HCPs was associated with a deteriorating workplace culture [ 52 ].
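
For readers less familiar with correlation magnitudes, the coefficients quoted above can be set against Cohen's commonly used benchmarks (roughly 0.1 small, 0.3 medium, 0.5 large). The sketch below applies those benchmarks to the reported values as an interpretive aid only; it is not the classification used by the original study.

```python
# Classify the reported correlation coefficients using Cohen's conventional
# benchmarks (~0.1 small, ~0.3 medium, ~0.5 large); an interpretive aid only.
reported = {
    "parental burnout": 0.542,
    "child abuse": 0.468,
    "child neglect": 0.493,
    "spouse conflict": 0.340,
    "substance abuse": 0.298,
}

def magnitude(r: float) -> str:
    r = abs(r)
    if r >= 0.5:
        return "large"
    if r >= 0.3:
        return "medium"
    if r >= 0.1:
        return "small"
    return "negligible"

for outcome, r in reported.items():
    print(f"{outcome}: r = {r} ({magnitude(r)})")
```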

Patient care

The provision of care during the pandemic was impacted by the general lack of preparation for handling novel tasks experienced by many HCPs [ 23 ]. One study found that many HCPs (73%) experienced a shift in their clinical practice setting as a result of the pandemic, for example from in-person care to virtual telehealth consults [ 43 ]. HCPs also experienced an increased need to provide palliative care as a result of the negative health impacts of COVID-19, something they may have had limited prior experience with [ 43 ]. In a case study conducted in Japan, the physician reported feeling inexperienced in handling the psychological impact of the pandemic experienced by not only the patients but also their families [ 45 ]. The consequences of not being able to provide optimal care were found to exacerbate feelings of guilt, powerlessness, and frustration in HCPs [ 41 , 43 ]. In turn, study findings suggest that worsening compassion fatigue may reduce the quality of care provided by HCPs, as it was found to be a significant predictor of negative behavioral intention [ 30 , 33 , 40 , 52 ].

Theme 4: Interventions for compassion fatigue

Two studies, in Japan and Uganda, investigated potential interventions to support HCPs experiencing COVID-19-related compassion fatigue. At the individual level, regularly engaging in self-care activities such as expressions of gratitude, as well as learning how to recognize the signs and symptoms of compassion fatigue, were identified as crucial first steps in its management [ 45 , 52 ]. Emotional support from colleagues and mental health specialists was found to be effective in improving the mental health of a Japanese physician experiencing compassion fatigue [ 45 ]. Findings of two studies identified the need for a systematic approach to monitor the progression of psychological symptoms and to provide tailored resources to HCPs in a timely manner, helping to ameliorate compassion fatigue and its consequences [ 29 , 45 ]. Suggested strategies included facilitating regular consultations with each department [ 45 , 52 ], increasing HCP staffing levels in busy departments [ 23 , 45 ], and providing PPE and vaccines in a timely manner [ 23 , 52 ]. Lastly, findings from two studies in Uganda and the United States suggested that increased remuneration may prevent or minimize compassion fatigue [ 23 , 52 ].

Key findings

This scoping review sought to provide a comprehensive summary of the literature published between January 2020 and May 2023 on the impact of the COVID-19 pandemic on compassion fatigue among HCPs and its subsequent impact on patient care. Most of the included studies were conducted in 2020 and used cross-sectional study designs. Given that the COVID-19 outbreak was declared a global health emergency in early 2020 [ 1 ], cross-sectional study designs were well placed to provide prompt and important insights on compassion fatigue across the HCP population. Review findings were presented using four themes addressing the prevalence, antecedents, and consequences of, and interventions for, compassion fatigue in HCPs. The prevalence of compassion fatigue was observed to vary across countries. The negative psychological outcomes reported by included studies were precipitated by individual-level factors such as age and occupational role; organizational-level factors such as lack of access to PPE; and systems-level factors such as loss of social engagement and stigma. The consequences of compassion fatigue affected HCPs’ personal and professional roles. Findings suggest an urgent need for policy makers, health managers, and team leaders to develop and implement strategies that target the potential root causes of compassion fatigue in HCPs.

Prevalence of compassion fatigue

Among the five studies that measured the prevalence of compassion fatigue, results were highly variable across countries [ 23 , 30 , 31 , 36 , 41 ]. This may be attributed to differences in preparedness for infection containment and variability in health systems’ ability to respond to supply chain issues [ 53 ]. Taiwan provides an example of how digital technologies were adopted to improve disease surveillance and monitor medical supply chains [ 55 ]. Using the stringent Identify-Isolate-Inform model in conjunction with public mask-wearing and physical distancing, Taiwan effectively contained the spread of the disease [ 53 ]. Consequently, despite not enforcing lockdowns, Taiwan blocked the first wave of cases and slowed subsequent outbreaks, which may have contributed to the observed low prevalence of compassion fatigue among HCPs [ 56 ]. In the Philippines, responses to disease outbreaks varied across municipalities and provinces [ 57 ]. Effective containment measures such as strict border control and early lockdowns, in addition to plentiful medical supplies and personnel, allowed certain regions to mount a strong response to this public health emergency, which may explain the observed low prevalence of compassion fatigue among HCPs [ 57 ]. In Uganda, there were generally low levels of preparedness with regard to infection identification, PPE supply, access to hand-washing facilities, and the establishment of isolation facilities [ 58 ]. This may have contributed to an overwhelmed healthcare system and overworked HCPs, as the surge of cases was exacerbated by the shortage of disease containment resources [ 58 ]. In April 2020, Spain experienced the second highest infection incidence in the world [ 59 ]. The Spanish health system was overwhelmed by the influx of patients due to shortages of HCPs [ 60 ], hospital capacity, and material supplies [ 59 ]. An increase in compassion fatigue among HCPs was also observed in recent studies from Italy and Canada [ 61 , 62 ]. Overall, the various strategies used to address the COVID-19-related public health crisis presented distinctive challenges to HCPs in different countries, and the psychological burden and prevalence of compassion fatigue varied accordingly. Caution must therefore be taken when interpreting the study findings, given the contextual differences across healthcare systems.

Antecedents of compassion fatigue

The findings of this review suggest that individual characteristics such as age and occupational role are significant contributing factors to the development of compassion fatigue during COVID-19 [ 63 ]. Specifically, regression analyses indicated that older HCPs were less likely than younger HCPs to experience compassion fatigue [ 23 , 29 , 44 , 46 ]. This observation may be attributed to their greater work experience. Resilience has also been found to increase linearly with age [ 64 ]. Factors identified as potential contributors to this age-related advantage in well-being include access to job resources, better job security, work-life balance, and coping skills [ 64 ]. The compounding of stressors, such as an increased workload during the COVID-19 pandemic, could have further strained the psychological health of younger HCPs. In the context of telework, older employees tended to create clear boundaries between work and non-work responsibilities [ 64 ]. The rise in telework among HCPs was largely a consequence of the COVID-19 pandemic, which may have increased the psychological burden on younger HCPs [ 65 ]. In addition, a study examining demographic predictors of resilience in nurses reported that younger nurses had had less exposure to stress and thus fewer opportunities to develop stress-management skills [ 66 ]. As a result of these factors, younger HCPs were at higher risk of compassion fatigue during the COVID-19 pandemic. Interestingly, three of the included studies also observed that physicians were at higher risk of compassion fatigue than nurses [ 36 , 38 , 39 ]. This difference may be attributed to the burden of breaking bad news, a task that often falls to physicians [ 67 ]. A study examining compassion fatigue in HCPs determined that conflict arising during patient interactions placed HCPs at risk of compassion fatigue [ 68 ]. The delivery of bad or uncertain news also predicted a greater mental health burden in HCPs [ 68 ].

At the organizational level, findings from the studies included in this review identified a lack of access to PPE as a contributor to compassion fatigue in HCPs during COVID-19 [ 29 , 52 ]. Specifically, one study reported that the fear of infection and of transmission to patients, family, and friends added to the concerns of HCPs working in high-risk environments [ 69 ]. This finding can potentially be explained by the increased vulnerability HCPs experienced when the provision of PPE lagged. Several organizational factors were identified as potential barriers to the distribution of PPE: the unprecedented nature of the pandemic presented challenges for maintaining domestic inventories [ 70 ], and disruptions to the global PPE supply chain amplified the equipment shortage [ 70 ]. These findings highlight the importance of monitoring domestic health supplies and ensuring they are adequately stocked.

At the systems level, loss of social engagement [ 43 , 52 ] and stigma [ 35 , 41 ] were identified in the included studies as antecedents of compassion fatigue. Public policies such as social distancing and occupancy limits negatively impacted social interactions, which may explain the loss of social engagement as well as the worsening mental well-being of HCPs [ 71 ]. As certain practices transitioned to telehealth, other studies found increased mental fatigue and difficulty maintaining empathetic rapport, which has important implications for patient care [ 72 , 73 ]. In addition, other studies have found that, given the proximity of their role to contagion, stigma towards HCPs from patients increased during COVID-19 [ 74 , 75 ]. Consequently, the combined experience of being socially isolated and stigmatized may worsen mental health outcomes [ 76 ]. This points to a need for increased access to support services for HCPs, such as virtual communities.

Consequences of compassion fatigue

Review findings suggest that compassion fatigue impacted the private and professional lives of HCPs. The risk of parental burnout increased across many occupations during the pandemic [ 77 ]. Low levels of social support, a lack of leisure time, and greater parental responsibilities in the face of education disruptions added to the psychological burden on parents [ 77 ]. HCPs were placed in a unique position, having to work in highly stressful environments while also balancing household responsibilities and increased childcare challenges [ 48 , 78 ]. This finding highlights a need for childcare support services for HCPs, or a reduction in workload, to alleviate the burden of parental and home-care responsibilities, particularly in times of public health crises.

Beyond their private lives, this review found that decreases in HCPs’ professional commitment due to compassion fatigue may endanger the quality of patient care delivered [ 79 ]. In particular, this may be attributed to the surge in palliative care cases during the pandemic in conjunction with an unprepared workforce, creating psychological stress for HCPs [ 80 ]. In a study examining palliative care preparedness during the pandemic, a lack of core palliative care training and expertise among frontline HCPs [ 81 ] meant that many felt emotionally unprepared to manage seriously ill patients [ 45 ]. An increased frequency of breaking bad news to patients’ families was associated with negative psychological outcomes [ 82 ]. Providing training in relevant communication skills may protect HCPs from compassion fatigue [ 83 , 84 ].

Implications

The findings of this review highlight the urgent need to support HCPs who may be at risk of compassion fatigue, which could have subsequent impacts on the provision of patient care [ 85 ]. To address the antecedents of compassion fatigue, this scoping review identified a need for increased staffing, recruitment, and retention efforts on the part of hospital human resources departments [ 23 , 45 ]. Interventions suggested by the included studies encompass monitoring the psychological well-being of HCPs to inform the timely provision of resources [ 29 , 45 ]. Specifically, structured debriefing, training in self-care routines, reduced workloads, and the normalization of trauma-related therapy are essential interventions [ 86 ]. Additionally, one study identified that fostering a collaborative workplace culture encourages social and emotional support among staff [ 45 ]. Certain hospitals have adopted “wobble rooms” as private spaces where employees can unwind and vent [ 87 ]. Studies have observed that interventions aimed at improving the well-being of HCPs resulted in enhanced quality and safety of the care delivered [ 75 ].

Strengths and limitations

This review has both strengths and limitations. Although some literature reviews have focused on the psychological health of HCPs (e.g., burnout, anxiety, depression), very few have specifically explored compassion fatigue, and reviews considering the impact of the COVID-19 pandemic on HCPs are even more limited. Compassion is a cornerstone of quality health care and contributes to successful medical outcomes [ 88 , 89 , 90 ]. Nevertheless, prolonged exposure of HCPs to distressing events, such as patient death and suffering, results in the absorption of negative emotional responses and leads to the development of compassion fatigue [ 91 ]. This scoping review presents an extensive exploration of the current body of literature on compassion fatigue among HCPs during the COVID-19 pandemic. Another strength of this study lies in the transparency and reproducibility of its methodology. The scoping review protocol was published in a peer-reviewed journal to establish high methodological standards for the final scoping review [ 92 ]. Additionally, the study plan was pre-registered with the Open Science Framework to ensure commitment to the methodology. Double extraction was performed to ensure a comprehensive descriptive summary of the studies.

Limitations include the short time frame for inclusion, covering only studies published since the onset of COVID-19, which may have constrained the breadth and quality of the included studies. Longitudinal studies may not have been captured, as this methodology requires a prolonged period of time to yield meaningful observations. More data are needed to support conclusions on the impact of compassion fatigue on patient care. Additionally, none of the included studies were conducted between March 2021 and May 2023, meaning that meaningful trends in compassion fatigue among HCPs over that period may have been missed. This scoping review only included literature published in English, so studies published in other languages were not assessed. Additionally, no comparisons of compassion fatigue were made among HCP groups despite potentially relevant differences such as patient exposure. There was also a lack of allied health profession representation, with the majority of the study population being nurses or physicians. Lastly, grey literature was not included, which may limit the information captured in the review.

There were recurring themes related to limitations in the included research studies. Several studies identified sampling issues, including small sample sizes, restricted sampling frames, low response rates, and selection bias [ 23 , 29 , 31 , 38 , 39 , 40 , 41 , 42 , 43 , 47 , 50 , 51 , 83 ]. Other studies have called for investigations into how different sociodemographic factors, other psychiatric conditions, healthcare settings, and workplace environments impact compassion fatigue in HCPs [ 38 , 39 , 47 , 48 , 83 ]. One study observed a lack of homogeneity due to an overrepresentation of female HCPs in its sample [ 38 ]. Lastly, many studies employed a cross-sectional design, which limits causal interpretation of the data [ 23 , 30 , 31 , 34 , 42 , 47 , 48 , 50 ]. While the study has limitations, a comprehensive summary of the existing literature may be useful to inform future research and policies.

Future research is needed to examine the longitudinal impacts of COVID-19 on compassion fatigue in HCPs. Moreover, research in this area could be strengthened by including a consultation phase with external experts on compassion fatigue to improve the robustness of the scoping review.

Conclusions

The COVID-19 pandemic presented a unique set of challenges to healthcare systems across the globe. This scoping review indicated that the prevalence of compassion fatigue was inconsistent across countries and may reflect variability in pandemic preparedness among individual countries. Primary risk factors for the development of compassion fatigue included being younger, female, and a physician or nurse, and having limited access to PPE in conjunction with an excessive workload and prolonged work hours. The negative impacts of compassion fatigue were experienced at both the individual and organizational levels. The findings suggest a systemic need to assess, monitor, and support health professionals’ well-being, particularly during protracted health crises such as a pandemic. In addition, many health systems and sectors are facing a profound health human resources crisis; ongoing efforts must therefore be made to improve workplace environments and strengthen recruitment and retention. Lastly, pandemic planning must include provisions that support health providers’ ability to do their jobs safely while minimizing negative impacts on their health and well-being.

Availability of data and materials

All the material presented in the manuscript is owned by the authors and/or no permissions are required.

References

Coronavirus disease (COVID-19) pandemic. Accessed 16 Jan 2023. https://www.who.int/europe/emergencies/situations/covid-19

Wilder-Smith A, Osman S. Public health emergencies of international concern: a historic overview. J Travel Med. 2020;27(8):taaa227. https://doi.org/10.1093/jtm/taaa227 .


Gristina GR, Piccinni M. COVID-19 pandemic in ICU. limited resources for many patients: approaches and criteria for triaging. Minerva Anestesiol. 2021;87(12):1367–79. https://doi.org/10.23736/S0375-9393.21.15736-0 .

Chaka EE, Mekuria M, Melesie G. Access to Essential personal safety, availability of personal protective equipment and perception of healthcare workers during the COVID-19 in public hospital in West Shoa. Infect Drug Resist. 2022;15:2315–23. https://doi.org/10.2147/IDR.S344763 .


Gholami M, Fawad I, Shadan S, et al. COVID-19 and healthcare workers: a systematic review and meta-analysis. Int J Infect Dis. 2021;104:335–46. https://doi.org/10.1016/j.ijid.2021.01.013 .


Schug C, Geiser F, Hiebel N, et al. Sick Leave and Intention to Quit the Job among Nursing Staff in German Hospitals during the COVID-19 Pandemic. Int J Environ Res Public Health. 2022;19(4):1947. https://doi.org/10.3390/ijerph19041947 .

Lancet T. COVID-19: protecting health-care workers. Lancet Lond Engl. 2020;395(10228):922. https://doi.org/10.1016/S0140-6736(20)30644-9 .


Beck E, Daniels J. Intolerance of uncertainty, fear of contamination and perceived social support as predictors of psychological distress in NHS healthcare workers during the COVID-19 pandemic. Psychol Health Med. Published online July 6, 2022:1–13. https://doi.org/10.1080/13548506.2022.2092762.

Nikeghbal K, Kouhnavard B, Shabani A, Zamanian Z. Covid-19 effects on the mental workload and quality of work life in Iranian nurses. Ann Glob Health. 2021;87(1):79. https://doi.org/10.5334/aogh.3386.

WHO Coronavirus (COVID-19) Dashboard. Accessed 16 Jan 2023. https://covid19.who.int

Lluch C, Galiana L, Doménech P, Sansó N. The Impact of the COVID-19 pandemic on burnout, compassion fatigue, and compassion satisfaction in healthcare personnel: a systematic review of the literature published during the first year of the pandemic. Healthcare. 2022;10(2):364. https://doi.org/10.3390/healthcare10020364 .

Salmond E, Salmond S, Ames M, Kamienski M, Holly C. Experiences of compassion fatigue in direct care nurses: a qualitative systematic review. JBI Evid Synth. 2019;17(5):682. https://doi.org/10.11124/JBISRIR-2017-003818 .

Sinclair S, Raffin-Bouchal S, Venturato L, Mijovic-Kondejewski J, Smith-MacDonald L. Compassion fatigue: A meta-narrative review of the healthcare literature. Int J Nurs Stud. 2017;69:9–24. https://doi.org/10.1016/j.ijnurstu.2017.01.003 .

Majid U, Hussain SAS, Zahid A, Haider MH, Arora R. Mental health outcomes in health care providers during the COVID-19 pandemic: an umbrella review. Health Promot Int. 2023;38(2):daad025. https://doi.org/10.1093/heapro/daad025 .

Rahmani F, Hosseinzadeh M, Gholizadeh L. Complicated grief and related factors among nursing staff during the Covid-19 pandemic: a cross-sectional study. BMC Psychiatry. 2023;23(1):73. https://doi.org/10.1186/s12888-023-04562-w .

Statement on the fifteenth meeting of the IHR (2005) Emergency Committee on the COVID-19 pandemic. Accessed July 6, 2023. https://www.who.int/news/item/05-05-2023-statement-on-the-fifteenth-meeting-of-the-international-health-regulations-(2005)-emergency-committee-regarding-the-coronavirus-disease-(covid-19)-pandemic

Ghahramani S, Kasraei H, Hayati R, Tabrizi R, Marzaleh MA. Health care workers’ mental health in the face of COVID-19: a systematic review and meta-analysis. Int J Psychiatry Clin Pract. 2022;0(0):1–10. https://doi.org/10.1080/13651501.2022.2101927 .


Nie A, Su X, Zhang S, Guan W, Li J. Psychological impact of COVID-19 outbreak on frontline nurses: a cross-sectional survey study. J Clin Nurs. 2020;29(21–22):4217–26. https://doi.org/10.1111/jocn.15454 .

Jalili M, Niroomand M, Hadavand F, Zeinali K, Fotouhi A. Burnout among healthcare professionals during COVID-19 pandemic: a cross-sectional study. Int Arch Occup Environ Health. 2021;94(6):1345–52. https://doi.org/10.1007/s00420-021-01695-x .

Iddrisu M, Poku CA, Mensah E, Attafuah PYA, Dzansi G, Adjorlolo S. Work-related psychosocial challenges and coping strategies among nursing workforce during the COVID-19 pandemic: a scoping review. BMC Nurs. 2023;22(1):210. https://doi.org/10.1186/s12912-023-01368-9 .

Yang BJ, Yen CW, Lin SJ, et al. Emergency nurses’ burnout levels as the mediator of the relationship between stress and posttraumatic stress disorder symptoms during COVID-19 pandemic. J Adv Nurs. 2022;78(9):2861–71. https://doi.org/10.1111/jan.15214 .

Fukushima H, Imai H, Miyakoshi C, Naito A, Otani K, Matsuishi K. The sustained psychological impact of coronavirus disease 2019 pandemic on hospital workers 2 years after the outbreak: a repeated cross-sectional study in Kobe. BMC Psychiatry. 2023;23(1):313. https://doi.org/10.1186/s12888-023-04788-8 .

Amir K, Okalo P. Frontline nurses’ compassion fatigue and associated predictive factors during the second wave of COVID-19 in Kampala. Uganda Nurs Open. 2022;9(5):2390–6. https://doi.org/10.1002/nop2.1253 .

Calkins K, Guttormson J, McAndrew NS, et al. The early impact of COVID-19 on intensive care nurses’ personal and professional well-being: a qualitative study. Intensive Crit Care Nurs. 2023;76:103388. https://doi.org/10.1016/j.iccn.2023.103388 .

Sexton JB, Adair KC, Proulx J, et al. Emotional exhaustion among US health care workers before and during the COVID-19 Pandemic, 2019–2021. JAMA Netw Open. 2022;5(9):e2232748. https://doi.org/10.1001/jamanetworkopen.2022.32748 .

Slatten LA, David Carson K, Carson PP. Compassion fatigue and burnout what managers should know. Health Care Manag. 2011;30(4):325–33. https://doi.org/10.1097/HCM.0b013e31823511f7 .

Martin J. Critical Appraisal Checklist for Systematic Reviews and Research Syntheses. Joanna Briggs Institute; 2017.

PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Annals of Internal Medicine. Accessed 16 Jan 2023. https://doi.org/10.7326/M18-0850 .

Moreno-Mulet C, Sansó N, Carrero-Planells A, et al. The Impact of the COVID-19 Pandemic on ICU healthcare professionals: a mixed methods study. Int J Environ Res Public Health. 2021;18(17):9243. https://doi.org/10.3390/ijerph18179243 .

Labrague LJ, de los Santos JAA. Resilience as a mediator between compassion fatigue, nurses’ work outcomes, and quality of care during the COVID-19 pandemic. Appl Nurs Res. 2021;61:151476.

Su PA, Lo MC, Wang CL, et al. The correlation between professional quality of life and mental health outcomes among hospital personnel during the Covid-19 pandemic in Taiwan. J Multidiscip Healthc. 2021;14:3485–95. https://doi.org/10.2147/JMDH.S330533 .

Levac D, Colquhoun H, O’Brien KK. Scoping studies: advancing the methodology. Implementation Sci. 2010;5:69. https://doi.org/10.1186/1748-5908-5-69 .

Cheng J, Cui J, Yu W, Kang H, Tian Y, Jiang X. Factors influencing nurses’ behavioral intention toward caring for COVID-19 patients on mechanical ventilation: a cross-sectional study. PLoS ONE. 2021;16(11):e0259658. https://doi.org/10.1371/journal.pone.0259658 .

Pérez-Chacón M, Chacón A, Borda-Mas M, Avargues-Navarro ML. Sensory processing sensitivity and compassion satisfaction as risk/protective factors from burnout and compassion fatigue in healthcare and education professionals. Int J Environ Res Public Health. 2021;18(2):611. https://doi.org/10.3390/ijerph18020611 .

Ramaci T, Barattucci M, Ledda C, Rapisarda V. Social Stigma during COVID-19 and its Impact on HCWs Outcomes. Sustainability. 2020;12(9):3834. https://doi.org/10.3390/su12093834 .

Ruiz-Fernández MD, Ramos-Pichardo JD, Ibáñez-Masero O, Cabrera-Troya J, Carmona-Rega MI, Ortega-Galán ÁM. Compassion fatigue, burnout, compassion satisfaction and perceived stress in healthcare professionals during the COVID-19 health crisis in Spain. J Clin Nurs. 2020;29(21–22):4321–30. https://doi.org/10.1111/jocn.15469 .

Kase SM, Gribben JL, Guttmann KF, Waldman ED, Weintraub AS. Compassion fatigue, burnout, and compassion satisfaction in pediatric subspecialists during the SARS-CoV-2 pandemic. Pediatr Res. 2022;91(1):143–8. https://doi.org/10.1038/s41390-021-01635-y .


Carmassi C, Dell’Oste V, Bertelloni CA, et al. Gender and occupational role differences in work-related post-traumatic stress symptoms, burnout and global functioning in emergency healthcare workers. Intensive Crit Care Nurs. 2022;69:103154. https://doi.org/10.1016/j.iccn.2021.103154 .

Ruiz-Fernández MD, Ramos-Pichardo JD, Ibáñez-Masero O, Carmona-Rega MI, Sánchez-Ruiz MJ, Ortega-Galán ÁM. Professional quality of life, self-compassion, resilience, and empathy in healthcare professionals during COVID-19 crisis in Spain. Res Nurs Health. 2021;44(4):620–32. https://doi.org/10.1002/nur.22158 .

Yılmaz A, Bay F, Erdem Ö, Özkalp B. The professional quality of life for healthcare workers during the COVID-19 Pandemic in Turkey and the influencing factors. Bezmialem Sci. 2022;10(3):361–9. https://doi.org/10.14235/bas.galenos.2021.5837 .

Missouridou E, Mangoulia P, Pavlou V, et al. Wounded healers during the COVID-19 syndemic: Compassion fatigue and compassion satisfaction among nursing care providers in Greece. Perspect Psychiatr Care. 2022;58(4):1421–32. https://doi.org/10.1111/ppc.12946 . Published online September 10, 2021.

Zakeri MA, Rahiminezhad E, Salehi F, Ganjeh H, Dehghan M. compassion satisfaction, compassion fatigue and hardiness among nurses: a comparison before and during the COVID-19 Outbreak. Front Psychol. 2022;12:815180. https://doi.org/10.3389/fpsyg.2021.815180 .

Austin EJ, Blacker A, Kalia I. “Watching the tsunami come”: a case study of female healthcare provider experiences during the COVID-19 pandemic. Appl Psychol Health Well-Being. 2021;13(4):781–97. https://doi.org/10.1111/aphw.12269 .

Kong KYC, Ganapathy S. Are we in control of our demons?: understanding compassion satisfaction, compassion fatigue and burnout in an asian pediatric emergency department in a pandemic. Pediatr Emerg Care. 2022;38(3):e1058. https://doi.org/10.1097/PEC.0000000000002656 .

Nishihara T, Ohashi A, Nakashima Y, Yamashita T, Hiyama K, Kuroiwa M. Compassion fatigue in a health care worker treating COVID-19 patients: a case report. Biopsychosoc Med. 2022;16(1):10. https://doi.org/10.1186/s13030-022-00239-0 .

Kaya ŞD, Mehmet N, Şafak K. Professional commitment, satisfaction and quality of life of nurses during the COVID-19 Pandemic in Konya. Turkey Ethiop J Health Sci. 2022;32(2):393–404. https://doi.org/10.4314/ejhs.v32i2.20 .

Kottoor AS, Chacko N. Role of entrapment in relation between fear of Covid-19 and compassion fatigue among nurses. Int J Behav Sci. 2022;15(4):250–5. https://doi.org/10.30491/ijbs.2022.288846.1573 .

Stevenson MC, Schaefer CT, Ravipati VM. COVID-19 patient care predicts nurses’ parental burnout and child abuse: Mediating effects of compassion fatigue. Child Abuse Negl. 2022;130:105458. https://doi.org/10.1016/j.chiabu.2021.105458 .

Hochwarter W, Jordan S, Kiewitz C, et al. Losing compassion for patients? The implications of COVID-19 on compassion fatigue and event-related post-traumatic stress disorder in nurses. J Manag Psychol. 2022;37(3):206–23. https://doi.org/10.1108/JMP-01-2021-0037 .

Cuartero-Castañer ME, Hidalgo-Andrade P, Cañas-Lerma AJ. professional quality of life, engagement, and self-care in healthcare professionals in Ecuador during the COVID-19 Pandemic. Healthcare. 2021;9(5):515. https://doi.org/10.3390/healthcare9050515 .

Spiridigliozzi S. Exploring the relationship between faith and the experience of burnout, compassion fatigue, and compassion satisfaction for hospice workers during a global pandemic: a multidisciplinary study. Dr Diss Proj. Published online April 1, 2022. https://digitalcommons.liberty.edu/doctoral/3572

Gribben JL, Kase SM, Guttmann KF, Waldman ED, Weintraub AS. Impact of the SARS-CoV-2 pandemic on pediatric subspecialists’ well-being and perception of workplace value. Pediatr Res. Published online January 20, 2023:1–7. https://doi.org/10.1038/s41390-023-02474-9.

Chien LC, Beÿ CK, Koenig KL. Taiwan’s Successful COVID-19 mitigation and containment strategy: achieving quasi population immunity. Disaster Med Public Health Prep.:1–4. https://doi.org/10.1017/dmp.2020.357.

Cucinotta D, Vanelli M. WHO Declares COVID-19 a Pandemic. Acta Bio-Medica Atenei Parm. 2020;91(1):157–60. https://doi.org/10.23750/abm.v91i1.9397 .

Kuo S, Ou HT, Wang CJ. Managing medication supply chains: Lessons learned from Taiwan during the COVID-19 pandemic and preparedness planning for the future. J Am Pharm Assoc. 2021;61(1):e12–5. https://doi.org/10.1016/j.japh.2020.08.029 .

Cheng HY, Liu DP. Early Prompt Response to COVID-19 in Taiwan: Comprehensive surveillance, decisive border control, and information technology support. J Formos Med Assoc. Published online November 11, 2022. https://doi.org/10.1016/j.jfma.2022.11.002.

S. Talabis DA, Babierra AL, H. Buhat CA, Lutero DS, Quindala KM, Rabajante JF. Local government responses for COVID-19 management in the Philippines. BMC Public Health. 2021;21:1711. https://doi.org/10.1186/s12889-021-11746-0 .

Rashid N, Nazziwa A, Nanyeenya N, Madinah N, Lwere K. Preparedness, identification and care of COVID-19 cases by front line health workers in selected health facilities in mbale district uganda: a cross-sectional study. East Afr Health Res J. 2021;5(2):144–50. https://doi.org/10.24248/eahrj.v5i2.665 .

Alfonso Viguria U, Casamitjana N. Early Interventions and Impact of COVID-19 in Spain. Int J Environ Res Public Health. 2021;18(8):4026. https://doi.org/10.3390/ijerph18084026 .

Rodríguez-Almagro J, Hernández-Martínez A, Romero-Blanco C, Martínez-Arce A, Prado-Laguna MD, García-Sanchez FJ. Experiences and Perceptions of Nursing Students during the COVID-19 Crisis in Spain. Int J Environ Res Public Health. 2021;18(19):10459.

Dodek PM, Cheung EO, Burns KEA, et al. Moral distress and other wellness measures in Canadian critical care physicians. Ann Am Thorac Soc. 2021;18(8):1343–51. https://doi.org/10.1513/AnnalsATS.202009-1118OC .

Franza F, Basta R, Pellegrino F, Solomita B, Fasano V. The role of fatigue of compassion, burnout and hopelessness in healthcare: experience in the time of covid-19 outbreak. Psychiatr Danub. 32.

Coşkun Şimşek D, Günay U. Experiences of nurses who have children when caring for COVID-19 patients. Int Nurs Rev. 2021;68(2):219–27. https://doi.org/10.1111/inr.12651 .

Scheibe S, De Bloom J, Modderman T. Resilience during crisis and the role of age: involuntary telework during the COVID-19 Pandemic. Int J Environ Res Public Health. 2022;19(3):1762. https://doi.org/10.3390/ijerph19031762 .

Mann DM, Chen J, Chunara R, Testa PA, Nov O. COVID-19 transforms health care through telemedicine: Evidence from the field. J Am Med Inform Assoc JAMIA. 2020;27(7):1132–5. https://doi.org/10.1093/jamia/ocaa072 .

Afshari D, Nourollahi-darabad M, Chinisaz N. Demographic predictors of resilience among nurses during the COVID-19 pandemic. Work. 2021;68(2):297–303. https://doi.org/10.3233/WOR-203376 .

Monden KR, Gentry L, Cox TR. Delivering bad news to patients. Proc Bayl Univ Med Cent. 2016;29(1):101–2.

Sorenson C, Bolick B, Wright K, Hamilton R. Understanding compassion fatigue in healthcare providers: a review of current literature. J Nurs Scholarsh. 2016;48(5):456–65. https://doi.org/10.1111/jnu.12229 .

Alharbi J, Jackson D, Usher K. The potential for COVID-19 to contribute to compassion fatigue in critical care nurses. J Clin Nurs. 2020;29(15–16):2762–4. https://doi.org/10.1111/jocn.15314 .

Cohen J, van der Meulen Rodgers Y. Contributing factors to personal protective equipment shortages during the COVID-19 pandemic. Prev Med. 2020;141:106263.

Waris Nawaz M, Imtiaz S, Kausar E. Self-care of frontline health care workers during COVID-19 pandemic. Psychiatr Danub. 2020;32(3–4):557–62. https://doi.org/10.24869/psyd.2020.557 .

Mano MS, Morgan G. Telehealth, social media, patient empowerment, and physician burnout: seeking middle ground. Am Soc Clin Oncol Educ Book Am Soc Clin Oncol Annu Meet. 2022;42:1–10. https://doi.org/10.1200/EDBK_100030 .

Myronuk L. Effect of telemedicine via videoconference on provider fatigue and empathy: Implications for the Quadruple Aim. Healthc Manage Forum. 2022;35(3):174–8. https://doi.org/10.1177/08404704211059944 .

Abuhammad S, Alzoubi KH, Al‐Azzam S, et al. Stigma toward healthcare providers from patients during COVID‐19 era in Jordan. Public Health Nurs Boston Mass. Published online March 25, 2022: https://doi.org/10.1111/phn.13071

Nashwan AJ, Valdez GFD, AL-Fayyadh S, et al. Stigma towards health care providers taking care of COVID-19 patients: a multi-country study. Heliyon. 2022;8(4):e09300. https://doi.org/10.1016/j.heliyon.2022.e09300 .

Shiu C, Chen WT, Hung CC, Huang EPC, Lee TSH. COVID-19 stigma associates with burnout among healthcare providers: evidence from Taiwanese physicians and nurses. J Formos Med Assoc Taiwan Yi Zhi. 2022;121(8):1384–91. https://doi.org/10.1016/j.jfma.2021.09.022 .

Griffith AK. Parental burnout and child maltreatment during the COVID-19 Pandemic. J Fam Violence. 2022;37(5):725–31. https://doi.org/10.1007/s10896-020-00172-2 .

Çakmak G, Öztürk ZA. Being both a parent and a healthcare worker in the pandemic: who could be exhausted more? Healthcare. 2021;9(5):564. https://doi.org/10.3390/healthcare9050564 .

Cavanagh N, Cockett G, Heinrich C, et al. Compassion fatigue in healthcare providers: a systematic review and meta-analysis. Nurs Ethics. 2020;27(3):639–65. https://doi.org/10.1177/0969733019889400 .

Boufkhed S, Harding R, Kutluk T, Husseini A, Pourghazian N, Shamieh O. What is the preparedness and capacity of palliative care services in Middle-Eastern and North African Countries to Respond to COVID-19? a rapid survey. J Pain Symptom Manage. 2021;61(2):e13–50. https://doi.org/10.1016/j.jpainsymman.2020.10.025 .

Gelfman LP, Morrison RS, Moreno J, Chai E. Palliative care as essential to a hospital system’s pandemic preparedness planning: how to get ready for the next wave. J Palliat Med. 2021;24(5):656–8. https://doi.org/10.1089/jpm.2020.0670 .

Messerotti A, Banchelli F, Ferrari S, et al. Investigating the association between physicians self-efficacy regarding communication skills and risk of “burnout.” Health Qual Life Outcomes. 2020;18:271. https://doi.org/10.1186/s12955-020-01504-y .

Gribben JL, Kase SM, Waldman ED, Weintraub AS. A cross-sectional analysis of compassion fatigue, burnout, and compassion satisfaction in pediatric critical care physicians in the United States. Pediatr Crit Care Med J Soc Crit Care Med World Fed Pediatr Intensive Crit Care Soc. 2019;20(3):213–22. https://doi.org/10.1097/PCC.0000000000001803 .

Sengupta M, Roy A, Gupta S, Chakrabarti S, Mukhopadhyay I. Art of breaking bad news: a qualitative study in Indian healthcare perspective. Indian J Psychiatry. 2022;64(1):25–37. https://doi.org/10.4103/indianjpsychiatry.indianjpsychiatry_346_21 .

Cross LA. Compassion fatigue in palliative care nursing: a concept analysis. J Hosp Palliat Nurs. 2019;21(1):21. https://doi.org/10.1097/NJH.0000000000000477 .

Paiva-Salisbury ML, Schwanz KA. Building compassion fatigue resilience: awareness, prevention, and intervention for pre-professionals and current practitioners. J Health Serv Psychol. 2022;48(1):39–46. https://doi.org/10.1007/s42843-022-00054-9 .

Corpuz-Bosshart L. ‘Wobble room’ provides time-out for COVID-19 frontliners. UBC News. Published June 8, 2020. Accessed 17 Jan 2023. https://news.ubc.ca/2020/06/08/making-a-difference-wobble-room-provides-time-out-for-covid-19-frontliners/

Gupta N, Dhamija S, Patil J, Chaudhari B. Impact of COVID-19 pandemic on healthcare workers. Ind Psychiatry J. 2021;30(Suppl 1):S282–4. https://doi.org/10.4103/0972-6748.328830 .

Menon GR, Yadav J, Aggarwal S, et al. Psychological distress and burnout among healthcare worker during COVID-19 pandemic in India—a cross-sectional study. PLoS ONE. 2022;17(3):e0264956. https://doi.org/10.1371/journal.pone.0264956 .

Nishimura Y, Miyoshi T, Sato A, et al. Burnout of healthcare workers amid the COVID-19 Pandemic: a follow-up study. Int J Environ Res Public Health. 2021;18(21):11581. https://doi.org/10.3390/ijerph182111581 .

Jemal K, Hailu D, Mekonnen M, Tesfa B, Bekele K, Kinati T. The importance of compassion and respectful care for the health workforce: a mixed-methods study. J Public Health. 2023;31(2):167–78. https://doi.org/10.1007/s10389-021-01495-0 .

Hui L, Garnett A, Oleyniov C, Boamah S. Compassion fatigue in health providers during the COVID-19 pandemic: A scoping review protocol. BMJ Open. 2023;13:e069843. https://doi.org/10.1136/bmjopen-2022-069843 .


Acknowledgements

The authors declare that they have no competing interests as defined by BMC, or other interests that might be perceived to influence the results and/or discussion reported in this paper.

Author information

Authors and affiliations.

Arthur Labatt Family School of Nursing, Western University, London, ON, Canada

Anna Garnett & Christina Oleynikov

Medical Sciences, Western University, London, ON, Canada

School of Nursing, McMaster University, Hamilton, ON, Canada

Sheila Boamah


Contributions

AG is responsible for the conception and design of the review. AG, LH, CO & SB contributed to the acquisition and analysis of the data. AG, LH & SB interpreted the data. LH drafted the manuscript and AG was a major contributor to the final version of the manuscript. AG, LH, CO & SB read, provided feedback on, and approved the final manuscript.

Corresponding author

Correspondence to Anna Garnett .

Ethics declarations

Ethics approval and consent to participate.

Ethics approval was not required because this manuscript is a review of published literature.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Critical appraisals of included articles.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Garnett, A., Hui, L., Oleynikov, C. et al. Compassion fatigue in healthcare providers: a scoping review. BMC Health Serv Res 23 , 1336 (2023). https://doi.org/10.1186/s12913-023-10356-3

Download citation

Received : 07 August 2023

Accepted : 20 November 2023

Published : 01 December 2023

DOI : https://doi.org/10.1186/s12913-023-10356-3


Keywords

  • Compassion fatigue
  • Healthcare provider
  • Psychological health




Writing an Effective & Supportive Recommendation Letter

Sarvenaz Sarabipour

1 Institute for Computational Medicine and Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, United States

Sarah J. Hainer

2 Department of Biological Sciences, University of Pittsburgh, Pittsburgh, Pennsylvania, United States

Emily Furlong

3 Sir William Dunn School of Pathology, University of Oxford, Oxford, United Kingdom

Nafisa M. Jadavji

4 Department of Biomedical Sciences, Midwestern University, Glendale, United States

5 Department of Neuroscience, Carleton University, Ottawa, Canada

Charlotte M. de Winde

6 MRC Laboratory for Molecular Cell Biology, University College London, London, United Kingdom

7 Department of Molecular Cell Biology & Immunology, Amsterdam UMC Locatie VUmc, Amsterdam, The Netherlands

Natalia Bielczyk

8 Welcome Solutions, Nijmegen, the Netherlands

9 Stichting Solaris Onderzoek en Ontwikkeling, Nijmegen, the Netherlands

Aparna P. Shah

10 The Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, Maryland, United States

Abstract

Writing recommendation letters on behalf of students and other early-career researchers is an important mentoring task within academia. An effective recommendation letter describes key candidate qualities such as academic achievements, extracurricular activities, outstanding personality traits, participation in and dedication to a particular discipline, and the mentor’s confidence in the candidate’s abilities. In this Words of Advice, we provide guidance to researchers on composing constructive and supportive recommendation letters, including tips for structuring and providing specific and effective examples, while maintaining a balance in language and avoiding potential biases.

Introduction

A letter of recommendation or a reference letter is a statement of support for a student or an early-career researcher (ECR; a non-tenured scientist who may be a research trainee, postdoctoral fellow, laboratory technician, or junior faculty colleague) who is a candidate for future employment, promotion, education, or funding opportunities. Letters of recommendation are commonly requested at different stages of an academic research career and sometimes for transitioning to a non-academic career. Candidates need to request letters early on and prepare relevant information for the individual who is approached for a recommendation [ 1 , 2 ]. Writing recommendation letters in support of ECRs for career development opportunities is an important task undertaken frequently by academics. ECRs can also serve as mentors during their training period and may be asked to write letters for their mentees. This offers ECRs an excellent opportunity to gain experience in drafting these important documents, but may present a particular challenge for individuals with little experience. In general, a letter of recommendation should present a well-documented evaluation and provide sufficient evidence and information about an individual to assist a person or a selection committee in making their decision on an application [ 1 ]. Specifically, the letter should address the purpose for which it is written (generally, to support the candidate’s application and recommend them for the opportunity) and describe key candidate qualities, the significance of the work performed, the candidate’s other accomplishments, and the mentor’s confidence in the candidate’s abilities. It should be written in clear and unbiased language. While a poorly written letter may not result in loss of the opportunity for the candidate, a well-written one can help an application stand out from the others, thus enhancing the candidate’s chances of securing the opportunity.

Letter readers at review, funding, admissions, hiring, and promotion committees need to examine the letter objectively, with an eye for information on the quality of the candidate’s work and perspective on their scientific character [ 6 ]. However well-intentioned, letters can fall short of providing a positive, effective, and supportive document [ 1 , 3 – 5 ]. To prevent this, it is important to make every letter personal; writing letters therefore requires time and careful consideration. This article draws from our collective experiences as ECRs and from the literature to highlight best practices and key elements for those asked to provide recommendation letters for their colleagues, students, or researchers who have studied or trained in their classroom or research laboratory. We hope that these guidelines will help letter writers provide an overall picture of the candidate’s capabilities, potential, and professional promise.

Decide on whether to write the letter

Before you start, it is important to evaluate your relationship with the candidate and your ability to assess their skills and abilities honestly. Consider how well and in what context you know the person, as well as whether you can be supportive of their application [ 7 ]. Examine the description of the opportunity for which the letter is being requested (Figure 1). Often you will receive a request from a student or a researcher whom you know very well and have interacted with in different settings – in and out of the classroom, your laboratory or that of a colleague, or within your department – and whose performance you find to be consistently satisfactory or excellent. Sometimes a mentee may request a recommendation letter while still employed by or working with you, their research advisor. This can come as an unpleasant surprise if you are unaware that the trainee was seeking other opportunities (for instance, if they have not been employed with you for long, or have just embarked on a new project). While the mentee should be transparent about their goals and their search for opportunities, you should, as a mentor, offer to provide the letter for your mentee (see Table 1).

Figure 1. First, it is important to establish whether you are equipped to write a strong letter of support. If not, it is best to have a candid conversation with the applicant and discuss alternative options or opportunities. If you are in a position to write a strong letter of support, first acquire information regarding the application and the candidate, draft a letter in advance (see Box 1), and submit the letter on time. When drafting the letter, incorporate specific examples, avoid biases, and discuss the letter with the candidate (see Tables 1–2 for specific examples). After submission, store a digital copy for potential future use for the same candidate.

Table 1. Key do’s and don’ts when being asked to write a letter of recommendation

Other requests may be made by a candidate who has made no impression on you, or only a negative one. In this case, consider the candidate’s potential and future goals, and be fair in your evaluation. Sending a negative letter or a generic positive letter for individuals you barely know is not helpful to the selection committee and can backfire for the candidate. It can also, in some instances, backfire for you if a colleague accepts a candidate based on your generic positive letter when you did not necessarily fully support that individual. For instance, letter writers sometimes stretch the truth to make a candidate sound better than they really are, thinking it is helpful. If you do not know the applicant well enough or feel that you cannot be supportive, you are not in a strong position to write the recommendation letter and should decline the request, being open about why you are declining. Also, be selective about writing on behalf of colleagues who may be in your field but whose work is not well known to you. If you have to read the candidate’s curriculum vitae to find out who they are and what they have done, then you may not be qualified to write the letter [ 8 ].

When declining a request to provide a letter of support, it is important to explain your reasoning to the candidate and suggest how they might improve their prospects for the future [ 8 ]. If the candidate is having a similar problem with other mentors, try to help them identify a more appropriate referee or to explore whether they are making an appropriate application in the first place. Suggest constructive steps to improve relationships with mentors to identify individuals to provide letters in the future. Most importantly, do not let the candidate assume that all opportunities for obtaining supportive letters of recommendation have been permanently lost. Emphasize the candidate’s strengths by asking them to share a favourite paper, assignment, project, or other positive experience that may have taken place outside of your class or lab, to help you identify their strengths. Finally, discuss with the candidate their career goals to help them realize what they need to focus on to become more competitive or steer them in a different career direction. This conversation can mark an important step and become a great interaction and mentoring opportunity for ECRs.

Examine the application requirements

Once you decide to write a recommendation letter, it is important to know what type and level of opportunity the candidate is applying for, as this will determine what should be discussed in the letter (Figure 1). You should carefully read the opportunity posting description and/or ask the candidate to summarize the main requirements and let you know the specific points that they find important to highlight. Pay close attention to the language of the position announcement to fully address the requested information and tailor the letter to the specific needs of the institution, employer, or funding organisation. In some instances, a waiver form or an option indicating whether or not the candidate waives their right to see the recommendation document is provided. If the candidate queries a waiver decision, note that referees are often not allowed to send a letter that is not confidential and that there may be important benefits to maintaining the confidentiality of letters (see Table 1). Specifically, selection committees may view confidential letters as having greater credibility and value, and some letter writers may feel less reserved in their praise of candidates in confidential letters.

Acquire candidate information and discuss letter content

To acquire appropriate information about the candidate, one or more of the following documents may be valuable: a resume or curriculum vitae (CV), a publication or a manuscript, an assignment or exam written for your course, a copy of the application essay or personal statement, a transcript of academic records, a summary of current work, and specific recommendation forms or questionnaires (if provided) [ 9 ]. Alternatively, you may ask the candidate to complete a questionnaire asking for necessary information and supporting documents [ 10 ]. Examine the candidate’s CV and provide important context to the achievements listed therein. Tailor the letter for the opportunity using these documents as a guide, but do not repeat their contents as the candidate likely submits them separately. Even the most articulate of candidates may find it difficult to describe their qualities in writing [ 11 ]. Furthermore, a request may be made by a person who has made a good impression, but for whom you lack significant information to be able to write a strong letter. Thus, even if you know a candidate well, schedule a brief in-person, phone, or virtual meeting with them to 1) fill in gaps in your knowledge about them, 2) understand why they are applying for this particular opportunity, 3) help bring their past accomplishments into sharper focus, and 4) discuss their short- and long-term goals and how their current studies or research activities relate to the opportunity they are applying for and to these goals. Other key information to gather from the applicant includes the date on which the recommendation letter is due, as well as details on how to submit it.

For most applications (for both academic and non-academic opportunities), a letter of recommendation will need to cover both scholarly capabilities and achievements as well as a broader range of personal qualities and experiences beyond the classroom or the laboratory. This includes extracurricular experiences and traits such as creativity, tenacity, and collegiality. If necessary, discuss with the candidate what they would like to see additionally highlighted. As another example of matching a letter with its purpose, a letter for a fellowship application for a specific project should discuss the validity and feasibility of the project, as well as the candidate’s qualifications for fulfilling the project.

Draft the letter early and maintain a copy

Another factor that greatly facilitates letter writing is drafting one as soon as possible after you have taught or trained the candidate, while your impressions are still clear. You might consider encouraging the candidate to make their requests early [ 11 ]. These letters can be placed in the candidate’s portfolio and maintained in your own files for future reference. If you are writing a letter in response to a request, start drafting it well in advance and anticipate multiple rounds of revision before submission. Once you have been asked by a candidate to write a letter, that candidate may return frequently, over a number of years, for additional letters. Therefore, maintain a digital copy of the letter for your records and for potential future applications for the same candidate.

Structure your letter

In the opening, you should introduce yourself and the candidate, state your qualifications, explain how you became acquainted with the candidate, state the purpose of the letter, and give a summary of your recommendation (Table 2). To explain your relationship with the candidate, you should fully describe the capacity in which you know them: the type of experience, the period during which you worked with the candidate, and any special assignments or responsibilities that the candidate performed under your guidance. For instance, the letter may start with: “This candidate completed their postdoctoral training under my supervision. I am pleased to be able to provide my strongest support in recommending them for this opportunity.”

You may also consider ranking the candidate among candidates at a similar level within the opening section to give an immediate impression of your thoughts. Depending on the position, ranking the candidate may also be desired by selection committees and may be requested within the letter. For instance, the recommendation form or instructions may ask you to rank the candidate in the top 1%, 5%, 10%, etc., of applicants. You could write, “This student is in the top 5% of undergraduate students I have trained,” or “There are currently x graduate students in our department and I rank this candidate in the top 1%. Their experimental/computational skills are the best I have ever had in my own laboratory.” Do not forget to include with whom or what group you are comparing the individual. If you have not yet trained many individuals in your own laboratory, include those that you trained previously as a researcher as a reference group. Having concentrated on the candidate’s individual or unique strengths, you might find it difficult to provide a ranking. This is less of an issue if a candidate is unambiguously among the top 10% that you have mentored, but not all who come to you for a letter will fall within that small group. If you wish to offer a comparative perspective, you might more readily be able to do so in more specific areas, such as whether the candidate is one of the most articulate, original, clear-thinking, motivated, or intellectually curious.

Table 2. Key do’s and don’ts when writing a letter of recommendation

The body of the recommendation letter should provide specific information about the candidate and address any questions or requirements posed in the selection criteria (see sections above). Some applications may ask for comments on a candidate’s scholarly performance. Refer the reader to the candidate’s CV and/or transcript if necessary, but do not report grades unless to make an exceptional point (for example, if they were the only student to earn a top grade in your class). The body of the letter will contain the majority of the information, including specific examples, relevant candidate qualities, and your experiences with the candidate; therefore, the majority of this manuscript focuses on what to include in this section.

The closing paragraph of the letter should briefly 1) summarize your opinions about the candidate, 2) clearly state your recommendation and strong support of the candidate for the opportunity that they are seeking, and 3) offer the recipient of the letter the option to contact you if they need any further information. Make sure to provide your email address and phone number in case the recipient has additional questions. The overall tone of the letter conveys your confidence in the applicant. If the opportunity criteria are detailed and the candidate meets these criteria completely, include this information. Do not focus on what you may perceive as a candidate’s negative qualities, as such a tone may do more harm than intended (Table 2). Finally, be aware of the Forer effect, a cognitive error in which a very general description that fits almost everyone is used to describe a person [ 20 ]. Such generalizations can be harmful: they give the candidate the impression that they received a valuable, positive letter, but for a committee that receives hundreds of similar letters, this is uninformative and unhelpful to the application.

Describe relevant candidate qualities with specific examples and without overhyping

In discussing a candidate’s qualities and character, proceed in ways similar to those used for intellectual evaluation (Box 1). Information to specifically highlight may include personal characteristics such as integrity, resilience, poise, confidence, dependability, patience, creativity, enthusiasm, teaching capabilities, problem-solving abilities, ability to manage trainees and to work with colleagues, curriculum development skills, collaboration skills, experience in grant writing, ability to organize events and demonstrate project management abilities, and ability to troubleshoot (see the section “Use ethical principles, positive and inclusive language within the letter” below for tips on using inclusive terminology). The candidate may also have specific areas of knowledge, strengths, and experiences worth highlighting, such as strong communication skills, expertise in a particular scientific subfield, an undergraduate degree with a double major, relevant work or research experience, coaching, and/or other extracurricular activities. Consider whether the candidate has taught others in the lab, or shown particular motivation and commitment in their work. When writing letters for mentees who are applying for (non-)academic jobs or admission to academic institutions, do not merely emphasize their strengths, achievements, and potential, but also try to 1) convey a sense of what makes them a potential fit for that position or funding opportunity, and 2) fill in the gaps. Gaps may include an insufficient description of the candidate’s strengths or research given restrictions on document length. Importantly, to identify these gaps, one must have carefully reviewed both the opportunity posting and the application materials (see Box 1, Table 2).

Box 1. Recommendations for Letter Writers

  • Consider characteristics that excite & motivate this candidate.
  • Include qualities that you remember most about the candidate.
  • Detail their unusual competence, talent, mentorship, teaching or leadership abilities.
  • Explain the candidate’s disappointments or failures & how they reacted to & overcame them.
  • Discuss if they demonstrated a willingness to take intellectual risks beyond the normal research & classroom experience.
  • Ensure that you have knowledge of the institution to which the candidate is applying.
  • Consider what makes you believe this particular opportunity is a good match for this candidate.
  • Consider how they might fit into the institution’s community & grow from their experience.
  • Describe their personality & social skills.
  • Discuss how the candidate interacts with teachers & peers.
  • Use ethical principles, positive & inclusive language within the letter.
  • Do not list facts & details, every paper, or discovery of the candidate’s career.
  • Only mention unusual family or community circumstances after consulting the candidate.
  • A thoughtful letter from a respected colleague with a sense of perspective can be quite valuable.
  • Each letter takes time & effort, take it seriously.

When writing letters to nominate colleagues for promotion or awards, place stronger emphasis on their achievements and contributions to a field, or on their track record of teaching, mentorship and service, to aid the judging panel. In addition to describing the candidate as they are right now, you can discuss the development the person has undergone (for specific examples see Table 2 ).

A letter of recommendation can also explain weaknesses or ambiguities in the candidate’s record. If appropriate, and only after consulting the candidate, you may wish to mention a family illness, financial hardship, or other factors that may have resulted in a setback or in a perceived weakness in a specific portion of the candidate’s application (such as the transcript). For example, sometimes there are acceptable circumstances for a gap in a candidate’s publication record; perhaps a medical condition or a family situation kept them out of the lab for a period of time. Importantly, being upfront about why there is a perceived gap or blemish in the application package can strengthen the application. Put a positive spin on the perceived negatives using phrases such as “has taken steps to address gaps in knowledge,” “has worked hard to,” and “made great progress in” (see Table 2).

Describe a candidate’s intellectual capabilities in terms that reflect their distinctive or individual strengths, and be prepared to support your judgment with field-specific content [ 12 ] and concrete examples. These can significantly strengthen a letter and will demonstrate a strong relationship between you and the candidate. Describe the candidate’s strengths, moments when they have overcome adversity, and what is important to them. For example: “Candidate x is exceptionally intelligent. They proved to be a very quick study, learning the elements of research design and technique y in record time. Furthermore, their questions are always thoughtful and penetrating.” Mention the candidate’s diligence, work ethic, and curiosity, and do not merely state that “the applicant is strong” without specific examples. Describing improvements in the candidate’s skills over time can help highlight their work ethic, resolve, and achievements. However, do not belabor a potentially lower starting point.

Provide specific examples of when leadership was demonstrated, but do not include leadership qualities if they have not been demonstrated. For example, describe the candidate’s qualities such as independence, critical thinking, creativity, resilience, the ability to design and interpret experiments, and the ability to identify next steps and generate interesting questions or ideas, and note what you were especially impressed by. Do not generically describe the applicant as independent without support, or if this statement would be untrue.

Do not qualify a candidate’s qualities based on stereotypes about specific identities. Quantify the candidate’s abilities, especially with respect to other scientists who have achieved success in the field and whom the letter reader might know. Many letter writers rank applicants according to their own measure of what makes a good researcher, graduate trainee, or technician, based on a combination of research strengths, leadership skills, writing ability, oral communication, teaching ability, and collegiality. Describe what the candidate’s role was in their project and eventual publication, and do not assume letter readers will identify this information on their own (see Table 2). Including a description of roles and responsibilities can help to quantify a candidate’s contribution to the listed work. For example, “The candidate is the first author of the paper and designed and led the project.” Even the best mentor can overlook important points, especially since mentors typically have multiple mentees under their supervision. Thus, it can help to ask the candidate what they consider their strengths or traits, and which accomplishments they are proud of.

If you lack sufficient information to answer certain questions about the candidate, say so; it is best to maintain the integrity and credibility of your letter. As the recommending person, you are potentially writing to a colleague and/or someone who will be impacted by your letter; therefore, honesty is key above all. Avoid the misconception that the more superlatives you use, the stronger the letter. Heavy use of generic phrases or clichés is unhelpful. Your letter can only be effective if it contains substantive information about the specific candidate and their qualifications for the opportunity. A recommendation that paints an unrealistic picture of a candidate may be discounted. All information in a letter of recommendation should be, to the best of your knowledge, accurate. Therefore, present the person truthfully but positively. Write strongly and specifically about someone who is truly excellent (explicitly describe how and why they are special). Write a balanced letter without overhyping the candidate, as overhyping will not help them.

Be careful about what you leave out of the letter

Beware of what you leave out of the recommendation letter. For most opportunities, there are expectations of what should be included in a letter, and therefore what is not said can be just as important as what is said. Importantly, do not assume that the same information is appropriate for every opportunity. In general, you should include the information stated above, covering how you know the candidate, their strengths, specific examples to support your statements, and how the candidate fits the opportunity well. For example, if you do not mention a candidate’s leadership skills or their ability to work well with others, the letter reader may wonder why, if the opportunity requires these skills. Always remember that opportunities are sought by many individuals, so evaluators may look for any reason to disregard an application, such as a letter not following instructions or not discussing the appropriate material. Also promote the candidate by discussing all of their scholarly and non-scholarly efforts, including non-peer-reviewed research outputs such as preprints, academic and non-academic service, and advocacy work, which contribute to their broader impact and are all indicative of valuable leadership qualities for both academic and non-academic environments (Table 2).

Provide an even-handed judgment of scholarly impact and describe accomplishments fairly by writing a balanced letter about the candidate’s attributes that is thoughtful and personal (see Table 2). Submitting a generic, hastily written recommendation letter is not helpful and can backfire for both the candidate and the letter writer, as you will often leave out important information for the specific opportunity; thus, allow sufficient time and effort for each candidate and application.

Make the letter memorable by adding content that the reader will remember, such as an unusual anecdote or a unique term that describes the candidate; this will help the application stand out from all the others. Tailor the letter to the candidate, including as much unique, relevant information as possible, and avoid including personal information unless the candidate gives consent. Provide meaningful examples of achievements, along with stories or anecdotes that illustrate the candidate’s strengths. Say what the candidate specifically did to give you that impression (Box 1). Do not merely praise the candidate using generalities such as “candidate x is a quick learner”.

Use ethical principles, positive and inclusive language within the letter

Gender affects scientific careers. Avoid providing information that is irrelevant to the opportunity, such as ethnicity, age, hobbies, or marital status, and write about professional attributes that pertain to the application. However, there are personal qualities that might be important to the job or funding opportunity; for instance, personal information may illustrate the ability to persevere and overcome adversity, qualities that are helpful in academia and other career paths. It is critical to pay attention to biases and word choices while writing the letter [ 13 , 14 ]. Advocacy bias (a letter writer is more likely to write a strong letter for someone similar to themselves) has been identified as an issue in academic environments [ 3 ].

Studies have also shown that there are often differences in the choice of words used in letters for male and female scientists [ 3 , 5 ]. For instance, letters for women have been found to contain less specific and descriptive language. Descriptions often pay greater attention to the personal lives or personal characteristics of women than of men, focusing on items that have little relevance in a letter of recommendation. When writing recommendation letters, employers have a tendency to focus on scholarly capabilities in male candidates and personality features in female candidates; for instance, female candidates tend to be depicted in letters as teachers and trainees, whereas male candidates are described as researchers and professionals [ 15 ]. Letters for male candidates also often contain more standout words such as “superb”, “outstanding”, and “excellent”. Furthermore, letters for women have been found to contain more doubt-raising statements, including negative or unexplained comments [ 3 , 15 , 16 ]. This is discriminatory towards women and gives a less clear picture of women as professionals.

Keep the letter gender neutral. Do not write statements such as “candidate x is a kind woman” or “candidate y is a fantastic female scientist”, as these have no bearing on whether someone will do well in graduate school or in a job. One way to reduce gender bias is to check your reference letter with a gender bias calculator [ 17 , 18 ]. You can also test for gender bias by writing a letter of recommendation for any candidate, male or female, and then switching all the pronouns to the opposite gender; read the letter over and ask yourself if it sounds odd. If it does, you should probably change the terms used [ 17 ]. Other biases also exist: while gender bias has been the most heavily investigated, bias based on other identities (race, nationality, ethnicity, among others) should also be examined and assessed before and during letter writing to ensure accurate and appropriate recommendations for all. A minimal sketch of this kind of word-level check appears below.
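
To make the pronoun-swap test and word-level screening described above concrete, the following is a minimal, hypothetical Python sketch of the kind of check that gender bias calculators perform. It is not the published tools cited as [ 17 , 18 ]; the word lists, function names, and example letter are illustrative assumptions, and a real screen should rely on a validated instrument.

# Minimal illustrative sketch (hypothetical; not the published calculators cited as [17, 18]):
# count gendered pronouns, "standout" terms, "grindstone" terms, and doubt-raising words
# in a draft letter, and swap pronouns so the letter can be re-read for odd-sounding phrasing.
# The word lists below are illustrative assumptions, not a validated instrument.
import re
from collections import Counter

STANDOUT_TERMS = {"superb", "outstanding", "excellent", "exceptional", "brilliant"}
GRINDSTONE_TERMS = {"hardworking", "diligent", "dependable", "careful", "thorough"}
DOUBT_RAISERS = {"although", "however", "somewhat"}
FEMALE_PRONOUNS = {"she", "her", "hers"}
MALE_PRONOUNS = {"he", "him", "his"}

def audit_letter(text: str) -> dict:
    """Return simple counts that may flag gendered or doubt-raising language in a draft."""
    words = Counter(re.findall(r"[a-z']+", text.lower()))
    return {
        "female_pronouns": sum(words[w] for w in FEMALE_PRONOUNS),
        "male_pronouns": sum(words[w] for w in MALE_PRONOUNS),
        "standout_terms": sum(words[w] for w in STANDOUT_TERMS),
        "grindstone_terms": sum(words[w] for w in GRINDSTONE_TERMS),
        "doubt_raisers": sum(words[w] for w in DOUBT_RAISERS),
    }

def swap_pronouns(text: str) -> str:
    """Swap pronouns to the opposite gender (approximate mapping; case is not preserved)."""
    mapping = {"she": "he", "her": "his", "hers": "his",
               "he": "she", "him": "her", "his": "her"}
    return re.sub(r"\b(she|hers|her|he|him|his)\b",
                  lambda m: mapping[m.group(1).lower()], text, flags=re.IGNORECASE)

if __name__ == "__main__":
    draft = ("She is a diligent and dependable student, although her writing is "
             "somewhat uneven. Her thesis work was excellent.")
    print(audit_letter(draft))   # counts reveal more grindstone terms and doubt-raisers than standout terms
    print(swap_pronouns(draft))  # re-read the swapped version and check whether it sounds odd

Even a rough count like this can show, for example, that a draft leans on grindstone adjectives and doubt-raising qualifiers rather than standout terms, prompting a revision before the letter is submitted.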

Revise and submit on time

The recommendation letter should be written using language that is straightforward and concise [ 19 ]. Avoid using jargon or language that is too general or effusive (Table 1). Formats and styles of single and co-signed letters are also important considerations. In some applications, the format is determined by the application portal itself, in which the recommender is asked to answer a series of questions. If these questions do not cover everything you would like to address, inquire whether there is also an option to provide a letter. Conversely, if the recommendation questionnaire asks for information that you cannot provide, it is best to explicitly mention this in writing. The care with which you write the letter will also influence its effectiveness; writing eloquently is another way of registering your support for the candidate. Letters longer than two pages can be counterproductive and off-putting, as reviewers normally have a large number of letters to read. In special cases, longer letters may be more favourable, depending on the opportunity. On the other hand, anything shorter than a page may imply a lack of interest or knowledge, or a negative impression of the candidate. In letter format, write at least 3-4 paragraphs. Note that letters from different sectors, such as academia versus industry, tend to be of different lengths. Ensure that your letter is received by the requested method (mail or e-mail) and deadline, as a late submission could be detrimental to the candidate. Write and sign the letter on your department letterhead, which provides a further form of identification.

Conclusions

Recommendation letters can serve as important tools for assessing ECRs as potential candidates for a job, course, or funding opportunity. Candidates need to request letters in advance and provide relevant information to the recommender. Readers on selection committees need to examine the letter objectively, with an eye for information on the quality of the candidate’s scholarly and non-scholarly endeavours and scientific traits. As a referee, it is important that you are positive, candid, and helpful as you work with the candidate in drafting a letter in their support. In writing a recommendation letter, summarize your thoughts on the candidate and emphasize your strong support for their candidacy. A successful letter communicates the writer’s enthusiasm for an individual, but does so realistically, sympathetically, and with concrete examples to support the writer’s assertions. Writing recommendation letters can also help mentors examine their interactions with their mentees and come to know them in a different light. Express your willingness to help further by concluding the letter with an offer to be contacted should the reader need more information. Remember that a letter writer’s judgment and credibility are at stake, so do spend the time and effort to present yourself as a recommender in the best light and to help ECRs on their career path.

Acknowledgements

S.J.H. was supported by the National Institutes of Health grant R35GM133732. A.P.S. was partially supported by the NARSAD Young Investigator Grant 27705.

Conflicts of Interest

The authors declare no conflicts of interest.

