Reumatologia, vol. 59(1), 2021

Peer review guidance: a primer for researchers

Olena Zimba

1 Department of Internal Medicine No. 2, Danylo Halytsky Lviv National Medical University, Lviv, Ukraine

Armen Yuri Gasparyan

2 Departments of Rheumatology and Research and Development, Dudley Group NHS Foundation Trust (Teaching Trust of the University of Birmingham, UK), Russells Hall Hospital, Dudley, West Midlands, UK

The peer review process is essential for quality checks and validation of journal submissions. Although it has some limitations, including manipulations and biased and unfair evaluations, there is no other alternative to the system. Several peer review models are now practised, with public review being the most appropriate in view of the open science movement. Constructive reviewer comments are increasingly recognised as scholarly contributions which should meet certain ethics and reporting standards. The Publons platform, which is now part of the Web of Science Group (Clarivate Analytics), credits validated reviewer accomplishments and serves as an instrument for selecting and promoting the best reviewers. All authors with relevant profiles may act as reviewers. Adherence to research reporting standards and access to bibliographic databases are recommended to help reviewers draft evidence-based and detailed comments.

Introduction

The peer review process is essential for evaluating the quality of scholarly works, suggesting corrections, and learning from other authors’ mistakes. The principles of peer review are largely based on professionalism, eloquence, and collegiate attitude. As such, reviewing journal submissions is a privilege and responsibility for ‘elite’ research fellows who contribute to their professional societies and add value by voluntarily sharing their knowledge and experience.

Since the launch of the first academic periodicals back in 1665, peer review has been mandatory for validating scientific facts, selecting influential works, and minimizing the chances of publishing erroneous research reports [1]. Over the past centuries, peer review models have evolved from single-handed editorial evaluations to collegial discussions, with numerous strengths and inevitable limitations of each practised model [2, 3]. With the multiplication of periodicals and editorial management platforms, the reviewer pool has expanded and internationalized. Various sets of rules have been proposed to select skilled reviewers and employ globally acceptable tools and language styles [4, 5].

In the era of digitization, the ethical dimension of the peer review has emerged, necessitating involvement of peers with full understanding of research and publication ethics to exclude unethical articles from the pool of evidence-based research and reviews [ 6 ]. In the time of the COVID-19 pandemic, some, if not most, journals face the unavailability of skilled reviewers, resulting in an unprecedented increase of articles without a history of peer review or those with surprisingly short evaluation timelines [ 7 ].

Editorial recommendations and the best reviewers

Guidance on peer review and the selection of reviewers is currently available in the recommendations of global editorial associations, which can be consulted by journal editors for updating their ethics statements and by research managers for crediting the evaluators. The International Committee of Medical Journal Editors (ICMJE) qualifies peer review as a continuation of the scientific process that should involve experts who are able to respond promptly to reviewer invitations, submit unbiased and constructive comments, and maintain confidentiality [8].

The reviewer roles and responsibilities are listed in the updated recommendations of the Council of Science Editors (CSE) [9], where ethical conduct is viewed as a prerequisite of quality evaluations. The Committee on Publication Ethics (COPE) further emphasizes editorial strategies that ensure transparent and unbiased reviewer evaluations by trained professionals [10]. Finally, the World Association of Medical Editors (WAME) prioritizes selecting the best reviewers with validated profiles to avoid substandard or fraudulent reviewer comments [11]. Accordingly, the Sarajevo Declaration on Integrity and Visibility of Scholarly Publications encourages reviewers to register with the Open Researcher and Contributor ID (ORCID) platform to validate and publicize their scholarly activities [12].

Although the best reviewer criteria are not listed in the editorial recommendations, it is apparent that the manuscript evaluators should be active researchers with extensive experience in the subject matter and an impressive list of relevant and recent publications [ 13 ]. All authors embarking on an academic career and publishing articles with active contact details can be involved in the evaluation of others’ scholarly works [ 14 ]. Ideally, the reviewers should be peers of the manuscript authors with equal scholarly ranks and credentials.

However, journal editors may employ schemes that engage junior research fellows as co-reviewers alongside their mentors and senior fellows [15]. Such a scheme is successfully practised within the framework of the Emerging EULAR (European League Against Rheumatism) Network (EMEUNET), where seasoned authors (mentors) train early-career researchers (mentees) in how to evaluate submissions to the top rheumatology journals, and the best evaluators are selected as regular contributors to these journals [16].

Awareness of the EQUATOR Network reporting standards may help reviewers evaluate methodology and suggest related revisions. Statistical skills help reviewers detect basic mistakes and suggest additional analyses. For example, scanning data presentation and revealing mistakes in the presentation of means and standard deviations often prompts re-analyses of distributions and replacement of parametric tests with non-parametric ones [17, 18].
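One well-known screening rule illustrates this kind of statistical check: for a measure that cannot be negative, a reported mean smaller than twice the standard deviation implies that a normal distribution would place noticeable probability below zero, so the data are probably skewed and non-parametric tests are the safer choice. The sketch below is illustrative only; the numeric values are hypothetical, not taken from any manuscript.

```python
def likely_skewed(mean: float, sd: float) -> bool:
    """Screening rule for a variable that cannot be negative: if a normal
    distribution with this mean and SD would produce impossible negative
    values (mean < 2 * sd), the data are probably right-skewed and a
    non-parametric test (e.g. Mann-Whitney U) is the safer choice."""
    return mean < 2 * sd

# Hypothetical summary values a reviewer might scan in a manuscript table:
print(likely_skewed(3.1, 2.8))  # True: query the use of a parametric test
print(likely_skewed(7.8, 0.6))  # False: mean +/- SD looks plausibly normal
```

A flag like this does not prove non-normality; it is merely a prompt for the reviewer to request a distributional check before accepting a parametric analysis.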

Constructive reviewer comments

The main goal of the peer review is to support authors in their attempt to publish ethically sound and professionally validated works that may attract readers’ attention and positively influence healthcare research and practice. As such, an optimal reviewer comment has to comprehensively examine all parts of the research and review work ( Table I ). The best reviewers are viewed as contributors who guide authors on how to correct mistakes, discuss study limitations, and highlight its strengths [ 19 ].

Structure of a reviewer comment to be forwarded to authors

Some of the currently practised review models are well positioned to help authors reveal and correct their mistakes at pre- or post-publication stages ( Table II ). The global move toward open science is particularly instrumental for increasing the quality and transparency of reviewer contributions.

Advantages and disadvantages of common manuscript evaluation models

Since there are no universally accepted criteria for selecting reviewers and structuring their comments, the instructions of all peer-reviewed journals should specify priorities, models, and expected review outcomes [20]. Monitoring and reporting average peer review timelines is also required to encourage timely evaluations and avoid delays. Depending on journal policies and article types, the first round of peer review may last from a few days to a few weeks. Fast-track review (up to 3 days) is practised by some top journals which process clinical trial reports and other priority items.

In exceptional cases, reviewer contributions may result in substantive changes, appreciated by authors in the official acknowledgments. In most cases, however, reviewers should avoid engaging in the authors’ research and writing. They should refrain from instructing the authors on additional tests and data collection as these may delay publication of original submissions with conclusive results.

Established publishers often employ advanced editorial management systems that support reviewers by providing instantaneous access to the review instructions, online structured forms, and some bibliographic databases. Such support enables drafting of evidence-based comments that examine the novelty, ethical soundness, and implications of the reviewed manuscripts [ 21 ].

Encouraging reviewers to submit their recommendations on manuscript acceptance/rejection and related editorial tasks is now a common practice. Skilled reviewers may prompt the editors to reject or transfer manuscripts which fall outside the journal scope, perform additional ethics checks, and minimize chances of publishing erroneous and unethical articles. They may also raise concerns over the editorial strategies in their comments to the editors.

Since reviewer and editor roles are distinct, reviewer recommendations are aimed at helping editors, not at replacing their decision-making functions. The final decisions rest with handling editors, who weigh not only reviewer comments but also priorities related to article types and geographic origins, space limitations in certain periods, and envisaged influence in terms of social media attention and citations. This is why rejections of even flawless manuscripts are likely at early rounds of internal and external evaluation across most peer-reviewed journals.

Reviewers are often requested to comment on language correctness and overall readability of the evaluated manuscripts. Given the wide availability of in-house and external editing services, reviewer comments on language mistakes and typos are categorized as minor. At the same time, non-Anglophone experts’ poor language skills often exclude them from contributing to the peer review in most influential journals [ 22 ]. Comments should be properly edited to convey messages in positive or neutral tones, express ideas of varying degrees of certainty, and present logical order of words, sentences, and paragraphs [ 23 , 24 ]. Consulting linguists on communication culture, passing advanced language courses, and honing commenting skills may increase the overall quality and appeal of the reviewer accomplishments [ 5 , 25 ].

Peer reviewer credits

Various crediting mechanisms have been proposed to motivate reviewers and maintain the integrity of science communication [ 26 ]. Annual reviewer acknowledgments are widely practised for naming manuscript evaluators and appreciating their scholarly contributions. Given the need to weigh reviewer contributions, some journal editors distinguish ‘elite’ reviewers with numerous evaluations and award those with timely and outstanding accomplishments [ 27 ]. Such targeted recognition ensures ethical soundness of the peer review and facilitates promotion of the best candidates for grant funding and academic job appointments [ 28 ].

Also, large publishers and learned societies issue certificates of excellence in reviewing which may include Continuing Professional Development (CPD) points [ 29 ]. Finally, an entirely new crediting mechanism is proposed to award bonus points to active reviewers who may collect, transfer, and use these points to discount gold open-access charges within the publisher consortia [ 30 ].

With the launch of Publons ( http://publons.com/ ) and its integration with Web of Science Group (Clarivate Analytics), reviewer recognition has become a matter of scientific prestige. Reviewers can now freely open their Publons accounts and record their contributions to online journals with Digital Object Identifiers (DOI). Journal editors, in turn, may generate official reviewer acknowledgments and encourage reviewers to forward them to Publons for building up individual reviewer and journal profiles. All published articles maintain e-links to their review records and post-publication promotion on social media, allowing the reviewers to continuously track expert evaluations and comments. A paid-up partnership is also available to journals and publishers for automatically transferring peer-review records to Publons upon mutually acceptable arrangements.

Listing reviewer accomplishments on an individual Publons profile showcases scholarly contributions of the account holder. The reviewer accomplishments placed next to the account holders’ own articles and editorial accomplishments point to the diversity of scholarly contributions. Researchers may establish links between their Publons and ORCID accounts to further benefit from complementary services of both platforms. Publons Academy ( https://publons.com/community/academy/ ) additionally offers an online training course to novice researchers who may improve their reviewing skills under the guidance of experienced mentors and journal editors. Finally, journal editors may conduct searches through the Publons platform to select the best reviewers across academic disciplines.

Peer review ethics

Prior to accepting reviewer invitations, scholars need to weigh a number of factors which may compromise their evaluations. First of all, they should accept reviewer invitations only if they are capable of submitting their comments on time. Peer review timelines depend on article type and vary widely across journals. The rules of transparent publishing necessitate recording manuscript submission and acceptance dates in article footnotes to inform readers of the evaluation speed and to help investigators in the event of multiple unethical submissions. Timely reviewer accomplishments often enable fast publication of valuable works with positive implications for healthcare. Unjustifiably long peer review, on the contrary, delays dissemination of influential reports and may result in ethical misconduct, such as plagiarism of a manuscript under evaluation [31].

In the times of proliferation of open-access journals relying on article processing charges, unjustifiably short review may point to the absence of quality evaluation and apparently ‘predatory’ publishing practice [ 32 , 33 ]. Authors when choosing their target journals should take into account the peer review strategy and associated timelines to avoid substandard periodicals.

Reviewer primary interests (unbiased evaluation of manuscripts) may come into conflict with secondary interests (promotion of their own scholarly works), necessitating disclosures by filling in related parts in the online reviewer window or uploading the ICMJE conflict of interest forms. Biomedical reviewers, who are directly or indirectly supported by the pharmaceutical industry, may encounter conflicts while evaluating drug research. Such instances require explicit disclosures of conflicts and/or rejections of reviewer invitations.

Journal editors are obliged to employ mechanisms for disclosing reviewer financial and non-financial conflicts of interest to avoid processing of biased comments [ 34 ]. They should also cautiously process negative comments that oppose dissenting, but still valid, scientific ideas [ 35 ]. Reviewer conflicts that stem from academic activities in a competitive environment may introduce biases, resulting in unfair rejections of manuscripts with opposing concepts, results, and interpretations. The same academic conflicts may lead to coercive reviewer self-citations, forcing authors to incorporate suggested reviewer references or face negative feedback and an unjustified rejection [ 36 ]. Notably, several publisher investigations have demonstrated a global scale of such misconduct, involving some highly cited researchers and top scientific journals [ 37 ].

Fake peer review, an extreme example of conflict of interest, is another form of misconduct that has surfaced in the time of mass proliferation of gold open-access journals and publication of articles without quality checks [38]. Fake reviews are generated by manipulative authors and commercial editing agencies with full access to their own manuscripts and peer review evaluations in journal editorial management systems. The sole aim of these reviews is to subvert the manuscript evaluation process and pave the way for publication of pseudoscientific articles. Authors of such articles are often supported by funds intended for the growth of science in non-Anglophone countries [39]. Iranian and Chinese authors have repeatedly been caught submitting fake reviews, resulting in mass retractions by large publishers [38]. Several suggestions have been made to overcome this issue, with assigning independent reviewers and requesting their ORCID IDs viewed as the most practical options [40].

Conclusions

The peer review process is regulated by publishers and editors, enforcing updated global editorial recommendations. Selecting the best reviewers and providing authors with constructive comments may improve the quality of published articles. Reviewers are selected in view of their professional backgrounds and skills in research reporting, statistics, ethics, and language. Quality reviewer comments attract superior submissions and add to the journal’s scientific prestige [ 41 ].

In the era of digitization and open science, various online tools and platforms are available to upgrade the peer review and credit experts for their scholarly contributions. With its links to the ORCID platform and social media channels, Publons now offers the optimal model for crediting and keeping track of the best and most active reviewers. Publons Academy additionally offers online training for novice researchers who may benefit from the experience of their mentoring editors. Overall, reviewer training in how to evaluate journal submissions and avoid related misconduct is an important process, which some indexed journals are experimenting with [ 42 ].

The timelines and rigour of the peer review may change during the current pandemic. However, journal editors should mobilize their resources to avoid publication of unchecked and misleading reports. Additional efforts are required to monitor published contents and encourage readers to post their comments on publishers’ online platforms (blogs) and other social media channels [ 43 , 44 ].

The authors declare no conflict of interest.


What Is Peer Review? | Types & Examples

Published on December 17, 2021 by Tegan George . Revised on June 22, 2023.

Peer review, sometimes referred to as refereeing, is the process of evaluating submissions to an academic journal. Using strict criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

There are various types of peer review. The main difference between them is to what extent the authors, reviewers, and editors know each other’s identities. The most common types are:

  • Single-blind review
  • Double-blind review
  • Triple-blind review
  • Collaborative review
  • Open review

Relatedly, peer assessment is a process where your peers provide you with feedback on something you’ve written, based on a set of criteria or benchmarks from an instructor. They then give constructive feedback, compliments, or guidance to help you improve your draft.

Table of contents

  • What is the purpose of peer review?
  • Types of peer review
  • The peer review process
  • Providing feedback to your peers
  • Peer review example
  • Advantages of peer review
  • Criticisms of peer review
  • Other interesting articles
  • Frequently asked questions about peer reviews

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the manuscript. For this reason, academic journals are among the most credible sources you can refer to.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.


Depending on the journal, there are several types of peer review.

Single-blind peer review

The most common type of peer review is single-blind (or single anonymized) review. Here, the names of the reviewers are not known by the author.

While this gives the reviewers the ability to give feedback without the possibility of interference from the author, there has been substantial criticism of this method in the last few years. Many argue that single-blind reviewing can lead to poaching or intellectual theft or that anonymized comments cause reviewers to be too harsh.

Double-blind peer review

In double-blind (or double anonymized) review, both the author and the reviewers are anonymous.

Arguments for double-blind review highlight that this mitigates any risk of prejudice on the side of the reviewer, while protecting the nature of the process. In theory, it also leads to manuscripts being published on merit rather than on the reputation of the author.

Triple-blind peer review

While triple-blind (or triple anonymized) review—where the identities of the author, reviewers, and editors are all anonymized—does exist, it is difficult to carry out in practice.

Proponents of adopting triple-blind review for journal submissions argue that it minimizes potential conflicts of interest and biases. However, ensuring anonymity is logistically challenging, and current editing software is not always able to fully anonymize everyone involved in the process.

In collaborative review, authors and reviewers interact with each other directly throughout the process. However, the identity of the reviewer is not known to the author. This gives all parties the opportunity to resolve any inconsistencies or contradictions in real time, and provides them a rich forum for discussion. It can mitigate the need for multiple rounds of editing and minimize back-and-forth.

Collaborative review can be time- and resource-intensive for the journal, however. For these collaborations to occur, there has to be a set system in place, often a technological platform, with staff monitoring and fixing any bugs or glitches.

Lastly, in open review, all parties know each other’s identities throughout the process. Often, open review can also include feedback from a larger audience, such as an online forum, or reviewer feedback included as part of the final published product.

While many argue that greater transparency prevents plagiarism or unnecessary harshness, there is also concern about the quality of future scholarship if reviewers feel they have to censor their comments.

In general, the peer review process includes the following steps:

  • First, the author submits the manuscript to the editor.
  • Then, the editor decides whether to:
    • Reject the manuscript and send it back to the author, or
    • Send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

The peer review process

In an effort to be transparent, many journals are now disclosing who reviewed each article in the published product. There are also increasing opportunities for collaboration and feedback, with some journals allowing open communication between reviewers and authors.

It can seem daunting at first to conduct a peer review or peer assessment. If you’re not sure where to start, there are several best practices you can use.

Summarize the argument in your own words

Summarizing the main argument helps the author see how their argument is interpreted by readers, and gives you a jumping-off point for providing feedback. If you’re having trouble doing this, it’s a sign that the argument needs to be clearer, more concise, or worded differently.

If the author sees that you’ve interpreted their argument differently than they intended, they have an opportunity to address any misunderstandings when they get the manuscript back.

Separate your feedback into major and minor issues

It can be challenging to keep feedback organized. One strategy is to start out with any major issues and then flow into the more minor points. It’s often helpful to keep your feedback in a numbered list, so the author has concrete points to refer back to.

Major issues typically consist of any problems with the style, flow, or key points of the manuscript. Minor issues include spelling errors, citation errors, or other smaller, easy-to-apply feedback.

Tip: Try not to focus too much on the minor issues. If the manuscript has a lot of typos, consider making a note that the author should address spelling and grammar issues, rather than going through and fixing each one.

The best feedback you can provide is anything that helps them strengthen their argument or resolve major stylistic issues.

Give the type of feedback that you would like to receive

No one likes being criticized, and it can be difficult to give honest feedback without sounding overly harsh or critical. One strategy you can use here is the “compliment sandwich,” where you “sandwich” your constructive criticism between two compliments.

Be sure you are giving concrete, actionable feedback that will help the author submit a successful final draft. While you shouldn’t tell them exactly what they should do, your feedback should help them resolve any issues they may have overlooked.

As a rule of thumb, your feedback should be:

  • Easy to understand
  • Constructive


How peer-reviewed research works

Below is a brief annotated research example.

Influence of phone use on sleep

Studies show that teens from the US are getting less sleep than they were a decade ago (Johnson, 2019). On average, teens only slept for 6 hours a night in 2021, compared to 8 hours a night in 2011. Johnson mentions several potential causes, such as increased anxiety, changed diets, and increased phone use.

The current study focuses on the effect phone use before bedtime has on the number of hours of sleep teens are getting.

For this study, a sample of 300 teens was recruited using social media, such as Facebook, Instagram, and Snapchat. The first week, all teens were allowed to use their phone the way they normally would, in order to obtain a baseline.

The sample was then divided into 3 groups:

  • Group 1 was not allowed to use their phone before bedtime.
  • Group 2 used their phone for 1 hour before bedtime.
  • Group 3 used their phone for 3 hours before bedtime.

All participants were asked to go to sleep around 10 p.m. to control for variation in bedtime. In the morning, their Fitbit showed the number of hours they’d slept. They kept track of these numbers themselves for 1 week.

Two independent t tests were used in order to compare Group 1 and Group 2, and Group 1 and Group 3. The first t test showed no significant difference (p > .05) between the number of hours for Group 1 (M = 7.8, SD = 0.6) and Group 2 (M = 7.0, SD = 0.8). The second t test showed a significant difference (p < .01) between the average difference for Group 1 (M = 7.8, SD = 0.6) and Group 3 (M = 6.1, SD = 1.5).

This shows that teens sleep fewer hours a night if they use their phone for over an hour before bedtime, compared to teens who use their phone for 0 to 1 hours.
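As an aside, a reviewer checking the statistics could recompute the t values from the reported summary numbers alone. The sketch below assumes equal group sizes of 100 (the write-up states only that 300 teens were split into three groups) and uses the standard pooled-variance formula for an independent two-sample t test:

```python
import math

def pooled_t(m1, s1, n1, m2, s2, n2):
    """Two-sample t statistic with pooled variance (equal-variance assumption)."""
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Summary statistics as reported; group sizes of 100 are an assumption.
t_12 = pooled_t(7.8, 0.6, 100, 7.0, 0.8, 100)  # Group 1 vs Group 2 -> t = 8.0
t_13 = pooled_t(7.8, 0.6, 100, 6.1, 1.5, 100)  # Group 1 vs Group 3 -> t ~ 10.5
```

With roughly 198 degrees of freedom, a t statistic of 8 corresponds to a very small p-value, so under the assumed group sizes a reviewer would query the reported p > .05 for the first comparison, or ask the authors to state the actual group sizes.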

Peer review is an established and hallowed process in academia, dating back hundreds of years. It provides various fields of study with metrics, expectations, and guidance to ensure published work is consistent with predetermined standards.

  • Protects the quality of published research

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. Any content that raises red flags for reviewers can be closely examined in the review stage, preventing plagiarized or duplicated research from being published.

  • Gives you access to feedback from experts in your field

Peer review represents an excellent opportunity to get feedback from renowned experts in your field and to improve your writing through their feedback and guidance. Experts with knowledge about your subject matter can give you feedback on both style and content, and they may also suggest avenues for further research that you hadn’t yet considered.

  • Helps you identify any weaknesses in your argument

Peer review acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process. This way, you’ll end up with a more robust, more cohesive article.

While peer review is a widely accepted metric for credibility, it’s not without its drawbacks.

  • Reviewer bias

The more transparent double-blind system is not yet very common, which can lead to bias in reviewing. A common criticism is that an excellent paper by a new researcher may be declined, while an objectively lower-quality submission by an established researcher would be accepted.

  • Delays in publication

The thoroughness of the peer review process can lead to significant delays in publishing time. Research that was current at the time of submission may not be as current by the time it’s published. There is also high risk of publication bias , where journals are more likely to publish studies with positive findings than studies with negative findings.

  • Risk of human error

By its very nature, peer review carries a risk of human error. In particular, falsification often cannot be detected, given that reviewers would have to replicate entire experiments to ensure the validity of results.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Measures of central tendency
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Thematic analysis
  • Discourse analysis
  • Cohort study
  • Ethnography

Research bias

  • Implicit bias
  • Cognitive bias
  • Conformity bias
  • Hawthorne effect
  • Availability heuristic
  • Attrition bias
  • Social desirability bias

Peer review is a process of evaluating submissions to an academic journal. Utilizing rigorous criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication. For this reason, academic journals are often considered among the most credible sources you can use in a research project, provided that the journal itself is trustworthy and well-regarded.

In general, the peer review process follows the following steps: 

  • Reject the manuscript and send it back to author, or 
  • Send it onward to the selected peer reviewer(s) 
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made. 
  • Lastly, the edited manuscript is sent back to the author. They input the edits, and resubmit it to the editor for publication.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to this stringent process they go through before publication.

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

A credible source should pass the CRAAP test  and follow these guidelines:

  • The information should be up to date and current.
  • The author and publication should be a trusted authority on the subject you are researching.
  • The sources the author cited should be easy to find, clear, and unbiased.
  • For a web source, the URL and layout should signify that it is trustworthy.



Understanding Science

How science REALLY works...

  • Peer-reviewed journals are publications in which scientific contributions have been vetted by experts in the relevant field.
  • Peer-reviewed articles provide a trusted form of scientific communication. Peer-reviewed work isn’t necessarily correct or conclusive, but it does meet the standards of science.

Scrutinizing science: Peer review

In science, peer review helps provide assurance that published research meets minimum standards for scientific quality. Peer review typically works something like this:

  • A group of scientists completes a study and writes it up in the form of an article. They submit it to a journal for publication.
  • The journal’s editors send the article to several other scientists who work in the same field (i.e., the “peers” of peer review).
  • Those reviewers provide feedback on the article and tell the editor whether or not they think the study is of high enough quality to be published.
  • The authors may then revise their article and resubmit it for consideration.
  • Only articles that meet good scientific standards (e.g., acknowledge and build upon other work in the field, rely on logical reasoning and well-designed studies, back up claims with evidence , etc.) are accepted for publication.

Peer review and publication are time-consuming, frequently involving more than a year between submission and publication. The process is also highly competitive. For example, the highly regarded journal Science accepts less than 8% of the articles it receives, and The New England Journal of Medicine publishes just 6% of its submissions.

Peer-reviewed articles provide a trusted form of scientific communication. Even if you are unfamiliar with the topic or the scientists who authored a particular study, you can trust peer-reviewed work to meet certain standards of scientific quality. Since scientific knowledge is cumulative and builds on itself, this trust is particularly important. No scientist would want to base their own work on someone else’s unreliable study! Peer-reviewed work isn’t necessarily correct or conclusive, but it does meet the standards of science. And that means that once a piece of scientific research passes through peer review and is published, science must deal with it somehow — perhaps by incorporating it into the established body of scientific knowledge, building on it further, figuring out why it is wrong, or trying to replicate its results.

PEER REVIEW: NOT JUST SCIENCE

Many fields outside of science use peer review to ensure quality. Philosophy journals, for example, make publication decisions based on the reviews of other philosophers, and the same is true of scholarly journals on topics as diverse as law, art, and ethics. Even those outside the research community often use some form of peer review. Figure-skating championships may be judged by former skaters and coaches. Wine-makers may help evaluate wine in competitions. Artists may help judge art contests. So while peer review is a hallmark of science, it is not unique to science.


What’s peer review good for? To find out, explore what happens when the process is bypassed. Visit Cold fusion: A case study for scientific behavior.

  • To find out how to tell if research is peer-reviewed and why this is important, check out this handy guide from Sense About Science.
  • Advanced: Visit the Visionlearning website for advanced material on peer review.
  • Advanced: Visit The Scientist magazine to learn about how peer review benefits the people doing the reviewing.



Understanding Peer Review in Science

Peer Review Process

Peer review is an essential element of the scientific publishing process that helps ensure that research articles are evaluated, critiqued, and improved before release into the academic community. Take a look at the significance of peer review in scientific publications, the typical steps of the process, and how to approach peer review if you are asked to assess a manuscript.

What Is Peer Review?

Peer review is the evaluation of work by peers, who are people with comparable experience and competency. Peers assess each other’s work in educational settings, in professional settings, and in the publishing world. The goals of peer review are improving quality, defining and maintaining standards, and helping people learn from one another.

In the context of scientific publication, peer review helps editors determine which submissions merit publication and improves the quality of manuscripts prior to their final release.

Types of Peer Review for Manuscripts

There are three main types of peer review:

  • Single-blind review: The reviewers know the identities of the authors, but the authors do not know the identities of the reviewers.
  • Double-blind review: Both the authors and reviewers remain anonymous to each other.
  • Open peer review: The identities of both the authors and reviewers are disclosed, promoting transparency and collaboration.

There are advantages and disadvantages to each method: anonymous review reduces bias but limits collaboration, while open review is more transparent but more susceptible to bias.

Key Elements of Peer Review

Proper selection of a peer group improves the outcome of the process:

  • Expertise : Reviewers should possess adequate knowledge and experience in the relevant field to provide constructive feedback.
  • Objectivity : Reviewers assess the manuscript impartially and without personal bias.
  • Confidentiality : The peer review process maintains confidentiality to protect intellectual property and encourage honest feedback.
  • Timeliness : Reviewers provide feedback within a reasonable timeframe to ensure timely publication.

Steps of the Peer Review Process

The typical peer review process for scientific publications involves the following steps:

  • Submission : Authors submit their manuscript to a journal that aligns with their research topic.
  • Editorial assessment : The journal editor examines the manuscript and determines whether or not it is suitable for publication. If it is not, the manuscript is rejected.
  • Peer review : If it is suitable, the editor sends the article to peer reviewers who are experts in the relevant field.
  • Reviewer feedback : Reviewers provide feedback, critique, and suggestions for improvement.
  • Revision and resubmission : Authors address the feedback and make necessary revisions before resubmitting the manuscript.
  • Final decision : The editor makes a final decision on whether to accept or reject the manuscript based on the revised version and reviewer comments.
  • Publication : If accepted, the manuscript undergoes copyediting and formatting before being published in the journal.

Pros and Cons

While the goal of peer review is improving the quality of published research, the process isn’t without its drawbacks.

Advantages:

  • Quality assurance : Peer review helps ensure the quality and reliability of published research.
  • Error detection : The process identifies errors and flaws that the authors may have overlooked.
  • Credibility : The scientific community generally considers peer-reviewed articles to be more credible.
  • Professional development : Reviewers can learn from the work of others and enhance their own knowledge and understanding.

Disadvantages:

  • Time-consuming : The peer review process can be lengthy, delaying the publication of potentially valuable research.
  • Bias : The personal biases of reviewers can affect their evaluation of the manuscript.
  • Inconsistency : Different reviewers may provide conflicting feedback, making it challenging for authors to address all concerns.
  • Limited effectiveness : Peer review does not always detect significant errors or misconduct.
  • Poaching : Some reviewers take an idea from a submission and publish it before the authors of the original research can.

Steps for Conducting Peer Review of an Article

Generally, an editor provides guidance when you are asked to provide peer review of a manuscript. Here are typical steps of the process.

  • Accept the right assignment: Accept invitations to review articles that align with your area of expertise to ensure you can provide well-informed feedback.
  • Manage your time: Allocate sufficient time to thoroughly read and evaluate the manuscript, while adhering to the journal’s deadline for providing feedback.
  • Read the manuscript multiple times: First, read the manuscript for an overall understanding of the research. Then, read it more closely to assess the details, methodology, results, and conclusions.
  • Evaluate the structure and organization: Check if the manuscript follows the journal’s guidelines and is structured logically, with clear headings, subheadings, and a coherent flow of information.
  • Assess the quality of the research: Evaluate the research question, study design, methodology, data collection, analysis, and interpretation. Consider whether the methods are appropriate, the results are valid, and the conclusions are supported by the data.
  • Examine the originality and relevance: Determine if the research offers new insights, builds on existing knowledge, and is relevant to the field.
  • Check for clarity and consistency: Review the manuscript for clarity of writing, consistent terminology, and proper formatting of figures, tables, and references.
  • Identify ethical issues: Look for potential ethical concerns, such as plagiarism, data fabrication, or conflicts of interest.
  • Provide constructive feedback: Offer specific, actionable, and objective suggestions for improvement, highlighting both the strengths and weaknesses of the manuscript. Don’t be mean.
  • Organize your review: Structure your review with an overview of your evaluation, followed by detailed comments and suggestions organized by section (e.g., introduction, methods, results, discussion, and conclusion).
  • Be professional and respectful: Maintain a respectful tone in your feedback, avoiding personal criticism or derogatory language.
  • Proofread your review: Before submitting your review, proofread it for typos, grammar, and clarity.


Peer review process

Introduction to peer review

What is peer review?

Peer review is the system used to assess the quality of a manuscript before it is published. Independent researchers in the relevant research area assess submitted manuscripts for originality, validity and significance to help editors determine whether a manuscript should be published in their journal.

How does it work?

When a manuscript is submitted to a journal, it is assessed to see if it meets the criteria for submission. If it does, the editorial team will select potential peer reviewers within the field of research to peer-review the manuscript and make recommendations.

There are four main types of peer review used by BMC:

Single-blind: the reviewers know the names of the authors, but the authors do not know who reviewed their manuscript unless the reviewer chooses to sign their report.

Double-blind: the reviewers do not know the names of the authors, and the authors do not know who reviewed their manuscript.

Open peer: authors know who the reviewers are, and the reviewers know who the authors are. If the manuscript is accepted, the named reviewer reports are published alongside the article and the authors’ response to the reviewer.

Transparent peer: the reviewers know the names of the authors, but the authors do not know who reviewed their manuscript unless the reviewer chooses to sign their report. If the manuscript is accepted, the anonymous reviewer reports are published alongside the article and the authors’ response to the reviewer.

Different journals use different types of peer review. You can find out which peer-review system is used by a particular journal in the journal’s ‘About’ page.

Why do peer review?

Peer review is an integral part of scientific publishing that confirms the validity of the manuscript. Peer reviewers are experts who volunteer their time to help improve the manuscripts they review. By undergoing peer review, manuscripts should become:

More robust - peer reviewers may point out gaps in a paper that require more explanation or additional experiments.

Easier to read - if parts of your paper are difficult to understand, reviewers can suggest changes.

More useful - peer reviewers also consider the importance of your paper to others in your field.

For more information and advice on how to get published, please see our blog series.

How peer review works


The peer review process can be single-blind, double-blind, open or transparent.

You can find out which peer review system is used by a particular journal in the journal's 'About' page.

N. B. This diagram is a representation of the peer review process, and should not be taken as the definitive approach used by every journal.


What Is Peer Review? | Types & Examples

Published on 6 May 2022 by Tegan George . Revised on 2 September 2022.

Peer review, sometimes referred to as refereeing , is the process of evaluating submissions to an academic journal. Using strict criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

There are various types of peer review. The main difference between them is to what extent the authors, reviewers, and editors know each other’s identities. The most common types are:

  • Single-blind review
  • Double-blind review
  • Triple-blind review
  • Collaborative review
  • Open review

Relatedly, peer assessment is a process where your peers provide you with feedback on something you’ve written, based on a set of criteria or benchmarks from an instructor. They then give constructive feedback, compliments, or guidance to help you improve your draft.

Table of contents

  • What is the purpose of peer review?
  • Types of peer review
  • The peer review process
  • Providing feedback to your peers
  • Peer review example
  • Advantages of peer review
  • Criticisms of peer review
  • Frequently asked questions about peer review

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the manuscript. For this reason, academic journals are among the most credible sources you can refer to.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.


Depending on the journal, there are several types of peer review.

Single-blind peer review

The most common type of peer review is single-blind (or single anonymised) review. Here, the names of the reviewers are not known by the author.

While this gives the reviewers the ability to give feedback without the possibility of interference from the author, there has been substantial criticism of this method in the last few years. Many argue that single-blind reviewing can lead to poaching or intellectual theft or that anonymised comments cause reviewers to be too harsh.

Double-blind peer review

In double-blind (or double anonymised) review, both the author and the reviewers are anonymous.

Arguments for double-blind review highlight that this mitigates any risk of prejudice on the side of the reviewer, while protecting the nature of the process. In theory, it also leads to manuscripts being published on merit rather than on the reputation of the author.

Triple-blind peer review

While triple-blind (or triple anonymised) review – where the identities of the author, reviewers, and editors are all anonymised – does exist, it is difficult to carry out in practice.

Proponents of adopting triple-blind review for journal submissions argue that it minimises potential conflicts of interest and biases. However, ensuring anonymity is logistically challenging, and current editing software is not always able to fully anonymise everyone involved in the process.

In collaborative review, authors and reviewers interact with each other directly throughout the process. However, the identity of the reviewer is not known to the author. This gives all parties the opportunity to resolve any inconsistencies or contradictions in real time, and provides them a rich forum for discussion. It can mitigate the need for multiple rounds of editing and minimise back-and-forth.

Collaborative review can be time- and resource-intensive for the journal, however. For these collaborations to occur, there has to be a set system in place, often a technological platform, with staff monitoring and fixing any bugs or glitches.

Lastly, in open review, all parties know each other’s identities throughout the process. Often, open review can also include feedback from a larger audience, such as an online forum, or reviewer feedback included as part of the final published product.

While many argue that greater transparency prevents plagiarism or unnecessary harshness, there is also concern about the quality of future scholarship if reviewers feel they have to censor their comments.

In general, the peer review process includes the following steps:

  • First, the author submits the manuscript to the editor.
  • The editor then screens the manuscript and decides whether to:
      • Reject the manuscript and send it back to the author, or
      • Send it onward to the selected peer reviewer(s)
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

The peer review process

In an effort to be transparent, many journals are now disclosing who reviewed each article in the published product. There are also increasing opportunities for collaboration and feedback, with some journals allowing open communication between reviewers and authors.

It can seem daunting at first to conduct a peer review or peer assessment. If you’re not sure where to start, there are several best practices you can use.

Summarise the argument in your own words

Summarising the main argument helps the author see how their argument is interpreted by readers, and gives you a jumping-off point for providing feedback. If you’re having trouble doing this, it’s a sign that the argument needs to be clearer, more concise, or worded differently.

If the author sees that you’ve interpreted their argument differently than they intended, they have an opportunity to address any misunderstandings when they get the manuscript back.

Separate your feedback into major and minor issues

It can be challenging to keep feedback organised. One strategy is to start out with any major issues and then flow into the more minor points. It’s often helpful to keep your feedback in a numbered list, so the author has concrete points to refer back to.

Major issues typically consist of any problems with the style, flow, or key points of the manuscript. Minor issues include spelling errors, citation errors, or other smaller, easy-to-apply feedback.

The best feedback you can provide is anything that helps them strengthen their argument or resolve major stylistic issues.

Give the type of feedback that you would like to receive

No one likes being criticised, and it can be difficult to give honest feedback without sounding overly harsh or critical. One strategy you can use here is the ‘compliment sandwich’, where you ‘sandwich’ your constructive criticism between two compliments.

Be sure you are giving concrete, actionable feedback that will help the author submit a successful final draft. While you shouldn’t tell them exactly what they should do, your feedback should help them resolve any issues they may have overlooked.

As a rule of thumb, your feedback should be:

  • Easy to understand
  • Constructive

Below is a brief annotated research example.

Influence of phone use on sleep

Studies show that teens from the US are getting less sleep than they were a decade ago (Johnson, 2019). On average, teens only slept for 6 hours a night in 2021, compared to 8 hours a night in 2011. Johnson mentions several potential causes, such as increased anxiety, changed diets, and increased phone use.

The current study focuses on the effect phone use before bedtime has on the number of hours of sleep teens are getting.

For this study, a sample of 300 teens was recruited using social media, such as Facebook, Instagram, and Snapchat. The first week, all teens were allowed to use their phone the way they normally would, in order to obtain a baseline.

The sample was then divided into 3 groups:

  • Group 1 was not allowed to use their phone before bedtime.
  • Group 2 used their phone for 1 hour before bedtime.
  • Group 3 used their phone for 3 hours before bedtime.

All participants were asked to go to sleep around 10 p.m. to control for variation in bedtime. In the morning, their Fitbit showed the number of hours they’d slept. They kept track of these numbers themselves for 1 week.

Two independent t tests were used in order to compare Group 1 and Group 2, and Group 1 and Group 3. The first t test showed no significant difference ( p > .05) between the number of hours for Group 1 ( M = 7.8, SD = 0.6) and Group 2 ( M = 7.0, SD = 0.8). The second t test showed a significant difference ( p < .01) between the average difference for Group 1 ( M = 7.8, SD = 0.6) and Group 3 ( M = 6.1, SD = 1.5).

This shows that teens sleep fewer hours a night if they use their phone for over an hour before bedtime, compared to teens who use their phone for 0 to 1 hours.
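As a sanity check, the Group 1 vs. Group 3 comparison above can be reproduced from the reported summary statistics alone. Here is a minimal sketch using SciPy's `ttest_ind_from_stats`; the equal group sizes of 100 follow from the 300-teen sample split into three groups, and the use of Welch's unequal-variance variant is an assumption, chosen because the reported standard deviations differ noticeably.

```python
# Independent t-test recomputed from the summary statistics in the example.
# Group sizes (100 each) are inferred from the 300-teen sample split into
# three equal groups; means and SDs are taken directly from the text.
from scipy.stats import ttest_ind_from_stats

t_stat, p_value = ttest_ind_from_stats(
    mean1=7.8, std1=0.6, nobs1=100,  # Group 1: no phone before bedtime
    mean2=6.1, std2=1.5, nobs2=100,  # Group 3: 3 hours of phone use
    equal_var=False,                 # Welch's t-test (assumed variant)
)
print(f"t = {t_stat:.2f}, p = {p_value:.2g}")
```

The resulting t statistic is roughly 10.5, corresponding to p < .01, consistent with the significant difference the example reports for Group 1 vs. Group 3.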

Peer review is an established and hallowed process in academia, dating back hundreds of years. It provides various fields of study with metrics, expectations, and guidance to ensure published work is consistent with predetermined standards.

  • Protects the quality of published research

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. Any content that raises red flags for reviewers can be closely examined in the review stage, preventing plagiarised or duplicated research from being published.

  • Gives you access to feedback from experts in your field

Peer review represents an excellent opportunity to get feedback from renowned experts in your field and to improve your writing through their feedback and guidance. Experts with knowledge about your subject matter can give you feedback on both style and content, and they may also suggest avenues for further research that you hadn’t yet considered.

  • Helps you identify any weaknesses in your argument

Peer review acts as a first defence, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process. This way, you’ll end up with a more robust, more cohesive article.

While peer review is a widely accepted metric for credibility, it’s not without its drawbacks.

  • Reviewer bias

The more impartial double-blind system is not yet very common, which can lead to bias in reviewing. A common criticism is that an excellent paper by a new researcher may be declined, while an objectively lower-quality submission by an established researcher would be accepted.

  • Delays in publication

The thoroughness of the peer review process can lead to significant delays in publishing time. Research that was current at the time of submission may not be as current by the time it’s published. There is also a high risk of publication bias, where journals are more likely to publish studies with positive findings than studies with negative findings.

  • Risk of human error

By its very nature, peer review carries a risk of human error. In particular, falsification often cannot be detected, given that reviewers would have to replicate entire experiments to ensure the validity of results.

Peer review is a process of evaluating submissions to an academic journal. Utilising rigorous criteria, a panel of reviewers in the same subject area decide whether to accept each submission for publication.

For this reason, academic journals are often considered among the most credible sources you can use in a research project – provided that the journal itself is trustworthy and well regarded.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field.

It acts as a first defence, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

In general, the peer review process includes the following steps:

  • First, the author submits the manuscript to the editor.
  • The editor then screens the manuscript and decides whether to:
      • Reject the manuscript and send it back to the author, or
      • Send it onward to the selected peer reviewer(s)
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.


George, T. (2022, September 02). What Is Peer Review? | Types & Examples. Scribbr. Retrieved 9 April 2024, from https://www.scribbr.co.uk/research-methods/peer-reviews/


  • Open access
  • Published: 12 November 2021

Demystifying the process of scholarly peer-review: an autoethnographic investigation of feedback literacy of two award-winning peer reviewers

  • Sin Wang Chong   ORCID: orcid.org/0000-0002-4519-0544 1 &
  • Shannon Mason 2  

Humanities and Social Sciences Communications volume  8 , Article number:  266 ( 2021 ) Cite this article


A Correction to this article was published on 26 November 2021


Peer reviewers serve a vital role in assessing the value of published scholarship and improving the quality of submitted manuscripts. To provide more appropriate and systematic support to peer reviewers, especially those new to the role, this study documents the feedback practices and experiences of two award-winning peer reviewers in the field of education. Adopting a conceptual framework of feedback literacy and an autoethnographic-ecological lens, findings shed light on how the two authors design opportunities for feedback uptake, navigate responsibilities, reflect on their feedback experiences, and understand journal standards. Informed by ecological systems theory, the reflective narratives reveal how they unravel the five layers of contextual influences on their feedback practices as peer reviewers (micro, meso, exo, macro, chrono). Implications related to peer reviewer support are discussed and future research directions are proposed.


Introduction

The peer-review process is the longstanding method by which research quality is assured. On the one hand, it aims to assess the quality of a manuscript, with the desired outcome being (in theory if not always in practice) that only research that has been conducted according to methodological and ethical principles be published in reputable journals and other dissemination outlets (Starck, 2017). On the other hand, it is seen as an opportunity to improve the quality of manuscripts, as peers identify errors and areas of weakness, and offer suggestions for improvement (Kelly et al., 2014). Whether or not peer review is actually successful in these areas is open to considerable debate, but in any case it is the “critical juncture where scientific work is accepted for publication or rejected” (Heesen and Bright, 2020, p. 2). In contemporary academia, where higher education systems across the world are contending with decreasing levels of public funding, there is increasing pressure on researchers to be ‘productive’, which is largely measured by the number of papers published and of funding grants awarded (Kandiko, 2010), both of which involve peer review.

Researchers are generally invited to review manuscripts once they have established themselves in their disciplinary field through publication of their own research. This means that for early career researchers (ECRs), their first exposure to the peer-review process is generally as an author. These early experiences influence the ways ECRs themselves conduct peer review. However, negative experiences can have a profound and lasting impact on researchers’ professional identity. This appears to be particularly true when feedback is perceived to be unfair, with feedback tone largely shaping author experience (Horn, 2016). In most fields, reviewers remain anonymous to ensure freedom to give honest and critical feedback, although there are concerns that a lack of accountability can result in ‘bad’ and ‘rude’ reviews (Mavrogenis et al., 2020). Such reviews can negatively impact all researchers, but disproportionately impact underrepresented researchers (Silbiger and Stubler, 2019). Regardless of career phase, no one is served well by unprofessional reviews, which contribute to the ongoing problem of bullying and toxicity prevalent in academia, with serious implications for the health and well-being of researchers (Keashly and Neuman, 2010).

Because of its position as the central process through which research is vetted and refined, peer review should play a similarly central role in researcher training, although it rarely features. In surveying almost 3000 researchers, Warne (2016) found that support for reviewers was mostly received “in the form of journal guidelines or informally as advice from supervisors or colleagues” (p. 41), with very few engaging in formal training. Among more than 1600 reviewers of 41 nursing journals, only one third received any form of support (Freda et al., 2009), with participants across both of these studies calling for further training. In light of the lack of widespread formal training, most researchers learn ‘on the job’, and little is known about how researchers develop their knowledge and skills in providing effective assessment feedback to their peers. In this study, we undertake such an investigation by drawing on our first-hand experiences. Through a collaborative and reflective process, we look to identify the forms and forces of our feedback literacy development, and seek to answer the following research questions:

What are the exhibited features of peer reviewer feedback literacy?

What are the forces at work that affect the development of feedback literacy?

Literature review

Conceptualisation of feedback literacy

The notion of feedback literacy originates from the research base of new literacy studies, which examines ‘literacies’ from a sociocultural perspective (Gee, 1999; Street, 1997). In the educational context, one of the most notable types of literacy is assessment literacy (Stiggins, 1999). Traditionally, assessment literacy is perceived as one of the indispensable qualities of a successful educator, referring to the skills and knowledge teachers need “to deal with the new world of assessment” (Fulcher, 2012, p. 115). Following this line of teacher-oriented assessment literacy, recent attempts have been made to develop more subject-specific assessment literacy constructs (e.g., Levi and Inbar-Lourie, 2019). Given the rise of student-centred approaches and formative assessment in higher education, researchers began to make the case for students to be ‘assessment literate’, which comprises such knowledge and skills as understanding of assessment standards, the relationship between assessment and learning, peer assessment, and self-assessment skills (Price et al., 2012). Feedback literacy, as argued by Winstone and Carless (2019), is essentially a subset of assessment literacy because “part of learning through assessment is using feedback to calibrate evaluative judgement” (p. 24). The notion of feedback literacy was first extensively discussed by Sutton (2012) and more recently by Carless and Boud (2018). Focusing on students’ feedback literacy, Sutton (2012) conceptualised feedback literacy as a three-dimensional construct: an epistemological dimension (What do I know about feedback?), an ontological dimension (How capable am I of understanding feedback?), and a practical dimension (How can I engage with feedback?).
In close alignment with Sutton’s construct, the seminal conceptual paper by Carless and Boud (2018) further illustrated the four distinctive abilities of feedback literate students: the abilities to (1) understand the formative role of feedback, (2) make informed and accurate evaluative judgements against standards, (3) manage emotions, especially in the face of critical and harsh feedback, and (4) take action based on feedback. Since the publication of Carless and Boud (2018), student and teacher feedback literacy has been in the limelight of assessment research in higher education (e.g., Chong, 2021b; Carless and Winstone, 2020). These conceptual contributions expand the notion of feedback literacy to consider not only the manifestations of various forms of effective student engagement with feedback but also the confluence of contexts and individual differences of students in developing students’ feedback literacy, drawing upon various theoretical perspectives (e.g., ecological systems theory; sociomaterial perspective) and disciplines (e.g., business and human resource management). Others address practicalities of feedback literacy; for example, how teachers and students can work in synergy to develop feedback literacy (Carless and Winstone, 2020) and ways to maximise student engagement with feedback at a curricular level (Malecka et al., 2020). In addition to conceptualisation, advancement of the notion of feedback literacy is evident in the recent proliferation of primary studies. The majority of these studies are conducted in the field of higher education, focusing mostly on student feedback literacy in classrooms (e.g., Molloy et al., 2019; Winstone et al., 2019) and in the workplace (Noble et al., 2020), with a handful focused on teacher feedback literacy (e.g., Xu and Carless, 2016).
Some studies focusing on student feedback literacy adopt a qualitative case study research design to delve into individual students’ experience of engaging with various forms of feedback. For example, Han and Xu (2019) analysed the profiles of feedback literacy of two Chinese undergraduate students. Findings uncovered students’ resistance to engagement with feedback, which relates to the misalignment between the cognitive, social, and affective components of individual students’ feedback literacy profiles. Others reported interventions designed to facilitate students’ uptake of feedback, focusing on their effectiveness and students’ perceptions. Specifically, affordances and constraints of educational technology such as electronic feedback portfolios (Chong, 2019; Winstone et al., 2019) are investigated. Of particular interest is a recent study by Noble et al. (2020), which looked into student feedback literacy in the workplace by probing the perceptions of a group of Australian healthcare students towards a feedback literacy training programme conducted prior to their placement. There is, however, a dearth of primary research in other areas where the elicitation, processing, and enactment of feedback are vital; for instance, academics’ feedback literacy. In the ‘publish or perish’ culture of higher education, academics, especially ECRs, face immense pressure to publish in top-tiered journals in their fields and face the daunting peer-review process, while juggling teaching and administrative responsibilities (Hollywood et al., 2019; Tynan and Garbett, 2007). Taking up the roles of authors and reviewers, researchers have to possess the capacity and disposition to engage meaningfully with feedback provided by peer reviewers and to provide constructive comments to authors.
Similar to students, researchers have to learn how to manage their emotions in the face of critical feedback, to understand the formative value of feedback, and to make informed judgements about the quality of feedback (Gravett et al., 2019). At the same time, the feedback literacy of academics also resembles that of teachers. When considering the kind of feedback given to authors, academics who serve as peer reviewers have to (1) design opportunities for feedback uptake, (2) maintain a professional and supportive relationship with authors, and (3) take into account the practical dimension of giving feedback (e.g., how to strike a balance between quality of feedback and time constraints due to multiple commitments) (Carless and Winstone, 2020). To address the above, one of the aims of the present study is to expand the application of feedback literacy as a useful analytical lens to areas outside the classroom, that is, scholarly peer-review activities in academia, by presenting, analysing, and synthesising the personal experiences of the authors as successful peer reviewers for academic journals.

Conceptual framework

We adopt a feedback literacy of peer reviewers framework (Chong, 2021a) as an analytical lens to analyse, systemise, and synthesise our own experiences and practices as scholarly peer reviewers (Fig. 1). This two-tier framework includes a dimension on the manifestation of feedback literacy, which categorises five features of feedback literacy of peer reviewers, informed by the student and teacher feedback literacy frameworks of Carless and Boud (2018) and Carless and Winstone (2020). When engaging in scholarly peer review, reviewers are expected to be able to provide constructive and formative feedback, which authors can act on in their revisions (engineer feedback uptake). Moreover, peer reviewers, who are usually full-time researchers or academics, lead hectic professional lives; thus, when writing reviewers’ reports, it is important for them to consider practically and realistically the time they can invest and how their various degrees of commitment may have an impact on the feedback they provide (navigate responsibilities). Furthermore, peer reviewers should consider the emotional and relational influences their feedback exerts on the authors. It is crucial for feedback to be not only informative but also supportive and professional (Chong, 2018) (maintain relationships). Equally important, it is imperative for peer reviewers to critically reflect on their own experience in the scholarly peer-review process, including their experience of receiving and giving feedback to academic peers, as well as the ways authors and editors respond to their feedback (reflect on feedback experience). Lastly, acting as gatekeepers of journals to assess the quality of manuscripts, peer reviewers have to demonstrate an accurate understanding of the journals’ aims, remit, guidelines and standards, and reflect those in their written assessments of submitted manuscripts (understand standards).
Situated in the context of scholarly peer review, this collaborative autoethnographic study conceptualises feedback literacy not only as a set of abilities but also as orientations (London and Smither, 2002; Steelman and Wolfeld, 2016), which refer to academics’ tendencies, beliefs, and habits in relation to engaging with feedback (London and Smither, 2002). According to Cheung (2000), orientations are influenced by a plethora of factors, namely experiences, cultures, and politics. It is important to understand feedback literacy as orientations because this takes into account that feedback is a convoluted process influenced by a plethora of contextual and personal factors. Informed by ecological systems theory (Bronfenbrenner, 1986; Neal and Neal, 2013) and synthesising existing feedback literacy models (Carless and Boud, 2018; Carless and Winstone, 2020; Chong, 2021a, 2021b), we consider feedback literacy a malleable, situated, and emergent construct, influenced by the interplay of various networked layers of ecological systems (Neal and Neal, 2013) (Fig. 1). Also important is that conceptualising feedback literacy as orientations avoids dichotomisation (feedback literate vs. feedback illiterate), emphasises the developmental nature of feedback literacy, and better captures the multifaceted manifestations of feedback engagement.

Figure 1

The outer ring of the figure shows the components of feedback literacy while the inner ring concerns the layers of contexts (ecosystems) which influence the manifestation of feedback literacy of peer reviewers.

Echoing recent conceptual papers on feedback literacy which emphasise the indispensable role of contexts (Chong, 2021b; Boud and Dawson, 2021; Gravett et al., 2019), our conceptual framework includes an underlying dimension of networked ecological systems (micro, meso, exo, macro, and chrono), which portrays the contextual forces shaping our feedback orientations. Informed by the networked ecological system theory of Neal and Neal (2013), we postulate that there are five systems of contextual influence, which affect the feedback experience and development of feedback literacy of peer reviewers. The five ecological systems refer to ‘settings’, defined by Bronfenbrenner (1986) as “place[s] where people can readily engage in social interactions” (p. 22). Even though Bronfenbrenner’s (1986) somewhat dated definition of ‘place’ is limited to ‘physical space’, we believe that ‘places’ should be more broadly defined in the 21st century to encompass physical and virtual, recent and dated, closed and distanced locations where people engage; as for ‘interactions’, from a sociocultural perspective, we understand that ‘interactions’ can include not only social, but also cognitive and emotional exchanges (Vygotsky, 1978). A microsystem refers to a setting where people, including the focal individual, interact. A mesosystem, on the other hand, refers to the interactions between people from different settings and the influence they exert on the focal individual. An exosystem, similar to a microsystem, is a single setting, but one that excludes the focal individual, although participants in this setting are likely to interact with the focal individual. The remaining two systems, macrosystem and chronosystem, refer not only to ‘settings’ but also to ‘forces that shape the patterns of social interactions that define settings’ (Neal and Neal, 2013, p. 729).
A macrosystem is “the set of social patterns that govern the formation and dissolution of… interactions… and thus the relationship among ecological systems” (ibid). Some examples of macrosystems given by Neal and Neal (2013) include political and cultural systems. Finally, the chronosystem is “the observation that patterns of social interactions between individuals change over time, and that such changes impact on the focal individual” (ibid, p. 729). Figure 2 illustrates this networked ecological systems theory using a hypothetical example of an early career researcher who is involved in scholarly peer review for Journal A; at the same time, they are completing a PhD and working as a faculty member at a university.

Figure 2

This is a hypothetical example of an early career researcher who is involved in scholarly peer review for Journal A.

From the reviewed literature on the construct of feedback literacy, the investigation of feedback literacy as a personal, situated, and unfolding process is best done through an autoethnographic lens, which underscores critical self-reflection. Autoethnography refers to “an approach to research and writing that seeks to describe and systematically analyse (graphy) personal experience (auto) in order to understand cultural experience (ethno)” (Ellis et al., 2011, p. 273). Autoethnography stems from research in the field of anthropology and was later introduced to the field of education by Ellis and Bochner (1996). In higher education research, autoethnographic studies are conducted to illuminate topics related to identity and teaching practices (e.g., Abedi Asante and Abubakari, 2020; Hains-Wesson and Young, 2016; Kumar, 2020). In this article, a collaborative approach to autoethnography is adopted. Drawing on Chang et al. (2013), Lapadat (2017) defines collaborative autoethnography (CAE) as follows:

… an autobiographic qualitative research method that combines the autobiographic study of self with ethnographic analysis of the sociocultural milieu within which the researchers are situated, and in which the collaborating researchers interact dialogically to analyse and interpret the collection of autobiographic data. (p. 598)

CAE is not only a product but a worldview and process (Wall, 2006). CAE is a distinct view of the world and of research, one which straddles the paradigmatic boundaries of scientific and literary studies. Similar to traditional scientific research, CAE advocates systematicity in the research process, and consideration is given to such crucial research issues as reliability, validity, generalisability, and ethics (Lapadat, 2017). In closer alignment with studies in the humanities and literature, the goal of CAE is not to uncover irrefutable universal truths and generate theories; instead, researchers of CAE are interested in co-constructing and analysing their own personal narratives or ‘stories’ to enrich and/or challenge mainstream beliefs and ideas, embracing diverse rather than canonical ways of behaviour, experience, and thinking (Ellis et al., 2011). Regarding the role of researchers, CAE researchers openly acknowledge the influence (and also vulnerability) of researchers throughout the research process and interpret this juxtaposition of identities between researchers and participants as conducive to offering an insider’s perspective on sociocultural phenomena (Sughrua, 2019). For our CAE on the scholarly peer-review experiences of two ECRs, the purpose is to reconstruct, analyse, and publicise our lived experience as peer reviewers and how multiple forces (i.e., ecological systems) interact to shape our identity, experience, and feedback practice. As a research process, CAE is a collaborative and dynamic reflective journey towards self-discovery, resulting in narratives which connect with and add to the existing literature base in a personalised manner (Ellis et al., 2011). The collaborators should go beyond personal reflection and engage in dialogues to identify similarities and differences in their experiences, throwing new light on sociocultural phenomena (Merga et al., 2018).
The iterative process of self- and collective reflections takes place when CAE researchers write about their own “remembered moments perceived to have significantly impacted the trajectory of a person’s life” and read each other’s stories (Ellis et al., 2011, p. 275). These ‘moments’ or vignettes are usually written retrospectively, selectively, and systematically to shed light on facets of personal experience (Hughes et al., 2012). In addition to personal stories, some autoethnographies and CAEs utilise multiple data sources (e.g., reflective essays, diaries, photographs, interviews with co-researchers) and various modes of expression (e.g., metaphors) to achieve some sort of triangulation and to present evidence in a ‘systematic’ yet evocative manner (Kumar, 2020). One could easily notice that overarching methodological principles are discussed in lieu of a set of rigid and linear steps because the process of reconstructing experience through storytelling can be messy and emergent, and a certain degree of flexibility is necessary. However, autoethnographic studies, like other primary studies, address core research issues including reliability (the reader’s judgement of the credibility of the narrator), validity (the reader’s judgement that the narratives are believable), and generalisability (resemblance between the reader’s experience and the narrative, or enlightenment of the reader regarding unfamiliar cultural practices) (Ellis et al., 2011). Ethical issues also need to be considered. For example, authors are expected to be honest in reporting their experiences and, to protect the privacy of the people who ‘participated’ in the stories, pseudonyms need to be used (Wilkinson, 2019). For the current study, we follow the suggested CAE process outlined by Chang et al. (2013), which includes four stages: deciding on topic and method, collecting materials, making meaning, and writing.
When deciding on the topic, we chose to focus on our experience as scholarly peer reviewers because doing peer review and having our work reviewed are an indispensable part of our academic lives. The next stage was to collect relevant autoethnographic materials. In this study, we follow Kumar (2020) in drawing on multiple data sources: (1) reflective essays, which were written separately through ‘recalling’, referred to by Chang et al. (2013) as ‘a free-spirited way of bringing out memories about critical events, people, place, behaviours, talks, thoughts, perspectives, opinions, and emotions pertaining to the research topic’ (p. 113), and (2) discussion meetings. In our reflective essays, we included written records of reflection and excerpts of feedback from our peer-review reports. Material collection was followed by meaning making. CAE, as opposed to autoethnography, emphasises the importance of engaging in dialogues with collaborators, and through this process we identified similarities and differences in our experiences (Sughrua, 2019). To do so, we exchanged our reflective essays; we read each other’s reflections and added questions or comments in the margins. Then, we met online twice to share our experiences and exchange views on the two reflective essays we wrote. Both meetings lasted approximately 90 min, and were audio-recorded and transcribed. After each meeting, we coded our stories and experiences with reference to the two dimensions of the ecological framework of feedback literacy (Fig. 1). With regard to coding our data, we followed the model of Miles and Huberman (1994), which comprises four stages: data reduction (abstracting data), data display (visualising data in tabular form), conclusion-drawing, and verification.
The coding and writing processes were done collaboratively on Google Docs, and care was taken to address the aforementioned ethical (e.g., honesty, privacy) and methodological issues (e.g., validity, reliability, generalisability). As this is a CAE study, the participants are the researchers themselves, that is, the two authors of this paper. Because the research data were collected from human subjects (the two authors), they were collected in accordance with the standards and guidelines of the School Research Ethics Committee at the School of Social Sciences, Education and Social Work, Queen’s University Belfast (Ref: 005_2021). Despite our different experiences in our unique training and employment contexts, we share some common characteristics, both being ECRs (<5 years post-PhD), working in the field of education, and active in the scholarly publication process as both authors and peer reviewers. Importantly for this study, we were both recipients of the Reviewer of the Year Award 2019, awarded jointly by the journal Higher Education Research & Development and the publisher Taylor & Francis. This award, in recognition of the quality of our reviewing efforts as determined by the editorial board of a prestigious higher education journal, provided a strong impetus for this study and an opportunity to reflect on our own experiences and practices. The extent of our peer-review activities during our early careers leading up to the time of data collection is summarised in Table 1.

Findings and discussion

Analysis of the four individual essays (E1 and E2 for each participant) and transcripts of the two subsequent discussions (D1 and D2) resulted in the identification of multiple descriptive codes and, in turn, a number of overarching themes (Supplementary Appendix 1). Our reporting of these themes is guided by our conceptual framework, where we first focus on the five manifestations of feedback literacy to highlight the experiences that contribute to our growth as effective and confident peer reviewers. Then, we report on the five ecological systems to unravel how each contextual layer develops our feedback literacy as peer reviewers. (Note that the discussion of the chronosystem has been necessarily incorporated into each of the four other dimensions: microsystem, mesosystem, exosystem, and macrosystem, in order to demonstrate temporal changes.) In particular, similarities and differences will be underscored, and connections with manifested feedback beliefs and behaviours will be made. We include quotes from both Author 1 (A1) and Author 2 (A2) in order to illustrate our findings, and to show the richness and depth of the data collected (Corden and Sainsbury, 2006). Transcribed quotes may be lightly edited while retaining meaning, for example through the removal of fillers and repetitions, which is generally accepted practice to ensure readability (ibid).

Manifestations of feedback literacy

Engineering feedback uptake

The two authors have a strong sense of the purpose of peer review as promoting not only research quality, but the growth of researchers. One way that we engineer author uptake is to ensure that feedback is ‘clear’ (A2,E1), ‘explicit’ (A2,E1), ‘specific’ (A1,E1), and importantly ‘actionable… to ensure that authors can act on this feedback so that their manuscripts can be improved and ultimately accepted for publication’ (A1,E1). In less than favourable author outcomes, we ensure that there is reference to the role of the feedback in promoting the development of the manuscript, which A1 refers to as ‘promotion of a growth mindset’ (A1,E1). For example, after requesting a second round of major revisions, A2 ‘acknowledged the frustration that the author might have felt on getting further revisions by noting how much improvement was made to the paper, but also making clear the justification for sending it off for more work’ (A2,E1). We both note that we tend to write longer reviews when a rejection is the recommended outcome, as our ultimate goal is to aid in the development of a manuscript.

Rejection doesn’t mean a paper is beyond repair. It can still be fixed and improved; a rejection simply means that the fix may be too extensive even for multiple review cycles. It is crucial to let the authors whose manuscripts are rejected know that they can still act on the feedback to improve their work; they should not give up on their own work. I think this message is especially important to first-time authors or early career researchers. (A1,E1)

In promoting a growth mindset and in providing actionable feedback, we hope to ‘show the authors that I’m not targeting them, but their work’ (A1,D1). We particularly draw on our own experiences as ECRs, with first-hand understanding that ‘everyone takes it personally when they get rejected. Yeah. Moreover, it is hard to separate (yourself from the paper)’ (A2,D1).

Navigating responsibilities

As with most academics, the two authors have multiple pressures on their time, and there ‘isn’t much formal recognition or reward’ (A1,E1) and ‘little extrinsic incentive for me to review’ (A2,E1). Nevertheless we both view our roles as peer reviewers as ‘an important part of the process’ (A2,E1), ‘a modest way for me to give back to the academic community’ (A1,E1). Through peer review we have built a sense of ‘identity as an academic’ (A1,D1), through ‘being a member of the academic community’ (A2,D1). While A1 commits to ‘review as many papers as possible’ (A1,E1) and A2 will usually accept offers to review, there are still limits on our time and therefore we consider the topic and methods employed when deciding whether or not to accept an invitation, as well as the journal itself, as we feel we can review more efficiently for journals with which we are more familiar. A1 and A2 have different processes for conducting their review that are most efficient for their own situations. For A1, the process begins with reading the whole manuscript in one go, adding notes to the pdf document along the way, which he then reviews, and makes a tentative decision, including ‘a few reasons why I have come to this decision’ (A1,E1). After waiting at least one day, he reviews all of the notes and begins writing the report, which is divided into the sections of the paper. He notes it ‘usually takes me 30–45 min to write a report. I then proofread this report and submit it to the system. So it usually takes me no more than three hours to complete a review’ (A1,E1). For A2, the process for reviewing and structuring the report is quite different, with a need to ‘just find small but regular opportunities to work on the review’ (A2,E1). As was the case during her Ph.D, which involved juggling research and raising two babies, ‘I’ve trained myself to be able to do things in bits’ (A2,D1). 
So A2 also begins by reading the paper once through, although generally without making initial comments. The next phase involves going through the paper at various points in time whenever possible, while at the same time building up the report, making it structurally slightly different to that of A1.

What my reviews look like are bullet points, basically. And they’re not really in a particular order. They generally… follow the flow (of the paper). But I mean, I might think of something, looking at the methods and realise, hey, you haven’t defined this concept in the literature review so I’ll just add you haven’t done this. And so I will usually preface (the review)… Here’s a list of suggestions. Some of them are minor, some of them are serious, but they’re in no particular order. (A2,D1)

As such, both reviewers engage in personalised strategies to make more effective use of their time. Both A1 and A2 give explicit but not exhaustive examples of an area of concern, and they also pose questions for the author to consider, in both cases placing the onus back on the author to take action. As A1 notes, ‘I’m not going to do a summary of that reference for you. I’m just going to include that there. If you’d like you can check it out’ (A1,D1). For A2, a lack of adequate reporting of the methods employed in a study makes it difficult to proceed, and in such cases she will not invest further time, sending the manuscript back to the editor, because ‘I can’t even comment on the findings… I can’t go on. I’m not gonna waste my time’ (A2,D1). In cases where we are ‘on the fence’ about a particular review, we use the confidential comments to the editor to help work through difficult cases, as ‘they are obviously very experienced reviewers’ (A1,D1). Delegating tasks to the expertise of the editorial teams when appropriate also ensures time is used more prudently.

Maintaining relationships

Except in a few cases where A2 has reviewed for journals with a single-blind model, the vast majority of the reviews that we have completed have been double-blind. This means that we are unaware of the identity of the author/s, and we are unknown to them. However, ‘even with blind-reviews I tend to think of it as a conversation with a person’ (A2,E1). A1 talks about the need to have respect for the author and their expertise and effort ‘regardless of the quality of the submission (which can be in some cases subjective)’ (A1,E1). A2 writes similarly about the ‘privilege’ and ‘responsibility’ of being able to review manuscripts that authors ‘have put so much time and energy into possibly over an extended period’ (A2,E1). In this way it is possible to develop a sort of relationship with an author even without knowing their identity. In trying to articulate the nature of that relationship (which we struggle to do definitively), we note that it is more than just a reviewer, and A2 reflected on a recent review, which went through a number of rounds of resubmission, where ‘it felt like we were developing a relationship, more like a mentor than a reviewer’ (A2,E1).

I consider this role as a peer reviewer more than giving helpful and actionable feedback; I would like to be a supporter and critical friend to the authors, even though in most cases I don’t even know who they are or what career stage they are at (A1,E1).
In any case, as A1 notes, ‘we don’t even need to know who that person is because we know that people like encouragement’ (A1,D1), and we are very conscious of the emotional impact that feedback can have on authors, and of the inherent power imbalance in the relationship. For this reason, A1 is ‘cautious about the way I write so that I don’t accidentally make the authors the target of my feedback’. As A2 notes, ‘I don’t want authors feeling depressed after reading a review’ (A2,E1). While we note that we try to deliver our feedback with ‘respect’ (A1,E1; A1,E2; A2,D1), ‘empathy’ (A1,E1), and ‘kindness’ (A2,D1), we both noted that we do not ‘sugar coat’ our feedback: A1 describes himself as ‘harsh’ and ‘critical’ (A1,E1), while A2 describes herself as ‘pretty direct’ (A2,E1). In our discussion, we tried to delve into this seeming contradiction:

… the encouragement, hopefully is to the researcher, but the directness it should be, I hope, is related directly to whatever it is, the methods or the reporting or the scope of the literature review. It’s something specific about the manuscript itself. And I know myself, being an ECR and being reviewed, that it’s hard to separate yourself from your work… And I want to make it really explicit. If it’s critical, it’s not about the person. It’s about the work, you know, the weakness of the work, but not the person. (A2,D1)

A1 explains that at times his initial report may be highly critical, and at times he will ‘sit back and rethink… With empathy, I will write feedback, which is more constructive’ (A1,E1). However, he adds that ‘I will never try to overrate a piece or sugar-coat my comments just to sound “friendly”’ (A1,E1), with the ultimate goal being to uphold academic rigour. Thus, honesty is seen as the best strategy for maintaining a strong, professional relationship with authors. Another strategy employed by A2 is showing explicit commitment to the review process. One way this is communicated is by prefacing a review with a summary of the paper, not only ‘to confirm with the author that I am interpreting the findings in the way that they intended, but also importantly to show that I have engaged with the paper’ (A2,E1). Further, if the recommendation is for a further round of review, she will state directly to the authors ‘that I would be happy to review a revised manuscript’ (A2,E1).

Reflecting on feedback experience

As ECRs we have engaged in the scholarly publishing process initially as authors, subsequently as reviewers, and most recently as Associate Editors. Insights gained in each of these roles have influenced our feedback practices, and have interacted to ‘develop a more holistic understanding of the whole review process’ (A1,E1).

We reflect on our experiences as authors beginning in our doctoral candidatures, with reviews that ranged from ‘the most helpful to the most cynical’ (A1,E1). A2 reflected on two particular experiences, both of which resulted in rejection, one being ‘snarky’ and ‘unprofessional’ with ‘no substance’, the other providing ‘strong encouragement … the focus was clearly on the paper and not me personally’ (A2,E1). It was this experience that showed the divergence between the tone and content of a review despite the same outcome, and as a result A2 committed to being ‘the amazing one’. A1 also drew from a negative experience, noting that ‘I remember the least useful feedback as much as I do with the most constructive one’ (A1,E1). This was particularly the case when a reviewer made apparently politically motivated judgements that A1 ‘felt very uncomfortable with’ and flagged with the editor (A1,E1). Through these experiences both authors wrote in their essays about the need to focus on the work and not on the individual, with an understanding that a review ‘can have a really serious impact’ (A2,D1) on an author.

It is important to note that neither author has been involved in any formal or informal training on how to conduct peer review, although A1 expresses appreciation of the regular practice of one journal for which he reviews, where ‘the editor would write an email to the reviewers giving feedback on the feedback we have given’ (A1,E1). For A2, an important source of learning is comparing her reviews with those of others who have reviewed the same manuscript, the norm for some journals being to send all reports to all reviewers along with the final decision.

I’m always interested to see how [my] review compares with others. Have I given the same recommendation? Have I identified the same areas of weakness? Have I formatted my review in the same way? How does the tone of delivery differ? I generally find that I give a similar if not the same response to other reviews, and I’m happy to see that I often pick up the same issues with methodology. (A2,E1)

For A2, there is comfort in seeing reviews that are similar to others’, although we both draw on experiences where our recommendations diverged from other reviewers’, with a source of assurance being the ultimate decision of the editor.

So it’s like, I don’t think it can be published and that [other] reviewer thinks it’s excellent. So usually, what the editor would do in this instance is invite the third one. Right, yeah. But then this editor told me… that they decided to go with my decision to reject because they find that my comments are more convincing. (A1,D1)

A2 was also surprised to read another report on a manuscript she had reviewed, which raised similar concerns and gave the same recommendation for major revisions, but noted that the ‘wording is soooo snarky. What need?’ (A2,E1). In one case that A1 detailed in our first discussion, significant but improbable changes made to the methodology section of a resubmitted paper caused him to question the honesty of the reporting, making him ‘uncomfortable’; as a result, he reported his concerns to the editor. In this case the review took some time to craft, trying to balance the ‘fine line between catering for the emotion [of the author], right, and upholding the academic standards’ (A1,D1). He conceded that his initial report was ‘kind of too harsh… later I think I rephrased it a little bit, I kind of softened (it)’.

The role of Associate Editor is very new to A2, who was therefore unable to comment on it, but for A1 the ‘opportunity to read various kinds of comments given by reviewers’ (A1,E1) is viewed favourably. This includes not only how reviewers structure their feedback, but also how they use the confidential comments to the editors to express their thoughts more openly, providing important insights into a process that is largely hidden.

Understanding standards

While our reviewing practices are informed more broadly ‘according to more general academic standards of the study itself, and the clarity and fullness of the reporting’ (A2,E1), we look in the first instance to advice and guidelines from journals to develop an understanding of journal-specific standards, although A2 notes that a lack of review guidelines for one of the earliest journals she reviewed for left her ‘searching Google for standard criteria’ (A2,E1). However, our development in this area seems to come mainly from gaining familiarity with a journal, particularly through engagement with the journal as an author.

In addition to reading the scope and instructions for authors to obtain such basic information as readership, length of submissions, citation style, the best way for me to understand the requirements and preferences of the journals is my own experience as an author. I review for journals which I have published in and for those which I have not. I always find it easier to make a judgement about whether the manuscripts I review meet the journal’s standards if I have published there before. (A1,E1)

Indeed, it seems that journal familiarity is connected closely to our confidence in reviewing, and while both authors ‘review for journals which I have published in and for those which I have not’ (A1,E1), A2 states that she is reluctant to ‘readily accept an offer to review for a journal that I’m not familiar with’, and A1 takes extra time to ‘do more preparatory work before I begin reading the manuscript and writing the review’ when reviewing for an unfamiliar journal.

Ecological systems

Microsystem

Three microsystems exert influence on A1’s and A2’s development of feedback literacy: university, journal community, and Twitter.

With regard to the university, we are full-time academics in research-intensive universities in the UK and Japan, where expectations for academics include publishing research in high-impact journals, ‘which is vital to promotion’ (A1,E2). This is especially true in A2’s context, where the national higher education agenda is to raise the world rankings of universities. Thus, ‘there is little value placed on peer review, as it is not directly related to the broader agenda’ (A2,E2). Following his recent relocation to the UK, and in light of the current pandemic, A1 navigated his responsibilities within the university context and decided to allocate more time to his university-related responsibilities, especially providing learning and pastoral support to his students, who are mostly international students. In addition, A2 observed that there is a dearth of institution-wide support for conducting peer review, although ‘there are a lot of training opportunities related to how to write academic papers in English, how to present at international conferences, how to write grant applications’, etc. (A2,E2). As a result, she ‘struggled for a couple of years’ because of the lack of institutional support for her development as a peer reviewer (A2,D2); but this helplessness also motivated her to seek her own ways to learn how to give feedback, such as ‘seeing through glimpses of other reviews, how others approach it, in terms of length, structure, tone, foci etc.’ (A2,E2). A1 shares the same view that no training is available at his institution to support his development as a peer reviewer. However, his postgraduate supervision experiences enabled him to reflect on how his feedback can benefit researchers. In our second online discussion, A1 shared that he held individual advising sessions with some postgraduate students, which made him realise that it is important for feedback to inspire rather than to ‘give them right answers’ (A1,D2).

Because of the lack of formal training provided by universities, both authors searched for other professional communities to help us develop our expertise in giving feedback as peer reviewers, with journal communities being the next microsystem. We found that international journals provide valuable opportunities for us to understand more about the whole peer-review process, in particular the role of feedback. For A1, the training which he received from the editor-in-chief when he took up the associate editorship of a language education journal two years ago was particularly useful. A1 benefited greatly from meetings with the editor, who walked him through every stage of the review process and provided ‘hands-on experience on how to handle delicate scenarios’ (A1,E2). Since then, A1 has had plenty of opportunities to oversee various stages of peer review and to read a large number of reviewers’ reports, which helped him gain ‘a holistic understanding of the peer-review process’ (A1,E2) and gradually made him more cognizant of how he wants to give feedback. Although there was no explicit instruction on the technical aspects of giving feedback, A1 found that being an associate editor has developed his ‘consciousness’ and ‘awareness’ of giving feedback as a peer reviewer (A1,D2). Further, he felt that his editorial experiences prompted him to constantly refine and improve his ways of giving feedback, especially ways to make his feedback ‘more structured, evidence-based, and objective’ (A1,E2). While she could not reflect from the perspective of an editor, A2 recalled her experience as an author who received in-depth and constructive feedback from a reviewer, which really impacted the way she viewed the whole review process. She understood from this experience that even though the paper under review may not be particularly strong, peer reviewers should always aim to provide formative feedback which helps the authors to improve their work.
These positive experiences have shaped the ways the two authors give feedback as peer reviewers. In addition, close engagement with a specific journal has helped A2 to develop a sense of belonging, making it ‘much more than a journal, but also a way to become part of an academic community’ (A2,E2). With such a sense of belonging, she is more likely to be ‘pulled towards that journal than others’ when she can only review a limited number of manuscripts (A2,D2).

Another professional community in which we are both involved is Twitter. We regard Twitter as a platform for self-learning, reflection, and inspiration. We perceive Twitter as a space where we get to learn from others’ peer-review experiences and disciplinary practices. For example, A1 found the tweets on peer review informative ‘because they are written by different stakeholders in the process—the authors, editors, reviewers’ and offer ‘different perspectives and sometimes different versions of the same story’ (A1,E2). A2 recalled a tweet she came across about the ‘infamous Reviewer 2’ and how she learned to not make the same mistakes (A2,D2). Reading other people’s experiences helps us reconsider our own feedback practices and, more broadly, the whole peer-review system because we ‘get a glimpse of the do’s and don’ts for peer reviewers’ (A1,E2).

Further to our three common microsystems, A2 also draws on a unique microsystem, that of her former profession as a teacher, which shapes her feedback practices in three ways. First, in her four years of teacher training, a lot of emphasis was placed on assessment and feedback such as ‘error correction’; this understanding related to giving feedback to students and was solidified through ‘learning on the job’ (A2,D2). Second, A2 acknowledges that as a teacher, she has a passion to ‘guide others in their knowledge and skill development… and continue this in our review practices’ (A2,E2). Finally, her teaching experience prepared her to consider the authors’ emotional responses in her peer-review feedback practices, constantly ‘thinking there’s a person there who’s going to be shattered getting a rejection’ (A2,D2).

Mesosystem

The mesosystem considers the confluence of our interactions in various microsystems. In particular, we experienced a lack of support from our institutions, which pushed us to seek alternative paths to acquiring the art of giving feedback. This has made us realise the importance of self-learning in developing feedback literacy as peer reviewers, especially in learning how to develop constructive and actionable feedback. Both authors self-learn how to give feedback by reading others’ feedback. A1 felt ‘fortunate to be involved in journal editing and Twitter’ because he gets ‘a glimpse of how other peer reviewers give feedback to authors’ (A1,E2). A2, on the other hand, learned through her correspondence with a journal editor who made her stop ‘looking for every word’ and move away from ‘over proofreading and over editing’ (A2,D2).

Chronosystem

Focusing on the chronosystem, we notice that both authors have adjusted how they give feedback over time because of the aggregated influence of their microsystems. What stands out is that they have become more strategic in giving feedback. One way this is achieved is by focusing their comments on the arguments of the manuscripts instead of burning the midnight oil with error-correcting.

Exosystem

The exosystem concerns environments in which the focal individuals do not interact directly with the people in them but about which they have access to information. In A1’s case, his understanding of advising techniques promoted by a self-access language learning centre has been conducive to the cultivation of his feedback literacy. Although A1 is not a part of the language advising team, he has a working relationship with the director. A1 was especially impressed by the learner-centredness of the advising process:

The primary duty of the language advisor is not to be confused with that of a language teacher. Language teachers may teach a lecture on a linguistic feature or correct errors on an essay, but language advisors focus on designing activities and engaging students in dialogues to help them reflect on their own learning needs… The advisors may also suggest useful resources to the students which cater to their needs. In short, language advisors work in partnership with the students to help them improve their language while language teachers are often perceived as more authoritative figures (A1, E2).

This understanding of advising has affected how A1 provides feedback as a peer reviewer in a number of ways. First, A1 places much more emphasis on humanising his feedback, for example, by considering ‘ways to work in partnership with the authors and making this “partnership mindset” explicit to the authors through writing’ (A1,E2). One way to operationalise this ‘partnership mindset’ in peer review is to ‘ask a lot of questions’ and provide ‘multiple suggestions’ for the authors to choose from (A1,E2). Furthermore, his knowledge of the difference between feedback as advice and feedback as instruction has led him to include feedback which points authors to additional resources. Below is a feedback point A1 gave in one of his reviews:

The description of the data analysis process was very brief. While we are not aiming at validity and reliability in qualitative studies, it is important for qualitative researchers to describe in detail how the data collected were analysed (e.g. iterative coding, inductive/deductive coding, thematic analysis) in order to ascertain that the findings were credible and trustworthy. See Johnny Saldaña’s ‘The Coding Manual for Qualitative Researchers’.

Another exosystem that we have knowledge about is formal peer-review training courses provided by publishers. These online courses are usually run asynchronously. Even though we did not enrol in these courses, our interest in peer review has led us to skim the content of these courses. Both of us questioned the value of formal peer-review training in developing feedback literacy of peer reviewers. For example, A2 felt that opportunities to review are more important because they ‘put you in that position where you have responsibility and have to think critically about how you are going to respond’ (A2,D2). To A1, formal peer-review training mostly focuses on developing peer reviewers’ ‘understanding of the whole mechanism’ but not providing ‘training on how to give feedback… For example, do you always ask a question without giving the answers you know? What is a good suggestion?’ (A1,D2).

Macrosystem

The two authors have diverse sociocultural experiences because of their family backgrounds and work contexts. When reflecting on these experiences, A1 focused on his upbringing in Hong Kong, where both of his parents are school teachers, and on his professional experience as a language teacher in secondary and tertiary education in Hong Kong, while A2 discussed her experience of working in academia in Japan as an anglophone.

Observing his parents’ interactions with their students in schools, A1 was immersed in an Asian educational discourse characterised by ‘mutual respect and all sorts of formality’ (A1,E2). After he finished university, A1 became a school teacher and then a university lecturer (equivalent to a teaching fellow in the UK), remaining immersed in the etiquette of educational discourse in Hong Kong. Because of this, A1 knows that being professional means being ‘formal and objective’, and there is a constant expectation to ‘treat people with respect’ (A1,E2). At the same time, his parents are unlike typical Asian parents; they are ‘more open-minded’, which made him more willing to listen and ‘consider different perspectives’ (A1,D2). Additionally, social hierarchy impacted his approach to giving feedback as a peer reviewer. A1 started his career as a school teacher and then a university lecturer in Hong Kong with no formal research training; after obtaining his BA and MA, he only recently obtained his PhD by Prior Publication. Perhaps because of his background as a frontline teacher, A1 did not regard himself as ‘a formally trained researcher’ and perceived himself as not ‘elite enough to give feedback to other researchers’ (A1,E2). Both his childhood and his self-perceived identity have led to the formation of two feedback strategies: asking questions and providing a structured report mimicking the sections of the manuscript. A1 frequently asks questions in his reports ‘in a bid to offset some of the responsibilities to the authors’ (A1,E2). He has also struggled to decide whether to address authors using second- or third-person pronouns; he has typically used third-person pronouns in his feedback because he wants to sound ‘very formal’ (A1,D2), although he shared that he has recently started using second-person pronouns to make his feedback more interactive.

A2, on the other hand, pondered her sociocultural experiences as a school teacher in Australia, her position as an anglophone in a Japanese university, and her status as a first-generation high school graduate. Reflecting on her career as a school teacher, A2 shared that her students had high expectations of her feedback:

So if you give feedback that seems unfair, you know … they’ll turn around and say, ‘What are you talking about’? They’re going to react back if your feedback is not clear. I think a lot of them [the students] appreciate the honesty. (A2,D2)

A2 acknowledges that her identity as a native English speaker has given her an advantage in publishing extensively in international journals because of her high level of English proficiency and her access to ‘data from the US and from Australia which are more marketable’ (A2,D2). At the same time, as a native English speaker, she has empathy for her Japanese colleagues who struggle to write proficiently in English, some of whom even ‘pay thousands of dollars to have their work translated’ (A2,D2). Therefore, when giving feedback as a peer reviewer, she tries not to make a judgement on an author’s English proficiency and will not reject a paper based on the standard of English alone. Finally, as a first-generation scholar without any previous connections to academia, she struggles with belonging and self-confidence. As a result, she notes that it usually takes her a long time to complete a review because she would like to be sure that what she is saying is ‘right or constructive and is not on the wrong track’ (A2,D2).

Implications and future directions

In investigating the manifestations of the authors’ feedback literacy development, and the ecological systems in which this development occurs, this study unpacks the various sources of influence behind our feedback behaviours as two relatively new but highly commended peer reviewers. The findings show that our feedback literacy development is highly personalised and contextualised, and that the sources of influence are diverse and interconnected, albeit largely informal. Our peer-review practices are influenced by our experiences within academia, but the influences are much broader and begin much earlier. Our peer-review skills were enhanced through direct experience not only in peer review but also in other activities related to the peer-review process, and as such more hands-on, on-site feedback training for peer reviewers may be more appropriate than knowledge-based training. The authors gain valuable insights from seeing the reviews of others, and as this is often not possible until scholars take on more senior roles within journals, co-reviewing is a potential way for ECRs to gain experience (McDowell et al., 2019). We draw practical and moral support from various communities, particularly online, which promote “intellectual candour”, that is, honest expressions of vulnerability for learning and trust building (Molloy and Bearman, 2019, p. 32); in response to this finding we have developed an online community of practice, specifically as a space for discussing issues related to peer review (a Twitter account called “Scholarly Peers”). Importantly, our review practices are a product not only of how we review, but of why we review, and as such training should not focus solely on the mechanics of review, but should extend to its role within academia and its impact not only on the quality of scholarship, but on the growth of researchers.

The significance of this study is its insider perspective, and the multifaceted framework that allows the capturing of the complexity of factors that influence individual feedback literacy development of two recognised peer reviewers. It must be stressed that the findings of this study are highly idiosyncratic, focusing on the experiences of only two peer reviewers and the educational research discipline. While the research design is such that it is not an attempt to describe a ‘typical’ or ‘expected’ experience, the scope of the study is a limitation, and future research could be expanded to studies of larger cohorts in order to identify broader trends. In this study, we have not included the reviewer reports themselves, and these reports provide a potentially rich source of data, which will be a focus in our continued investigation in this area. Further research could also investigate the role that peer-review training courses play in the feedback literacy development and practices of new and experienced peer reviewers. Since journal peer review is a communication process, it is equally important to investigate authors’ perspectives and experiences, especially pertaining to how authors interpret reviewers’ feedback based on the ways that it is written.

Data availability

Because of the sensitive nature of the data, these are not made available.

Change history

26 November 2021

A Correction to this paper has been published: https://doi.org/10.1057/s41599-021-00996-3

Abedi Asante L, Abubakari Z (2020) Pursuing PhD by publication in geography: a collaborative autoethnography of two African doctoral researchers. J Geogr High Educ 45(1):87–107. https://doi.org/10.1080/03098265.2020.1803817


Boud D, Dawson P (2021). What feedback literate teachers do: An empirically-derived competency framework. Assess Eval High Educ. Advanced online publication. https://doi.org/10.1080/02602938.2021.1910928

Bronfenbrenner U (1986) Ecology of the family as a context for human development: research perspectives. Dev Psychol 22(6):723–742. https://doi.org/10.1037/0012-1649.22.6.723

Carless D, Boud D (2018) The development of student feedback literacy: enabling uptake of feedback. Assess Eval High Educ 43(8):1315–1325. https://doi.org/10.1080/02602938.2018.1463354

Carless D, Winstone N (2020) Teacher feedback literacy and its interplay with student feedback literacy. Teach High Educ, 1–14. https://doi.org/10.1080/13562517.2020.1782372

Chang H, Ngunjiri FW, Hernandez KC (2013) Collaborative autoethnography. Left Coast Press

Cheung D (2000) Measuring teachers’ meta-orientations to curriculum: application of hierarchical confirmatory factor analysis. J Exp Educ 68(2):149–165. https://doi.org/10.1080/00220970009598500

Chong SW (2021a) Improving peer-review by developing peer reviewers’ feedback literacy. Learn Publ 34(3):461–467. https://doi.org/10.1002/leap.1378

Chong SW (2021b) Reconsidering student feedback literacy from an ecological perspective. Assess Eval High Educ 46(1):92–104. https://doi.org/10.1080/02602938.2020.1730765

Chong SW (2019) College students’ perception of e-feedback: a grounded theory perspective. Assess Eval High Educ 44(7):1090–1105. https://doi.org/10.1080/02602938.2019.1572067

Chong SW (2018) Interpersonal aspect of written feedback: a community college students’ perspective. Res Post-Compul Educ 23(4):499–519. https://doi.org/10.1080/13596748.2018.1526906

Corden A, Sainsbury R (2006) Using verbatim quotations in reporting qualitative social research: the views of research users. University of York Social Policy Research Unit

Ellis C, Adams TE, Bochner AP (2011) Autoethnography: an overview. Hist Soc Res 36(4):273–290

Ellis C, Bochner A (1996) Composing ethnography: Alternative forms of qualitative writing. Sage

Freda MC, Kearney MH, Baggs JG, Broome ME, Dougherty M (2009) Peer reviewer training and editor support: results from an international survey of nursing peer reviewers. J Profession Nurs 25(2):101–108. https://doi.org/10.1016/j.profnurs.2008.08.007

Fulcher G (2012) Assessment literacy for the language classroom. Lang Assess Quart 9(2):113–132. https://doi.org/10.1080/15434303.2011.642041

Gee JP (1999) Reading and the new literacy studies: reframing the national academy of sciences report on reading. J Liter Res 3(3):355–374. https://doi.org/10.1080/10862969909548052

Gravett K, Kinchin IM, Winstone NE, Balloo K, Heron M, Hosein A, Lygo-Baker S, Medland E (2019) The development of academics’ feedback literacy: experiences of learning from critical feedback via scholarly peer review. Assess Eval High Educ 45(5):651–665. https://doi.org/10.1080/02602938.2019.1686749

Hains-Wesson R, Young K (2016) A collaborative autoethnography study to inform the teaching of reflective practice in STEM. High Educ Res Dev 36(2):297–310. https://doi.org/10.1080/07294360.2016.1196653

Han Y, Xu Y (2019) Student feedback literacy and engagement with feedback: a case study of Chinese undergraduate students. Teach High Educ, https://doi.org/10.1080/13562517.2019.1648410

Heesen R, Bright LK (2020) Is Peer Review a Good Idea? Br J Philos Sci, https://doi.org/10.1093/bjps/axz029

Hollywood A, McCarthy D, Spencely C, Winstone N (2019) ‘Overwhelmed at first’: the experience of career development in early career academics. J Furth High Educ 44(7):998–1012. https://doi.org/10.1080/0309877X.2019.1636213

Horn SA (2016) The social and psychological costs of peer review: stress and coping with manuscript rejection. J Manage Inquiry 25(1):11–26. https://doi.org/10.1177/1056492615586597

Hughes S, Pennington JL, Makris S (2012) Translating Autoethnography Across the AERA Standards: Toward Understanding Autoethnographic Scholarship as Empirical Research. Educ Researcher, 41(6):209–219

Kandiko CB(2010) Neoliberalism in higher education: a comparative approach. Int J Art Sci 3(14):153–175. http://www.openaccesslibrary.org/images/BGS220_Camille_B._Kandiko.pdf

Keashly L, Neuman JH (2010) Faculty experiences with bullying in higher education-causes, consequences, and management. Adm Theory Prax 32(1):48–70. https://doi.org/10.2753/ATP1084-1806320103

Kelly J, Sadegieh T, Adeli K (2014) Peer review in scientific publications: benefits, critiques, & a survival guide. J Int Fed Clin Chem Labor Med 25(3):227–243. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4975196/

Google Scholar  

Kumar KL (2020) Understanding and expressing academic identity through systematic autoethnography. High Educ Res Dev, https://doi.org/10.1080/07294360.2020.1799950

Lapadat JC (2017) Ethics in autoethnography and collaborative autoethnography. Qual Inquiry 23(8):589–603. https://doi.org/10.1177/1077800417704462

Levi T, Inbar-Lourie O (2019) Assessment literacy or language assessment literacy: learning from the teachers. Lang Assess Quarter 17(2):168–182. https://doi.org/10.1080/15434303.2019.1692347

London MS, Smither JW (2002) Feedback orientation, feedback culture, and the longitudinal performance management process. Hum Res Manage Rev 12(1):81–100. https://doi.org/10.1016/S1053-4822(01)00043-2

Malecka B, Boud D, Carless D (2020) Eliciting, processing and enacting feedback: mechanisms for embedding student feedback literacy within the curriculum. Teach High Educ, 1–15. https://doi.org/10.1080/13562517.2020.1754784

Mavrogenis AF, Quaile A, Scarlat MM (2020) The good, the bad and the rude peer-review. Int Orthopaed 44(3):413–415. https://doi.org/10.1007/s00264-020-04504-1

McDowell GS, Knutsen JD, Graham JM, Oelker SK, Lijek RS (2019) Co-reviewing and ghostwriting by early-career researchers in the peer review of manuscripts. ELife 8:e48425. https://doi.org/10.7554/eLife.48425

Article   CAS   PubMed   PubMed Central   Google Scholar  

Merga MK, Mason S, Morris JE (2018) Early career experiences of navigating journal article publication: lessons learned using an autoethnographic approach. Learn Publ 31(4):381–389. https://doi.org/10.1002/leap.1192

Miles MB, Huberman AM (1994) Qualitative data analysis: An expanded sourcebook (2nd edn.). Sage

Molloy E, Bearman M (2019) Embracing the tension between vulnerability and credibility: ‘Intellectual candour’ in health professions education. Med Educ 53(1):32–41. https://doi.org/10.1111/medu.13649

Article   PubMed   Google Scholar  

Molloy E, Boud D, Henderson M (2019) Developing a learning-centred framework for feedback literacy. Assess Eval High Educ 45(4):527–540. https://doi.org/10.1080/02602938.2019.1667955

Neal JW, Neal ZP (2013) Nested or networked? Future directions for ecological systems theory. Soc Dev 22(4):722–737. https://doi.org/10.1111/sode.12018

Noble C, Billett S, Armit L, Collier L, Hilder J, Sly C, Molloy E (2020) “It’s yours to take”: generating learner feedback literacy in the workplace. Adv Health Sci Educ Theory Pract 25(1):55–74. https://doi.org/10.1007/s10459-019-09905-5

Price M, Rust C, O’Donovan B, Handley K, Bryant R (2012) Assessment literacy: the foundation for improving student learning. Oxford Centre for Staff and Learning Development

Silbiger NJ, Stubler AD (2019) Unprofessional peer reviews disproportionately harm underrepresented groups in STEM. PeerJ 7:e8247. https://doi.org/10.7717/peerj.8247

Article   PubMed   PubMed Central   Google Scholar  

Starck JM (2017) Scientific peer review: guidelines for informative peer review. Springer Spektrum

Steelman LA, Wolfeld L (2016) The manager as coach: the role of feedback orientation. J Busi Psychol 33(1):41–53. https://doi.org/10.1007/s10869-016-9473-6

Stiggins RJ (1999) Evaluating classroom assessment training in teacher education programs. Educ Meas: Issue Pract 18(1):23–27. https://doi.org/10.1111/j.1745-3992.1999.tb00004.x

Street B (1997) The implications of the ‘new literacy studies’ for literacy Education. Engl Educ 31(3):45–59. https://doi.org/10.1111/j.1754-8845.1997.tb00133.x

Sughrua WM (2019) A nomenclature for critical autoethnography in the arena of disciplinary atomization. Cult Stud Crit Methodol 19(6):429–465. https://doi.org/10.1177/1532708619863459

Sutton P (2012) Conceptualizing feedback literacy: knowing, being, and acting. Innov Educ Teach Int 49(1):31–40. https://doi.org/10.1080/14703297.2012.647781

Article   MathSciNet   Google Scholar  

Tynan BR, Garbett DL (2007) Negotiating the university research culture: collaborative voices of new academics. High Educ Res Dev 26(4):411–424. https://doi.org/10.1080/07294360701658617

Vygotsky LS (1978) Mind in society: The development of higher psychological processes. Harvard University Press

Wall S (2006) An autoethnography on learning about autoethnography. Int J Qual Methods 5(2):146–160. https://doi.org/10.1177/160940690600500205

Article   ADS   MathSciNet   Google Scholar  

Warne V (2016) Rewarding reviewers-sense or sensibility? A Wiley study explained. Learn Publ 29:41–40. https://doi.org/10.1002/leap.1002

Wilkinson S (2019) The story of Samantha: the teaching performances and inauthenticities of an early career human geography lecturer. High Educ Res Dev 38(2):398–410. https://doi.org/10.1080/07294360.2018.1517731

Winstone N, Carless D (2019) Designing effective feedback processes in higher education: a learning-focused approach. Routledge

Winstone NE, Mathlin G, Nash RA (2019) Building feedback literacy: students’ perceptions of the developing engagement with feedback toolkit. Front Educ 4:1–11. https://doi.org/10.3389/feduc.2019.00039

Xu Y, Carless D (2016) ‘Only true friends could be cruelly honest’: cognitive scaffolding and social-affective support in teacher feedback literacy. Assess Eval High Educ 42(7):1082–1094. https://doi.org/10.1080/02602938.2016.1226759

Download references

Author information

Authors and affiliations

Queen’s University Belfast, Belfast, UK

Sin Wang Chong

Nagasaki University, Nagasaki, Japan

Shannon Mason


Corresponding author

Correspondence to Sin Wang Chong .

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

Research data were collected from human subjects (the two authors) in accordance with the standards and guidelines of the School Research Ethics Committee at the School of Social Sciences, Education and Social Work, Queen’s University Belfast (Ref: 005_2021).

Informed consent

Because the participants are the two authors themselves, no separate informed consent form was required.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplemental material file #1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Chong, S.W., Mason, S. Demystifying the process of scholarly peer-review: an autoethnographic investigation of feedback literacy of two award-winning peer reviewers. Humanit Soc Sci Commun 8 , 266 (2021). https://doi.org/10.1057/s41599-021-00951-2


Received : 02 August 2021

Accepted : 12 October 2021

Published : 12 November 2021

DOI : https://doi.org/10.1057/s41599-021-00951-2




Elise Peterson Lu , Brett G. Fischer , Melissa A. Plesac , Andrew P.J. Olson; Research Methods: How to Perform an Effective Peer Review. Hosp Pediatr November 2022; 12 (11): e409–e413. https://doi.org/10.1542/hpeds.2022-006764

Scientific peer review has existed for centuries and is a cornerstone of the scientific publication process. Because the number of scientific publications has rapidly increased over the past decades, so has the number of peer reviews and peer reviewers. In this paper, drawing on the relevant medical literature and our collective experience as peer reviewers, we provide a user guide to the peer review process, including discussion of the purpose and limitations of peer review, the qualities of a good peer reviewer, and a step-by-step process of how to conduct an effective peer review.

Peer review has been a part of scientific publications since 1665, when the Philosophical Transactions of the Royal Society became the first publication to formalize a system of expert review. 1,2 It became an institutionalized part of science in the latter half of the 20th century and is now the standard in scientific research publications. 3 In 2012, there were more than 28 000 scholarly peer-reviewed journals, and more than 3 million peer-reviewed articles are now published annually. 3,4 However, even with this volume, most peer reviewers learn to review “on the (unpaid) job,” and no standard training system exists to ensure quality and consistency. 5 Expectations and format vary between journals, and most, but not all, provide basic instructions for reviewers. In this paper, we provide a general introduction to the peer review process and identify common strategies for success as well as pitfalls to avoid.

What Is the Purpose of Peer Review?

Modern peer review serves 2 primary purposes: (1) as “a screen before the diffusion of new knowledge” 6 and (2) as a method to improve the quality of published work. 1,5

As screeners, peer reviewers evaluate the quality, validity, relevance, and significance of research before publication to maintain the credibility of the publications they serve and their fields of study. 1 , 2 , 7   Although peer reviewers are not the final decision makers on publication (that role belongs to the editor), their recommendations affect editorial decisions and thoughtful comments influence an article’s fate. 6 , 8  

As advisors and evaluators of manuscripts, reviewers have an opportunity and responsibility to give authors an outside expert’s perspective on their work. 9   They provide feedback that can improve methodology, enhance rigor, improve clarity, and redefine the scope of articles. 5 , 8 , 10   This often happens even if a paper is not ultimately accepted at the reviewer’s journal because peer reviewers’ comments are incorporated into revised drafts that are submitted to another journal. In a 2019 survey of authors, reviewers, and editors, 83% said that peer review helps science communication and 90% of authors reported that peer review improved their last paper. 11  

What Makes a Good Peer Reviewer?

Expertise: Peer reviewers should be up to date with current literature, practice guidelines, and methodology within their subject area. However, academic rank and seniority do not define expertise and are not actually correlated with performance in peer review. 13

Professionalism: Reviewers should be reliable and objective, aware of their own biases, and respectful of the confidentiality of the peer review process.

Critical skill: Reviewers should be organized, thorough, and detailed in their critique with the goal of improving the manuscript under their review, regardless of disposition. They should provide constructive comments that are specific and addressable, referencing literature when possible. A peer reviewer should leave a paper better than he or she found it.

How Do You Decide Whether to Review a Paper?

Is the manuscript within your area of expertise? Generally, if you are asked to review a paper, it is because an editor felt that you were a qualified expert. In a 2019 survey, 74% of requested reviews were within the reviewer’s area of expertise. 11 This, of course, does not mean that you must be widely published in the area, only that you have enough expertise and comfort with the topic to critique and add to the paper.

Do you have any biases that may affect your review? Are there elements of the methodology, content area, or theory with which you disagree? Some disagreements between authors and reviewers are common, expected, and even helpful. However, if a reviewer fundamentally disagrees with an author’s premise such that he or she cannot be constructive, the review invitation should be declined.

Do you have the time? The average review for a clinical journal takes 5 to 6 hours, though many take longer depending on the complexity of the research and the experience of the reviewer. 1 , 14   Journals vary on the requested timeline for return of reviews, though it is usually 1 to 4 weeks. Peer review is often the longest part of the publication process and delays contribute to slower dissemination of important work and decreased author satisfaction. 15   Be mindful of your schedule and only accept a review invitation if you can reasonably return the review in the requested time.

Once you have determined that you are the right person and decided to take on the review, reply to the inviting e-mail or click the associated link to accept (or decline) the invitation. Journal editors invite a limited number of reviewers at a time and wait for responses before inviting others. A common complaint among journal editors surveyed was that reviewers would often take days to weeks to respond to requests, or not respond at all, making it difficult to find appropriate reviewers and prolonging an already long process. 5  

How Do You Complete a Peer Review?

Now that you have decided to take on the review, it is best to have a systematic way of both evaluating the manuscript and writing the review. Various suggestions exist in the literature, but we will describe our standard procedure for review, incorporating specific do’s and don’ts summarized in Table 1.

Dos and Don’ts of Peer Review

First, read the manuscript once without making notes or forming opinions to get a sense of the paper as a whole. Assess the overall tone and flow and identify what the authors present as the main point of their work. Does the work overall make sense? Do the authors tell the story effectively?

Next, read the manuscript again with an eye toward review, taking notes and formulating thoughts on strengths and weaknesses. Consider the methodology and identify the specific type of research described. Refer to the corresponding reporting guideline if applicable (CONSORT for randomized controlled trials, STROBE for observational studies, PRISMA for systematic reviews). Reporting guidelines often include a checklist, flow diagram, or structured text giving a minimum list of information needed in a manuscript based on the type of research done. 16 This allows the reviewer to formulate a more nuanced and specific assessment of the manuscript.
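To make the checklist step concrete, here is a toy sketch of tracking guideline items while you read. The three items are paraphrased examples of the kind of entries such checklists contain, not the official wording of CONSORT, STROBE, or PRISMA:

```python
# Toy sketch: track reporting-guideline items while reading a manuscript.
# The items below are paraphrased examples, not official checklist wording.

checklist = {
    "Eligibility criteria for participants are described": True,
    "How the sample size was determined is explained": False,
    "Handling of missing data is reported": False,
}

# Items still unchecked become specific, addressable reviewer comments.
missing = [item for item, present in checklist.items() if not present]
for i, item in enumerate(missing, start=1):
    print(f"{i}. Please add: {item.lower()}")
```

Keeping notes in this form makes it easy to turn gaps into the kind of specific, addressable comments described above.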

Next, review the main findings, the significance of the work, and what contribution it makes to the field. Examine the presentation and flow of the manuscript but do not copy edit the text. At this point, you should start to write your review. Some journals provide a format for their reviews, but often it is up to the reviewer. In surveys of journal editors and reviewers, a review organized by manuscript section was the most favored, 5 , 6   so that is what we will describe here.

As you write your review, consider starting with a brief summary of the work that identifies the main topic, explains the basic approach, and describes the findings and conclusions. 12 , 17   Though not universally included in all reviews, we have found this step to be helpful in ensuring that the work is conveyed clearly enough for the reviewer to summarize it. Include brief notes on the significance of the work and what it adds to current knowledge. Critique the presentation of the work: is it clearly written? Is its length appropriate? List any major concerns with the work overall, such as major methodological flaws or inaccurate conclusions that should disqualify it from publication, though do not comment directly on disposition. Then perform your review by section:

Abstract: Is it consistent with the rest of the paper? Does it adequately describe the major points?

Introduction: This section should provide adequate background to explain the need for the study. Generally, classic or highly relevant studies should be cited, but citations do not have to be exhaustive. The research question and hypothesis should be clearly stated.

Methods: Evaluate both the methods themselves and the way in which they are explained. Does the methodology used meet the needs of the questions proposed? Is there sufficient detail to explain what the authors did and, if not, what needs to be added? For clinical research, examine the inclusion/exclusion criteria, control populations, and possible sources of bias. Reporting guidelines can be particularly helpful in determining the appropriateness of the methods and how they are reported.

Some journals will expect an evaluation of the statistics used, whereas others will have a separate statistician evaluate, and the reviewers are generally not expected to have an exhaustive knowledge of statistical methods. Clarify expectations if needed and, if you do not feel qualified to evaluate the statistics, make this clear in your review.

Results: Evaluate the presentation of the results. Is information given in sufficient detail to assess credibility? Are the results consistent with the methodology reported? Are the figures and tables consistent with the text, easy to interpret, and relevant to the work? Make note of data that could be better detailed in figures or tables, rather than included in the text. Make note of inappropriate interpretation in the results section (this should be in discussion) or rehashing of methods.

Discussion: Evaluate the authors’ interpretation of their results, how they address limitations, and the implications of their work. How does the work contribute to the field, and do the authors adequately describe those contributions? Make note of overinterpretation or conclusions not supported by the data.

The length of your review often correlates with your opinion of the quality of the work. If an article has major flaws that you think preclude publication, write a brief review that focuses on the big picture. Articles that may not be accepted but still represent quality work merit longer reviews aimed at helping the author improve the work for resubmission elsewhere.

Generally, do not include your recommendation on disposition in the body of the review itself. Acceptance or rejection is ultimately determined by the editor and including your recommendation in your comments to the authors can be confusing. A journal editor’s decision on acceptance or rejection may depend on more factors than just the quality of the work, including the subject area, journal priorities, other contemporaneous submissions, and page constraints.

Many submission sites include a separate question asking whether to accept, accept with major revision, or reject. If this specific format is not included, then add your recommendation in the “confidential notes to the editor.” Your recommendation should be consistent with the content of your review: don’t give a glowing review but recommend rejection or harshly criticize a manuscript but recommend publication. Last, regardless of your ultimate recommendation on disposition, it is imperative to use respectful and professional language and tone in your written review.

Limitations of Peer Review

Although peer review is often described as the “gatekeeper” of science and characterized as a quality control measure, it is not ideally designed to detect fundamental errors, plagiarism, or fraud. In multiple studies, peer reviewers detected only 20% to 33% of intentionally inserted errors in scientific manuscripts. 18,19 Plagiarism similarly goes undetected in peer review, largely because of the huge volume of literature available to plagiarize; most journals now use computer software to identify plagiarism before a manuscript goes to peer review. Finally, outright fraud often goes undetected in peer review. Reviewers start from a position of respect for the authors and trust the data they are given, barring obvious inconsistencies. Ultimately, reviewers are “gatekeepers, not detectives.” 7

Peer review is also limited by bias. Even with the best of intentions, reviewers bring biases including but not limited to prestige bias, affiliation bias, nationality bias, language bias, gender bias, content bias, confirmation bias, bias against interdisciplinary research, publication bias, conservatism, and bias of conflict of interest. 3 , 4 , 6   For example, peer reviewers score methodology higher and are more likely to recommend publication when prestigious author names or institutions are visible. 20   Although bias can be mitigated both by the reviewer and by the journal, it cannot be eliminated. Reviewers should be mindful of their own biases while performing reviews and work to actively mitigate them. For example, if English language editing is necessary, state this with specific examples rather than suggesting the authors seek editing by a “native English speaker.”

Conclusions

Peer review is an essential, though imperfect, part of the forward movement of science. Peer review can function as both a gatekeeper to protect the published record of science and a mechanism to improve research at the level of individual manuscripts. Here, we have described our strategy, summarized in Table 2, for performing a thorough peer review, with a focus on organization, objectivity, and constructiveness. By using a systematized strategy to evaluate manuscripts and an organized format for writing reviews, you can provide a relatively objective perspective in editorial decision-making. By providing specific and constructive feedback to authors, you contribute to the quality of the published literature.

Take-home Points

FUNDING: No external funding.

CONFLICT OF INTEREST DISCLOSURES: The authors have indicated they have no potential conflicts of interest to disclose.

Dr Lu performed the literature review and wrote the manuscript. Dr Fischer assisted in the literature review and reviewed and edited the manuscript. Dr Plesac provided background information on the process of peer review, reviewed and edited the manuscript, and completed revisions. Dr Olson provided background information and practical advice, critically reviewed and revised the manuscript, and approved the final manuscript.



What is peer review?

Peer review is ‘a process where scientists (“peers”) evaluate the quality of other scientists’ work. By doing this, they aim to ensure the work is rigorous, coherent, uses past research and adds to what we already know.’ You can learn more in this explainer from the Social Science Space.  


Peer review brings academic research to publication in the following ways:

  • Evaluation – Peer review is an effective form of research evaluation to help select the highest quality articles for publication.
  • Integrity – Peer review ensures the integrity of the publishing process and the scholarly record. Reviewers are independent of journal publications and the research being conducted.
  • Quality – The filtering process and revision advice improve the quality of the final research article as well as offering the author new insights into their research methods and the results that they have compiled. Peer review gives authors access to the opinions of experts in the field who can provide support and insight.

Types of peer review

  • Single-anonymized  – the name of the reviewer is hidden from the author.
  • Double-anonymized  – names are hidden from both reviewers and the authors.
  • Triple-anonymized  – names are hidden from authors, reviewers, and the editor.
  • Open peer review comes in many forms . At Sage we offer a form of open peer review on some journals via our Transparent Peer Review program , whereby the reviews are published alongside the article. The names of the reviewers may also be published, depending on the reviewers’ preference.
  • Post publication peer review can offer useful interaction and a discussion forum for the research community. This form of peer review is not usual or appropriate in all fields.

To learn more about the different types of peer review, see page 14 of ‘ The Nuts and Bolts of Peer Review ’ from Sense about Science.

Please double check the manuscript submission guidelines of the journal you are reviewing in order to ensure that you understand the method of peer review being used.



How to Write a Peer Review


When you write a peer review for a manuscript, what should you include in your comments? What should you leave out? And how should the review be formatted?

This guide provides quick tips for writing and organizing your reviewer report.

Review Outline

Use an outline for your reviewer report so it’s easy for the editors and author to follow. This will also help you keep your comments organized.

Think about structuring your review like an inverted pyramid. Put the most important information at the top, followed by details and examples in the center, and any additional points at the very bottom.


Here’s how your outline might look:

1. Summary of the research and your overall impression

In your own words, summarize what the manuscript claims to report. This shows the editor how you interpreted the manuscript and will highlight any major differences in perspective between you and the other reviewers. Give an overview of the manuscript’s strengths and weaknesses. Think about this as your “take-home” message for the editors. End this section with your recommended course of action.

2. Discussion of specific areas for improvement

It’s helpful to divide this section into two parts: one for major issues and one for minor issues. Within each section, you can talk about the biggest issues first or go systematically figure-by-figure or claim-by-claim. Number each item so that your points are easy to follow (this will also make it easier for the authors to respond to each point). Refer to specific lines, pages, sections, or figure and table numbers so the authors (and editors) know exactly what you’re talking about.

Major vs. minor issues

What’s the difference between a major and minor issue? Major issues should consist of the essential points the authors need to address before the manuscript can proceed. Make sure you focus on what is fundamental for the current study. In other words, it’s not helpful to recommend additional work that would be considered the “next step” in the study. Minor issues are still important but typically will not affect the overall conclusions of the manuscript. Here are some examples of what might go in the “minor” category:

  • Missing references (but depending on what is missing, this could also be a major issue)
  • Technical clarifications (e.g., the authors should clarify how a reagent works)
  • Data presentation (e.g., the authors should present p-values differently)
  • Typos, spelling, grammar, and phrasing issues

3. Any other points

Confidential comments for the editors.

Some journals have a space for reviewers to enter confidential comments about the manuscript. Use this space to mention concerns about the submission that you’d want the editors to consider before sharing your feedback with the authors, such as concerns about ethical guidelines or language quality. Any serious issues should be raised directly and immediately with the journal as well.

This section is also where you will disclose any potentially competing interests, and mention whether you’re willing to look at a revised version of the manuscript.

Do not use this space to critique the manuscript, since comments entered here will not be passed along to the authors.  If you’re not sure what should go in the confidential comments, read the reviewer instructions or check with the journal first before submitting your review. If you are reviewing for a journal that does not offer a space for confidential comments, consider writing to the editorial office directly with your concerns.
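Put together, the outline above might look like the following skeleton (the bracketed prompts are placeholders to replace with your own comments):

```
1. Summary and overall impression
   [One-paragraph restatement, in your own words, of what the manuscript claims]
   [Overview of the manuscript's strengths and weaknesses]
   [Recommended course of action]

2. Specific areas for improvement
   Major issues:
   1. [Essential point, citing the specific line, page, section, or figure]
   2. [...]
   Minor issues:
   1. [Missing reference, technical clarification, data presentation, typo, etc.]
   2. [...]

3. Any other points

(Confidential comments for the editors go in the journal's separate field,
not in the report shared with the authors.)
```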


Giving Feedback

Giving feedback is hard. Giving effective feedback can be even more challenging. Remember that your ultimate goal is to discuss what the authors would need to do in order to qualify for publication. The point is not to nitpick every piece of the manuscript. Your focus should be on providing constructive and critical feedback that the authors can use to improve their study.

If you’ve ever had your own work reviewed, you already know that it’s not always easy to receive feedback. Follow the golden rule: Write the type of review you’d want to receive if you were the author. Even if you decide not to identify yourself in the review, you should write comments that you would be comfortable signing your name to.

In your comments, use phrases like “the authors’ discussion of X” instead of “your discussion of X.” This will depersonalize the feedback and keep the focus on the manuscript instead of the authors.

General guidelines for effective feedback

Do

  • Justify your recommendation with concrete evidence and specific examples.
  • Be specific so the authors know what they need to do to improve.
  • Be thorough. This might be the only time you read the manuscript.
  • Be professional and respectful. The authors will be reading these comments too.
  • Remember to say what you liked about the manuscript!


Don’t

  • Recommend additional experiments or unnecessary elements that are out of scope for the study or for the journal criteria.
  • Tell the authors exactly how to revise their manuscript—you don’t need to do their work for them.
  • Use the review to promote your own research or hypotheses.
  • Focus on typos and grammar. If the manuscript needs significant editing for language and writing quality, just mention this in your comments.
  • Submit your review without proofreading it and checking everything one more time.

Before and After: Sample Reviewer Comments

Keeping in mind the guidelines above, how do you put your thoughts into words? Here are some sample “before” and “after” reviewer comments:

✗ Before

“The authors appear to have no idea what they are talking about. I don’t think they have read any of the literature on this topic.”

✓ After

“The study fails to address how the findings relate to previous research in this area. The authors should rewrite their Introduction and Discussion to reference the related literature, especially recently published work such as Darwin et al.”

✗ Before

“The writing is so bad, it is practically unreadable. I could barely bring myself to finish it.”

✓ After

“While the study appears to be sound, the language is unclear, making it difficult to follow. I advise the authors to work with a writing coach or copyeditor to improve the flow and readability of the text.”

✗ Before

“It’s obvious that this type of experiment should have been included. I have no idea why the authors didn’t use it. This is a big mistake.”

✓ After

“The authors are off to a good start; however, this study requires additional experiments, particularly [type of experiment]. Alternatively, the authors should include more information that clarifies and justifies their choice of methods.”

Suggested Language for Tricky Situations

You might find yourself in a situation where you’re not sure how to explain the problem or provide feedback in a constructive and respectful way. Here is some suggested language for common issues you might experience.

What you think: The manuscript is fatally flawed. What you could say: “The study does not appear to be sound” or “the authors have missed something crucial.”

What you think: You don’t completely understand the manuscript. What you could say: “The authors should clarify the following sections to avoid confusion…”

What you think: The technical details don’t make sense. What you could say: “The technical details should be expanded and clarified to ensure that readers understand exactly what the researchers studied.”

What you think: The writing is terrible. What you could say: “The authors should revise the language to improve readability.”

What you think: The authors have over-interpreted the findings. What you could say: “The authors aim to demonstrate [XYZ]; however, the data does not fully support this conclusion. Specifically…”

What does a good review look like?

Check out the peer review examples at F1000 Research to see how other reviewers write up their reports and give constructive feedback to authors.

Time to Submit the Review!

Be sure you turn in your report on time. Need an extension? Tell the journal so that they know what to expect. If you need a lot of extra time, the journal might need to contact other reviewers or notify the author about the delay.

Tip: Building a relationship with an editor

You’ll be more likely to be asked to review again if you provide high-quality feedback and if you turn in the review on time. Especially if it’s your first review for a journal, it’s important to show that you are reliable. Prove yourself once and you’ll get asked to review again!


How Scientific and Technical Information Works


Peer Review in a Nutshell



Peer review is a formal process used for reviewing and editing scientific writing for publication. It is an essential component of the scholarly publishing process. When done correctly, it is a vital part of building trust and maintaining high standards of quality for published research. Here are the main steps in the process:

  • The author submits their work to a journal.
  • The journal's editor reads it and decides whether to bring it under review. At this stage they're looking for first indications not just of the quality of the article and the science underpinning it, but also of the relevance of the research to the field and to the journal itself.
  • The review is often "double blind," which means the author doesn't know who the reviewers are, and the reviewers don't know who the author is.
  • Each reviewer assesses the article for a number of factors, including the overall quality of the author's study or experiment, the soundness of the results, whether the author's conclusions or discoveries are meaningful to the field, and how the writing can be improved.
  • Reviewers provide feedback to the journal editor, including a recommendation (accept, accept with revisions, or reject) and suggestions for revisions to the article. The editor aggregates all of this information and shares it with the author.
  • There may be multiple rounds of revisions, as the author makes updates and the editor accepts or rejects them. This is often the lengthiest part of the publication process.
  • Eventually, if all goes well, the editor will decide the article is ready to be accepted. Once it is published, it is considered a peer reviewed article.
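One round of the workflow above can be sketched as a small decision function. This is a toy model only: the aggregation rule here is hypothetical, since real editors weigh the reasoning in reviewer reports, not just the recommendations.

```python
# Toy sketch of one round of the editorial loop described above.
# The decision rule is illustrative, not how any real journal decides.

def editorial_round(passed_desk_check, recommendations):
    """Return the editor's decision for one round of peer review.

    passed_desk_check: False if the editor rejected without sending for review.
    recommendations: list of reviewer recommendations, each one of
        "accept", "accept with revisions", or "reject".
    """
    if not passed_desk_check:
        return "desk rejection"
    if any(r == "reject" for r in recommendations):
        return "reject"
    if all(r == "accept" for r in recommendations):
        return "accept"
    # Mixed accept / accept-with-revisions: another round (re-review).
    return "revise and resubmit"
```

For example, `editorial_round(True, ["accept", "accept with revisions"])` models the common case where the author is asked to revise and resubmit before the reviewers see the manuscript again.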

1 Book chapters published in edited volumes may or may not have gone through a peer review process. This is dependent on the policies of the publishers of the book. Look for information about their review process on their website.

2 Dissertations go through a rigorous review as part of the author's defense process for their PhD. If the author is successful, they are usually able to publish their dissertation through the university's repository, but they may also choose to submit parts of it for publication in academic journals or other forums, in which case it would undergo further reviews, including peer review.

3 Conference proceedings may go through anything from a brief review for publishability, all the way to a traditional peer review process. This depends on the organization responsible for hosting the conference and publishing the proceedings. Look for information about their review process on their website.

To determine whether your article is peer reviewed, you need to find out if the journal it is published in is peer reviewed (also known as "refereed"). There are a few quick ways to do this: 

  • The most straightforward way is to locate the journal's website online through a quick Google search, and check for a page or section on their editorial process. Usually, journals that are peer reviewed provide this information up-front on their websites, because they want their readers to know what kind and caliber of publication they are.


  • Ulrich's: provides detailed publication information about magazines, journals, newspapers, newsletters, e-zines and more.
  • "Peer Review: The Worst Way to Judge Research, Except for All the Others" by Aaron E. Carroll, New York Times (2018) This New York Times Upshot article takes a look at many of the problems with the peer review system, and what might be done to address them.
  • "Scammers impersonate guest editors to get sham papers published" by Holly Else, Nature News (2021) It's important to be aware of the limits of peer review, and this recent (Nov. 2021) news article from Nature Publishing is an all-too-real example of how scammers have recently been able to exploit flaws in the system on a massive scale.
  • "Ivermectin shows that not all science is worth following" by James Heathers, the Atlantic (2021) A researcher who conducts "forensic peer review" explains many of the ways that the COVID-19 pandemic has upended already unstable publishing practices, and how low-quality scientific studies can make their way into the literature, creating a false impression of, for example, the efficacy of ivermectin as an antiviral.


  • Open Peer Review (Public Library of Science) There are many ways that the review process can look. Learn more about the work PLoS and other organizations are doing to open up the process and increase transparency in peer reviewing.
  • Last Updated: Mar 28, 2024 12:30 PM
  • URL: https://researchguides.gonzaga.edu/STEMinformationliteracy

Social Work Research (University of Texas Libraries)

What is Peer Review?

Peer Review is a critical part of evaluating information. It is a process that journals use to ensure the articles they publish represent the best scholarship currently available, and articles from peer reviewed journals are often grounded in empirical research. When an article is submitted to a peer reviewed journal, the editors send it out to other scholars in the same field (the author's peers) to get their assessment of the quality of the scholarship, its relevance to the field, its appropriateness for the journal, etc. Sometimes, you'll see this referred to as "refereed."

Publications that don't use peer review (Time, Cosmo, Salon) rely on an editor to determine the value of an article. Their goal is mainly to educate or entertain the general public, not to support scholarly research.

Most library databases will have a search feature that allows you to limit your results to peer reviewed or scholarly sources.

If you can't tell whether or not a journal is peer-reviewed, check Ulrichsweb.

  • Access the database here: Ulrichsweb
  • Type in the title of the journal
  • Peer-reviewed journals are marked with a referee jersey icon ("refereed" is another term for "peer-reviewed")

Evaluation Criteria

Use the criteria below to help you evaluate a source.  As you do, remember:

  • Each criterion should be considered in the context of your research question. For example, how much currency matters changes if you are working on a current event vs. a historical topic.
  • Weigh all four criteria when making your decision. For example, the information may appear accurate, but if the authority is suspect you may want to find a more authoritative source for your information.

Criteria to consider:

  • Currency : When was the information published or last updated? Is it current enough for your topic?
  • Relevance : Is this information that you are looking for? Is it related to your topic? Is it detailed enough to help you answer questions on your topic?
  • Accuracy : Is the information true? What information does the author cite or refer to? Can you find this information anywhere else? Can you find evidence to back it up from another resource? Are studies mentioned but not cited (this would be something to check on)? Can you locate those studies? The Web of Science database and/or Google Scholar can help here.
  • Methodology:  What type of study did they conduct? Is it an appropriate type of study to answer their research question?  How many people were involved in the study? Is the sample size large and diverse enough to give trustworthy results?
  • Purpose/perspective : What is the purpose of the information? Was it written to sell something or to convince you of something? Is this fact or opinion based? Is it unfairly biased?

Learn more about how to evaluate your source

  • Evaluate Sources

Source Evaluation Chart

  • Source Evaluation Chart If you find it helpful, use this chart to evaluate sources as you read them. This is just a sample of some of the questions you might ask.
  • Last Updated: Jan 26, 2024 7:14 AM
  • URL: https://guides.lib.utexas.edu/socialwork

How to peer review

Author tutorials 

For science to progress, research methods and findings need to be closely examined and verified, and from them a decision on the best direction for future research is made. After a study has gone through peer review and is accepted for publication, scientists and the public can be confident that the study has met certain standards, and that the results can be trusted.

What you will get from this course

When you have completed this course and the included quizzes, you will have gained the skills needed to evaluate another researcher’s manuscript in a way that will help a journal Editor make a decision about publication. Additionally, having successfully completed the quizzes will let you demonstrate that competence to the wider research community.

Topics covered

How the peer review process works

Journals use peer review to both validate the research reported in submitted manuscripts, and sometimes to help inform their decisions about whether or not to publish that article in their journal. 

If the Editor does not immediately reject the manuscript (a “desk rejection”), the editor will send the manuscript to two or more experts in the field to review it. The experts—called peer reviewers—will then prepare a report that assesses the manuscript, and return it to the editor. After reading the peer reviewers’ reports, the editor will decide to do one of three things: reject the manuscript, accept the manuscript, or ask the authors to revise and resubmit the manuscript after responding to the peer reviewers’ feedback. If the authors resubmit the manuscript, editors will sometimes ask the same peer reviewers to look over the manuscript again to see if their concerns have been addressed. This is called re-review.

Some of the problems that peer reviewers may find in a manuscript include errors in the study’s methods or analysis that raise questions about the findings, or sections that need clearer explanations so that the manuscript is easily understood. From a journal editor’s point of view, comments on the importance and novelty of a manuscript, and if it will interest the journal’s audience, are particularly useful in helping them to decide which manuscripts to publish.

Will the authors know I am a reviewer? Will I know who the authors are? 

Traditionally, peer review worked in a way we now call “closed,” where the editor and the reviewers knew who the authors were, but the authors did not know who the reviewers were. In recent years, however, many journals have begun to develop other approaches to peer review. These include:

  • Closed peer review — where the reviewers are aware of the authors’ identities but the authors are never informed of the reviewers’ identities.
  • Double-blind peer review —where neither author nor reviewer is aware of each other’s identities.
  • Open peer review —where authors and reviewers are aware of each other’s identity. In some journals with open peer review the reviewers’ reports are published alongside the article.

The type of peer review used by a journal should be clearly stated in the invitation to review letter you receive and policy pages on the journal website. If, after checking the journal website, you are unsure of the type of peer review used or would like clarification on the journal’s policy you should contact the journal’s editors.

Why serve as a peer reviewer?

As your career advances, you are likely to be asked to serve as a peer reviewer.

As well as supporting the advancement of science, and providing guidance on how the author can improve their paper, there are also some benefits of peer reviewing to you as a researcher:

  • Serving as a peer reviewer looks good on your CV as it shows that your expertise is recognized by other scientists. (See the supplemental material about the Web of Science Reviewer Recognition Service to learn more about getting credit for the reviews you do. Also see the supplemental material about ORCID iDs to learn how to connect your reviews to your unique ORCID iD.)
  • You will get to read some of the latest science in your field well before it is in the public domain.
  • The critical thinking skills needed during peer review will help you in your own research and writing.

Who does peer review benefit?

When performed correctly peer review helps improve the clarity, robustness and reproducibility of research.

When peer reviewing, it is helpful to think from the point of view of three different groups of people:

  • Authors . Try to review the manuscript as you would like others to review your work. When you point out problems in a manuscript, do so in a way that will help the authors to improve the manuscript. Even if you recommend to the editor that the manuscript be rejected, your suggested revisions could help the authors prepare the manuscript for submission to a different journal. 
  • Journal editors . Comment on the importance and novelty of the study. Editors will use your comments to assess whether the manuscript is of the right level of impact for the journal. Your comments and opinions on the paper are much more important than a simple recommendation; editors need to know why you think a paper should be published or rejected as your reasoning will help inform their decision.
  • Readers . Identify areas that need clarification to make sure other readers can easily understand the manuscript. As a reviewer, you can also save readers’ time and frustration by helping to keep unimportant or error-filled research out of the published literature.

Writing a thorough, thoughtful review usually takes several hours or more. But by taking the time to be a good reviewer, you will be providing a service to the scientific community.  

Accepting an invitation to review

Editors invite you to review as they believe that you are an expert in a certain area. They would have judged this from your previous publication record or posters and/or sessions you have contributed to at conferences. You may find that the number of invitations to review increases as you progress in your career.

There are several questions to consider before you accept an invitation to review a paper.

  • Are you qualified? The editor has asked you to review the manuscript because he or she believes you are familiar with the specific topic or research method used in the paper. It will usually be okay if you can review some, but not all, aspects of a manuscript. For example, the study may focus on a physiological process in an animal model you work with, but use a technique that you have never used. In this case, simply review the parts of the manuscript that are in your area of expertise, and tell the editor which parts you cannot review. However, if the manuscript is too far outside your area, you should decline to review it.
  • Do you have time? If you know you will not be able to review the manuscript by the deadline, then you should not accept the invitation. Sending in a review long after the deadline will delay the publication process and frustrate the editor and authors. Keep in mind that reviewing manuscripts, like research and teaching, is a valuable contribution to science, and is worth making time for whenever possible.
  • Do you have a conflict of interest? You should decline to review if, for example:
  • The reported results could cause you to make or lose money, e.g., the authors are developing a drug that could compete with a drug you are working on.
  • The manuscript concerns a controversial question that you have strong feelings about (either agreeing or disagreeing with the authors).
  • You have strong positive or negative feelings about one of the authors, e.g., a former teacher who you admire greatly.
  • You have published papers or collaborated with one of the co-authors in recent years.

If you are not sure if you have a conflict of interest, discuss your circumstances with the editor.

Along with avoiding a conflict of interest, there are several other ethical guidelines to keep in mind as you review the manuscript. Manuscripts under review are highly confidential, so you should not discuss the manuscript – or even mention its existence – to others. One exception is if you would like to consult with a colleague about your review; in this case, you will need to ask the editor’s permission. It is normally okay to ask one of your students or postdocs to help with the review. However, you should let the editor know that you are being helped, and tell your assistant about the need for confidentiality. In some cases, when the journal operates an open peer review policy, it will allow the student or postdoc to co-sign the report with you, should they wish.

It is highly unethical to use information in the manuscript to make business decisions, such as buying or selling stock. You should also never plagiarize the content or ideas in the manuscript.

Next: Evaluating manuscripts

For further support

We hope that with this tutorial you have a clearer idea of how the peer review process works and feel confident in becoming a peer reviewer.

If you feel that you would like some further support with writing, reviewing, and publishing, Springer Nature offer some services which may be of help.

  • Nature Research Editing Service offers high-quality English language and scientific editing. During language editing, editors will improve the English in your manuscript to ensure the meaning is clear and identify problems that require your review. With Scientific Editing, experienced development editors will improve the scientific presentation of your research in your manuscript and cover letter, if supplied. They will also provide you with a report containing feedback on the most important issues identified during the edit, as well as journal recommendations.
  • Our affiliates American Journal Experts also provide English language editing* as well as other author services that may support you in preparing your manuscript.
  • We provide both online and face-to-face training for researchers on all aspects of the manuscript writing process.

* Please note, using an editing service is neither a requirement nor a guarantee of acceptance for publication. 

Test your knowledge

Take the Quiz!

What’s peer review? 5 things you should know before covering research

Is peer-reviewed research really superior? Why should journalists note in their stories whether studies have been peer reviewed? We explain.


This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

by Denise-Marie Ordway, The Journalist's Resource May 8, 2021


As scholars and other experts rush to release new research aimed at better understanding the coronavirus pandemic, newsrooms must be more careful than ever in vetting the biomedical studies they choose to cover. One of the first steps journalists should take to gauge the quality of all types of research is answering this important question: Has the paper undergone peer review?

Peer review is a formal process through which researchers evaluate and provide feedback on one another’s work, ideally filtering out flawed and low-quality studies while strengthening others. Academic journals generally do not publish papers that have not survived the process. Researchers often share studies that have not been peer reviewed — usually referred to as working papers or preprints — by posting them to online servers and repositories.

It’s worth noting the world’s largest preprint servers for life sciences — bioRxiv — and health sciences — medRxiv — screen papers for plagiarism and content that is offensive, non-scientific or might pose a health or biosecurity risk. But there are preprint servers in other fields that do not apply the same level of scrutiny.

While peer review is intended for quality control, it is imperfect. For example, reviewers, who often are college faculty with expertise in the same field as the work they are examining, sometimes fail to detect fraud, data discrepancies and other problems. Even some of the most prestigious journals with the most rigorous peer-review processes have had to retract articles. Retractions are rare, however.

“Only about four of every 10,000 papers are now retracted. And although the rate roughly doubled from 2003 to 2009, it has remained level since 2012,” Science magazine reported in 2018.

As of early May 2021, a total of 108 papers about COVID-19, the bulk of which appeared in journals, had been withdrawn, according to Retraction Watch, which maintains an online database of research retractions going back decades.

Despite its flaws, researchers, overall, seem confident in peer review. During a 2019 survey of more than 3,000 researchers across disciplines in multiple countries, 85% agreed or strongly agreed that without peer review, there is no control in scientific communication. The survey — conducted by Elsevier, one of the world’s largest journal publishers, and Sense about Science, a London-based nonprofit promoting public interest in science and evidence — also finds 90% of participating researchers agreed or strongly agreed that peer review improves the quality of research.

Several published studies present similar findings. A 2017 paper in Learned Publishing indicates early career researchers are “generally supportive of peer review” but complain the process is time-consuming and that reviewers, who typically work on a volunteer basis, should be rewarded with some sort of professional acknowledgement or payment.

Regardless of the type of research journalists cover, they should have at least a basic understanding of the peer-review process and its benefits and shortcomings.

Below, we explain some of the most important aspects with help from several experts, including Diane Sullenberger, executive editor of Proceedings of the National Academy of Sciences; Miriam Lewis Sabin, a senior editor at The Lancet; and John Inglis, executive director of Cold Spring Harbor Laboratory Press and co-founder of bioRxiv and medRxiv.

1. Peer reviewers are not fraud detectors. They also do not verify the accuracy of a research study.

The peer-review process is meant to validate research, not verify it. Reviewers typically do not authenticate the study’s data or make sure its authors actually followed the procedures they say they followed to reach their conclusions. Reviewers, sometimes called referees, also do not determine whether findings are correct, given the data and other evidence used to reach them.

Reviewers do examine academic papers to answer a range of relevant questions. They look at whether the research questions are clear, for example, and whether the study’s design, sampling methods and analysis are appropriate for answering those questions. They also assess whether the paper answers such questions as:

  • Is the study explained clearly enough and in enough detail that another researcher could replicate it?
  • How does the study challenge or add to the body of knowledge on this topic?
  • Does it fit the standards and scope of the journal to which it was submitted?
  • If the study involves humans or animals, did the authors acquire the required approvals and meet ethical standards?
  • Does it give proper attribution to earlier research?

When German theologian Henry Oldenburg created the first journal dedicated to science in 1665, he considered the key functions of a research journal to be registration, certification, dissemination and archiving, writes Robert Campbell, a senior publisher at Wiley-Blackwell Publishing, in the book Academic and Professional Publishing.

Peer review is considered the gold standard for assessing research content, Sullenberger explained in an email interview. But journalists must understand it is not infallible, she added.

“Science is self-correcting through replication and reproducibility, and research fraud can be difficult to detect in peer review,” she wrote.

2. Journalists can help the public recognize the value of peer review by noting whether the studies they cover have been peer reviewed.

Scholars, research organizations and others regularly criticize news outlets for failing to explain whether new research they report on or the older studies they incorporate into their stories have undergone peer review. It’s important that journalists differentiate between peer-reviewed research and preprint papers, which often present preliminary findings.

Sullenberger told JR: “Greater clarity when journalists cover unreviewed preprints is needed; they should not be reported as having the same validity and authority as peer-reviewed research papers.”

A recent study in the journal Health Communication finds that many of the news articles written about COVID-related preprints during the first four months of 2020 did not indicate the scientific uncertainty of that research. About 43% of the stories analyzed did not mention the research was a preprint, unreviewed, preliminary or in need of verification.

At the time of that study, however, many of the journalists drawn into reporting the frenzy of stories about the pandemic were unfamiliar with preprints, Inglis says. Today, he adds, journalists covering the coronavirus are much more likely to include phrases such as “not yet peer reviewed” to describe preprints.

Sense About Science urges the public to pay attention to whether a study being discussed in a government meeting or in the media has been peer reviewed. “The more we ask, ‘is it peer reviewed?’ the more obliged reporters will be to include this information,” the organization asserts in a leaflet it created to help the public scrutinize the scientific information featured in news stories.

Knowing whether research has been peer reviewed helps a person judge how much weight to give the claims being made by its authors, Tracey Brown, the managing director of Sense About Science, explained during an interview with The Scholarly Kitchen blog.

“We have to establish an understanding that the status of research findings is as important as the findings themselves,” Brown says in a prepared statement. “This understanding has the capacity to improve the decisions we make across all of society.”

3. Peer reviewers help decide a study’s fate.

Journal editors typically assign two or more reviewers to each research paper. Some also employ a statistical specialist.

While the selection process differs, journals choose reviewers based on factors such as expertise, reputation and the journal’s prior experience with the reviewer. Although it can be difficult to recruit scientists willing to examine manuscripts because of the time required for proper scrutiny, many do it because of “a sense of duty to help advance their disciplines, as well as the need for reciprocity, knowing other researchers volunteer to peer review their manuscript submissions,” Science magazine reported earlier this year.

Reviewers can make recommendations about whether a journal should accept, reject or send a paper back for minor or major revisions. Reviewers usually submit reports offering their overall impressions of a paper and suggestions for improvements. Most often, though, the final decision lies with one or more of the journal’s editors or its editorial board.

Inglis, a former assistant editor of The Lancet who is now a publisher of five peer-reviewed journals, says a common criticism of the peer-review process is its lengthy timeline, which can span from weeks to a year or more. Another complaint: Sometimes, journals send a study back and notify the authors that they would be willing to accept or reconsider the paper for publication if the authors do more research.

“Sometimes, the demands made are completely unrealistic,” Inglis adds. “The criticism from the authors is that editors don’t know that when they say ‘Do this additional experiment,’ that’s another year [added to the timeline]. Meanwhile, the work is perfectly valid.”

Inglis says bioRxiv (pronounced “bio-archive”) and medRxiv (pronounced “med-archive”) were created so researchers could disseminate preliminary versions of their papers, allowing the scientific community to immediately use and start building on those findings and data.

4. The peer-review process varies significantly among academic journals.

There are several kinds of peer review, and journals often state on their websites which one they use. The most common are single-blinded peer review, in which reviewers know the authors’ identities but remain anonymous themselves, and double-blinded peer review, in which authors and reviewers are unaware of each other’s identities.

Both have advantages. Advocates argue anonymity protects reviewers from retribution. It also helps shield authors from biases based on factors such as gender, nationality, language and affiliations with less prestigious institutions, Tony Ross-Hellauer, a postdoctoral researcher at the Know-Center in Austria, writes in “What is Open Peer Review? A Systematic Review,” published on the European open access platform F1000Research in 2017.

Keeping identities secret can create problems, however.

“At the editorial level, lack of transparency means that editors can unilaterally reject submissions or shape review outcomes by selecting reviewers based on their known preference for or aversion to certain theories and methods,” Ross-Hellauer writes. He adds that reviewers, “shielded by anonymity, may act unethically in their own interests by concealing conflicts of interest.”

A newer type of peer review, called open peer review, is not as prevalent. But the scientific community has ongoing discussions about whether its greater transparency might help improve research quality.

While there is no universally accepted definition of open peer review, also known as open identity peer review, the identities of both authors and reviewers typically are made known to each other. Ross-Hellauer notes that disclosing reviewers’ names may force them “to think more carefully about the scientific issues and to write more thoughtful reviews.”

A growing number of journals are posting not just the papers they accept but also the feedback peer reviewers gave the papers’ authors.

5. Peer review continues to evolve.

Some journals have started initiating peer review after a paper is published instead of beforehand, although this still is not common. MedEdPublish, an online scholarly journal, is one of those that employ post-publication peer review. Its papers undergo peer review on the website by members of the medical education community, which could include the journal’s editor, members of its editorial board or a panel of reviewers.

Under the MedEdPublish model, a paper has undergone formal peer review after at least two members of the journal’s review panel evaluate it. The paper can be critiqued and improved over time as a living document on the journal website.

“Post-publication peer review follows an open and transparent process, which aims to avoid editorial bias while increasing the speed of publication,” according to the website. “We use an ‘open identities’ principle, whereby all reviewers submit their feedback publicly, under their own name, and everyone visiting an article page can see all peer review reports, referee names, and comments, and can join the discussion if they wish.”

Another noteworthy shift: Some journals are working to diversify their pools of reviewers by ensuring women, racial and ethnic minorities, and scientists from other countries help appraise and select studies for publication.

Research indicates the overwhelming majority of experts chosen as reviewers are men. A study published earlier this year in Science Advances examines internal data for 145 scholarly journals across fields and finds that women comprised 21% of their reviewers between 2010 and 2016. At journals dedicated to biomedical and health research, 24.6% of reviewers were women.

The Lancet medical journal has set targets for increasing the number of women and scientists from low- and middle-income countries, Sabin, one of its senior editors, wrote in an email interview with JR. In 2019, The Lancet family of journals announced its Diversity Pledge.

“We track, monitor, and report representation of authors, reviewers, and editorial advisors by gender and across geography,” Sabin told JR in an email.

She added that the journal formed a task force late last year to, among other things, examine its policies and processes to find ways to increase the representation of experts who are racial and ethnic minorities.

The Coalition for Diversity and Inclusion in Scholarly Communications has focused on the issue globally. More than 90 organizations have adopted the coalition’s Joint Statement of Principles, which aims to “promote involvement, innovation, and expanded access to leadership opportunities that maximize engagement across identity groups and professional levels.”

Identity groups include racial and sexual minorities, military veterans, pregnant women, parents and people from lower social classes and socioeconomic backgrounds.

The Journalist’s Resource would like to thank Rick Weiss, the director of SciLine, and Meredith Drosback, SciLine’s associate director of science, for their help in creating this tip sheet.

About The Author


Denise-Marie Ordway


Review article

Peer assessment to promote self-regulated learning with technology in higher education: systematic review for improving course design

  • 1 Innovation and Educational Improvement Research Group, Rey Juan Carlos University, Madrid, Spain
  • 2 Elkarrikertuz Research Group, University of the Basque Country, Donostia-San Sebastián, Spain

Peer assessment is one approach to developing self-regulation of learning. When students evaluate the work of peers, they employ metacognitive strategies of critical reflection, and this improves their own learning, especially if they provide evaluative feedback and/or suggestions for modification. The aim of this systematic review is to learn how technology can facilitate self-regulation of learning through peer assessment activities, with a focus on higher education. To achieve this objective, we searched WoS and Scopus, obtaining 15 publications that concatenate the four search terms: self-regulated learning, peer assessment, higher education, and technology. These four terms must appear in the title, abstract or keywords, ensuring that the topic under review is central to each publication. The results are analyzed using a model for systematic review with three phases: description, synthesis, and critique. A proposal is made to improve the design of courses in virtual classrooms, focusing on Moodle, and to include peer evaluation to improve self-regulated learning. The review highlights the ability of virtual classrooms to configure a rubric to guide the evaluation, together with mandatory comments to justify it; this helps the student reflect on what is wrong, why, and how to improve. It also highlights the facility to randomly assign a specific number of tasks per reviewer, or reviewers per task, and to make the whole process completely anonymous. The technology allows short submission and review deadlines for near-instant feedback, as it can be configured with a single click. Finally, Moodle can reopen the submission phase, to send an improved version based on feedback, and the evaluation phase, to check that the proposed improvements have been made. This further supports the application of metacognitive strategies.

1 Introduction

Lifelong learning has been included as one of the Sustainable Development Goals (SDGs) (United Nations, 2023) to face the complex context in which we find ourselves in the 21st century. To this end, the European Commission proposed the Learning to Learn competence to achieve lifelong learning (Hoskins and Fredriksson, 2008). And, as Lluch and Portillo (2018) state, self-regulated learning (SRL) is essential to develop this competence in higher education. SRL is a process composed of thoughts, emotions and planned actions aimed at achieving a personal goal, that is, a set of strategies that students can activate when working toward their goals (Zimmerman, 2002). Thus, SRL enables students to manage their own learning process.

One of the ways of working on SRL is through Peer Assessment (PA), which refers to the analysis and assessment of the quality of a peer’s product or performance through a process of critical reflection (Roberts, 2006; Topping, 2009). The level of reflection depends on whether the peer assessment consists only of proposing a score on the quality of the work (summative PA) or also includes feedback derived from that reflection (formative PA). Feedback is no longer seen only as a process of transmitting information; a new focus on learning has emerged (Winstone et al., 2022). Black and Wiliam (2009) propose feedback in formative assessment as the information that enables students to advance in their learning. Considering that feedback is a key element in instruction because of its high effectiveness, different approaches have been proposed to study how to deal with feedback in the classroom.

Thus, adopting a formative approach to PA enables students to develop metacognitive skills, helping each other to identify strengths and weaknesses, and to plan and guide their learning (Topping, 2009). Metacognition includes knowledge, related to process evaluation, and metacognitive skills, related to feedback mechanisms that facilitate action planning and performance evaluation (Veenman et al., 2006). Metacognition has been shown to be a fundamental component of self-regulated learning, including processes such as goal setting, planning, progress monitoring and reflection (Azevedo and Gašević, 2019).

Peer feedback in these activities refers mainly to the performance of the task, but also to the process and even formal aspects of writing. This leads to improvements in the task and in future learning (Ion et al., 2016). Therefore, most studies tend to assume a formative PA (Alqassab et al., 2023). In this case, feedback becomes feedforward, which can be positive or negative, and, if negative, must be accompanied by proposals for improvement (Topping, 2018).

As a result, students employ advanced-level metacognitive strategies to provide feedback during peer assessment, especially if they are asked to provide evaluative comments and/or suggestions for modification on the assessed work (Liu and Lin, 2007). Furthermore, Van Helden et al. (2023) conclude that, in many cases, PA promotes a better understanding of the assessment criteria, which in turn improves the judgement and quality of feedback comments. Thus, students can learn from the feedback provided by their peers, but also through metacognitive reflection by having to justify what they have done (Liu and Carless, 2006).

For this reason, the application of metacognitive strategies during feedback facilitates SRL (Butler and Winne, 1995; Winne, 1996). Moreover, this reflection is enhanced if it is implemented together with a backward evaluation process, in which the evaluated student assesses the feedback received from his or her reviewer. This helps the student to reflect on their work and use the feedback to improve the assessed product (Misiejuk and Wasson, 2021).

At this point, the design of PA activities must take into account the results found so far in the literature. On the one hand, Van Zundert et al. (2010) have shown that the training and experience students had when carrying out PA influenced the quality of the activities, so some kind of training is necessary to succeed in this type of activity. To improve feedback processes, it is necessary to develop more effective processes based on teacher feedback literacy, with which an approach based on shared responsibility between teachers and learners can be achieved (Carless and Winstone, 2023). This is the only way to develop feedback literacy in students, so that they are able to deal adequately with task assessment.

Students' feedback literacy involves developing the ability to take advantage of feedback opportunities by actively participating in feedback processes ( Malecka et al., 2022 ). To this end, these authors propose three mechanisms to be taken into account in the curriculum: eliciting, processing, and enacting feedback. For example, it has been shown to be important for students to manage their perceptions and attitudes, as well as to have greater confidence and agency in the feedback process ( Little et al., 2024 ). One technique to achieve this is the co-assessment of examples that would help students develop feedback literacy ( Carless and Boud, 2018 ).

On the other hand, Panadero and Alqassab (2019) have concluded that, according to the studies reviewed, anonymous PA improves students’ perception of the value of the learning provided through PA, because feedback is more critical and tends to lead to higher achievement, especially in higher education. If, in addition, authors are paired with reviewers of similar performance, self-regulation will be more effective (Zhang and Schunn, 2023).

All this should have an impact on the improvement of the activity: students should not only learn through the help they receive from their peers, but also use the activity for metacognitive reflection. This means asking what they need to learn in order to apply it to the activity, what is important, how they should apply it, why it can be useful to them, and so on. At this point, it is important to consider the possibilities of PA for improving the self-regulation of their own learning, so we will focus on reviewing the literature on PA as a resource for improving SRL.

Currently, technology can be a great ally for the use of PA, as different tools can be used, such as dedicated web-based PA systems, Learning Management Systems (LMS), social media or mobile applications (Zheng et al., 2019). LMSs, such as Moodle, are widely used in online university courses (Gamage et al., 2022), but can be applied to any modality that wants to benefit from a virtual classroom. Although originally used as after-class tools, technology-facilitated PA activities are increasingly used within the classroom (Fu et al., 2019).

To ensure the benefits of PA and provide the necessary scaffolding, as Goh et al. (2019) argue, the Moodle workshop activity allows all these elements to be incorporated, for example by introducing worked examples in the workshop itself. It also provides assessment guidelines, such as rubrics, which include the possibility of adding feedback comments with a formative approach. In addition, it facilitates the distribution of work among many students in a random and anonymous way.

In addition, to facilitate SRL, technology has enabled the development of the Open Learner Model (OLM) (Hooshyar et al., 2020). This design facilitates the organization, monitoring, and regulation of learning in virtual environments, thanks to internal feedback through self-assessment of one’s learning, and additionally through external feedback, such as that from the teacher or peers (Chou and Zou, 2020). This is important because one of the weaknesses of higher education students in virtual environments is knowing how to identify the objectives and assessment criteria (Ortega-Ruipérez and Castellanos-Sánchez, 2023), which is necessary to provide good feedback in PA activities.

Good technology design can also help improve the quality of skills such as argumentative writing, according to Noroozi et al. (2023). This is because, if PA feedback is presented in an appropriate way, it can facilitate reflection to improve the original work in situations with a backward assessment process (Misiejuk and Wasson, 2021).

This systematic review aims to account for how technology can facilitate SRL, that is, the ability to reflect on tasks, through PA activities. For this reason, the research questions that define the focus of the research are related to how technology can support PA to facilitate SRL in higher education. Firstly, a question is posed about the current state of research on the topic, as we intend to focus solely on virtual environments. Secondly, we aim to collect and provide guidelines to guide the design of PA activities to facilitate SRL in technological environments.

2 Materials and methods

This is a systematic review because it follows a specific protocol, uses an explicit and reproducible method, and attempts to critically appraise and synthesize the subject matter. Specifically, this review includes a narrative synthesis, an approach to systematic review that attempts to synthesize the findings of multiple studies (Popay et al., 2006). As these authors confirm, a systematic review with a narrative synthesis usually contains a limited number of publications, unlike other approaches such as meta-analysis. Focusing on a smaller number of publications makes it possible to select only those that best address the topic for a more adequate critical analysis. For this systematic review of the most relevant literature on the topic, a five-stage analysis protocol was followed, similar to that of other systematic reviews on educational innovation topics (Ramírez and Lugo, 2020; Gros and Cano, 2021).

In phase 1, the research questions were posed, concerning the analysis of how technology can support PA to facilitate SRL in higher education. The first question concerns the current state of research on the topic, and the second seeks appropriate guidelines to guide the design of PA activities to facilitate SRL.

RQ1: What is the state of the literature on how peer assessment facilitates self-regulated learning?

RQ2: What design guidelines for peer assessment activities can we follow for our students to enhance their self-regulated learning?

In phase 2, the search process was established. Using the Web of Science and Scopus databases, the search was limited to articles from the last 10 years (2014–2023). A time frame of 10 years was considered appropriate because this last decade can be considered the period of inclusion and popularization of virtual classrooms in higher education. A Search String (SS) was built for the search in “Article title, Abstract, Keywords,” combining the selected words and adding a new filter in each iteration: Peer assessment/Peer feedback + Metacognition/Self-regulated learning/Self-regulation of learning + Higher education/Tertiary education/University + Technology/Moodle.

It is important to start the search by combining two keywords (Peer assessment + Self-regulated learning), as our study focuses on how the former can benefit the latter. In this way, SS1 has collected the following search criteria: (“peer assessment” OR “peer feedback”) AND (“self-regulated learning” OR “self-regulation of learning” OR metacognition).

Subsequently, the educational stage has been added because it is understood that the design of the activities may be different depending on the age of the students. Thus, SS2 includes the above search criteria plus the one relating to higher education: (“peer assessment” OR “peer feedback”) AND (“self-regulated learning” OR “self-regulation of learning” OR metacognition) AND (“higher education” OR “tertiary education” OR university).

Finally, the keyword on technology is included to obtain only those studies that incorporate it, giving three search strings in total. The final search, SS3, therefore includes all the search criteria needed to answer the research questions: (“peer assessment” OR “peer feedback”) AND (“self-regulated learning” OR “self-regulation of learning” OR metacognition) AND (“higher education” OR “tertiary education” OR university) AND (technology OR Moodle).
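For reproducibility, the three nested search strings can be assembled programmatically. The sketch below is an illustration of the iterative construction described above, not the authors' own tooling; the variable names are ours.

```python
# Build the three nested search strings (SS1-SS3) described above.
# Illustrative sketch only; each iteration appends one more filter with AND.

blocks = [
    '("peer assessment" OR "peer feedback")',
    '("self-regulated learning" OR "self-regulation of learning" OR metacognition)',
    '("higher education" OR "tertiary education" OR university)',
    '(technology OR Moodle)',
]

# SS1 joins the first two blocks; SS2 and SS3 each add one more filter.
search_strings = [" AND ".join(blocks[: i + 1]) for i in range(1, len(blocks))]

for i, ss in enumerate(search_strings, start=1):
    print(f"SS{i}: {ss}")
```

Because each string is a strict extension of the previous one, the result sets can only shrink from SS1 to SS3, which matches the funnel the review describes.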

The summary of the articles found in each search can be seen in Table 1. The result was 27 articles. Before continuing with the next phase, we detected the articles that appeared in both databases, finding a total of 9 duplicates, so that the number of articles for the first review was finally 18.
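The duplicate-detection step amounts to a key-based merge of the two result sets. The sketch below is a hedged illustration: the matching rule (DOI when present, else a normalized title) and all record data are our own assumptions, not the review's actual procedure or data.

```python
# Merge records from two databases, dropping duplicates.
# Matching rule and record data are invented for illustration.

def record_key(rec):
    """Match on DOI when present, else on a whitespace/case-normalized title."""
    doi = rec.get("doi")
    return doi.lower() if doi else " ".join(rec["title"].lower().split())

def merge_unique(*result_sets):
    seen, unique = set(), []
    for records in result_sets:
        for rec in records:
            key = record_key(rec)
            if key not in seen:
                seen.add(key)
                unique.append(rec)   # keep the first copy encountered
    return unique

wos = [{"title": "Peer Assessment and SRL", "doi": "10.1000/x1"},
       {"title": "Feedback in Moodle Workshops", "doi": None}]
scopus = [{"title": "Peer assessment and SRL", "doi": "10.1000/X1"},  # duplicate of the first
          {"title": "Anonymous Peer Review Online", "doi": "10.1000/x2"}]

print(len(merge_unique(wos, scopus)))  # 3 unique records
```

Applied to the review's counts, the same procedure would reduce the 27 retrieved records to 18 by discarding the 9 cross-database duplicates.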


Table 1. Summary of the number of selected papers.

In phase 3, two additional criteria were defined for the inclusion or exclusion of articles after reading the abstracts. We proceeded to (1) exclude articles that were not relevant to the object of study because they had other objectives and self-regulation of learning through PA was not their central element; after this screening, 15 articles remained. And (2) select only those related to technological tools that allow the design of activities similar to those allowed by an LMS such as Moodle, i.e., whose results can be applied to any virtual classroom. In this case, it was not necessary to eliminate any article, as some procedure or conclusion could be drawn from all of them. The evaluation of the selected studies has not always followed the same approach. However, all of them address the use of technology to facilitate peer assessment in higher education as their main theme, and all of them show the positive aspects to be taken into account for an optimal use of technology. Therefore, the final number of articles analyzed was 15 (Figure 1).


Figure 1. Procedure for the final selection of papers.

It is important to note here that this systematic review attempts to address a very specific topic. Therefore, we preferred to have fewer articles, but to ensure that the articles reviewed allow us to fully answer the research questions. Thus, these 15 articles allow us to answer how a technological tool in higher education can simplify peer assessment to facilitate self-regulated learning. This is considered a sufficient number of articles as a starting point on this topic. In line with Popay et al. (2006), the narrative synthesis included in this systematic review works with a small number of articles, which allows a focus on the publications that best address the two research questions posed.

The selected articles are representative and of high quality, as the search was carried out only in the most reliable databases: Web of Science and Scopus. In addition, all these publications have passed a rigorous blind peer review process in which experts in the field decided that they add value to the topic. Therefore, we did not want to discard any of them, regardless of whether they are journal articles, book chapters or conference papers. In the case of conference papers, the selected publications are not mere abstracts of an experience; each expands on the experience with results that demonstrate its usefulness.

In phase 4, data selection and extraction were done in an Excel document (omitted for blind review, data in figshare), systematizing the information around questions important for the consideration and generalizability of the results of the reviewed articles: sample size, duration (<1 week, 2–5, 6–10, more than 10), technology (web, LMS or social media), assignment (system, professor, students), evaluation method (quantitative, qualitative, both), with or without scaffolding, organization (group, individual), number of evaluators per task, number of tasks per evaluator, and course modality (in-person, blended, online). No other variables were considered relevant given the narrative nature of this systematic review.
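The extraction fields above amount to a simple coding sheet. As a minimal sketch, they could be captured as a record type like the one below; the field names and example values paraphrase the text, and the class itself is our illustration, not the authors' actual spreadsheet.

```python
# A minimal coding sheet mirroring the phase-4 extraction fields listed above.
# Field names and the example record are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class StudyRecord:
    sample_size: int
    duration: str            # "<1 week", "2-5 weeks", "6-10 weeks", ">10 weeks"
    technology: str          # "web", "LMS", "social media"
    assignment: str          # who assigns reviews: "system", "professor", "students"
    evaluation_method: str   # "quantitative", "qualitative", "both"
    scaffolding: bool
    organization: str        # "group" or "individual"
    evaluators_per_task: int
    tasks_per_evaluator: int
    modality: str            # "in-person", "blended", "online"

# Hypothetical example record (not taken from any reviewed study):
example = StudyRecord(
    sample_size=60, duration="2-5 weeks", technology="LMS",
    assignment="system", evaluation_method="both", scaffolding=True,
    organization="individual", evaluators_per_task=2,
    tasks_per_evaluator=3, modality="online",
)
print(example.technology)  # LMS
```

Coding each reviewed study into one such record makes cross-study comparison and tabulation straightforward.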

Finally, in phase 5, the tripartite model for systematic review ( Daniel and Harland, 2017 ) was applied. First, a description of the results of each of the 15 selected contributions was made, then a synthesis of the most important contributions was elaborated, and finally, a critique of applications to compile guidelines for the design of PA activities with technology, specifically oriented to a LMS, such as Moodle. The critique is presented as the discussion of results, as the guidelines obtained relate to the results of previous research.

3 Results

3.1 Description

First, a summary table (Table 2) was prepared with the basic information of the 15 selected articles. Second, after a detailed reading of each publication, the most important information each article provides for our purpose, i.e., how PA can facilitate SRL in students, is described. In some cases, the procedure of how the PA was carried out has also been included in the synthesis, as it was considered important to capture key aspects of the design of PA activities that have proved useful for SRL.


Table 2. Summary of contributions on our topic in each publication.

Janson et al. (2014) design their PA to support interaction for awareness and reflection, and thus improve learning outcomes. Regarding the procedure, after preparing the material with a flipped classroom, students propose solutions in groups and comment on the proposals of the other groups. After receiving feedback from the other groups, each student must reflect individually on the strengths and weaknesses of their proposals, in order to revise and improve them based on the feedback.

García-Jiménez (2015) makes a proposal based on the literature whereby the teacher guides reflection on learning at the beginning, progressively giving students a more prominent role. After this first scaffolding step, peers guide and monitor the process of students’ reflection on their learning and its outcomes, providing feedback on whether the reflection is sufficient and appropriate.

García-Jiménez et al. (2015) review and discuss how PA helps students to understand what is required of them in the task, as it is necessary to analyze and discuss the elements of the task in order to assess it. If teachers allow students to participate in the design of assessment tasks, criteria and benchmarks, their understanding improves so that they can assess with quality. In other words, teachers should not impose assessment criteria, but listen to how students interpret them until they are understood. This is very important for the development of SRL, i.e., students come to understand that building feedback for their peers is more important than receiving it. Constructing feedback that allows the peer to progress in their learning (feedforward) will help the learner to identify strategies to improve their own learning. In addition, receiving good feedback will encourage the learner to maintain or modify their effort on the task. At this point, technology can facilitate an appropriate interaction, a dialogue, between assessor and assessed, so that personalized feedback is achieved to promote SRL. Finally, they point out that technology can also provide feedback in formats other than written feedback, which can facilitate its reception.

Hsu and Huang (2015) found that, in PA, peer grading was quite similar to teacher grading, and more realistic than self-assessment. PA was positively valued in two ways: when students receive peer evaluation, even when the feedback is negative, it helps them to be reasonable and appreciate the possibilities for improvement; and in relation to SRL, when students evaluate, it helps them to compare the work with their own, to know where not to make mistakes and to improve their work. Moreover, to improve SRL, PA feedback is better than giving a mark, but it should be guided so that students learn to reflect well on what is expected from the task. Written feedback can be misinterpreted, so it is recommended to accompany it with face-to-face feedback.

Marín and Pérez (2016) used PA in preservice teacher training, using the Moodle workshop tool to facilitate PA, with formative assessment strategies and feedback management. In the last phase of the Moodle workshop activity, they added a task in which students had to self-assess and reflect on the feedback received in their e-portfolios. By reflecting on their work from the perspective of others, students were able to become aware of how to improve it, a phase in which they work from an SRL perspective. Furthermore, in this experience, the PA was given a weight in the course grade, as the average of the evaluations of 3 peers is quite close to the grade given by the teacher. One aspect that they consider necessary to implement in future proposals is a new "Conferencing" phase ( Reinholz, 2016 ) so that students can discuss with their peer evaluators. These authors do not consider technology in this new proposed phase, so it would be necessary to see how to keep the first part of the PA anonymous and then reveal the identity of the assessors for the new phase.

Ng (2016) proposed PA for the evaluation of wikis created by working groups, which were presented in class to the rest of the groups. A representative from each assessing group was asked to give at least one positive observation and one suggestion for improvement via Moodle. Moreover, each student had to complete a rubric within 3 days of the presentation. Afterwards, each group reviewed all the feedback received to improve their work. In conclusion, although some students did not find Moodle a good setting for providing feedback, they did note that in direct interactions they were unwilling to challenge their peers, so anonymous interaction through technology did help to provide more critical feedback.

Raposo-Rivas and Gallego-Arrufat (2016) highlighted, like other studies reviewed, a greater understanding of the evaluation process when carrying out PA. In this case, a tool is used to assess competences and knowledge of other group members, but it is not done anonymously, so the comments reveal an assessment based on cronyism rather than criticism.

Albano et al. (2017) consider that PA has contributed to strengthening the development of students' explanation and argumentation processes. By grading, students are also assessing their own learning. Moreover, by using a method that triangulates the grades of 3 peers, going beyond the arithmetic mean (giving more weight to the most similar grades), the quality of the grading is usually adequate. In some specific cases, the teacher must intervene to provide high-quality assessments, which is easy to implement in Moodle.

Blau and Shamir-Inbal (2017) proposed a PA in which the procedure included three phases: first, critically reviewing an example in pairs; then assessing their own task based on evaluation criteria; and finally evaluating the results of their peers, posing questions and suggesting improvements. Metacognitive thinking thus occurred before PA, as a way of monitoring learning. During PA, SRL was also produced by applying critical thinking during the analysis of other tasks, which students used to learn. At first they had difficulty keeping up and did not learn, but later they learned to self-organize in order to learn independently and flexibly.

Fernández-Ferrer and Cano (2019) carried out PA activities in all the topics of the subject and observed an improvement in quality after each iteration. They conclude that both providing PA and receiving feedback from peers have been useful for students' own learning, as they have improved the relevance of what is requested in the assignment.

Roman et al. (2020) developed a tool for PA that allows comments to be added next to the assessed content, making it easier to know what each comment refers to. In this tool, different assessors evaluate a task over several iterations. Being able to receive multiple perspectives on the work over several iterations helped to further challenge the content and the task, resulting in more useful feedback to apply to future tasks (feedforward). An improvement over an LMS is that students must incorporate feedback from all peers, one by one (in an LMS they paid more attention to some comments than others).

Swartz (2020) proposed solving 2 ill-defined problems, with the PA consisting of metacognitively reflecting on the partner's proposed outcome and helping the partner to continue solving it. The lack of scaffolding meant that many students focused on figuring out how to use the tool, or how or when to provide feedback, and could not focus on reflecting and helping to solve the ill-defined problem. According to the feedback collected from participants, in order to improve the "assessment as learning" approach so that SRL could be fostered, a second round of feedback and further extension of learning should be included.

Wang (2020) conducted the study in phases: in the first phase, each student was required to provide feedback to the groups presenting the project (in the middle of the project and at the end) by submitting a comment for discussion at the end of the presentation. In the second phase, feedback came from the learning journals anonymously. Students felt that the first phase was more useful for self-regulating their learning, as it was instantaneous. However, they also mentioned that the feedback in the second phase could be more critical and comprehensive because it was anonymous. A third phase, with instant and anonymous feedback, was the most highly rated, as it was the most helpful for self-reflection.

Zhu et al. (2023) sought to exploit the metacognitive advantages of PA carried out immediately after task submission, which provides critical and comprehensive feedback. It was especially useful for lower-achieving students, in whom a greater improvement was observed. In designing the programme, the first update was to keep the students in the same working groups, so that iterations could be applied gradually until the final product was achieved.

Lluch and Cano (2023) decided to include different activity options in Moodle for their PA activity. In addition to the workshop, the activity par excellence for PA in Moodle, they included forums to discuss the assessment criteria and to improve understanding of why and for what purpose PA is introduced; open questionnaires to encourage self-regulation (objectives, planning, etc.); forms to integrate changes into the activities; and questionnaires to explain actions in the following phases. They end the activity with a reflection phase that enhances SRL after PA, including a final task with a new version of the activity incorporating improvements based on the feedback. After their experience, they conclude that SRL, associated with the Learning to Learn competence, should be developed throughout higher education, and that it is necessary to plan self-assessment and PA experiences to develop it. They propose adapting these experiences to different levels of SRL across courses, starting with scaffolded activities and working up to performing these tasks autonomously.

3.2 Synthesis

Peer feedback can be highly relevant in improving students' learning. This improvement is especially evident in lower-achieving students, where a greater improvement is observed after reflecting on the feedback received ( Zhu et al., 2023 ).

Firstly, receiving good peer feedback helps students to be reasonable about failures, appreciating that there is room for improvement ( Hsu and Huang, 2015 ). Receiving good feedback, which allows one to reflect on the comments to improve for the future, is known as feedforward. If the feedback is constructive, the learner should be able to reflect individually on their strengths and weaknesses, so that they can revise and improve the task based on the feedback ( Janson et al., 2014 ).

In addition, the use of PA activities facilitates the teacher's work in situations with many students, where personalized feedback for each student is not feasible. The grades provided in PA are often quite similar to those of the teacher, more so than the students' own self-assessment grades ( Hsu and Huang, 2015 ). If, in addition, the average of 3 peers' assessments is used, the grade is quite similar to that of the teacher ( Marín and Pérez, 2016 ). Triangulation methods, which are very easy to apply with technology, can even be used, and the quality of the grade is then very high ( Albano et al., 2017 ). As these authors point out, in some cases teacher intervention may be necessary to adjust the grades, which is easy to implement with technology such as Moodle.
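Albano et al. (2017) do not spell out their triangulation formula here, but the idea of weighting each peer grade by how closely it agrees with the others can be sketched as follows. This is an illustrative assumption for clarity, not the authors' actual algorithm:

```python
def triangulated_grade(grades):
    """Combine peer grades, weighting each one by its agreement with
    the others (illustrative scheme; not Albano et al.'s exact method)."""
    n = len(grades)
    if n < 2:
        return grades[0]
    weights = []
    for i, g in enumerate(grades):
        # mean absolute distance from this grade to the other grades
        dist = sum(abs(g - h) for j, h in enumerate(grades) if j != i) / (n - 1)
        # closer agreement with the other assessors -> larger weight
        weights.append(1.0 / (1.0 + dist))
    return sum(w * g for w, g in zip(weights, grades)) / sum(weights)

# With grades 8, 8 and 2, the outlier (2) is down-weighted, so the
# result lies closer to 8 than the plain arithmetic mean (6) does.
```

An LMS such as Moodle can then let the teacher override the combined result when the assessments are too disparate, as the source recommends.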

It can also be positive for students to know whether they are doing the task correctly if group and individual feedback are combined, as group feedback helps them better understand how to do it ( Ng, 2016 ). This can be done in different ways: for example, in phases, with a group phase first and an individual phase afterwards to apply what they have learned ( Janson et al., 2014 ); or according to the type of feedback, with group feedback being qualitative and individual feedback being quantitative via a rubric ( Ng, 2016 ).

Secondly, and much more important for PA activities to benefit students' SRL, is the provision of feedback to peers. Having to provide feedback helps students to understand the learning and task objectives, as they must analyse the task elements in order to assess them ( García-Jiménez et al., 2015 ), as well as the assessment process itself ( Raposo-Rivas and Gallego-Arrufat, 2016 ).

In this sense, understanding how to provide feedback is more important for SRL than receiving it in order to implement improvements, since generating good feedforward helps students themselves to identify the strategies that will improve their learning ( García-Jiménez et al., 2015 ), in addition to improving the processes of explanation and argumentation that facilitate deep learning ( Albano et al., 2017 ). By evaluating peers, students evaluate their own learning, as they must compare both tasks to know where they should not make mistakes ( Hsu and Huang, 2015 ).

In addition, if students are involved in the design of the assessment tasks, defining the criteria and reference levels, their understanding of the objectives of the tasks improves, and thus they can assess with greater precision and quality ( García-Jiménez et al., 2015 ).

And, if possible, it is very beneficial for learners to maintain a dialogue between assessor and assessed, i.e., to facilitate several iterations that help to personalize the feedback, so that it is properly understood and integrated, especially promoting SRL ( García-Jiménez et al., 2015 ).

Thirdly, there is scaffolding, which has been found to be essential for students to learn how to provide good feedback based on the proposed assessment criteria, as they must learn to reflect on what is expected from the task ( Hsu and Huang, 2015 ). If scaffolding is not provided, it is very likely that students will not know how to provide good feedback, as in the case of Swartz (2020) where students focused more on how to use the tool or perform the task, rather than reflecting on the content to help the peer improve their task.

The first and most common way to create this scaffolding is with the help of the teacher, who should guide the reflection at the beginning, and gradually give more of a leading role to the students ( García-Jiménez, 2015 ). The second way to create scaffolding is with the support of peers. Peers can guide and monitor the reflection process to give feedback on whether the reflection is sufficient ( García-Jiménez, 2015 ).

They can also conduct an analysis of examples in pairs/groups. For example, they start by analyzing an example in pairs and their own individually developed work before conducting the PA ( Blau and Shamir-Inbal, 2017 ). By following this procedure, students employ metacognitive thinking by monitoring their own learning before the PA, but also during the PA, because they apply critical thinking by analyzing other tasks and comparing them with their own, which helps them to learn.

In this second case, it may take extra effort for students to keep up, because they are not able to organize themselves and do not have the teacher as a reference point, but in the end they achieve independent learning ( Blau and Shamir-Inbal, 2017 ).

It is also important to note that this procedure is learned and improved with practice, both by assessing peers and receiving their evaluations, which improves learning and understanding of the task and objectives, as the relevance of the requested content is improved ( Fernández-Ferrer and Cano, 2019 ).

An important point to highlight is that the SRL enabled by PA is related to the Learning to Learn competence, which must be developed throughout the entire higher education stage, as it is an essential competence for lifelong learning ( Lluch and Portillo, 2018 ). To this end, PA and self-assessment experiences should be planned progressively across the different higher education courses, starting with some kind of scaffolding until autonomous completion by students is achieved ( Lluch and Cano, 2023 ).

Focusing on how technology can help in the design of PA activities to facilitate SRL, it is worth noting that some key elements of PA have clearly benefited from the use of technology.

Technology makes it easier for learners to have more than one iteration ( García-Jiménez et al., 2015 ), as a single iteration may not be enough to self-regulate future learning ( Swartz, 2020 ). Several iterations produce a great improvement over traditional feedforward, as feedback is better understood and becomes more useful, allowing deeper questioning of the content and improving learning ( Roman et al., 2020 ). In fact, it is best to perform several iterations until the delivery of the final product, within the same working groups (assessor-assessed), to better apply the feedback received ( Zhu et al., 2023 ).

However, it is necessary to rethink how to add a discussion with the assessors in an appropriate way through the technology itself, as doing so without technology reveals the identity of the assessors ( Marín and Pérez, 2016 ), which can be counterproductive. Anonymous PA allows for more critical feedback ( Ng, 2016 ), whereas non-anonymous PA tends to be based on cronyism ( Raposo-Rivas and Gallego-Arrufat, 2016 ). Thus, the introduction of forums or other activity formats could be considered to maintain anonymity.

In addition to anonymity, instant feedback is necessary to improve self-regulation ( Wang, 2020 ). Therefore, if we want it to be instantaneous yet anonymous, technology plays a crucial role. In addition to facilitating the improvement of one's own learning by receiving it instantaneously, providing it right at the end of the task helps assessors to better reflect on the task, thus providing more critical and comprehensive feedback ( Zhu et al., 2023 ).

As we have said, technology offers the possibility of using formats such as video or audio, beyond a written format that can be misinterpreted ( Hsu and Huang, 2015 ), without needing a face-to-face format that removes anonymity, thus favoring the reception of feedback ( García-Jiménez et al., 2015 ). LMSs, such as Moodle, offer different activities in addition to the workshop. The workshop activity is designed for PA, but it may be insufficient. It is worth highlighting the proposal by Lluch and Cano (2023) , in which different activity types are added depending on the objective: firstly, forums to involve students in the design of assessment criteria and to improve understanding of the task; secondly, open-ended questionnaires to improve the metacognitive phases of self-regulated learning (goal identification, planning, monitoring, and self-assessment); and thirdly, individual forms and tasks to hand in assignments with the improvements introduced thanks to feedforward.

Being able to complement different types of activities in the same technological tool facilitates self-regulation, as students can comfortably self-assess and reflect on the feedback received, for example, in an e-portfolio ( Marín and Pérez, 2016 ).

Technology has the potential to be updated with improvements when necessary. For example, in certain tools, such as the one designed by Roman et al. (2020) , it is proposed that students incorporate all comments to improve their work. A dynamic for introducing this into LMSs would need to be explored, as students often only take into account the comments that are easy for them to include and ignore the others.

4 Discussion: critique

This discussion section constitutes the third phase of the tripartite model for systematic review ( Daniel and Harland, 2017 ). It sets out guidelines for designing courses in virtual classrooms or similar technologies. Following these guidelines, teachers can design peer assessment workshops that facilitate students' own self-regulated learning.

First, it is recommended that feedback is provided in different formats to improve its comprehensibility. In this way, learners do not rely solely on written feedback that can be misinterpreted ( Hsu and Huang, 2015 ). For this purpose, written feedback can be used in addition to a rubric, which is easily configurable in the virtual classroom. A file in any format can also be added, e.g., a short one-minute video, with a reflection on what they have learned from the first submission to the last. Other types of tasks such as forums, open-ended questionnaires or individual forms and tasks can also be used ( Lluch and Cano, 2023 ).

On the other hand, the grades proposed by the assessors can be used, although it is recommended that at least the average of 3 grades is obtained ( Marín and Pérez, 2016 ). Thus, the quality of the assessment is very high, as suggested by Albano et al. (2017) , and if there are cases where the assessments are very disparate, the teacher should review the assignment and provide their own feedback. In this case, the Moodle workshop averages the assessors' grades. In addition, the quality of each assessment is graded, depending on whether the mark awarded is similar to that of the other assessors. It is recommended that this assessor grading is taken into account to ensure that students assess their peers well. This activity also allows the teacher to modify the marks if the final average mark is not considered adequate.
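Moodle's actual grading-evaluation algorithm ("comparison with the best assessment") is more elaborate, but the underlying idea of scoring each assessor by how closely their mark tracks the other assessors' marks can be sketched as follows. The linear penalty and the `max_deviation` tolerance are assumptions for illustration only:

```python
def assessor_quality(own_mark, other_marks, max_deviation=10.0):
    """Score an assessor's grading (0-100) by its closeness to the
    consensus of the other assessors. Simplified sketch only: Moodle's
    real 'grade for assessment' calculation differs in detail."""
    consensus = sum(other_marks) / len(other_marks)
    deviation = abs(own_mark - consensus)
    # full marks for exact agreement, zero beyond the tolerance
    return max(0.0, 100.0 * (1.0 - deviation / max_deviation))

# An assessor who matches the consensus scores 100; one who deviates
# by half the tolerance scores 50.
```

Scoring assessors in this way gives students an incentive to grade carefully rather than generously, which is what the recommendation above relies on.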

As we have seen, it is also important that the PA process is anonymous, in order to achieve more critical and comprehensive feedback ( Ng, 2016 ). Otherwise, it will be based on cronyism and the tasks of friends will not be critically questioned ( Raposo-Rivas and Gallego-Arrufat, 2016 ). In the specific case of Moodle, a specific configuration is available so that the activity is assessed anonymously and the identity of the assessors remains unknown. It is also more useful for feedback to be instantaneous, both for the assessor, who can provide more complete feedback, and for the assessed, who can apply the comments on the spot, improving the regulation of learning ( Wang, 2020 ). The use of technology makes it possible to propose short timelines to ensure that feedback is instantaneous, as fixed deadlines can be set for each phase.

In addition, for peer assessment to truly facilitate self-regulated learning, students must be taught to provide feedback. Constructive feedback allows the student to reflect on the comments and apply them in a new and improved version of the task ( Janson et al., 2014 ). For, as Malecka et al. (2022) argue, it is necessary to include the processing and application of the feedback received. Thus, students identify strategies to improve their own learning ( García-Jiménez et al., 2015 ) and evaluate their learning ( Hsu and Huang, 2015 ). Technology can be used to create questionnaires with good and bad examples of feedforward, and the results can be discussed in class to justify why each example is or is not constructive.

The use of feedforward will be useful if the evaluation consists of several iterations, allowing a dialogue between assessor and assessed ( García-Jiménez et al., 2015 ), helping to better understand the feedback and integrate it properly into the final product. Technology facilitates work with several iterations ( García-Jiménez et al., 2015 ): it makes it possible to reopen phases that have already been completed in order to carry out a new submission and, subsequently, a new evaluation. In the specific case of Moodle, the tool facilitates keeping the assessors of a task the same across the different iterations, as recommended by Zhu et al. (2023) . Furthermore, it is recommended to include a final task after the whole PA process, in which learners can apply the knowledge developed through reflection on the feedback ( Janson et al., 2014 ). In this way, in addition to the PA activity, a task can be created in the virtual classroom for students to hand in the final version of their work, which is assessed by the teacher. Another option is the re-evaluation of modified submissions by the assessors themselves, who can focus on the improvements included to revise the grading in the rubric.

Secondly, in addition to the characteristics of good feedback, students must learn to evaluate their peers from a learning perspective. As proposed by Carless and Winstone (2023) , teacher feedback literacy is necessary for proper scaffolding. Therefore, scaffolding is required, either with the teacher as a guide or by practicing with examples in pairs or small groups. This scaffolding enables a focus on building useful and relevant feedback on the objectives ( Hsu and Huang, 2015 ). In the Moodle workshop, the teacher can include already corrected examples with feedback comments. These comments can be used to teach students what is expected at each point ( García-Jiménez, 2015 ). In addition, these examples can also be reviewed in pairs or groups of three to ensure that students understand how they should approach and create the feedback ( Blau and Shamir-Inbal, 2017 ). As Carless and Boud (2018) explain, the use of examples is ideal for developing feedback literacy.

Therefore, it is advisable to combine group feedback with individual feedback. Students start with group feedback to discuss and reflect on what the feedback should look like ( Ng, 2016 ). Applying this idea to the use of technology, the first PA activity can be done in pairs. In the workshop activity, by including partly a rubric and partly open-ended feedback, pairs can discuss at which level of the rubric the assessed activities fall. Afterwards, they individually justify their decision in the comments. In addition, the pair can review the comments to discuss how to improve them.

It is also essential that the use of peer assessment and self-assessment is planned progressively, as students will learn to assess little by little. Eventually, they will be able to develop the Learning to Learn competence, which is necessary for lifelong learning ( Lluch and Cano, 2023 ). Thus, it is recommended to carry out several PA activities, approximately one per month or up to four in a four-month period. It is also recommended to support these activities with self-assessment: in the Moodle workshop, the option can be enabled for students to assess their own work based on the assessment criteria, and forms or tasks with short audio or video files can be used for them to reflect on their progress.

Finally, it should be noted that in this scaffolding process, it is advisable to involve students in the development of the assessment criteria and reference levels. In this way, they can check their interpretation of the objectives, so that they can assess the task more accurately ( García-Jiménez et al., 2015 ). In the days prior to the PA activity, a forum can be opened to discuss the assessment criteria, for students to review and propose modifications. They can also be asked to give an example of what it would mean to be assessed at one of the benchmark levels of a criterion in a rubric. To ensure their participation in the forum, they can be asked to participate in pairs or groups of three in class or with a video call tool, and then discuss the forum comments together to construct a final rubric.

Thus, we can see, as Little et al. (2024) state, how all these scaffolding aids help learners to manage their perceptions of and attitudes toward the feedback process. In this way, they can improve their confidence and feedback agency, developing their feedback literacy.

5 Conclusions

Peer assessment facilitates metacognitive reflection thanks to the use of formative feedback, which is used in most of the experiences reviewed ( Alqassab et al., 2023 ). To achieve quality feedback, we start from the importance of students developing feedback literacy ( Carless and Boud, 2018 ). In recent years, an approach to feedback that focuses on learning, rather than mere transmission, has begun to be considered ( Winstone et al., 2022 ). Providing feedback helps students to plan and guide their own learning ( Topping, 2009 ), as they need to understand the assessment criteria ( Van Helden et al., 2023 ) in order to justify the feedback ( Liu and Carless, 2006 ). Moreover, metacognition is especially applied when receiving feedback ( Liu and Lin, 2007 ): as Ng (2016) notes, students reflect on it in order to apply it later in their work, in line with Misiejuk and Wasson (2021) .

Fernández-Ferrer and Cano (2019) confirm that experience improves the application of PA ( Van Zundert et al., 2010 ), so students should be trained, as proposed by Lluch and Cano (2023) , or supported with scaffolding, as suggested by García-Jiménez (2015) and Blau and Shamir-Inbal (2017) . On the other hand, Marín and Pérez (2016) , Ng (2016) , and Raposo-Rivas and Gallego-Arrufat (2016) mention the importance of anonymity, which, in line with Panadero and Alqassab (2019) , is very easy to achieve with technology.

We can conclude that, if the design guidelines drawn in this review are followed, it is possible to develop SRL with PA activities. As we have seen in all the studies included in this article, peer assessment activities provide more than just a benefit from the feedback received. Students, as reviewers, can gain a greater understanding of the task and improve their knowledge of the topic they are to assess. The reflection required during the assessment is conducive for them to regulate their own learning.

As a limitation, we can highlight the small number of articles that meet all the established search criteria. If the higher education and/or technology criteria were removed, the number of results would be much higher, and more design recommendations could have been derived. It is therefore recommended that future literature reviews be conducted with a broader search.

Author contributions

BO-R: Data curation, Formal analysis, Methodology, Writing—original draft. JC-G: Conceptualization, Supervision, Validation, Writing—review & editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Acknowledgments

This research has been developed thanks to the research stay carried out at the University of the Basque Country (UPV/EHU), with the research group Elkarrikertuz, in relation to the research project Trayectorias de aprendizajes de jóvenes universitarios: concepciones, estrategias, tecnologías y contextos (Learning trajectories of young university students: conceptions, strategies, technologies and contexts). TRAY-AP. 2020/2023 (Ministry of Science and Innovation, PID2019-108696RB-I00. 2020-2022).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Albano, G., Capuano, N., and Pierri, A. (2017). Adaptive peer grading and formative assessment. J. e-Learn. Knowl. Soc. 13, 1. doi: 10.20368/1971-8829/159

Alqassab, M., Strijbos, J. W., Panadero, E., Ruiz, J. F., Warrens, M., and To, J. (2023). A systematic review of peer assessment design elements. Educ. Psychol. Rev. 35, 18. doi: 10.1007/s10648-023-09723-7

Azevedo, R., and Gašević, D. (2019). Analyzing multimodal multichannel data about self-regulated learning with advanced learning technologies: issues and challenges. Comput. Human Behav. 96, 207–210. doi: 10.1016/j.chb.2019.03.025

Black, P., and Wiliam, D. (2009). Developing the theory of formative assessment. Educ. Assessm. Evaluat. Accountab. 21, 5–31. doi: 10.1007/s11092-008-9068-5

Blau, I., and Shamir-Inbal, T. (2017). Re-designed flipped learning model in an academic course: the role of co-creation and co-regulation. Comput. Educ. 115, 69–81. doi: 10.1016/j.compedu.2017.07.014

Butler, D. L., and Winne, P. H. (1995). Feedback and self-regulated learning: a theoretical synthesis. Rev. Educ. Res. 65, 245–281. doi: 10.3102/00346543065003245

Carless, D., and Boud, D. (2018). The development of student feedback literacy: enabling uptake of feedback. Assessm. Evaluat. Higher Educ. 43, 1315–1325. doi: 10.1080/02602938.2018.1463354

Carless, D., and Winstone, N. (2023). Teacher feedback literacy and its interplay with student feedback literacy. Teach. Higher Educ. 28, 150–163. doi: 10.1080/13562517.2020.1782372

Chou, C. Y., and Zou, N. B. (2020). An analysis of internal and external feedback in self-regulated learning activities mediated by self-regulated learning tools and open learner models. Int. J. Educ. Technol. High. Educ. 17, 1–27. doi: 10.1186/s41239-020-00233-y

Daniel, B. K., and Harland, T. (2017). Higher Education Research Methodology: A Step-by-Step Guide to the Research Process . London: Routledge.

Google Scholar

Fernández-Ferrer, M., and Cano, E. (2019). Feedback experiences to improve the continuous assessment: the use of Twitter as an emerging technology. Educar 55, 437–455. doi: 10.5565/rev/educar.872

Fu, Q. K., Lin, C. J., and Hwang, G. J. (2019). Research trends and applications of technology-supported peer assessment: a review of selected journal publications from 2007 to 2016. J. Comp. Educ. 6, 191–213. doi: 10.1007/s40692-019-00131-x

Gamage, S. H. P. W., Ayres, J. R., and Behrend, M. B. (2022). A systematic review on trends in using Moodle for teaching and learning. Int. J. STEM Educ. 9, 9. doi: 10.1186/s40594-021-00323-x

PubMed Abstract | Crossref Full Text | Google Scholar

García-Jiménez, E. (2015). Assessment of learning: From feedback to self-regulation. The role of technologies. Elect. J. Educ. Res. Assessm. Evaluat. 21, 2. doi: 10.7203/relieve.21.2.7546

García-Jiménez, E., Gallego-Noche, B., and Gómez-Ruiz, M. Á. (2015). “Feedback and self-regulated learning: How feedback can contribute to increase students' autonomy as learners,” in Sustainable Learning in Higher Education: Developing Competencies for the Global Marketplace , eds. M. Peris-Ortiz and J. M. Merigó (Cham: Springer), 113–130.

Goh, C. F., Tan, O. K., Rasli, A., and Choi, S. L. (2019). Engagement in peer review, learner-content interaction and learning outcomes. Int. J. Inf. Learn. Technol . 36, 423–433. doi: 10.1108/ijilt-04-2018-0038

Gros, B., and Cano, E. (2021). Procesos de feedback para fomentar la autorregulación con soporte tecnológico en la educación superior: Revisión sistemática. RIED. Revista Iberoamericana de Educación a Distancia 24, 107–125. doi: 10.5944/ried.24.2.28886

Hooshyar, D., Pedaste, M., Saks, K., Leijen, Ä., Bardone, E., and Wang, M. (2020). Open learner models in supporting self-regulated learning in higher education: A systematic literature review. Comput. Educ. 154, 103878. doi: 10.1016/j.compedu.2020.103878

Hoskins, B., and Fredriksson, U. (2008). Learning to Learn: What is it and Can it Be Measured. New York: JRC Publications Repository. OPOCE.

Hsu, P. L., and Huang, K. H. (2015). “Evaluating online peer assessment as an educational tool for promoting self-regulated learning,” in Multidisciplinary Social Networks Research: Second International Conference. Proceedings 2 (Berlin: Springer Berlin Heidelberg), 161–173.

Ion, G., Barrera-Corominas, A., and Tomàs-Folch, M. (2016). Written peer-feedback to enhance students' current and future learning. Int. J. Educ. Technol. High. Educ. 13, 1–11. doi: 10.1186/s41239-016-0017-y

Janson, A., Ernst, S. J., Lehmann, K., and Leimeister, J. M. (2014). “Creating awareness and reflection in a large-scale is lecture – the application of a peer assessment in a flipped classroom scenario,” in 4th Workshop on Awareness and Reflection in Technology-Enhanced Learning (ARTEL 2014) to be held in the context of EC-TEL 2014 (Graz: ARTEL).

Little, T., Dawson, P., Boud, D., and Tai, J. (2024). Can students' feedback literacy be improved? A scoping review of interventions. Assessm. Eval. Higher Educ. 49, 39–52. doi: 10.1080/02602938.2023.2177613

Liu, E. Z. F., and Lin, S. S. J. (2007). Relationship between peer feedback, cognitive and metacognitive strategies, and achievement in networked peer assessment. Br. J. Educ. Technol. 38, 1122–1125. doi: 10.1111/j.1467-8535.2007.00702.x

Liu, N. F., and Carless, D. (2006). Peer feedback: the learning element of peer assessment. Teach. High. Educ. 11, 279–290. doi: 10.1080/13562510600680582

Lluch, L., and Cano, E. (2023). How to embed SRL in online learning settings? Design through learning analytics and personalized learning design in moodle. J. New Approaches Educ. Res. 12, 120–138. doi: 10.7821/naer.2023.1.1127

Lluch, L., and Portillo, M. C. (2018). La competencia de aprender a aprender en el marco de la educación superior. Revista Iberoamericana de Educación 78, 59–76. doi: 10.35362/rie7823183

Malecka, B., Boud, D., and Carless, D. (2022). Eliciting, processing and enacting feedback: mechanisms for embedding student feedback literacy within the curriculum. Teach. Higher Educ. 27, 908–922. doi: 10.1080/13562517.2020.1754784

Marín, V. I., and Pérez, A. (2016). “Collaborative e-assessment as a strategy for scaffolding self-regulated learning in higher education,” in Formative Assessment, Learning Data Analytics and Gamification (Cambridge: Academic Press), 3–24.

Misiejuk, K., and Wasson, B. (2021). Backward evaluation in peer assessment: a scoping review. Comput. Educ. 175, 104319. doi: 10.1016/j.compedu.2021.104319

Ng, E. M. (2016). Fostering pre-service teachers' self-regulated learning through self-and peer assessment of wiki projects. Comput. Educ. 98, 180–191. doi: 10.1016/j.compedu.2016.03.015

Noroozi, O., Banihashem, S. K., Biemans, H. J., Smits, M., Vervoort, M. T., and Verbaan, C. L. (2023). Design, implementation, and evaluation of an online supported peer feedback module to enhance students' argumentative essay quality. Educ. Inf. Technol. 28, 12757–12784. doi: 10.1007/s10639-023-11683-y

Ortega-Ruipérez, B., and Castellanos-Sánchez, A. (2023). Guidelines for instructional design of courses for the development of self-regulated learning for teachers. S Afr J Educ. 43, 1–13. doi: 10.15700/saje.v43n3a2202

Panadero, E., and Alqassab, M. (2019). An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading. Assess. Eval. High Educ. 44, 1253–1278. doi: 10.1080/02602938.2019.1600186

Popay, J., Roberts, H., Sowden, A., Petticrew, M., Arai, L., Rodgers, M., et al. (2006). “Guidance on the conduct of narrative synthesis in systematic reviews,” in A product from the ESRC Methods Programme. Version 1, b92 (London: Institute for Health Research).

Ramírez,;, M. S, and Lugo, J. (2020). Revisión sistemática de métodos mixtos en el marco de la innovación educativa. Comunicar 65, 9–20. doi: 10.3916/C65-2020-01

Raposo-Rivas, M., and Gallego-Arrufat, M. J. (2016). University students' perceptions of electronic rubric-based assessment. Digit. Educ. Rev. 30, 220–233. doi: 10.1344/der.2016.30.220-233

Reinholz, D. (2016). The assessment cycle: a model for learning through peer assessment. Assess. Eval. High. Educ. 41:301–315. doi: 10.1080/02602938.2015.1008982

Roberts, T. S. (2006). Self, Peer, and Group Assesment in E-Learning . Hershey, PA: Information Science Publishing.

Roman, T. A., Callison, M., Myers, R. D., and Berry, A. H. (2020). Facilitating authentic learning experiences in distance education: Embedding research-based practices into an online peer feedback tool. TechTrends 64, 591–605. doi: 10.1007/s11528-020-00496-2

Swartz, B. (2020). “‘Assessment as Learning' as a tool to prepare engineering students to manage ill-defined problems in industry,” in 2020 IFEES World Engineering Education Forum-Global Engineering Deans Council (WEEF-GEDC) (Cape Town: IEEE), 1–5.

Topping, K. J. (2009). Peer assessment. Theory Pract. 48, 20–27. doi: 10.1080/00405840802577569

Topping, K. J. (2018). Using Peer Assessment to Inspire Reflection and Learning . New York: Routledge.

United Nations (2023). “Goal 4,” in Ensure Inclusive and Equitable Quality Education and Promote Lifelong Learning Opportunities for all. SDGS . Available online at: https://sdgs.un.org/goals/goal4 (accessed January 24, 2024).

Van Helden, G., Van Der Werf, V., Saunders-Smits, G. N., and Specht, M. M. (2023). The use of digital peer assessment in higher education–an umbrella review of literature. IEEE Access 11, 22948–22960. doi: 10.1109/ACCESS.2023.3252914

Van Zundert, M., Sluijsmans, D., and Van Merriënboer, J. (2010). Effective peer assessment processes: research findings and future directions. Learn Instr. 20, 270–279. doi: 10.1016/j.learninstruc.2009.08.004

Veenman, M. V. J., Van Hout-Wolters, H. A. M., and Afflerbach, P. (2006). Metacognition and learning: conceptual and methodological considerations. Metacognit. Learn. 1, 3–14. doi: 10.1007/s11409-006-6893-0

Wang, Y. H. (2020). Design-based research on integrating learning technology tools into higher education classes to achieve active learning. Comput. Educ. 156, 103935. doi: 10.1016/j.compedu.2020.103935

Winne, P. H. (1996). A metacognitive view of individual differences in self-regulated learning. Learn. Individ. Differ. 8, 327–353. doi: 10.1016/S1041-6080(96)90022-9

Winstone, N., Boud, D., Dawson, P., and Heron, M. (2022). From feedback-as-information to feedback-as-process: a linguistic analysis of the feedback literature. Assessm. Evaluat. Higher Educ. 47, 213–230. doi: 10.1080/02602938.2021.1902467

Zhang, Y., and Schunn, C. D. (2023). Self-regulation of peer feedback quality aspects through different dimensions of experience within prior peer feedback assignments. Contemp. Educ. Psychol. 74, 102210. doi: 10.1016/j.cedpsych.2023.102210

Zheng, L., Chen, N. S., Cui, P., and Zhang, X. (2019). A systematic review of technology-supported peer assessment research: an activity theory approach. Int. Rev. Res. Open Dis. 20, 168–191. doi: 10.19173/irrodl.v20i5.4333

Zhu, H., Li, N., Rai, N. K., and Carroll, J. M. (2023). SmartGroup: a tool for small-group learning activities. Future Int. 15, 7. doi: 10.3390/fi15010007

Zimmerman, B. J. (2002). Becoming a self-regulated learner: an overview. Theory Pract. 41, 64–70. doi: 10.1207/s15430421tip4102_2

Keywords: peer assessment, self-regulated learning, higher education, Moodle, technology

Citation: Ortega-Ruipérez B and Correa-Gorospe JM (2024) Peer assessment to promote self-regulated learning with technology in higher education: systematic review for improving course design. Front. Educ. 9:1376505. doi: 10.3389/feduc.2024.1376505

Received: 25 January 2024; Accepted: 26 March 2024; Published: 08 April 2024.

Copyright © 2024 Ortega-Ruipérez and Correa-Gorospe. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Beatriz Ortega-Ruipérez, beatriz.ortega@urjc.es

This article is part of the Research Topic

Fostering self-regulated learning

  • Open access
  • Published: 03 April 2024

Perception, practice, and barriers toward research among pediatric undergraduates: a cross-sectional questionnaire-based survey

Canyang Zhan and Yuanyuan Zhang

BMC Medical Education volume  24 , Article number:  364 ( 2024 ) Cite this article


Scientific research activities are crucial for the development of clinician-scientists. However, little attention has been paid to the current situation of medical research among pediatric medical students in China. This study aims to assess the perceptions, practices and barriers toward medical research of pediatric undergraduates.

This cross-sectional study was conducted among third-, fourth- and fifth-year pediatric students from Zhejiang University School of Medicine in China via an anonymous online questionnaire. Questionnaires were also collected from fifth-year students majoring in other medicine programs [clinical medicine (“5 + 3”) and clinical medicine (5-year)].

The response rate among pediatric undergraduates was 88.3% (68/77). The total sample was 124 students, including 36 majoring in clinical medicine (“5 + 3”) and 20 majoring in clinical medicine (5-year). Most students in pediatrics (“5 + 3”) recognized that research was important, but their participation in scientific research activities was not satisfactory. A total of 51.5%, 35.3% and 36.8% of the pediatric students had participated in research training, research projects and scientific article writing, respectively. Only 4.4% of the pediatric students had contributed to publishing a scientific article, and 14.7% had attended medical congresses; none had given a presentation at a congress. Compared with fifth-year students in the other medicine programs, fifth-year pediatric students participated less frequently in research projects and training. Lack of time, lack of guidance and lack of training were perceived as the main barriers to scientific work, and limited English was an additional prominent barrier for pediatric undergraduates. Pediatric undergraduates preferred to participate in clinical research (80.9%) rather than basic research.

Conclusions

Although pediatric undergraduates recognized the importance of medical research, interest and practices in research still require improvement. Lack of time, lack of guidance, lack of training and limited English were the common barriers to scientific work. Therefore, research training and English improvement were recommended for pediatric undergraduates.


Medical education includes learning basic clinical medical knowledge and cultivating scientific research abilities. Scientific research, an essential part of medical education, is increasingly important, as it can greatly improve medical care [ 1 , 2 ]. Scientific research activities are crucial for the development of clinician-scientists, who have key roles in clinical research and translational medicine. Therefore, medical education increasingly emphasizes the cultivation of scientific research abilities. Strengthening scientific research training helps students develop independent critical thinking, improve their observational skills, and foster problem-solving abilities. It has been suggested that developing undergraduate research benefits the students, the faculty mentors, the university or institution, and eventually society [ 2 , 3 ]. As a result, there is a growing trend to integrate scientific research training into undergraduate medical education, and early exposure to scientific research has been recommended for undergraduate medical students [ 4 , 5 ]. In fact, an international questionnaire study showed that among 1625 responses collected from 38 countries, less than half (42.7%) agreed/strongly agreed that their medical schools provided “sufficient training in medical research” [ 6 ]. Training and practice in medical research among undergraduates are thus not universal. In China, few have paid attention to the current situation of medical research among undergraduates, especially pediatric medical students.

Due to changes in China’s birth policy (the two-child policy in 2016 and the three-child policy in 2021), child health needs are increasing [ 7 ]. The shortage of pediatricians in China is alarming, so numerous policies have been implemented to meet this challenge, including reinstating pediatrics as an independent discipline in medical school enrollment and increasing pediatric enrollment. The number of pediatricians has increased year by year, from 118,500 in 2015 (0.52 pediatricians per 1000 children under the age of 14) to 206,000 in 2021 (0.78 pediatricians per 1000 children under the age of 14). With the increase in pediatric enrollment, pediatric medical education is facing new challenges, and it is urgent to examine how pediatric medical students are trained, including how their scientific research abilities are cultivated [ 8 , 9 ]. However, given the particular background of pediatrics, very little is known about the perceptions, practices and barriers toward medical research among pediatric undergraduates. The purpose of this study was to address this gap by assessing the practices, perceptions and barriers toward medical research of pediatric undergraduates at Zhejiang University. The results can help improve the cultivation of scientific research abilities among pediatric medical students.

The study was conducted from March to April 2023. It was approved by the Ethics Review Committee of the Children’s Hospital of Zhejiang University School of Medicine and was undertaken in accordance with the Declaration of Helsinki. Participants provided written informed consent upon applying to participate in the study.

Study design and setting

This cross-sectional study was conducted via an online questionnaire administered simultaneously to all students. The study aimed to investigate the perceptions, practices and barriers toward research among pediatric undergraduates from Zhejiang University School of Medicine, and to investigate differences in research among undergraduate students from clinical medicine (“5 + 3” integrated program, pediatrics) [pediatrics (“5 + 3”)], clinical medicine (“5 + 3” integrated program) [clinical medicine (“5 + 3”)] and clinical medicine (5-year).

The clinical medicine programs at Zhejiang University School of Medicine (ZUSM) include a 5-year program, a “5 + 3” integrated program, and an 8-year MD program. The clinical medicine (5-year) program is the basis of clinical medicine education; graduates need to complete 3 years of standardized residency training to become doctors. The clinical medicine (“5 + 3”) model combines 5-year undergraduate medical education with 3 years of standardized residency training and postgraduate education. Since 2015, 20 to 30 students interested in pediatrics have been selected each year from second-year undergraduates of clinical medicine (“5 + 3”) to continue their studies in pediatrics (“5 + 3”). In 2019, ZUSM established an independent pediatrics (“5 + 3”) program, which has enrolled 20–30 students every year since.

Participants

All of the third-, fourth-, and fifth-year undergraduate students in pediatrics (“5 + 3”) and some of the fifth-year undergraduate students from clinical medicine (“5 + 3”) and clinical medicine (5-year) who expressed an interest in participating in the study were enrolled.

Data collection

The questionnaire was self-designed after reviewing the literature and consulting senior faculty. For the purpose of testing its clarity and reliability, the questionnaire was pilot tested among 36 undergraduate students. Their feedback was mainly related to the structure of the questionnaire. To address these comments, the questionnaire was modified to reach the final draft, which was distributed to the student sample included in the study. The reliability coefficient was assessed by Cronbach’s alpha, and the validity was evaluated by Kaiser-Meyer-Olkin (KMO).

The questionnaire used in this study comprised four sections:

The first part covered 3 items (gender, grade and major).

The second part examined the participants’ perceptions of medical research, including 5 statements (importance, enhancement of competitiveness, practising thinking ability, solving clinical problems, and being interesting).

The third part examined practices in medical research, including 6 items (participating in a research project, attending research training, writing a paper, publishing a paper, attending an academic conference and presenting at a conference).

The last part assessed the barriers to medical research, including 7 statements.

Perception and barriers toward medical research were evaluated using a five-point Likert scale ranging from 1 to 5 (1 = strongly disagree; 2 = disagree, 3 = uncertain, 4 = agree, 5 = strongly agree).

Statistical analysis

Categorical data are presented as numbers and frequencies. For ease of reporting and analysis, the responses “agree” and “strongly agree” were grouped and reported as agreement, and “disagree” and “strongly disagree” were grouped as disagreement. The chi-square test was used to test differences in the frequency of participation in research practices. Students’ perception scores across grades were compared using Fisher’s exact test, and differences in attitudes between years of study were analyzed by ANOVA or a nonparametric test (Kruskal-Wallis H test). Statistical analysis was performed using IBM SPSS version 26, and P < 0.05 was considered significant.
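As an illustration only, the Likert grouping and chi-square comparison described above can be sketched in Python with SciPy (the study itself used SPSS; the counts below are hypothetical, not the study’s data):

```python
from scipy.stats import chi2_contingency

def collapse_likert(score: int) -> str:
    """Collapse a 5-point Likert score into the three reported categories."""
    if score >= 4:          # "agree" (4) and "strongly agree" (5)
        return "agreement"
    if score == 3:          # "uncertain"
        return "uncertain"
    return "disagreement"   # "disagree" (2) and "strongly disagree" (1)

# Hypothetical 2x2 contingency table: rows = program, columns =
# (participated in a research project, did not participate).
table = [
    [9, 18],   # pediatrics ("5 + 3"), fifth year (made-up counts)
    [24, 12],  # clinical medicine ("5 + 3") (made-up counts)
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```

The same call generalizes to larger tables (e.g. comparing three grades at once); only the row counts change.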

The reliability coefficient of the questionnaire, assessed by Cronbach’s alpha, was 0.73 for perception and 0.78 for barriers. The KMO was 0.80 for perception (Bartlett’s sphericity test: χ2 = 200.4, p < 0.001) and 0.73 for barriers (Bartlett’s sphericity test: χ2 = 278.4, p < 0.001), indicating the appropriateness of factor analysis. Factor analysis was carried out using principal component analysis with varimax rotation. For perception, a single factor explained 58.2% of the variance; for barriers, a two-factor solution explained 60.2% of the variance.

The response rate was 79.2% (19/24) among third-year, 88% (22/25) among fourth-year and 96.4% (27/28) among fifth-year students in pediatrics (“5 + 3”), for a total response rate of 88.3% (68/77). The numbers of fifth-year students majoring in clinical medicine (“5 + 3”) and clinical medicine (5-year) were 36 and 20, respectively. Thus, a total of 124 students completed the questionnaire; approximately 46% were male and 54% were female.
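The response-rate arithmetic reported above can be reproduced in a few lines (figures taken from the text; a check, not new data):

```python
# Respondents and invitees per grade of pediatrics ("5 + 3"), from the text.
responded = {"third": 19, "fourth": 22, "fifth": 27}
invited = {"third": 24, "fourth": 25, "fifth": 28}

# Per-grade and overall response rates, rounded to one decimal place.
rates = {grade: round(100 * n / invited[grade], 1) for grade, n in responded.items()}
total_rate = round(100 * sum(responded.values()) / sum(invited.values()), 1)
print(rates, total_rate)  # per-grade rates and the 88.3% overall rate
```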

Perception regarding scientific research among the students majoring in pediatrics (“5 + 3”)

The majority of students in pediatrics (“5 + 3”) recognized that research was important (92.6%), such as increasing competitiveness, solving clinical problems and improving thinking (Fig.  1 ). Approximately half of the students in pediatrics (“5 + 3”) were interested in the research.

figure 1

Perception regarding scientific research among the students majoring in pediatrics

Among the third-, fourth-, and fifth-year students in pediatrics (“5 + 3”), there was a significant difference in the effect of research on thinking ability (Table  1 ). A stronger understanding of the importance of research for thinking abilities was found in students from the fifth year.

Comparing the perception of medical research among the fifth-year students from the different medicine programs, there was a significant difference in the interest in research (Table  2 ). The fifth-year undergraduates from clinical medicine (5-year) received the highest score for interest in scientific research, followed by pediatrics (“5 + 3”).

Practices regarding scientific research among students majoring in pediatrics (“5 + 3”)

More than half of the students in pediatrics (“5 + 3”) participated in research training. Approximately 36.8% were involved in writing scientific articles, and 35.3% participated in research projects (Table 3). Only 4.4% of the students in pediatrics (“5 + 3”) contributed to publishing a scientific article, and 14.7% had attended medical congresses. However, none of the students had given a presentation at a congress.

A statistically significant difference was observed among different grades in the pediatrics (“5 + 3”) program, with fifth-year students having a much higher rate of participation in conferences. However, no significant differences were observed in other forms of medical research practices.

When compared with fifth-year students from the other programs (clinical medicine “5 + 3” or 5-year), the students in pediatrics (“5 + 3”) had a lower rate of participation in research projects (Table 4). The rate of participation in research training among the pediatric students was also lower than that of clinical medicine (5-year) students (44.44% vs. 75%). There were no significant differences in other research practices, such as writing articles and attending congresses.

Barriers regarding scientific research among the students majoring in pediatrics (“5 + 3”)

The most common barriers to research work for pediatric students were lack of training (85.3%), lack of time (83.9%), and lack of mentorship (82.4%).

However, the top three barriers to research work among fifth-year pediatric students were lack of training (96.3%), limited English (88.89%) and lack of time (88.89%). The barrier of “lack of training” became increasingly apparent with grade and was significantly more pronounced in fifth-year pediatric students than in the other grades (Table 5). The other barriers showed no significant differences among the three grades of the pediatrics (“5 + 3”) program.

When compared with fifth-year students from other programs (clinical medicine “5 + 3” or 5-year), the rate of agreement about the barrier of “limited English” was significantly higher in fifth-year students from the pediatrics (“5 + 3”) program. There were no significant differences in other barriers among fifth-year students from different majors (Table  6 ).

Types of future research activities that the students majoring in pediatrics (“5 + 3”) are willing to be involved in

A total of 88.2% of students in pediatrics (“5 + 3”) wanted to participate in scientific research training. When asked about the type of future scientific research activities, 80.9% of students wanted to participate in clinical research, and only 19.1% wanted to be involved in basic research. There was no significant difference among the different grades of the pediatrics (“5 + 3”) program (Fig. 2A).

figure 2

Types of research activities that students majoring in pediatrics are willing to be involved with in the future ( A ). Types of research activities that the students from different programs are willing to be involved with in the future ( B ). When compared with students in clinical medicine (“5 + 3”), fifth-year students in pediatrics (“5 + 3”) were significantly less likely to participate in basic research (* P  = 0.001)

Compared with students in clinical medicine (“5 + 3”), fifth-year students in pediatrics (“5 + 3”) were significantly less likely to participate in basic research (Fig.  2 B).

In China, to solve the shortage of pediatricians, pediatric programs have resumed in some medical schools, including Zhejiang University, in recent years. In this study, we focused on the perceptions, practices and barriers to scientific research in pediatric undergraduates from Zhejiang University.

With global progress, more research is required to advance knowledge and innovation in all fields. Likewise, research skills are now highly important for medical practitioners, and medical students are encouraged to take an active part in scientific research to prepare for today’s knowledge-driven world [ 2 ]. In the current study, we found an overall positive perception of scientific research among pediatric undergraduates: more than 90% of pediatric students agreed (“strongly agree” and “agree”) that scientific research was important, as it could make them more competitive and improve their thinking.

Although the students had a positive perception of medical research, their practice of conducting research remained unsatisfactory. Compared with the fifth-year undergraduates from clinical medicine (“5 + 3”) (66.67%) and clinical medicine (5-year) (75%), only 33.33% of the fifth-year undergraduates in pediatrics (“5 + 3”) had participated in scientific research projects. The number of paper publications was very small (pediatrics (“5 + 3”): 0 in the third year, 4.5% in the fourth year and 7.4% in the fifth year). This was substantially lower than the publication rate of final-year students in the United States (46.5%) and Australia (roughly one-third) [ 10 , 11 ]. In another study, in Romania, 31% of fifth-year students declared that they had prepared a scientific presentation for a medical congress at least once [ 12 ], whereas none of the students in our study had presented a paper in a scientific forum. A study in India also found that only 5% of undergraduate students had experience presenting papers in scientific forums and only 5.6% had published [ 13 ]. As part of the curriculum, some Indian universities require postgraduates to present papers and submit manuscripts for publication. Nevertheless, undergraduates’ scientific research practice is still relatively poor. Lack of time, lack of guidance and lack of training for research careers were found to be the major obstacles to medical research for both pediatric students and others, consistent with previous reports [ 5 , 14 , 15 ]. A questionnaire among residents also found that lack of time was a critical problem for scientific research [ 16 ]. There is no established practice for overcoming this difficulty; in the literature, it is usually recommended that scientific research training be integrated into curricular requirements for undergraduates or into residency programs for residents [ 7 , 14 , 17 , 18 ]. An increasing number of medical schools have individual projects as a component of their curriculum or mandatory medical research projects to develop research competencies [ 19 , 20 ].

Interestingly, among fifth-year pediatric undergraduates (“5 + 3”), limited English was found to be one of the most common barriers, and this barrier became more pronounced as the grade increased. We speculate that this is related to growing awareness of the importance of scientific research and greater participation in scientific research activities, which increase the demand for reading English literature and writing English articles. Furthermore, the limited-English barrier was more pronounced for pediatric students than for students from clinical medicine (“5 + 3”) and clinical medicine (5-year); they worried about academic English. Horwitz et al. first proposed the concept of “foreign language anxiety” [ 21 ]. Deng and Zhou explored medical students’ medical English anxiety in Sichuan, China, and found that 85.2% of the students surveyed suffered moderate or above medical English anxiety [ 22 ]. In our questionnaire, 88.89% of the fifth-year pediatric students believed that limited English was one of the most important barriers to scientific research. Currently, English is the chief language of communication in medical science, including correspondence, conferences, writing scientific articles, and reading the literature. Ma Y noted that medical English should be the most important component of college English teaching for medical students [ 23 ]. At Zhejiang University, all students, including those majoring in pediatrics (“5 + 3”), clinical medicine (“5 + 3”) and clinical medicine (5-year), take a medical English course during the undergraduate period. However, this course alone cannot satisfy the demands of scientific research, such as reading English literature, writing English papers and giving oral presentations in English.
To address this barrier, we suggest identifying pediatric students’ requirements for medical English learning and offering more courses on medical English or English writing training. Furthermore, undergraduates should be encouraged to participate in local, regional or national conferences conducted in Chinese rather than English, which can increase their interest in participating in scientific research.

Most of the pediatric students tended to choose clinical research, while only 19.1% wanted to engage in basic research. The proportion of fifth-year students in pediatrics (“5 + 3”) choosing basic research was much lower than that of students from the clinical medicine (“5 + 3”) program. We speculate that, compared with doctors from other clinical departments in China, pediatricians usually carry a heavier clinical workload and have relatively little scientific practice, so they are more likely to focus on clinical research. Students in pediatrics might therefore not receive sufficient scientific guidance from their clinician teachers compared with those from other medicine programs. Based on these data, the Pediatric College could conduct more scientific research training directed at clinical research, such as the design, conduct and administration of clinical trials. A simulation-based clinical research curriculum is considered a better approach to training clinician-scientists than traditional clinical research teaching [ 24 ]. On the other hand, more might need to be done to foster pediatric undergraduates’ interest in basic research.

The major limitation of the present study is the small sample size: only 20 to 30 students have been enrolled in the pediatrics ("5 + 3") program of ZUSM each year. Multicenter studies across multiple medical schools would therefore better characterize the perception, practice, and barriers of medical research among pediatric undergraduates. Even so, the findings of this study indicate that lack of time, lack of guidance, lack of training, and limited English may be common barriers to scientific work for pediatric undergraduates. Furthermore, a questionnaire for teachers and administrators will be conducted in the future to inform concrete solutions.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Abbreviations

ZUSM: Zhejiang University School of Medicine

KMO: Kaiser-Meyer-Olkin

References

1. Hanney SR, González-Block MA. Health research improves healthcare: now we have the evidence and the chance to help the WHO spread such benefits globally. Health Res Policy Syst. 2015;13:12.
2. Adebisi YA. Undergraduate students’ involvement in research: values, benefits, barriers and recommendations. Ann Med Surg (Lond). 2022;81:104384.
3. Petrella JK, Jung AP. Undergraduate research: importance, benefits, and challenges. Int J Exerc Sci. 2008;1(3):91–5.
4. Stone C, Dogbey GY, Klenzak S, Van Fossen K, Tan B, Brannan GD. Contemporary global perspectives of medical students on research during undergraduate medical education: a systematic literature review. Med Educ Online. 2018;23(1):1537430.
5. El Achi D, Al Hakim L, Makki M, Mokaddem M, Khalil PA, Kaafarani BR, et al. Perception, attitude, practice and barriers towards medical research among undergraduate students. BMC Med Educ. 2020;20(1):195.
6. Funston G, Piper RJ, Connell C, Foden P, Young AM, O’Neill P. Medical student perceptions of research and research-orientated careers: an international questionnaire study. Med Teach. 2016;38(10):1041–8.
7. Tatum M. China’s three-child policy. Lancet. 2021;397:2238.
8. Rivkees SA, Kelly M, Lodish M, Weiner D. The Pediatric Medical Student Research Forum: fostering interest in pediatric research. J Pediatr. 2017;188:3–4.
9. Barrett KJ, Cooley TM, Schwartz AL, Hostetter MK, Clapp DW, Permar SR. Addressing gaps in pediatric scientist development: the department chair view of 2 AMSPDC-sponsored programs. J Pediatr. 2020;222:7–e124.
10. Jacobs CD, Cross PC. The value of medical student research: the experience at Stanford University School of Medicine. Med Educ. 1995;29(5):342–6.
11. Muhandiramge J, Vu T, Wallace MJ, Segelov E. The experiences, attitudes and understanding of research amongst medical students at an Australian medical school. BMC Med Educ. 2021;21(1):267.
12. Pop AI, Lotrean LM, Buzoianu AD, Suciu SM, Florea M. Attitudes and practices regarding research among Romanian medical undergraduate students. Int J Environ Res Public Health. 2022;19(3):1872.
13. Pallamparthy S, Basavareddy A. Knowledge, attitude, practice, and barriers toward research among medical students: a cross-sectional questionnaire-based survey. Perspect Clin Res. 2019;10:73–8.
14. Assar A, Matar SG, Hasabo EA, Elsayed SM, Zaazouee MS, Hamdallah A, et al. Knowledge, attitudes, practices and perceived barriers towards research in undergraduate medical students of six Arab countries. BMC Med Educ. 2022;22(1):44.
15. Kharraz R, Hamadah R, AlFawaz D, Attasi J, Obeidat AS, Alkattan W, et al. Perceived barriers towards participation in undergraduate research activities among medical students at Alfaisal University–College of Medicine: a Saudi Arabian perspective. Med Teach. 2016;38(Suppl 1):S12–8.
16. Fournier I, Stephenson K, Fakhry N, Jia H, Sampathkumar R, Lechien JR, et al. Barriers to research among residents in otolaryngology–head & neck surgery around the world. Eur Ann Otorhinolaryngol Head Neck Dis. 2019;136(3S):S3–7.
17. Abu-Zaid A, Alkattan K. Integration of scientific research training into undergraduate medical education: a reminder call. Med Educ Online. 2013;18:22832.
18. Eyigör H, Kara CO. Otolaryngology residents’ attitudes, experiences, and barriers regarding medical research. Turk Arch Otorhinolaryngol. 2021;59(3):215–22.
19. Möller R, Shoshan M. Medical students’ research productivity and career preferences; a 2-year prospective follow-up study. BMC Med Educ. 2017;17(1):51.
20. Laidlaw A, Aiton J, Struthers J, Guild S. Developing research skills in medical students: AMEE Guide 69. Med Teach. 2012;34(9):e754–71.
21. Horwitz EK, Horwitz MBH, Cope J. Foreign language classroom anxiety. Mod Lang J. 1986;70(2):125–32.
22. Deng J, Zhou K, Al-Shaibani GKS. Medical English anxiety patterns among medical students in Sichuan, China. Front Psychol. 2022;13:895117.
23. Ma Y. Exploring medical English curriculum and teaching from the perspective of ESP: a case study of medical English teaching. Technol Enhan Lang Educ. 2009;125(1):60–3.
24. Yan S, Huang Q, Huang J, Wang Y, Li X, Wang Y, et al. Clinical research capability enhanced for medical undergraduates: an innovative simulation-based clinical research curriculum development. BMC Med Educ. 2022;22(1):543.


Acknowledgements

The authors thank all the students who participated as volunteers for their contribution to the study.

Funding

This work was supported by grants from the “14th Five-Year Plan” teaching reform project of an ordinary undergraduate university in Zhejiang Province (jg20220041) and the project of graduate education research of Zhejiang University (20210317).

Author information

Authors and affiliations

Department of Neonatology, Children’s Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China

Canyang Zhan

Department of Pulmonology, Children’s Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China

Yuanyuan Zhang


Contributions

CZ designed and supervised the study. CZ and YZ wrote the manuscript and collected and analyzed the questionnaire data. All authors read and approved the manuscript prior to submission.

Corresponding author

Correspondence to Yuanyuan Zhang .

Ethics declarations

Ethics approval and consent to participate

Our study was approved by the Ethics Review Committee of the Children’s Hospital of Zhejiang University School of Medicine and was conducted in accordance with the Declaration of Helsinki. Written informed consent was obtained from each participant upon enrollment in the study.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Zhan, C., Zhang, Y. Perception, practice, and barriers toward research among pediatric undergraduates: a cross-sectional questionnaire-based survey. BMC Med Educ 24 , 364 (2024). https://doi.org/10.1186/s12909-024-05361-x


Received: 14 October 2023
Accepted: 27 March 2024
Published: 03 April 2024
DOI: https://doi.org/10.1186/s12909-024-05361-x


Keywords

  • Undergraduate research
  • Medical research

BMC Medical Education

ISSN: 1472-6920
