The peer review process

The peer review process can be broadly summarized into 10 steps, although these steps can vary slightly between journals. Explore what's involved below.

Editor Feedback: “Reviewers should remember that they are representing the readers of the journal. Will the readers of this particular journal find this informative and useful?”

Peer Review Process

1. Submission of Paper

The corresponding or submitting author submits the paper to the journal. This is usually via an online system such as ScholarOne Manuscripts. Occasionally, journals may accept submissions by email.

2. Editorial Office Assessment

The Editorial Office checks that the paper adheres to the requirements described in the journal’s Author Guidelines. The quality of the paper is not assessed at this point.

3. Appraisal by the Editor-in-Chief (EIC)

The EIC assesses the paper, considering its scope, originality, and merits, and may reject it at this stage.

4. EIC Assigns an Associate Editor (AE)

Some journals have Associate Editors (or equivalent) who handle the peer review. If so, one is assigned at this stage.

5. Invitation to Reviewers

The handling editor sends invitations to individuals he or she believes would be appropriate reviewers. As responses are received, further invitations are issued, if necessary, until the required number of reviewers is secured; commonly this is two, though there is some variation between journals.

6. Response to Invitations

Potential reviewers consider the invitation against their own expertise, conflicts of interest and availability. They then accept or decline the invitation to review. If possible, when declining, they might also suggest alternative reviewers.

7. Review is Conducted

The reviewer sets time aside to read the paper several times. The first read is used to form an initial impression of the work. If major problems are found at this stage, the reviewer may feel comfortable recommending rejection without further work. Otherwise, they will read the paper several more times, taking notes to build a detailed point-by-point review. The review is then submitted to the journal, with the reviewer's recommendation (e.g. to revise, accept or reject the paper).

8. Journal Evaluates the Reviews

The handling editor considers all the returned reviews before making a decision. If the reviews differ widely, the editor may invite an additional reviewer to obtain an extra opinion before deciding.

9. The Decision is Communicated

The editor sends a decision email to the author, including any relevant reviewer comments. Comments remain anonymous if the journal follows a single-anonymous or double-anonymous peer review model. Journals following an open or transparent peer review model share the identities of the reviewers with the author(s).

10. Next Steps


If accepted, the paper is sent to production. If the article is rejected or sent back for either major or minor revision, the handling editor should include constructive comments from the reviewers to help the author improve the article. At this point, reviewers should also be sent an email or letter letting them know the outcome of their review. If the paper was sent back for revision, the reviewers should expect to receive a new version, unless they have opted out of further participation. However, where only minor changes were requested, this follow-up review might be done by the handling editor.

How to Write an Effective Journal Article Review

  • First Online: 01 January 2012


  • Dennis Drotar PhD,
  • Yelena P. Wu PhD &
  • Jennifer M. Rohan MA


The experience of reviewing manuscripts for scientific journals is an important one in professional development. Reviewing articles gives trainees familiarity with the peer review process in ways that facilitate their writing. For example, reviewing manuscripts can help students and early career psychologists understand what reviewers and editors look for in a peer-reviewed article, and ways to critique and enhance a manuscript based on peer review. Review experience can also give early career faculty early entry into, and experience serving as, a reviewer for a professional journal. The experience of journal reviews also gives students a broader connection to the field of science in areas of their primary professional interest. At the same time, reviewing articles for scientific journals poses a number of difficult challenges (see Hyman, 1995; Drotar, 2000a, 2009a, 2009b, 2009c, 2009d, 2010, 2011; Lovejoy, Revenson, & France, 2011). The purpose of this chapter is to provide an introduction to the review process and give step-by-step guidance in conducting reviews for scientific journals. Interested readers might wish to read Lovejoy et al.'s (2011) primer for manuscript review, which contains annotated examples of reviews and an editor's decision letter.


References

American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: American Psychological Association.

American Psychological Association Science Student Council. (2007). A graduate students' guide to involvement in the peer review process. Retrieved July 15, 2011, from http://www.apa.org/research/publishing/

APA Publications and Communications Working Group on Journal Article Reporting Standards. (2008). Reporting standards for research in psychology. Why do we need them? What do they need to be? American Psychologist, 63, 839–851.

Cumming, G., & Finch, S. (2008). Putting research in context: Understanding confidence intervals from one or more studies. Journal of Pediatric Psychology, 34(9), 903–916.

Drotar, D. (2000a). Reviewing and editing manuscripts for scientific journals. In D. Drotar (Ed.), Handbook of research methods in clinical child and pediatric psychology (pp. 409–425). New York: Kluwer Academic/Plenum.

Drotar, D. (2000b). Training professional psychologists to write and publish: The utility of a writer's workshop seminar. Professional Psychology: Research and Practice, 31, 453–457.

Drotar, D. (2009a). Editorial: How to write effective reviews for the Journal of Pediatric Psychology. Journal of Pediatric Psychology, 34, 113–117.

Drotar, D. (2009b). Editorial: Thoughts on improving the quality of manuscripts submitted to the Journal of Pediatric Psychology: How to write a convincing introduction. Journal of Pediatric Psychology, 34, 1–3.

Drotar, D. (2009c). Editorial: How to report methods in the Journal of Pediatric Psychology. Journal of Pediatric Psychology, 34, 227–230.

Drotar, D. (2009d). How to write an effective results and discussion section for the Journal of Pediatric Psychology. Journal of Pediatric Psychology, 34, 339–343.

Drotar, D. (2010). Editorial: Guidance for submission and review of multiple publications derived from the same study. Journal of Pediatric Psychology, 35, 225–230.

Drotar, D. (2011). Editorial: How to write more effective, user friendly reviews for the Journal of Pediatric Psychology. Journal of Pediatric Psychology, 36, 1–3.

Durlak, J. A. (2009). How to select, calculate, and interpret effect sizes. Journal of Pediatric Psychology, 34, 917–928.

Fiske, D. W., & Fogg, L. (1990). But the reviewers are making different criticisms of my paper: Diversity and uniqueness in reviewer comments. American Psychologist, 45, 591–598.

Holmbeck, G. N., & Devine, K. A. (2009). Editorial: An author's checklist for measure development and validation manuscripts. Journal of Pediatric Psychology, 34(7), 691–696.

Hyman, R. (1995). How to critique a published article. Psychological Bulletin, 118, 178–182.

Journal of Pediatric Psychology mentoring policy & suggestions for conducting mentored reviews. (2009). Retrieved July 15, 2011, from http://www.oxfordjournals.org/our_journals/jpepsy/for_authors/msprep_submission.html

Lovejoy, T. I., Revenson, T. A., & France, C. R. (2011). Reviewing manuscripts for peer-review journals: A primer for novice and seasoned reviewers. Annals of Behavioral Medicine, 42, 1–13.

Palermo, T. M. (2010). Editorial: Exploring ethical issues in peer review for the Journal of Pediatric Psychology. Journal of Pediatric Psychology, 35(3), 221–224.

Routh, D. K. (1995). Confessions of an editor, including mistakes I have made. Journal of Clinical Child Psychology, 24, 236–241.

Sternberg, R. J. (Ed.). (2006). Reviewing scientific works in psychology. Washington, DC: American Psychological Association.

Stinson, J. N., McGrath, P. J., & Yamada, J. T. (2003). Clinical trials in the Journal of Pediatric Psychology: Applying the CONSORT statement. Journal of Pediatric Psychology, 28, 159–167.

Weller, A. C. (2001). Editorial peer review: Its strengths and weaknesses. Medford, NY: Information Today, Inc.

Wu, Y. P., Nassau, J. H., & Drotar, D. (2011). Mentoring reviewers: The Journal of Pediatric Psychology experience. Journal of Pediatric Psychology, 36, 258–264.


Author information

Authors and Affiliations

Division of Behavioral Medicine and Clinical Psychology, Cincinnati Children’s Hospital Medical Center, MLC 7039, 3333 Burnet Avenue, Cincinnati, OH, 45229-3039, USA

Dennis Drotar PhD

Division of Behavioral Medicine and Clinical Psychology, Cincinnati Children’s Hospital Medical Center, University of Cincinnati, Cincinnati, OH, 45229-3039, USA

Yelena P. Wu PhD

Division of Behavioral Medicine and Clinical Psychology, Department of Psychology, Cincinnati Children’s Hospital Medical Center, University of Cincinnati, Cincinnati, OH, 45229-3039, USA

Jennifer M. Rohan MA


Corresponding author

Correspondence to Dennis Drotar, PhD.

Editor information

Editors and Affiliations

Department of Psychology, University of North Carolina at Chapel Hill, Davie Hall, Campus Box 3270, Chapel Hill, NC 27599-3270, USA

Mitchell J. Prinstein


Copyright information

© 2013 Springer Science+Business Media New York

About this chapter

Drotar, D., Wu, Y. P., & Rohan, J. M. (2013). How to Write an Effective Journal Article Review. In M. Prinstein (Ed.), The Portable Mentor. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-3994-3_11


DOI: https://doi.org/10.1007/978-1-4614-3994-3_11

Published: 25 July 2012

Publisher Name: Springer, New York, NY

Print ISBN: 978-1-4614-3993-6

Online ISBN: 978-1-4614-3994-3

eBook Packages: Behavioral Science, Behavioral Science and Psychology (R0)




  • Open access
  • Published: 12 November 2021

Demystifying the process of scholarly peer-review: an autoethnographic investigation of feedback literacy of two award-winning peer reviewers

  • Sin Wang Chong, ORCID: orcid.org/0000-0002-4519-0544 &
  • Shannon Mason

Humanities and Social Sciences Communications, volume 8, Article number: 266 (2021)


  • Language and linguistics

A Correction to this article was published on 26 November 2021


Peer reviewers serve a vital role in assessing the value of published scholarship and improving the quality of submitted manuscripts. To provide more appropriate and systematic support to peer reviewers, especially those new to the role, this study documents the feedback practices and experiences of two award-winning peer reviewers in the field of education. Adopting a conceptual framework of feedback literacy and an autoethnographic-ecological lens, the study sheds light on how the two authors design opportunities for feedback uptake, navigate responsibilities, reflect on their feedback experiences, and understand journal standards. Informed by ecological systems theory, the reflective narratives reveal how the authors unravel the five layers of contextual influence on their feedback practices as peer reviewers (micro, meso, exo, macro, chrono). Implications for peer reviewer support are discussed and future research directions are proposed.


Introduction

The peer-review process is the longstanding method by which research quality is assured. On the one hand, it aims to assess the quality of a manuscript, with the desired outcome being (in theory if not always in practice) that only research that has been conducted according to methodological and ethical principles be published in reputable journals and other dissemination outlets (Starck, 2017). On the other hand, it is seen as an opportunity to improve the quality of manuscripts, as peers identify errors and areas of weakness, and offer suggestions for improvement (Kelly et al., 2014). Whether or not peer review is actually successful in these areas is open to considerable debate, but in any case it is the "critical juncture where scientific work is accepted for publication or rejected" (Heesen and Bright, 2020, p. 2). In contemporary academia, where higher education systems across the world are contending with decreasing levels of public funding, there is increasing pressure on researchers to be 'productive', which is largely measured by the number of papers published and of funding grants awarded (Kandiko, 2010), both of which involve peer review.

Researchers are generally invited to review manuscripts once they have established themselves in their disciplinary field through publication of their own research. This means that for early career researchers (ECRs), their first exposure to the peer-review process is generally as an author. These early experiences influence the ways ECRs themselves conduct peer review. However, negative experiences can have a profound and lasting impact on researchers' professional identity. This appears to be particularly true when feedback is perceived to be unfair, with feedback tone largely shaping author experience (Horn, 2016). In most fields, reviewers remain anonymous to ensure freedom to give honest and critical feedback, although there are concerns that a lack of accountability can result in 'bad' and 'rude' reviews (Mavrogenis et al., 2020). Such reviews can negatively affect all researchers, but disproportionately affect underrepresented researchers (Silbiger and Stubler, 2019). Regardless of career phase, no one is served well by unprofessional reviews, which contribute to the ongoing problem of bullying and toxicity prevalent in academia, with serious implications for the health and well-being of researchers (Keashly and Neuman, 2010).

Because of its position as the central process through which research is vetted and refined, peer review should play a similarly central role in researcher training, although it rarely features. In a survey of almost 3000 researchers, Warne (2016) found that support for reviewers was mostly received "in the form of journal guidelines or informally as advice from supervisors or colleagues" (p. 41), with very few engaging in formal training. Among more than 1600 reviewers of 41 nursing journals, only one third received any form of support (Freda et al., 2009), with participants across both of these studies calling for further training. In light of the lack of widespread formal training, most researchers learn 'on the job', and little is known about how researchers develop their knowledge and skills in providing effective assessment feedback to their peers. In this study, we undertake such an investigation by drawing on our first-hand experiences. Through a collaborative and reflective process, we look to identify the forms and forces of our feedback literacy development, and seek to answer the following research questions:

What are the exhibited features of peer reviewer feedback literacy?

What are the forces at work that affect the development of feedback literacy?

Literature review

Conceptualisation of feedback literacy

The notion of feedback literacy originates from the research base of new literacy studies, which examines 'literacies' from a sociocultural perspective (Gee, 1999; Street, 1997). In the educational context, one of the most notable types of literacy is assessment literacy (Stiggins, 1999). Traditionally, assessment literacy is perceived as one of the indispensable qualities of a successful educator, referring to the skills and knowledge teachers need "to deal with the new world of assessment" (Fulcher, 2012, p. 115). Following this line of teacher-oriented assessment literacy, recent attempts have been made to develop more subject-specific assessment literacy constructs (e.g., Levi and Inbar-Lourie, 2019). Given the rise of student-centred approaches and formative assessment in higher education, researchers began to make the case for students to be 'assessment literate', comprising such knowledge and skills as an understanding of assessment standards, the relationship between assessment and learning, peer assessment, and self-assessment skills (Price et al., 2012). Feedback literacy, as argued by Winstone and Carless (2019), is essentially a subset of assessment literacy because "part of learning through assessment is using feedback to calibrate evaluative judgement" (p. 24). The notion of feedback literacy was first extensively discussed by Sutton (2012) and more recently by Carless and Boud (2018). Focusing on students' feedback literacy, Sutton (2012) conceptualised feedback literacy as a three-dimensional construct: an epistemological dimension (What do I know about feedback?), an ontological dimension (How capable am I of understanding feedback?), and a practical dimension (How can I engage with feedback?).
In close alignment with Sutton's construct, the seminal conceptual paper by Carless and Boud (2018) further illustrated the four distinctive abilities of feedback literate students: the abilities to (1) understand the formative role of feedback, (2) make informed and accurate evaluative judgements against standards, (3) manage emotions, especially in the face of critical and harsh feedback, and (4) take action based on feedback. Since the publication of Carless and Boud (2018), student and teacher feedback literacy has been in the limelight of assessment research in higher education (e.g., Chong, 2021b; Carless and Winstone, 2020). These conceptual contributions expand the notion of feedback literacy to consider not only the manifestations of various forms of effective student engagement with feedback but also the confluence of contexts and individual differences involved in developing students' feedback literacy, drawing upon various theoretical perspectives (e.g., ecological systems theory; sociomaterial perspective) and disciplines (e.g., business and human resource management). Others address practicalities of feedback literacy; for example, how teachers and students can work in synergy to develop feedback literacy (Carless and Winstone, 2020) and ways to maximise student engagement with feedback at a curricular level (Malecka et al., 2020). In addition to conceptualisation, advancement of the notion of feedback literacy is evident in the recent proliferation of primary studies. The majority of these studies are conducted in the field of higher education, focusing mostly on student feedback literacy in classrooms (e.g., Molloy et al., 2019; Winstone et al., 2019) and in the workplace (Noble et al., 2020), with a handful focused on teacher feedback literacy (e.g., Xu and Carless, 2016).
Some studies focusing on student feedback literacy adopt a qualitative case study design to delve into individual students' experience of engaging with various forms of feedback. For example, Han and Xu (2019) analysed the feedback literacy profiles of two Chinese undergraduate students. Findings uncovered students' resistance to engaging with feedback, which relates to the misalignment between the cognitive, social, and affective components of individual students' feedback literacy profiles. Others reported interventions designed to facilitate students' uptake of feedback, focusing on their effectiveness and students' perceptions. Specifically, the affordances and constraints of educational technology, such as electronic feedback portfolios (Chong, 2019; Winstone et al., 2019), have been investigated. Of particular interest is a recent study by Noble et al. (2020), which looked into student feedback literacy in the workplace by probing the perceptions of a group of Australian healthcare students towards a feedback literacy training programme conducted prior to their placement. There is, however, a dearth of primary research in other areas where the elicitation, processing, and enactment of feedback are vital; for instance, academics' feedback literacy. In the 'publish or perish' culture of higher education, academics, especially ECRs, face immense pressure to publish in top-tiered journals in their fields and face the daunting peer-review process, while juggling teaching and administrative responsibilities (Hollywood et al., 2019; Tynan and Garbett, 2007). Taking up the roles of authors and reviewers, researchers have to possess the capacity and disposition to engage meaningfully with feedback provided by peer reviewers and to provide constructive comments to authors.
Similar to students, researchers have to learn how to manage their emotions in the face of critical feedback, to understand the formative value of feedback, and to make informed judgements about the quality of feedback (Gravett et al., 2019). At the same time, the feedback literacy of academics also resembles that of teachers. When considering the kind of feedback given to authors, academics who serve as peer reviewers have to (1) design opportunities for feedback uptake, (2) maintain a professional and supportive relationship with authors, and (3) take into account the practical dimension of giving feedback (e.g., how to strike a balance between quality of feedback and the time constraints imposed by multiple commitments) (Carless and Winstone, 2020). To address the above, one aim of the present study is to expand the application of feedback literacy as an analytical lens to areas outside the classroom, namely scholarly peer-review activities in academia, by presenting, analysing, and synthesising the personal experiences of the authors as successful peer reviewers for academic journals.

Conceptual framework

We adopt a feedback literacy of peer reviewers framework (Chong, 2021a) as an analytical lens to analyse, systemise, and synthesise our own experiences and practices as scholarly peer reviewers (Fig. 1). This two-tier framework includes a dimension on the manifestation of feedback literacy, which categorises five features of feedback literacy of peer reviewers, informed by the student and teacher feedback literacy frameworks of Carless and Boud (2018) and Carless and Winstone (2020). When engaging in scholarly peer review, reviewers are expected to provide constructive and formative feedback, which authors can act on in their revisions (engineer feedback uptake). In addition, peer reviewers, who are usually full-time researchers or academics, lead hectic professional lives; thus, when writing reviewers' reports, it is important for them to consider practically and realistically the time they can invest and how their varying degrees of commitment may affect the feedback they provide (navigate responsibilities). Furthermore, peer reviewers should consider the emotional and relational influences their feedback exerts on authors. It is crucial for feedback to be not only informative but also supportive and professional (Chong, 2018) (maintain relationships). Equally important, it is imperative for peer reviewers to critically reflect on their own experience of the scholarly peer-review process, including their experience of receiving and giving feedback to academic peers, as well as the ways authors and editors respond to their feedback (reflect on feedback experience). Lastly, acting as gatekeepers who assess the quality of manuscripts, peer reviewers have to demonstrate an accurate understanding of a journal's aims, remit, guidelines, and standards, and reflect these in their written assessments of submitted manuscripts (understand standards).
Situated in the context of scholarly peer review, this collaborative autoethnographic study conceptualises feedback literacy not only as a set of abilities but also as orientations (London and Smither, 2002; Steelman and Wolfeld, 2016), which refer to academics' tendencies, beliefs, and habits in relation to engaging with feedback (London and Smither, 2002). According to Cheung (2000), orientations are influenced by a plethora of factors, namely experiences, cultures, and politics. It is important to understand feedback literacy as orientations because this takes into account that feedback is a convoluted process influenced by a range of contextual and personal factors. Informed by ecological systems theory (Bronfenbrenner, 1986; Neal and Neal, 2013) and synthesising existing feedback literacy models (Carless and Boud, 2018; Carless and Winstone, 2020; Chong, 2021a, 2021b), we consider feedback literacy a malleable, situated, and emergent construct, influenced by the interplay of various networked layers of ecological systems (Neal and Neal, 2013) (Fig. 1). Also important is that conceptualising feedback literacy as orientations avoids dichotomisation (feedback literate vs. feedback illiterate), emphasises the developmental nature of feedback literacy, and better captures the multifaceted manifestations of feedback engagement.

Figure 1

The outer ring of the figure shows the components of feedback literacy while the inner ring concerns the layers of contexts (ecosystems) which influence the manifestation of feedback literacy of peer reviewers.

Echoing recent conceptual papers on feedback literacy which emphasise the indispensable role of contexts (Chong, 2021b; Boud and Dawson, 2021; Gravett et al., 2019), our conceptual framework includes an underlying dimension of networked ecological systems (micro, meso, exo, macro, and chrono), which portrays the contextual forces shaping our feedback orientations. Informed by the networked ecological systems theory of Neal and Neal (2013), we postulate that there are five systems of contextual influence, which affect the feedback experience and the development of feedback literacy of peer reviewers. The five ecological systems refer to 'settings', defined by Bronfenbrenner (1986) as "place[s] where people can readily engage in social interactions" (p. 22). Even though Bronfenbrenner's (1986) somewhat dated definition of 'place' is limited to physical space, we believe that 'places' should be defined more broadly in the 21st century to encompass physical and virtual, recent and dated, close and distant locations where people engage; as for 'interactions', from a sociocultural perspective, we understand that these can include not only social but also cognitive and emotional exchanges (Vygotsky, 1978). A microsystem refers to a setting where people, including the focal individual, interact. A mesosystem, on the other hand, means the interactions between people from different settings and the influence they exert on the focal individual. An exosystem, similar to a microsystem, is a single setting, but one that excludes the focal individual, although its participants are likely to interact with the focal individual. The remaining two systems, the macrosystem and chronosystem, refer not only to 'settings' but to 'forces that shape the patterns of social interactions that define settings' (Neal and Neal, 2013, p. 729).
The macrosystem is "the set of social patterns that govern the formation and dissolution of… interactions… and thus the relationship among ecological systems" (ibid). Some examples of macrosystems given by Neal and Neal (2013) include political and cultural systems. Finally, the chronosystem is "the observation that patterns of social interactions between individuals change over time, and that such changes impact on the focal individual" (ibid, p. 729). Figure 2 illustrates this networked ecological systems theory using a hypothetical example of an early career researcher who is involved in scholarly peer review for Journal A; at the same time, they are completing a PhD and working as a faculty member at a university.

Figure 2

This is a hypothetical example of an early career researcher who is involved in scholarly peer review for Journal A.

From the reviewed literature on the construct of feedback literacy, the investigation of feedback literacy as a personal, situated, and unfolding process is best done through an autoethnographic lens, which underscores critical self-reflection. Autoethnography refers to "an approach to research and writing that seeks to describe and systematically analyse (graphy) personal experience (auto) in order to understand cultural experience (ethno)" (Ellis et al., 2011, p. 273). Autoethnography stems from research in the field of anthropology and was later introduced to the field of education by Ellis and Bochner (1996). In higher education research, autoethnographic studies have been conducted to illuminate topics related to identity and teaching practices (e.g., Abedi Asante and Abubakari, 2020; Hains-Wesson and Young, 2016; Kumar, 2020). In this article, a collaborative approach to autoethnography is adopted. Based on Chang et al. (2013), Lapadat (2017) defines collaborative autoethnography (CAE) as follows:

… an autobiographic qualitative research method that combines the autobiographic study of self with ethnographic analysis of the sociocultural milieu within which the researchers are situated, and in which the collaborating researchers interact dialogically to analyse and interpret the collection of autobiographic data. (p. 598)

CAE is not only a product but a worldview and process (Wall, 2006). It is a distinct view of the world and of research, straddling the paradigmatic boundaries of scientific and literary studies. Like traditional scientific research, CAE advocates systematicity in the research process, giving consideration to such crucial research issues as reliability, validity, generalisability, and ethics (Lapadat, 2017). In closer alignment with the humanities and literature, the goal of CAE is not to uncover irrefutable universal truths and generate theories; instead, researchers of CAE are interested in co-constructing and analysing their own personal narratives or ‘stories’ to enrich and/or challenge mainstream beliefs and ideas, embracing diverse rather than canonical ways of behaviour, experience, and thinking (Ellis et al., 2011). Regarding the role of the researcher, CAE openly acknowledges the influence (and vulnerability) of researchers throughout the research process and treats this dual identity of researcher and participant as conducive to offering an insider’s perspective on sociocultural phenomena (Sughrua, 2019). For our CAE on the scholarly peer-review experiences of two ECRs, the purpose is to reconstruct, analyse, and publicise our lived experience as peer reviewers and how multiple forces (i.e., ecological systems) interact to shape our identity, experience, and feedback practice. As a research process, CAE is a collaborative and dynamic reflective journey towards self-discovery, resulting in narratives which connect with and add to the existing literature base in a personalised manner (Ellis et al., 2011). The collaborators should go beyond personal reflection to engage in dialogues that identify similarities and differences in their experiences, shedding new light on sociocultural phenomena (Merga et al., 2018).
The iterative process of self- and collective reflections takes place when CAE researchers write about their own “remembered moments perceived to have significantly impacted the trajectory of a person’s life” and read each other’s stories (Ellis et al., 2011, p. 275). These ‘moments’ or vignettes are usually written retrospectively, selectively, and systematically to shed light on facets of personal experience (Hughes et al., 2012). In addition to personal stories, some autoethnographies and CAEs utilise multiple data sources (e.g., reflective essays, diaries, photographs, interviews with co-researchers) and various modes of expression (e.g., metaphors) to achieve some sort of triangulation and to present evidence in a ‘systematic’ yet evocative manner (Kumar, 2020). One may notice that overarching methodological principles are discussed rather than a set of rigid and linear steps, because the process of reconstructing experience through storytelling can be messy and emergent, and a certain degree of flexibility is necessary. However, autoethnographic studies, like other primary studies, address core research issues including reliability (the reader’s judgement of the credibility of the narrator), validity (the reader’s judgement that the narratives are believable), and generalisability (resemblance between the reader’s experience and the narrative, or enlightenment of the reader regarding unfamiliar cultural practices) (Ellis et al., 2011). Ethical issues also need to be considered. For example, authors are expected to be honest in reporting their experiences; to protect the privacy of the people who ‘participated’ in their stories, pseudonyms need to be used (Wilkinson, 2019). For the current study, we follow the suggested CAE process outlined by Chang et al. (2013), which includes four stages: deciding on topic and method, collecting materials, making meaning, and writing.
When deciding on the topic, we chose to focus on our experience as scholarly peer reviewers because doing peer review and having our work reviewed are an indispensable part of our academic lives. The next step was to collect relevant autoethnographic materials. In this study, we followed Kumar (2020) in drawing on multiple data sources: (1) reflective essays, written separately through ‘recalling’, which is referred to by Chang et al. (2013) as ‘a free-spirited way of bringing out memories about critical events, people, place, behaviours, talks, thoughts, perspectives, opinions, and emotions pertaining to the research topic’ (p. 113), and (2) discussion meetings. In our reflective essays, we included written records of reflection and excerpts of feedback from our peer-review reports. Following material collection is meaning making. CAE, as opposed to autoethnography, emphasises the importance of engaging in dialogues with collaborators, and through this process we identified similarities and differences in our experiences (Sughrua, 2019). To do so, we exchanged our reflective essays; we read each other’s reflections and added questions or comments in the margins. Then, we met online twice to share our experiences and exchange views regarding the two reflective essays we wrote. Both meetings lasted approximately 90 min and were audio-recorded and transcribed. After each meeting, we coded our stories and experiences with reference to the two dimensions of the ecological framework of feedback literacy (Fig. 1). With regard to coding our data, we followed the model of Miles and Huberman (1994), which comprises four stages: data reduction (abstracting data), data display (visualising data in tabular form), conclusion-drawing, and verification.
The coding and writing processes were done collaboratively on Google Docs, and care was taken to address the aforesaid ethical (e.g., honesty, privacy) and methodological issues (e.g., validity, reliability, generalisability). As a CAE study, the participants are the researchers themselves, that is, the two authors of this paper. We acknowledge that the research data were collected from human subjects (the two authors); such data were collected in accordance with the standards and guidelines of the School Research Ethics Committee at the School of Social Sciences, Education and Social Work, Queen’s University Belfast (Ref: 005_2021). Despite our different experiences in our unique training and employment contexts, we share some common characteristics: both of us are ECRs (<5 years post-PhD), work in the field of education, and are active in the scholarly publication process as both authors and peer reviewers. Importantly for this study, we were both recipients of the Reviewer of the Year Award 2019, awarded jointly by the journal Higher Education Research & Development and the publisher, Taylor & Francis. This award, in recognition of the quality of our reviewing efforts as determined by the editorial board of a prestigious higher education journal, provided a strong impetus for this study and an opportunity to reflect on our own experiences and practices. The extent of our peer-review activities during our early career leading up to the time of data collection is summarised in Table 1.

Findings and discussion

Analysis of the four individual essays (E1 and E2 for each participant) and transcripts of the two subsequent discussions (D1 and D2) resulted in the identification of multiple descriptive codes and, in turn, a number of overarching themes (Supplementary Appendix 1). Our reporting of these themes is guided by our conceptual framework: we first focus on the five manifestations of feedback literacy to highlight the experiences that contribute to our growth as effective and confident peer reviewers. Then, we report on the five ecological systems to unravel how each contextual layer develops our feedback literacy as peer reviewers. (Note that the discussion of the chronosystem has been necessarily incorporated into each of the other four dimensions, microsystem, mesosystem, exosystem, and macrosystem, in order to demonstrate temporal changes.) In particular, similarities and differences will be underscored, and connections with manifested feedback beliefs and behaviours will be made. We include quotes from both Author 1 (A1) and Author 2 (A2) in order to illustrate our findings and to show the richness and depth of the data collected (Corden and Sainsbury, 2006). Transcribed quotes may be lightly edited while retaining meaning, for example through the removal of fillers and repetitions, which is generally accepted practice to ensure readability (ibid.).

Manifestations of feedback literacy

Engineering feedback uptake

The two authors have a strong sense of the purpose of peer review as promoting not only research quality, but also the growth of researchers. One way that we engineer author uptake is to ensure that feedback is ‘clear’ (A2,E1), ‘explicit’ (A2,E1), ‘specific’ (A1,E1), and, importantly, ‘actionable… to ensure that authors can act on this feedback so that their manuscripts can be improved and ultimately accepted for publication’ (A1,E1). When the outcome is less favourable for the author, we ensure that there is reference to the role of the feedback in promoting the development of the manuscript, which A1 refers to as ‘promotion of a growth mindset’ (A1,E1). For example, after requesting a second round of major revisions, A2 ‘acknowledged the frustration that the author might have felt on getting further revisions by noting how much improvement was made to the paper, but also making clear the justification for sending it off for more work’ (A2,E1). We both note that we tend to write longer reviews when a rejection is the recommended outcome, as our ultimate goal is to aid in the development of a manuscript.

Rejection doesn’t mean a paper is beyond repair. It can still be fixed and improved; a rejection simply means that the fix may be too extensive even for multiple review cycles. It is crucial to let the authors whose manuscripts are rejected know that they can still act on the feedback to improve their work; they should not give up on their own work. I think this message is especially important to first-time authors or early career researchers. (A1,E1)

In promoting a growth mindset and in providing actionable feedback, we hope to ‘show the authors that I’m not targeting them, but their work’ (A1,D1). We particularly draw on our own experiences as ECRs, with first-hand understanding that ‘everyone takes it personally when they get rejected. Yeah. Moreover, it is hard to separate (yourself from the paper)’ (A2,D1).

Navigating responsibilities

As with most academics, the two authors have multiple pressures on their time, and there ‘isn’t much formal recognition or reward’ (A1,E1) and ‘little extrinsic incentive for me to review’ (A2,E1). Nevertheless, we both view our roles as peer reviewers as ‘an important part of the process’ (A2,E1) and ‘a modest way for me to give back to the academic community’ (A1,E1). Through peer review we have built a sense of ‘identity as an academic’ (A1,D1) through ‘being a member of the academic community’ (A2,D1). While A1 commits to ‘review as many papers as possible’ (A1,E1) and A2 will usually accept offers to review, there are still limits on our time, and we therefore consider the topic and methods employed when deciding whether or not to accept an invitation, as well as the journal itself, as we feel we can review more efficiently for journals with which we are more familiar. A1 and A2 have different review processes, each most efficient for their own situation. For A1, the process begins with reading the whole manuscript in one go, adding notes to the PDF document along the way, which he then reviews before making a tentative decision, including ‘a few reasons why I have come to this decision’ (A1,E1). After waiting at least one day, he reviews all of the notes and begins writing the report, which is divided into the sections of the paper. He notes it ‘usually takes me 30–45 min to write a report. I then proofread this report and submit it to the system. So it usually takes me no more than three hours to complete a review’ (A1,E1). For A2, the process for reviewing and structuring the report is quite different, with a need to ‘just find small but regular opportunities to work on the review’ (A2,E1). As was the case during her PhD, which involved juggling research and raising two babies, ‘I’ve trained myself to be able to do things in bits’ (A2,D1).
A2 also begins by reading the paper once through, although generally without making initial comments. The next phase involves going through the paper at various points in time whenever possible, while at the same time building up the report, making the report structurally slightly different from that of A1.

What my reviews look like are bullet points, basically. And they’re not really in a particular order. They generally… follow the flow (of the paper). But I mean, I might think of something, looking at the methods and realise, hey, you haven’t defined this concept in the literature review so I’ll just add you haven’t done this. And so I will usually preface (the review)… Here’s a list of suggestions. Some of them are minor, some of them are serious, but they’re in no particular order. (A2,D1)

As such, both reviewers engage in personalised strategies to make more effective use of their time. Both A1 and A2 give explicit but not exhaustive examples of an area of concern, and they also pose questions for the author to consider, in both cases placing the onus back on the author to take action. As A1 notes, ‘I’m not going to do a summary of that reference for you. I’m just going to include that there. If you’d like you can check it out’ (A1,D1). For A2, a lack of adequate reporting of the methods employed in a study makes it difficult to proceed, and in such cases she will not invest further time, sending it back to the editor, because ‘I can’t even comment on the findings… I can’t go on. I’m not gonna waste my time’ (A2,D1). In cases where we may be ‘on the fence’ about a particular review, we will use the confidential comments to the editor to help work through difficult cases, as the editors ‘are obviously very experienced reviewers’ (A1,D1). Delegating tasks to the expertise of the editorial teams when appropriate also ensures time is used more prudently.

Maintaining relationships

Except in a few cases where A2 has reviewed for journals with a single-blind model, the vast majority of the reviews that we have completed have been double-blind. This means that we are unaware of the identity of the author/s, and we are unknown to them. However, ‘even with blind reviews I tend to think of it as a conversation with a person’ (A2,E1). A1 talks about the need to have respect for the author and their expertise and effort ‘regardless of the quality of the submission (which can be in some cases subjective)’ (A1,E1). A2 writes similarly about the ‘privilege’ and ‘responsibility’ of being able to review manuscripts that authors ‘have put so much time and energy into possibly over an extended period’ (A2,E1). In this way it is possible to develop a sort of relationship with an author even without knowing their identity. In trying to articulate the nature of that relationship (which we struggle to do definitively), we note that it is more than just that of a reviewer, and A2 reflected on a recent review, which went through a number of rounds of resubmission, where ‘it felt like we were developing a relationship, more like a mentor than a reviewer’ (A2,E1).

I consider this role as a peer reviewer more than giving helpful and actionable feedback; I would like to be a supporter and critical friend to the authors, even though in most cases I don’t even know who they are or what career stage they are at (A1,E1).
In any case, as A1 notes, ‘we don’t even need to know who that person is because we know that people like encouragement’ (A1,D1), and we are very conscious of the emotional impact that feedback can have on authors, and of the inherent power imbalance in the relationship. For this reason, A1 is ‘cautious about the way I write so that I don’t accidentally make the authors the target of my feedback’. As A2 notes, ‘I don’t want authors feeling depressed after reading a review’ (A2,E1). While we note that we try to deliver our feedback with ‘respect’ (A1,E1; A1,E2; A2,D1), ‘empathy’ (A1,E1), and ‘kindness’ (A2,D1), we both noted that we do not ‘sugar coat’ our feedback: A1 describes himself as ‘harsh’ and ‘critical’ (A1,E1), while A2 describes herself as ‘pretty direct’ (A2,E1). In our discussion, we tried to delve into this seeming contradiction:

… the encouragement, hopefully is to the researcher, but the directness it should be, I hope, is related directly to whatever it is, the methods or the reporting or the scope of the literature review. It’s something specific about the manuscript itself. And I know myself, being an ECR and being reviewed, that it’s hard to separate yourself from your work… And I want to make it really explicit. If it’s critical, it’s not about the person. It’s about the work, you know, the weakness of the work, but not the person. (A2,D1)

A1 explains that at times his initial report may be highly critical, and at times he will ‘sit back and rethink… With empathy, I will write feedback, which is more constructive’ (A1,E1). However, he adds that ‘I will never try to overrate a piece or sugar-coat my comments just to sound “friendly”’ (A1,E1), with the ultimate goal being to uphold academic rigour. Thus, honesty is seen as the best strategy for maintaining a strong, professional relationship with authors. Another strategy, employed by A2, is showing explicit commitment to the review process. One way this is communicated is by prefacing a review with a summary of the paper, not only ‘to confirm with the author that I am interpreting the findings in the way that they intended, but also importantly to show that I have engaged with the paper’ (A2,E1). Further, if the recommendation is for a further round of review, she will state directly to the authors ‘that I would be happy to review a revised manuscript’ (A2,E1).

Reflecting on feedback experience

As ECRs we have engaged in the scholarly publishing process initially as authors, subsequently as reviewers, and most recently as Associate Editors. Insights gained in each of these roles have influenced our feedback practices, and have interacted to ‘develop a more holistic understanding of the whole review process’ (A1,E1).

We reflect on our experiences as authors beginning in our doctoral candidatures, with reviews that ranged from ‘the most helpful to the most cynical’ (A1,E1). A2 reflected on two particular experiences, both of which resulted in rejection, one being ‘snarky’ and ‘unprofessional’ with ‘no substance’, the other providing ‘strong encouragement… the focus was clearly on the paper and not me personally’ (A2,E1). It was this experience that showed the divergence possible between the tone and content of reviews despite the same outcome, and as a result A2 committed to being ‘the amazing one’. A1 also drew from a negative experience, noting that ‘I remember the least useful feedback as much as I do with the most constructive one’ (A1,E1). This was particularly the case when a reviewer made apparently politically-motivated judgements that A1 ‘felt very uncomfortable with’ and flagged with the editor (A1,E1). Through these experiences both authors wrote in their essays about the need to focus on the work and not on the individual, with an understanding that a review ‘can have a really serious impact’ (A2,D1) on an author.

It is important to note that neither author has been involved in any formal or informal training on how to conduct peer review, although A1 expresses appreciation of the regular practice of one journal for which he reviews, where ‘the editor would write an email to the reviewers giving feedback on the feedback we have given’ (A1,E1). For A2, an important source of learning is in comparing her reviews with those of others who have reviewed the same manuscript, the norm at some journals being to send all reports to all reviewers along with the final decision.

I’m always interested to see how [my] review compares with others. Have I given the same recommendation? Have I identified the same areas of weakness? Have I formatted my review in the same way? How does the tone of delivery differ? I generally find that I give a similar if not the same response to other reviews, and I’m happy to see that I often pick up the same issues with methodology. (A2,E1)

For A2 there is comfort in seeing reviews that are similar to others, although we both draw on experiences where our recommendation diverged from others, with a source of assurance being the ultimate decision of the editor.

So it’s like, I don’t think it can be published and that [other] reviewer thinks it’s excellent. So usually, what the editor would do in this instance is invite the third one. Right, yeah. But then this editor told me… that they decided to go with my decision to reject because they find that my comments are more convincing. (A1,D1)

A2 was also surprised to read another report on a manuscript she had reviewed, which raised similar concerns and gave the same recommendation for major revisions, but whose ‘wording is soooo snarky. What need?’ (A2,E1). In one case that A1 detailed in our first discussion, significant but improbable changes made to the methodology section of a resubmitted paper caused him to question the honesty of the reporting, making him ‘uncomfortable’; as a result, he reported his concerns to the editor. In this case the review took some time to craft, trying to balance the ‘fine line between catering for the emotion [of the author], right, and upholding the academic standards’ (A1,D1). He conceded that initially his report was ‘kind of too harsh… later I think I rephrased it a little bit, I kind of softened (it)’.

While the role of Associate Editor is so new to A2 that she was as yet unable to comment on it, for A1 the ‘opportunity to read various kinds of comments given by reviewers’ (A1,E1) is viewed favourably. This includes not only how reviewers structure their feedback, but also how they use the confidential comments to the editors to express their thoughts more openly, providing important insights into aspects of the process that are largely hidden.

Understanding standards

While our reviewing practices are informed more broadly ‘according to more general academic standards of the study itself, and the clarity and fullness of the reporting’ (A2,E1), we look in the first instance to advice and guidelines from journals to develop an understanding of journal-specific standards, although A2 notes that a lack of review guidelines at one of the earliest journals she reviewed for left her ‘searching Google for standard criteria’ (A2,E1). However, our development in this area seems to come mostly from growing familiarity with a journal, particularly through engagement with the journal as an author.

In addition to reading the scope and instructions for authors to obtain such basic information as readership, length of submissions, citation style, the best way for me to understand the requirements and preferences of the journals is my own experience as an author. I review for journals which I have published in and for those which I have not. I always find it easier to make a judgement about whether the manuscripts I review meet the journal’s standards if I have published there before. (A1,E1)

Indeed, it seems that journal familiarity is connected closely to our confidence in reviewing, and while both authors ‘review for journals which I have published in and for those which I have not’ (A1,E1), A2 states that she is reluctant to ‘readily accept an offer to review for a journal that I’m not familiar with’, and A1 takes extra time to ‘do more preparatory work before I begin reading the manuscript and writing the review’ when reviewing for an unfamiliar journal.

Ecological systems

Microsystem

Three microsystems exert influence on A1’s and A2’s development of feedback literacy: university, journal community, and Twitter.

In regard to the university, we are full-time academics in research-intensive universities in the UK and Japan, where expectations for academics include publishing research in high-impact journals, ‘which is vital to promotion’ (A1,E2). This is especially true in A2’s context, where the national higher education agenda is to increase the world rankings of universities. Thus, ‘there is little value placed on peer review, as it is not directly related to the broader agenda’ (A2,E2). When considering his recent relocation to the UK together with the current pandemic, A1 navigated his responsibilities within the university context and decided to allocate more time to his university-related responsibilities, especially providing learning and pastoral support to his students, who are mostly international students. In addition, A2 observed that there is a dearth of institution-wide support for conducting peer review, although ‘there are a lot of training opportunities related to how to write academic papers in English, how to present at international conferences, how to write grant applications’, etc. (A2,E2). As a result, she ‘struggled for a couple of years’ because of the lack of institutional support for her development as a peer reviewer (A2,D2); but this helplessness also motivated her to seek her own ways to learn how to give feedback, such as ‘seeing through glimpses of other reviews, how others approach it, in terms of length, structure, tone, foci etc.’ (A2,E2). A1 shares the view that no training is available at his institution to support his development as a peer reviewer. However, his postgraduate supervision experiences enabled him to reflect on how his feedback can benefit researchers. In our second online discussion, A1 shared that he held individual advising sessions with some postgraduate students, which made him realise that it is important for feedback to inspire rather than to ‘give them right answers’ (A1,D2).

Because of the lack of formal training provided by universities, both authors searched for other professional communities to help us develop our expertise in giving feedback as peer reviewers, with journal communities being the next microsystem. We found that international journals provide valuable opportunities for us to understand more about the whole peer-review process, in particular the role of feedback. For A1, the training which he received from the editor-in-chief when he took up the associate editorship of a language education journal two years ago was particularly useful. A1 benefited greatly from meetings with the editor, who walked him through every stage of the review process and provided ‘hands-on experience on how to handle delicate scenarios’ (A1,E2). Since then, A1 has had plenty of opportunities to oversee various stages of peer review and to read a large number of reviewers’ reports, which helped him gain ‘a holistic understanding of the peer-review process’ (A1,E2) and gradually made him more cognizant of how he wants to give feedback. Although there was no explicit instruction on the technical aspects of giving feedback, A1 found that being an associate editor has developed his ‘consciousness’ and ‘awareness’ of giving feedback as a peer reviewer (A1,D2). Further, he felt that his editorial experiences prompted him to constantly refine and improve his ways of giving feedback, especially ways to make his feedback ‘more structured, evidence-based, and objective’ (A1,E2). Although she could not reflect from the perspective of an editor, A2 recalled her experience as an author who received in-depth and constructive feedback from a reviewer, which really impacted the way she viewed the whole review process. She understood from this experience that even though the paper under review may not be particularly strong, peer reviewers should always aim to provide formative feedback which helps the authors to improve their work.
These positive experiences have shaped the ways the two authors give feedback as peer reviewers. In addition, close engagement with a specific journal has helped A2 to develop a sense of belonging, making it ‘much more than a journal, but also a way to become part of an academic community’ (A2,E2). With such a sense of belonging, she is more likely to be ‘pulled towards that journal than others’ when she can only review a limited number of manuscripts (A2,D2).

Another professional community in which we are both involved is Twitter. We regard Twitter as a platform for self-learning, reflection, and inspiration. We perceive Twitter as a space where we get to learn from others’ peer-review experiences and disciplinary practices. For example, A1 found tweets on peer review informative ‘because they are written by different stakeholders in the process—the authors, editors, reviewers’ and offer ‘different perspectives and sometimes different versions of the same story’ (A1,E2). A2 recalled a tweet she came across about the ‘infamous Reviewer 2’ and how she learned not to make the same mistakes (A2,D2). Reading other people’s experiences helps us reconsider our own feedback practices and, more broadly, the whole peer-review system, because we ‘get a glimpse of the do’s and don’ts for peer reviewers’ (A1,E2).

Further to our three common microsystems, A2 also draws on a unique microsystem, that of her former profession as a teacher, which shapes her feedback practices in three ways. First, in her four years of teacher training, a lot of emphasis was placed on assessment and feedback such as ‘error correction’; this understanding related to giving feedback to students and was solidified through ‘learning on the job’ (A2,D2). Second, A2 acknowledges that as a teacher, she has a passion to ‘guide others in their knowledge and skill development… and continue this in our review practices’ (A2,E2). Finally, her teaching experience prepared her to consider the authors’ emotional responses in her peer-review feedback practices, constantly ‘thinking there’s a person there who’s going to be shattered getting a rejection’ (A2,D2).

The mesosystem considers the confluence of our interactions across various microsystems. In particular, we experienced a lack of support from our institutions, which pushed us to seek alternative paths to acquiring the art of giving feedback. This made us realise the importance of self-learning in developing feedback literacy as peer reviewers, especially in learning how to develop constructive and actionable feedback. Both authors taught themselves how to give feedback by reading others’ feedback. A1 felt ‘fortunate to be involved in journal editing and Twitter’ because he gets ‘a glimpse of how other peer reviewers give feedback to authors’ (A1,E2). A2, on the other hand, learned through her correspondence with a journal editor, who made her stop ‘looking for every word’ and move away from ‘over proofreading and over editing’ (A2,D2).

Focusing on the chronosystem, we notice that both authors have adjusted how they give feedback over time because of the aggregate influence of their microsystems. What stands out is that they have become more strategic in giving feedback. One way this is achieved is by focusing their comments on the arguments of the manuscripts instead of burning the midnight oil correcting errors.

The exosystem concerns environments in which the focal individual does not interact directly with others but about which they have access to information. In A1’s case, his understanding of the advising techniques promoted by a self-access language learning centre has been conducive to the cultivation of his feedback literacy. Although A1 is not part of the language advising team, he has a working relationship with its director. A1 was especially impressed by the learner-centredness of the advising process:

The primary duty of the language advisor is not to be confused with that of a language teacher. Language teachers may teach a lecture on a linguistic feature or correct errors on an essay, but language advisors focus on designing activities and engaging students in dialogues to help them reflect on their own learning needs… The advisors may also suggest useful resources to the students which cater to their needs. In short, language advisors work in partnership with the students to help them improve their language while language teachers are often perceived as more authoritative figures (A1, E2).

His understanding of advising has affected how A1 provides feedback as a peer reviewer in a number of ways. First, A1 places much more emphasis on humanising his feedback, for example, by considering ‘ways to work in partnership with the authors and making this “partnership mindset” explicit to the authors through writing’ (A1,E2). One way to operationalise this ‘partnership mindset’ in peer review is to ‘ask a lot of questions’ and provide ‘multiple suggestions’ for the authors to choose from (A1,E2). Furthermore, his knowledge of the difference between feedback as giving advice and feedback as instruction has led him to include feedback that points authors to additional resources. Below is a feedback point A1 gave in one of his reviews:

The description of the data analysis process was very brief. While we are not aiming at validity and reliability in qualitative studies, it is important for qualitative researchers to describe in detail how the data collected were analysed (e.g. iterative coding, inductive/deductive coding, thematic analysis) in order to ascertain that the findings were credible and trustworthy. See Johnny Saldaña’s ‘The Coding Manual for Qualitative Researchers’.

Another exosystem we have knowledge of is the formal peer-review training courses provided by publishers. These online courses are usually run asynchronously. Even though we did not enrol in these courses, our interest in peer review has led us to skim their content. Both of us questioned the value of formal peer-review training in developing the feedback literacy of peer reviewers. For example, A2 felt that opportunities to review are more important because they ‘put you in that position where you have responsibility and have to think critically about how you are going to respond’ (A2,D2). To A1, formal peer-review training mostly focuses on developing peer reviewers’ ‘understanding of the whole mechanism’ but not providing ‘training on how to give feedback… For example, do you always ask a question without giving the answers you know? What is a good suggestion?’ (A1,D2).

Macrosystem

The two authors have diverse sociocultural experiences because of their family backgrounds and work contexts. When reflecting on their sociocultural experiences, A1 focused on his upbringing in Hong Kong, where both of his parents are school teachers, and his professional experience as a language teacher in secondary and tertiary education in Hong Kong, while A2 discussed her experience of working in academia in Japan as an anglophone.

Observing his parents’ interactions with their students in schools, A1 was immersed in an Asian educational discourse characterised by ‘mutual respect and all sorts of formality’ (A1,E2). After finishing university, A1 became a school teacher and then a university lecturer (equivalent to a teaching fellow in the UK), continuing his immersion in the etiquette of educational discourse in Hong Kong. Because of this, A1 knows that being professional means being ‘formal and objective’ and that there is a constant expectation to ‘treat people with respect’ (A1,E2). At the same time, his parents are unlike typical Asian parents; they are ‘more open-minded’, which made him more willing to listen and ‘consider different perspectives’ (A1,D2). Social hierarchy also shaped his approach to giving feedback as a peer reviewer. A1 started his career as a school teacher and then a university lecturer in Hong Kong with no formal research training; after obtaining his BA and MA, it was not until recently that he obtained his PhD by Prior Publication. Perhaps because of his background as a frontline teacher, A1 did not regard himself as ‘a formally trained researcher’ and perceived himself as not ‘elite enough to give feedback to other researchers’ (A1,E2). Both his childhood and his self-perceived identity have led to the formation of two feedback strategies: asking questions and providing a structured report mimicking the sections of the manuscript. A1 frequently asks questions in his reports ‘in a bid to offset some of the responsibilities to the authors’ (A1,E2). A1 also struggled to decide whether to address authors using second- or third-person pronouns. He had consistently used third-person pronouns in his feedback because he wanted to sound ‘very formal’ (A1,D2), but he shared that he has recently started using second-person pronouns to make his feedback more interactive.

A2, on the other hand, pondered her sociocultural experiences as a school teacher in Australia, her position as an anglophone in a Japanese university, and her status as a first-generation high school graduate. Reflecting on her career as a school teacher, A2 shared that her students had high expectations of her feedback:

So if you give feedback that seems unfair, you know … they’ll turn around and say, ‘What are you talking about’? They’re going to react back if your feedback is not clear. I think a lot of them [the students] appreciate the honesty. (A2,D2)

A2 acknowledges that her identity as a native English speaker has given her the advantage of publishing extensively in international journals because of her high level of English proficiency and her access to ‘data from the US and from Australia which are more marketable’ (A2,D2). At the same time, as a native English speaker, she has empathy for her Japanese colleagues who struggle to write proficiently in English, some of whom even ‘pay thousands of dollars to have their work translated’ (A2,D2). Therefore, when giving feedback as a peer reviewer, she tries not to make a judgement on an author’s English proficiency and will not reject a paper based on the standard of English alone. Finally, as a first-generation scholar without any previous connections to academia, she struggles with belonging and self-confidence. As a result, she notes that it usually takes her a long time to complete a review because she wants to be sure that what she is saying is ‘right or constructive and is not on the wrong track’ (A2,D2).

Implications and future directions

In investigating the manifestations of the authors’ feedback literacy development, and the ecological systems in which this development occurs, this study unpacks the various sources of influence behind our feedback behaviours as two relatively new but highly commended peer reviewers. The findings show that our feedback literacy development is highly personalised and contextualised, and the sources of influence are diverse and interconnected, albeit largely informal. Our peer-review practices are influenced by our experiences within academia, but influences are much broader and begin much earlier. Peer-review skills were enhanced through direct experience not only in peer review but also in other activities related to the peer-review process, and as such more hands-on, on-site feedback training for peer reviewers may be more appropriate than knowledge-based training. The authors gain valuable insights from seeing the reviews of others, and as this is often not possible until scholars take on more senior roles within journals, co-reviewing is a potential way for ECRs to gain experience (McDowell et al., 2019).

We draw practical and moral support from various communities, particularly online, to promote “intellectual candour”, which refers to honest expressions of vulnerability for learning and trust building (Molloy and Bearman, 2019, p. 32); in response to this finding we have developed an online community of practice, specifically as a space for discussing issues related to peer review (a Twitter account called “Scholarly Peers”). Importantly, our review practices are a product not only of how we review, but why we review, and as such training should not focus solely on the mechanics of review, but extend to its role within academia, and its impact not only on the quality of scholarship, but on the growth of researchers.

The significance of this study is its insider perspective, and the multifaceted framework that allows the capturing of the complexity of factors that influence individual feedback literacy development of two recognised peer reviewers. It must be stressed that the findings of this study are highly idiosyncratic, focusing on the experiences of only two peer reviewers and the educational research discipline. While the research design is such that it is not an attempt to describe a ‘typical’ or ‘expected’ experience, the scope of the study is a limitation, and future research could be expanded to studies of larger cohorts in order to identify broader trends. In this study, we have not included the reviewer reports themselves, and these reports provide a potentially rich source of data, which will be a focus in our continued investigation in this area. Further research could also investigate the role that peer-review training courses play in the feedback literacy development and practices of new and experienced peer reviewers. Since journal peer review is a communication process, it is equally important to investigate authors’ perspectives and experiences, especially pertaining to how authors interpret reviewers’ feedback based on the ways that it is written.

Data availability

Because of the sensitive nature of the data these are not made available.

Change history

26 November 2021

A Correction to this paper has been published: https://doi.org/10.1057/s41599-021-00996-3

References

Abedi Asante L, Abubakari Z (2020) Pursuing PhD by publication in geography: a collaborative autoethnography of two African doctoral researchers. J Geogr High Educ 45(1):87–107. https://doi.org/10.1080/03098265.2020.1803817


Boud D, Dawson P (2021). What feedback literate teachers do: An empirically-derived competency framework. Assess Eval High Educ. Advanced online publication. https://doi.org/10.1080/02602938.2021.1910928

Bronfenbrenner U (1986) Ecology of the family as a context for human development. Res Perspect Dev Psychol 22:723–742. https://doi.org/10.1037/0012-1649.22.6.723

Carless D, Boud D (2018) The development of student feedback literacy: enabling uptake of feedback. Assess Eval High Educ 43(8):1315–1325. https://doi.org/10.1080/02602938.2018.1463354

Carless D, Winstone N (2020) Teacher feedback literacy and its interplay with student feedback literacy. Teach High Educ, 1–14. https://doi.org/10.1080/13562517.2020.1782372

Chang H, Ngunjiri FW, Hernandez KC (2013) Collaborative autoethnography. Left Coast Press

Cheung D (2000) Measuring teachers’ meta-orientations to curriculum: application of hierarchical confirmatory factor analysis. The J Exp Educ 68(2):149–165. https://doi.org/10.1080/00220970009598500

Chong SW (2021a) Improving peer-review by developing peer reviewers’ feedback literacy. Learn Publ 34(3):461–467. https://doi.org/10.1002/leap.1378

Chong SW (2021b) Reconsidering student feedback literacy from an ecological perspective. Assess Eval High Educ 46(1):92–104. https://doi.org/10.1080/02602938.2020.1730765

Chong SW (2019) College students’ perception of e-feedback: a grounded theory perspective. Assess Eval High Educ 44(7):1090–1105. https://doi.org/10.1080/02602938.2019.1572067

Chong SW (2018) Interpersonal aspect of written feedback: a community college students’ perspective. Res Post-Compul Educ 23(4):499–519. https://doi.org/10.1080/13596748.2018.1526906

Corden A, Sainsbury R (2006) Using verbatim quotations in reporting qualitative social research: the views of research users. University of York Social Policy Research Unit

Ellis C, Adams TE, Bochner AP (2011) Autoethnography: an overview. Hist Soc Res 12:273–290

Ellis C, Bochner A (1996) Composing ethnography: Alternative forms of qualitative writing. Sage

Freda MC, Kearney MH, Baggs JG, Broome ME, Dougherty M (2009) Peer reviewer training and editor support: results from an international survey of nursing peer reviewers. J Profession Nurs 25(2):101–108. https://doi.org/10.1016/j.profnurs.2008.08.007

Fulcher G (2012) Assessment literacy for the language classroom. Lang Assess Quart 9(2):113–132. https://doi.org/10.1080/15434303.2011.642041

Gee JP (1999) Reading and the new literacy studies: reframing the national academy of sciences report on reading. J Liter Res 3(3):355–374. https://doi.org/10.1080/10862969909548052

Gravett K, Kinchin IM, Winstone NE, Balloo K, Heron M, Hosein A, Lygo-Baker S, Medland E (2019) The development of academics’ feedback literacy: experiences of learning from critical feedback via scholarly peer review. Assess Eval High Educ 45(5):651–665. https://doi.org/10.1080/02602938.2019.1686749

Hains-Wesson R, Young K (2016) A collaborative autoethnography study to inform the teaching of reflective practice in STEM. High Educ Res Dev 36(2):297–310. https://doi.org/10.1080/07294360.2016.1196653

Han Y, Xu Y (2019) Student feedback literacy and engagement with feedback: a case study of Chinese undergraduate students. Teach High Educ, https://doi.org/10.1080/13562517.2019.1648410

Heesen R, Bright LK (2020) Is Peer Review a Good Idea? Br J Philos Sci, https://doi.org/10.1093/bjps/axz029

Hollywood A, McCarthy D, Spencely C, Winstone N (2019) ‘Overwhelmed at first’: the experience of career development in early career academics. J Furth High Educ 44(7):998–1012. https://doi.org/10.1080/0309877X.2019.1636213

Horn SA (2016) The social and psychological costs of peer review: stress and coping with manuscript rejection. J Manage Inquiry 25(1):11–26. https://doi.org/10.1177/1056492615586597

Hughes S, Pennington JL, Makris S (2012) Translating autoethnography across the AERA standards: toward understanding autoethnographic scholarship as empirical research. Educ Res 41(6):209–219

Kandiko CB (2010) Neoliberalism in higher education: a comparative approach. Int J Art Sci 3(14):153–175. http://www.openaccesslibrary.org/images/BGS220_Camille_B._Kandiko.pdf

Keashly L, Neuman JH (2010) Faculty experiences with bullying in higher education-causes, consequences, and management. Adm Theory Prax 32(1):48–70. https://doi.org/10.2753/ATP1084-1806320103

Kelly J, Sadegieh T, Adeli K (2014) Peer review in scientific publications: benefits, critiques, & a survival guide. J Int Fed Clin Chem Labor Med 25(3):227–243. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4975196/


Kumar KL (2020) Understanding and expressing academic identity through systematic autoethnography. High Educ Res Dev, https://doi.org/10.1080/07294360.2020.1799950

Lapadat JC (2017) Ethics in autoethnography and collaborative autoethnography. Qual Inquiry 23(8):589–603. https://doi.org/10.1177/1077800417704462

Levi T, Inbar-Lourie O (2019) Assessment literacy or language assessment literacy: learning from the teachers. Lang Assess Quarter 17(2):168–182. https://doi.org/10.1080/15434303.2019.1692347

London MS, Smither JW (2002) Feedback orientation, feedback culture, and the longitudinal performance management process. Hum Res Manage Rev 12(1):81–100. https://doi.org/10.1016/S1053-4822(01)00043-2

Malecka B, Boud D, Carless D (2020) Eliciting, processing and enacting feedback: mechanisms for embedding student feedback literacy within the curriculum. Teach High Educ, 1–15. https://doi.org/10.1080/13562517.2020.1754784

Mavrogenis AF, Quaile A, Scarlat MM (2020) The good, the bad and the rude peer-review. Int Orthopaed 44(3):413–415. https://doi.org/10.1007/s00264-020-04504-1

McDowell GS, Knutsen JD, Graham JM, Oelker SK, Lijek RS (2019) Co-reviewing and ghostwriting by early-career researchers in the peer review of manuscripts. ELife 8:e48425. https://doi.org/10.7554/eLife.48425


Merga MK, Mason S, Morris JE (2018) Early career experiences of navigating journal article publication: lessons learned using an autoethnographic approach. Learn Publ 31(4):381–389. https://doi.org/10.1002/leap.1192

Miles MB, Huberman AM (1994) Qualitative data analysis: An expanded sourcebook (2nd edn.). Sage

Molloy E, Bearman M (2019) Embracing the tension between vulnerability and credibility: ‘Intellectual candour’ in health professions education. Med Educ 53(1):32–41. https://doi.org/10.1111/medu.13649


Molloy E, Boud D, Henderson M (2019) Developing a learning-centred framework for feedback literacy. Assess Eval High Educ 45(4):527–540. https://doi.org/10.1080/02602938.2019.1667955

Neal JW, Neal ZP (2013) Nested or networked? Future directions for ecological systems theory. Soc Dev 22(4):722–737. https://doi.org/10.1111/sode.12018

Noble C, Billett S, Armit L, Collier L, Hilder J, Sly C, Molloy E (2020) “It’s yours to take”: generating learner feedback literacy in the workplace. Adv Health Sci Educ Theory Pract 25(1):55–74. https://doi.org/10.1007/s10459-019-09905-5

Price M, Rust C, O’Donovan B, Handley K, Bryant R (2012) Assessment literacy: the foundation for improving student learning. Oxford Centre for Staff and Learning Development

Silbiger NJ, Stubler AD (2019) Unprofessional peer reviews disproportionately harm underrepresented groups in STEM. PeerJ 7:e8247. https://doi.org/10.7717/peerj.8247


Starck JM (2017) Scientific peer review: guidelines for informative peer review. Springer Spektrum

Steelman LA, Wolfeld L (2016) The manager as coach: the role of feedback orientation. J Busi Psychol 33(1):41–53. https://doi.org/10.1007/s10869-016-9473-6

Stiggins RJ (1999) Evaluating classroom assessment training in teacher education programs. Educ Meas: Issue Pract 18(1):23–27. https://doi.org/10.1111/j.1745-3992.1999.tb00004.x

Street B (1997) The implications of the ‘new literacy studies’ for literacy Education. Engl Educ 31(3):45–59. https://doi.org/10.1111/j.1754-8845.1997.tb00133.x

Sughrua WM (2019) A nomenclature for critical autoethnography in the arena of disciplinary atomization. Cult Stud Crit Methodol 19(6):429–465. https://doi.org/10.1177/1532708619863459

Sutton P (2012) Conceptualizing feedback literacy: knowing, being, and acting. Innov Educ Teach Int 49(1):31–40. https://doi.org/10.1080/14703297.2012.647781


Tynan BR, Garbett DL (2007) Negotiating the university research culture: collaborative voices of new academics. High Educ Res Dev 26(4):411–424. https://doi.org/10.1080/07294360701658617

Vygotsky LS (1978) Mind in society: The development of higher psychological processes. Harvard University Press

Wall S (2006) An autoethnography on learning about autoethnography. Int J Qual Methods 5(2):146–160. https://doi.org/10.1177/160940690600500205


Warne V (2016) Rewarding reviewers-sense or sensibility? A Wiley study explained. Learn Publ 29:41–40. https://doi.org/10.1002/leap.1002

Wilkinson S (2019) The story of Samantha: the teaching performances and inauthenticities of an early career human geography lecturer. High Educ Res Dev 38(2):398–410. https://doi.org/10.1080/07294360.2018.1517731

Winstone N, Carless D (2019) Designing effective feedback processes in higher education: a learning-focused approach. Routledge

Winstone NE, Mathlin G, Nash RA (2019) Building feedback literacy: students’ perceptions of the developing engagement with feedback toolkit. Front Educ 4:1–11. https://doi.org/10.3389/feduc.2019.00039

Xu Y, Carless D (2016) ‘Only true friends could be cruelly honest’: cognitive scaffolding and social-affective support in teacher feedback literacy. Assess Eval High Educ 42(7):1082–1094. https://doi.org/10.1080/02602938.2016.1226759


Author information

Authors and affiliations

Queen’s University Belfast, Belfast, UK

Sin Wang Chong

Nagasaki University, Nagasaki, Japan

Shannon Mason


Corresponding author

Correspondence to Sin Wang Chong.

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

We acknowledge that research data were collected from human subjects (the two authors); such data were collected in accordance with the standards and guidelines of the School Research Ethics Committee at the School of Social Sciences, Education and Social Work, Queen’s University Belfast (Ref: 005_2021).

Informed consent

Since the participants are the two authors themselves, no separate informed consent form was required.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplemental material file #1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Chong, S.W., Mason, S. Demystifying the process of scholarly peer-review: an autoethnographic investigation of feedback literacy of two award-winning peer reviewers. Humanit Soc Sci Commun 8, 266 (2021). https://doi.org/10.1057/s41599-021-00951-2


Received: 02 August 2021

Accepted: 12 October 2021

Published: 12 November 2021

DOI: https://doi.org/10.1057/s41599-021-00951-2




Understanding the peer-review process

The peer-review process is used to assess scholarly articles. Experts in a discipline similar to the author’s critique an article’s methodology, findings, and reasoning to evaluate it for possible publication in a scholarly journal. Editors of scholarly journals use the peer-review process to decide which articles to publish, and the academic world relies on it to validate scholarly articles.

Peer review process steps

  • A researcher writes an article and submits it for publication to a scholarly journal.
  • The journal editor gives the article an initial read to see whether it fits within the journal’s scope.
  • If the article passes this phase, the editor selects reviewers who are experts in the same field as that of the author (which is why they are called “peers”). The reviewers may also be referred to as referees because they make judgments about the article’s quality. The reviewers often do not know who the author is, and the author does not know who the reviewers are.
  • The reviewers evaluate the article on the basis of its quality, methodology, potential bias, ethical issues, and any other factors that would affect the research.
  • The reviewers make a recommendation on whether the article should be published, including whether the article needs major or minor revisions. The editor makes a final decision on whether the article should be rejected, rejected with the request for revisions, or accepted.
  • If an author is asked to make revisions, the author can resubmit the article after addressing the reviewers’ comments. This process may go through several rounds before an article is ultimately accepted.
  • If an article is published in a subscription-based journal, the article is available to subscribers of that journal. Subscribers are usually university and college libraries, as subscriptions are expensive. If an article is published in an open access journal, the article is available for anyone to read for free online.
  • The peer-review process may continue even after an article has been published. Authors and editors may make corrections to the article or even retract it if serious concerns arise.

Many professors will require you to use only peer-reviewed research for your assignments. View our guide to finding peer-reviewed articles to learn more about this process.

Peer review in 3 minutes

This video by the North Carolina State University Library describes the peer-review process.


How to Write an Article Review (With Examples)

Last Updated: April 24, 2024

Preparing to Write Your Review


This article was co-authored by Jake Adams. Jake Adams is an academic tutor and the owner of Simplifi EDU, a Santa Monica, California based online tutoring business offering learning resources and online tutors for academic subjects K-College, SAT & ACT prep, and college admissions applications. With over 14 years of professional tutoring experience, Jake is dedicated to providing his clients the very best online tutoring experience and access to a network of excellent undergraduate and graduate-level tutors from top colleges all over the nation. Jake holds a BS in International Business and Marketing from Pepperdine University.

An article review is both a summary and an evaluation of another writer's article. Teachers often assign article reviews to introduce students to the work of experts in the field. Experts also are often asked to review the work of other professionals. Understanding the main points and arguments of the article is essential for an accurate summation. Logical evaluation of the article's main theme, supporting arguments, and implications for further research is an important element of a review. Here are a few guidelines for writing an article review.

Education specialist Alexander Peterman recommends: "In the case of a review, your objective should be to reflect on the effectiveness of what has already been written, rather than writing to inform your audience about a subject."

Article Review 101

  • Read the article very closely, and then take time to reflect on your evaluation. Consider whether the article effectively achieves what it set out to.
  • Write out a full article review by completing your intro, summary, evaluation, and conclusion. Don't forget to add a title, too!
  • Proofread your review for mistakes (like grammar and usage), while also cutting down on needless information.

Step 1 Understand what an article review is.

  • Article reviews present more than just an opinion. You will engage with the text to create a response to the scholarly writer's ideas. You will respond to and use ideas, theories, and research from your studies. Your critique of the article will be based on proof and your own thoughtful reasoning.
  • An article review only responds to the author's research. It typically does not provide any new research. However, if you are correcting misleading or otherwise incorrect points, some new data may be presented.
  • An article review both summarizes and evaluates the article.

Step 2 Think about the organization of the review article.

  • Summarize the article. Focus on the important points, claims, and information.
  • Discuss the positive aspects of the article. Think about what the author does well, good points she makes, and insightful observations.
  • Identify contradictions, gaps, and inconsistencies in the text. Determine if there is enough data or research included to support the author's claims. Find any unanswered questions left in the article.

Step 3 Preview the article.

  • Make note of words or issues you don't understand and questions you have.
  • Look up terms or concepts you are unfamiliar with, so you can fully understand the article. Read about concepts in-depth to make sure you understand their full context.

Step 4 Read the article closely.

  • Pay careful attention to the meaning of the article. Make sure you fully understand the article. The only way to write a good article review is to understand the article.

Step 5 Put the article into your words.

  • With either method, make an outline of the main points made in the article and the supporting research or arguments. It is strictly a restatement of the main points of the article and does not include your opinions.
  • After putting the article in your own words, decide which parts of the article you want to discuss in your review. You can focus on the theoretical approach, the content, the presentation or interpretation of evidence, or the style. You will always discuss the main issues of the article, but you can sometimes also focus on certain aspects. This comes in handy if you want to focus the review towards the content of a course.
  • Review the summary outline to eliminate unnecessary items. Erase or cross out the less important arguments or supplemental information. Your revised summary can serve as the basis for the summary you provide at the beginning of your review.

Step 6 Write an outline of your evaluation.

  • What does the article set out to do?
  • What is the theoretical framework or assumptions?
  • Are the central concepts clearly defined?
  • How adequate is the evidence?
  • How does the article fit into the literature and field?
  • Does it advance the knowledge of the subject?
  • How clear is the author's writing?
  • Don't: include superficial opinions or your personal reaction.
  • Do: pay attention to your biases, so you can overcome them.

Step 1 Come up with...

  • For example, in MLA, a citation may look like: Duvall, John N. "The (Super)Marketplace of Images: Television as Unmediated Mediation in DeLillo's White Noise." Arizona Quarterly 50.3 (1994): 127-53. Print. [9]

Step 3 Identify the article.

  • For example: The article, "Condom use will increase the spread of AIDS," was written by Anthony Zimmerman, a Catholic priest.

Step 4 Write the introduction....

  • Your introduction should only be 10-25% of your review.
  • End the introduction with your thesis. Your thesis should address the above issues. For example: Although the author has some good points, his article is biased and contains some misinterpretation of data from others’ analysis of the effectiveness of the condom.

Step 5 Summarize the article.

  • Use direct quotes from the author sparingly.
  • Review the summary you have written. Read over your summary many times to ensure that your words are an accurate description of the author's article.

Step 6 Write your critique.

  • Support your critique with evidence from the article or other texts.
  • The summary portion is very important for your critique. You must make the author's argument clear in the summary section for your evaluation to make sense.
  • Remember, this is not where you say if you liked the article or not. You are assessing the significance and relevance of the article.
  • Use a topic sentence and supportive arguments for each opinion. For example, you might address a particular strength in the first sentence of the opinion section, followed by several sentences elaborating on the significance of the point.

Step 7 Conclude the article review.

  • This should only be about 10% of your overall essay.
  • For example: This critical review has evaluated the article "Condom use will increase the spread of AIDS" by Anthony Zimmerman. The arguments in the article show the presence of bias, prejudice, argumentative writing without supporting details, and misinformation. These points weaken the author’s arguments and reduce his credibility.

Step 8 Proofread.

  • Make sure you have identified and discussed the 3-4 key issues in the article.


  • https://libguides.cmich.edu/writinghelp/articlereview
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4548566/
  • Jake Adams. Academic Tutor & Test Prep Specialist. Expert Interview. 24 July 2020.
  • https://guides.library.queensu.ca/introduction-research/writing/critical
  • https://www.iup.edu/writingcenter/writing-resources/organization-and-structure/creating-an-outline.html
  • https://writing.umn.edu/sws/assets/pdf/quicktips/titles.pdf
  • https://owl.purdue.edu/owl/research_and_citation/mla_style/mla_formatting_and_style_guide/mla_works_cited_periodicals.html
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4548565/
  • https://writingcenter.uconn.edu/wp-content/uploads/sites/593/2014/06/How_to_Summarize_a_Research_Article1.pdf
  • https://www.uis.edu/learning-hub/writing-resources/handouts/learning-hub/how-to-review-a-journal-article
  • https://writingcenter.unc.edu/tips-and-tools/editing-and-proofreading/

About This Article

Jake Adams

If you have to write an article review, read through the original article closely, taking notes and highlighting important sections as you read. Next, rewrite the article in your own words, either in a long paragraph or as an outline. Open your article review by citing the article, then write an introduction which states the article’s thesis. Next, summarize the article, followed by your opinion about whether the article was clear, thorough, and useful. Finish with a paragraph that summarizes the main points of the article and your opinions. To learn more about what to include in your personal critique of the article, keep reading the article!


Peer-review process

Peer review exists to ensure that journals publish good science that benefits the entire scientific community.

Sometimes authors find the peer-review process intimidating because it can lead to the rejection of their manuscript. Keep in mind that revisions and improvement are part of the publication process and actually help raise the quality of your manuscript.

Peer review is a positive process

Peer review is an integral part of scientific publishing that confirms the validity of the science reported. Peer reviewers are experts who volunteer their time to help improve the journal manuscripts they review—they offer authors free advice.

Through the peer-review process, manuscripts should become:

  • More robust : Peer reviewers may point out gaps in your paper that require more explanation or additional experiments.
  • Easier to read : If parts of your paper are difficult to understand, reviewers can tell you so that you can fix them. After all, if an expert cannot understand what you have done, it is unlikely that a reader in a different field will understand.
  • More useful : Peer reviewers also consider the importance of your paper to others in your field and can make suggestions to improve or better highlight this to readers.

Of course, in addition to offering authors advice, another important purpose of peer review is to make sure that the manuscripts published in the journal are of the correct quality for the journal’s aims.

Different types of peer review

There are different forms of peer review used by journals, although the basis is always the same: field experts providing comments on a paper to help improve it. The most common types are:

Closed – where the reviewers are aware of the authors’ identities but the authors do not know who reviewed their manuscript.

Double blind – in this case neither authors nor reviewers know each other’s identities.

Open – where the reviewers are aware of the authors’ identity and the reviewers’ identity is revealed to the authors. In some cases journals also publish the reviewers’ reports alongside the final published manuscript.

Writing a good review article

As a young researcher, you might wonder how to start writing your first review article, and the extent of the information that it should contain. A review article is a comprehensive summary of the current understanding of a specific research topic and is based on previously published research. Unlike research papers, it does not contain new results, but can propose new inferences based on the combined findings of previous research.

Types of review articles

Review articles are typically of three types: literature reviews, systematic reviews, and meta-analyses.

A literature review is a general survey of the research topic and aims to provide a reliable and unbiased account of the current understanding of the topic.

A systematic review , in contrast, is more specific and attempts to address a highly focused research question. Its presentation is more detailed, with information on the search strategy used, the eligibility criteria for inclusion of studies, the methods utilized to review the collected information, and more.

A meta-analysis is similar to a systematic review in that both are systematically conducted with a properly defined research question. However, unlike a systematic review, a meta-analysis compares and evaluates a defined number of similar studies. It is quantitative in nature and can help assess contrasting study findings.

Tips for writing a good review article

Here are a few practices that can make the time-consuming process of writing a review article easier:

  • Define your question: Take your time to identify the research question and carefully articulate the topic of your review paper. A good review should also add something new to the field in terms of a hypothesis, inference, or conclusion. A carefully defined scientific question will give you more clarity in determining the novelty of your inferences.
  • Identify credible sources: Identify relevant as well as credible studies that you can base your review on, with the help of multiple databases or search engines. It is also a good idea to conduct another search once you have finished your article to avoid missing relevant studies published during the course of your writing.
  • Take notes: A literature search involves extensive reading, which can make it difficult to recall relevant information subsequently. Therefore, make notes while conducting the literature search and note down the source references. This will ensure that you have sufficient information to start with when you finally get to writing.
  • Describe the title, abstract, and introduction: A good starting point to begin structuring your review is by drafting the title, abstract, and introduction. Explicitly writing down what your review aims to address in the field will help shape the rest of your article.
  • Be unbiased and critical: Evaluate every piece of evidence in a critical but unbiased manner. This will help you present a proper assessment and a critical discussion in your article.
  • Include a good summary: End by stating the take-home message and identify the limitations of existing studies that need to be addressed through future studies.
  • Ask for feedback: Ask a colleague to provide feedback on both the content and the language or tone of your article before you submit it.
  • Check your journal’s guidelines: Some journals only publish reviews, while some only publish research articles. Further, all journals clearly indicate their aims and scope. Therefore, make sure to check the appropriateness of a journal before submitting your article.

Writing review articles, especially systematic reviews or meta-analyses, can seem like a daunting task. However, Elsevier Author Services can guide you by providing useful tips on how to write an impressive review article that stands out and gets published!


Labor Department’s PERM Program Failing Immigrants And Employers


The U.S. Department of Labor building in Washington, D.C. Critics say the Labor Department’s policies produce long delays to sponsor high-skilled immigrants and contribute to a dysfunctional immigration system. (Photo by ALEX EDELMAN/AFP via Getty Images)

Critics say the Department of Labor’s policies have produced long and costly delays to sponsor immigrants for permanent residence and contributed to dysfunction in America’s immigration system. To obtain an employment-based green card, an employer must obtain a prevailing wage and, in most cases, use advertising to show no qualified U.S. worker is available under the DOL’s troubled PERM or permanent labor certification program.

“Although immigration law requires ‘labor certification’ for most employer-sponsored immigrants, [under the PERM program] the Department of Labor has created the current system out of whole cloth,” according to a National Foundation for American Policy report. Nothing in U.S. law requires advertising. At the time of the 1965 Immigration Act, Senator Edward Kennedy (D-MA) said the DOL could use available employment data because “It was not our intention . . . that all intending immigrants must undergo an employment analysis of great detail that could be time consuming and disruptive to the normal flow of immigration.”

Despite this, attorneys and employers say the Department of Labor has created a system with mounting costs and delays that disrupt the ability of employers to sponsor highly skilled immigrants. To better understand the process, I interviewed Krystal Alanis, who responded in writing. Alanis is a partner at Reddy Neumann Brown PC, managing the firm's PERM Labor Certification Department.

Stuart Anderson: How long does the Department of Labor say the PERM process should take?

Krystal Alanis: The DOL has said it should take 45 to 60 days to decide a PERM case, according to the preamble to its final rule in 2005.

Anderson: How long does the PERM process actually take?

Alanis: Currently, it takes approximately two years to complete the PERM process. It takes six months to obtain a Prevailing Wage Determination, or PWD, 60 days minimum for the labor market test and 13 months or more for the DOL to approve the case. If there is an audit, the process will take several additional months.

Anderson: What does the PERM process involve?

Alanis: First, a company must identify the permanent full-time job opportunity, including a job title, job description and minimum requirements for the sponsored position, and verify and document that the foreign national meets all qualifications for the position.

Second, before filing a PERM application, employers must obtain a Prevailing Wage Determination from the DOL’s National Prevailing Wage Center. The prevailing wage is the average wage paid to similarly employed workers in the area of intended employment. The employer must offer the foreign worker at least the prevailing wage rate for the sponsored position.

Third, employers must conduct recruitment for the sponsored job opportunity to test the U.S. labor market for able, willing, qualified and available U.S. workers. That is the heart of the PERM process. The recruitment involves advertising in multiple DOL mandated mediums. Based on strict regulatory timelines, it will take at least 60 days for employers to complete recruitment. However, the recruitment process often exceeds 60 days. For example, employers may have to conduct interviews beyond 60 days if they receive resumes from U.S. workers.

Fourth, after completing recruitment, the employer will file the PERM application on Form 9089 electronically through the Foreign Labor Application Gateway system. An employer can only file the PERM application for a foreign national if it found no able, willing, qualified, and available U.S. workers for the position after conducting good faith recruitment.

Fifth, after the PERM application is submitted, the DOL reviews it to determine whether the employer meets all PERM requirements. The DOL either approves, denies or requires a PERM audit.

Anderson: What are the negative impacts of the current process?

Alanis: Increased PERM processing times have burdened employers and foreign nationals. A significant portion of PERM beneficiaries are H-1B professionals. H-1B status can only be extended beyond the six-year limit if the green card process has started. The PERM delays often prevent foreign nationals stuck in the green card backlog from timely filing an extension beyond the six-year limit. While waiting for a PERM approval, a foreign national may need to stop work and, many times, even depart the United States.

Many of these individuals have children who attend school in the U.S. and must choose whether to uproot their family. Delays in the PERM process also burden H-4 dependent spouses, who can obtain employment authorization based on the H-1B holder’s approved I-140 petition. PERM delays affect foreign nationals whose priority date is current because, without an approved PERM application, they are prevented from filing for adjustment of status. That may lead to a missed filing opportunity if dates retrogress.

Anderson: Can you explain the PERM delay lawsuit your firm filed?

Alanis: The anticipated timeframe in 2005 of 45-60 days has turned into a 13-month turnaround with no real improvement in sight. These unreasonable and systematic delays have caused harm to employers and beneficiaries of PERM applications. The Administrative Procedure Act allows litigants to challenge a government agency’s unreasonable delay.

Our team recognizes the hardship PERM delays have caused and filed a lawsuit to hold the DOL accountable for their inaction. Our claim is straightforward: DOL’s PERM delays on unaudited applications are unreasonable after 60 days. Therefore, the plaintiffs in our lawsuit request the DOL to make expeditious decisions on their applications based on the original intent of the PERM program.

Anderson: Would the DOL allow employers to pay premium processing fees for faster adjudication of PERM cases, similar to how employers seek to manage delays with U.S. Citizenship and Immigration Services?

Alanis: A premium processing option would be a great resource, but the DOL cannot offer one without authorization from Congress to charge fees for PERM. The DOL has raised the idea of user fees multiple times with no progress in that area.

Anderson: What advice do you have for clients to avoid Department of Justice lawsuits, such as those against Apple and Facebook, when companies follow DOL’s PERM regulations but are accused of discriminating against U.S. workers?

Alanis: Considering the recent cases involving Apple and Facebook, employers should understand the interplay between PERM regulations and anti-discrimination laws. Simple adjustments to an employer’s PERM recruitment process to more closely align with their standard recruitment procedures may help employers avoid hefty civil penalties.

Employers should review their PERM recruitment processes to determine whether the best methods are being used that more closely mirror their non-PERM recruitment processes within the limitations of the PERM regulations. Employers should also consider employee training on PERM regulations and anti-discrimination laws. In doing so, employers should work with a qualified immigration attorney to help navigate this complicated process. Additionally, employers who use third-party companies to post PERM advertisements should discuss these issues with their provider to ensure compliance.

Anderson: NFAP and other organizations have urged placing more occupations on the Schedule A list, exempting those cases from the PERM process. Would that help?

Alanis: There are definitely benefits to expanding the Schedule A occupational list, which has not been expanded in decades. An employer wanting to hire a foreign national for a Schedule A occupation is not required to test the U.S. labor market or file a PERM application with the DOL. Instead, the employer can bypass the PERM recruitment process and apply for Schedule A designation by submitting an uncertified PERM labor certification application to the USCIS with an I-140 petition. This is cost-effective for employers and benefits foreign nationals stuck in the green card backlog.

An employer must still obtain a prevailing wage determination, but they will not have to wait 13 months or more for labor certification. That would allow foreign nationals to obtain an approved I-140 faster to lock in their priority date and utilize the approved I-140 to extend their H-1B status beyond the six-year limit. Faster processing of an EB-2 or EB-3 petition through Schedule A will reduce the chance of a missed adjustment of status filing opportunity if a foreign national’s priority date is current.

Anderson: What reforms would you like to see in the PERM and employment-based immigration system?

Alanis: The DOL must reevaluate its process and utilize the current technology available to ensure timely adjudication of applications. The DOL should also revise its regulations to include updated recruitment requirements. For example, requiring an employer to spend thousands of dollars on print newspaper advertisements is nonsensical and not the best method to test the U.S. labor market in keeping with the spirit of the regulations.

The DOL should also address the transition of numerous employers to remote-only operations and incorporate revised guidance on telecommuting. The DOL must also prioritize enhancing transparency by providing clear and straightforward guidelines when major changes or updates occur. For instance, although the DOL has implemented a new Form 9089 and a new filing system under the Foreign Labor Application Gateway, or FLAG, the agency has yet to create internal guidelines on how its staff will adjudicate these applications.

Removing the per-country cap would create a more equitable system where foreign nationals are granted green cards on a first-come, first-served basis. Congress can also increase the number of annual employment-based visas available each year to account for the changing demand in the U.S. labor market. The annual limits were set in 1990 and Congress has kept them the same.

Congress can make significant progress in reducing the green card backlog by establishing that family (spouse and children) of employment based green card beneficiaries are not counted against the annual quota. Exempting spouses and children from the annual cap would effectively double the 140,000 immigrant visa cap. Finally, Congress or USCIS should allow beneficiaries of an approved I-140 petition to obtain employment authorization. That would allow greater mobility and job flexibility while waiting in the green card backlog.

Immigrants are valuable to the nation and part of America’s history and tradition. They deserve to be treated with respect and dignity.

Stuart Anderson


The 3-Stage Process That Makes Universities Prime Innovators

  • Anne-Laure Fayard
  • Martina Mendola

How these institutions carefully curate connections and relationships to take ideas from the drawing board into the marketplace.

While calls for cross-sector collaborations to tackle complex societal issues abound, in practice only a few succeed. Those that do often have a collaboration intermediary, which can bring together different actors, develop relationships among collaborators, and create an ecosystem to support ideas over time. With their strengths in knowledge creation and their role as community anchors, universities are ideally equipped to create and orchestrate support for the kind of innovation that the sustainability imperative requires. However, to take on this role they need to develop a culture of open innovation, experimentation, and iteration, which requires supporting teams that will champion the change and facilitate collaborations among the diverse actors of the innovation ecosystem.

Cross-sector collaborations, combining the competencies, skills, and resources of diverse actors, are increasingly seen by academics and professionals alike as the best way to tackle complex societal and environmental issues — and universities are becoming key players in these collaborations.

  • Anne-Laure Fayard is the ERA Chair Professor in Social Innovation at Nova School of Business and Economics and visiting research faculty at New York University Tandon School of Engineering.
  • Martina Mendola is a researcher at The Dock.

Biden Title IX rules set to protect trans students, survivors of abuse

The administration’s regulations offer protections for transgender students, but do not address athletics.

The Biden administration on Friday finalized sweeping new rules barring schools from discriminating against transgender students and ordering significant changes for how schools adjudicate claims of sexual harassment and assault on campus.

The provisions regarding gender identity are the most politically fraught, feeding an election-year culture clash with conservative states and school boards that have limited transgender rights in schools, banned discussion of gender identity in classrooms and removed books with LGBTQ+ themes. Mindful of the politics, the administration is delaying action on the contentious issue of whether transgender girls and women should be allowed to compete in women’s and girls’ sports.

The long-awaited regulation represents the administration’s interpretation of Title IX, a 1972 law that bars sex discrimination in schools that receive federal funding. Title IX is best known for ushering in equal treatment for women in sports, but it also governs how schools handle complaints of sexual harassment and assault, a huge issue on many college campuses.

Now the Biden administration is deploying the regulation to formalize its long-standing view that sex discrimination includes discrimination based on gender identity as well as sexual orientation, a direct challenge to conservative policies across the country.

Ten states, for instance, require transgender students to use bathrooms and locker rooms that align with their biological sex identified at birth, according to tracking by the Movement Advancement Project. Some school districts will not use the pronouns corresponding with a trans student’s gender identity. Both situations might constitute violations of Title IX under the new regulation. In addition, if a school failed to properly address bullying based on gender identity or sexual orientation, that could be a violation of federal law.

“No one should face bullying or discrimination just because of who they are or who they love. Sadly, this happens all too often,” Education Secretary Miguel Cardona told reporters in a conference call.

The Education Department’s Office for Civil Rights investigates allegations of sex discrimination, among other things, and schools that fail to come into compliance risk losing federal funding. A senior administration official said that the office could investigate cases where schools were potentially discriminating, even if they were following their own state’s law.

The final regulation also includes provisions barring discrimination based on pregnancy, including childbirth, abortion and lactation. For instance, schools must accommodate students’ need to attend medical appointments, as well as provide students and workers who are nursing a clean, private space to pump milk.

The combination of these two issues — sexual assault and transgender rights — drew enormous public interest, with some 240,000 public comments submitted in response to the proposed version published in 2022. The new rules take effect Aug. 1, in time for the start of next school year.

The final regulation makes a number of significant changes to a Trump-era system for handling sexual assault complaints, discarding some of the rules that bolstered due process rights of the accused. Supporters said those rules were critical to ensuring that students had the opportunity to defend themselves; critics said they discouraged sexual assault survivors from reporting incidents and turned colleges into quasi-courtrooms.

Education Department officials said they had retained the elements that made sense but created a better overall framework that balanced the rights of all involved.

Under the Trump administration’s regulation, finalized in 2020, colleges were required to stage live hearings to adjudicate complaints, where the accused could cross-examine witnesses — including the students who were alleging assault.

Under President Biden’s new rules, colleges will have more flexibility. The investigator or person adjudicating the case may question witnesses in separate meetings or employ a live hearing.

The new rules also will allow schools to use a lower bar for adjudicating guilt. In weighing evidence, they are now directed to use a “preponderance of the evidence” standard, though they may opt for a higher standard of “clear and convincing evidence” if they use that standard in other similar proceedings. Under the Trump version, universities had a choice, but they were required to use the higher standard if they did so in other settings.

The new rules also expand the definition of what constitutes sexual harassment, discarding a narrower definition used under President Donald Trump. Under the new definition, conduct must be so “severe or pervasive” that it limits or denies a person’s ability to participate in their education. The prior version required it to be both severe and pervasive.

The changes were derided by conservative critics who said they would revive problems that the Trump rules had solved.

“So today’s regulations mean one thing: America’s college students are less likely to receive justice if they find themselves in a Title IX proceeding,” said Will Creeley, legal director for the Foundation for Individual Rights and Expression, a free speech advocacy group.

The group was concerned that the opportunity for cross examination would no longer be guaranteed and that schools will again be allowed to have one person investigate and adjudicate cases.

Some of the Trump provisions were retained. Schools, for instance, will continue to have the option to use informal resolution of discrimination complaints, unless the allegation involves an employee of a K-12 school.

The Biden administration’s approach was welcomed by advocates for sexual assault survivors.

The rules “will make schools safer and more accessible for young people, many of whom experienced irreparable harm while they fought for protection and support,” said Emma Grasso Levine, senior manager of Title IX policy and programs at Know Your IX, a project of the advocacy group Advocates for Youth.

“Now, it’s up to school administrators to act quickly to implement and enforce” the new rules, she added.

Sexual assault has been a serious issue on college campuses for years, and in releasing the regulation, the Biden administration said the rates were still “unacceptably high.”

But the most controversial element of the regulation involves the rights of transgender students, and it came under immediate fire.

Administration officials point to a 2020 Supreme Court ruling that sex discrimination in employment includes gender identity and sexual orientation to bolster their interpretation of the law. But conservatives contend that Title IX does not include these elements and argue that accommodations for transgender students can create situations that put other students at risk. They object, for instance, to having a transgender woman — someone they refer to as a “biological man” — using women’s bathrooms or locker rooms.

Rep. Virginia Foxx (R-N.C.), chairwoman of the House Education Committee, called the regulation an escalation of Democrats’ “contemptuous culture war that aims to radically redefine sex and gender.” The department, she said, has put decades of advancement for women and girls “squarely on the chopping block.”

Betsy DeVos, who was education secretary during Trump’s administration, criticized the gender identity protections and the new rules on handling assault and harassment allegations.

“The Biden Administration’s radical rewrite of Title IX guts the half century of protections and opportunities for women and callously replaces them with radical gender theory,” she said in a statement.

Still, the administration sidestepped the contentious issue of athletics, at least for now. A separate regulation governing how and when schools may exclude transgender students from women’s and girls’ teams remains under review, and administration officials offered no timetable for when it would be finalized. People familiar with their thinking said it was being delayed to avoid injecting the matter into the presidential campaign, in which Biden faces a close race against Trump.

A senior administration official, who spoke on the condition of anonymity in a briefing with reporters Thursday, declined to comment on the politics of this decision but noted that the athletics rule was proposed after the main Title IX regulation.

Polling shows that clear majorities of Americans, including a sizable slice of Democrats, oppose allowing transgender athletes to compete on girls’ and women’s teams. Twenty-five states have statewide bans on their participation.

The proposed sports regulation disallows these statewide, blanket bans, but it allows school districts to restrict participation in more narrowly defined circumstances — for instance, on competitive high school or college teams. The main Title IX regulation does not address the issue, and an administration official said the status quo would remain in place for now. Still, some have argued that the new general ban on discrimination could apply to sports even though the administration does not intend it to.

Sen. Patty Murray (D-Wash.) welcomed the protections for transgender students but urged the department to issue the sports regulation as well.

“Trans youth deserve to play sports with their friends just like anyone else,” she said in a statement.

The rules governing how campuses deal with harassment complaints have changed repeatedly in recent years.

The Obama administration issued detailed guidance in 2014 for schools in handling complaints, but DeVos later tossed that out. Her department went through its own laborious rulemaking process to put a new system in place.

As a candidate for president in 2020, Biden promised to put a “quick end” to that version if elected, saying it gave colleges “a green light to ignore sexual violence and strip survivors of their rights.” Friday’s action makes good on his promise.

Ted Mitchell, president of the American Council on Education, which represents colleges and universities, praised many of the changes but bemoaned the fact that schools will have to retrain their staffs on the new rules in short order.

“After years of constant churn in Title IX guidance and regulations,” he said, “we hope for the sake of students and institutions that there will be more stability and consistency in the requirements going forward.”

Danielle Douglas-Gabriel contributed to this report.


  • Open access
  • Published: 19 April 2024

Person-centered care assessment tool with a focus on quality healthcare: a systematic review of psychometric properties

  • Lluna Maria Bru-Luna 1,
  • Manuel Martí-Vilar 2,
  • César Merino-Soto 3,
  • José Livia-Segovia 4,
  • Juan Garduño-Espinosa 5 &
  • Filiberto Toledano-Toledano 5, 6, 7

BMC Psychology, volume 12, article number 217 (2024)


The person-centered care (PCC) approach plays a fundamental role in ensuring quality healthcare. The Person-Centered Care Assessment Tool (P-CAT) is one of the shortest and simplest tools currently available for measuring PCC. The objective of this study was to conduct a systematic review of the evidence in validation studies of the P-CAT, taking the “Standards” as a frame of reference.

First, a systematic literature review was conducted following the PRISMA method. Second, a systematic descriptive literature review of validity evidence was conducted following the “Standards” framework. The information sources were the Cochrane, Web of Science (WoS), Scopus and PubMed databases. With regard to the eligibility criteria and selection process, a protocol was registered in PROSPERO (CRD42022335866), and articles had to meet predefined criteria for inclusion in the systematic review.

A total of seven articles were included. Empirical evidence indicates that these validations offer a high number of sources related to test content, internal structure for dimensionality and internal consistency. A moderate number of sources pertain to internal structure in terms of test-retest reliability and the relationship with other variables. There is little evidence of response processes, internal structure in measurement invariance terms, and test consequences.

The various validations of the P-CAT are not framed within a structured, valid, theory-based procedural framework such as the “Standards”. This can affect clinical practice because people’s health may depend on it. The findings of this study show that validation studies continue to focus on the types of validity traditionally studied and overlook interpretation of the scores in terms of their intended use.


Person-centered care (PCC)

Quality care for people with chronic diseases, functional limitations, or both has become one of the main objectives of medical and care services. The person-centered care (PCC) approach is an essential element not only in achieving this goal but also in providing high-quality health maintenance and medical care [ 1 , 2 , 3 ]. In addition to guaranteeing human rights, PCC provides numerous benefits to both the recipient and the provider [ 4 , 5 ]. Additionally, PCC includes a set of necessary competencies for healthcare professionals to address ongoing challenges in this area [ 6 ]. PCC includes the following elements [ 7 ]: an individualized, goal-oriented care plan based on individuals’ preferences; an ongoing review of the plan and the individual’s goals; support from an interprofessional team; active coordination among all medical and care providers and support services; ongoing information exchange, education and training for providers; and quality improvement through feedback from the individual and caregivers.

There is currently a growing body of literature on the application of PCC. A good example of this is McCormack’s widely known mid-range theory [8], an internationally recognized theoretical framework for PCC and how it is operationalized in practice, which serves as a guide for care practitioners and researchers in hospital settings. In this framework, PCC is conceived of as “an approach to practice that is established through the formation and fostering of therapeutic relationships between all care providers, service users, and others significant to them, underpinned by values of respect for persons, [the] individual right to self-determination, mutual respect, and understanding” [9].

Thus, as established by PCC, it is important to emphasize that reference to the person who is the focus of care refers not only to the recipient but also to everyone involved in a care interaction [ 10 , 11 ]. PCC ensures that professionals are trained in relevant skills and methodology since, as discussed above, carers are among the agents who have the greatest impact on the quality of life of the person in need of care [ 12 , 13 , 14 ]. Furthermore, due to the high burden of caregiving, it is essential to account for caregivers’ well-being. In this regard, studies on professional caregivers are beginning to suggest that the provision of PCC can produce multiple benefits for both the care recipient and the caregiver [ 15 ].

Despite a considerable body of literature and the frequent inclusion of the term in health policy and research [16], PCC involves several complications. There is no standard consensus on the definition of the concept [17], which includes problematic areas such as efficacy assessment [18, 19]. In addition, the difficulty of measuring the subjectivity involved in identifying the dimensions of PCC and the infrequent use of standardized measures are acute issues [20]. These limitations motivated the creation of the Person-Centered Care Assessment Tool (P-CAT; [21]), which emerged from the need for a brief, economical, easily applied, versatile and comprehensive instrument providing valid and reliable measures of PCC for research purposes [21].

Person-centered care assessment tool (P-CAT)

There are several instruments that can measure PCC from different perspectives (i.e., the caregiver or the care recipient) and in different contexts (e.g., hospitals and nursing homes). However, from a practical point of view, the P-CAT is one of the shortest and simplest tools and contains all the essential elements of PCC described in the literature. It was developed in Australia to measure the approach of long-term residential settings to older people with dementia, although it is increasingly used in other healthcare settings, such as oncology units [ 22 ] and psychiatric hospitals [ 23 ].

Due to the brevity and simplicity of its application, the versatility of its use in different medical and care contexts, and its potential emic characteristics (i.e., constructs that can be cross-culturally applicable with a reasonable and similar structure and interpretation; [24]), the P-CAT is one of the tests most widely used by professionals to measure PCC [25, 26]. Since its creation, it has been adapted in countries separated by wide cultural and linguistic differences, such as Norway [27], Sweden [28], China [29], South Korea [30], Spain [25], and Italy [31].

The P-CAT comprises 13 items rated on a 5-point ordinal scale (from “strongly disagree” to “strongly agree”), with high scores indicating a high degree of person-centeredness. The scale consists of three dimensions: person-centered care (7 items), organizational support (4 items) and environmental accessibility (2 items). In the original study (n = 220; [21]), the internal consistency of the instrument yielded satisfactory values for the total scale (α = 0.84) and good test-retest reliability (r = .66) at one-week intervals. A reliability generalization study conducted in 2021 [32], which estimated the internal consistency of the P-CAT and analyzed possible factors that could affect it, revealed that the mean α value for the 25 meta-analysis samples (some of which were part of the validations included in this study) was 0.81, and that the only variable with a statistically significant relationship to the reliability coefficient was the mean age of the sample. With respect to internal structure validity, three factors (56% of the total variance) were obtained, and content validity was assessed by experts, literature reviews and stakeholders [33].
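Cronbach’s α, the internal consistency statistic reported for the P-CAT, is straightforward to compute from an item-score matrix. The sketch below is illustrative only: it uses simulated Likert-type data shaped like the P-CAT’s 13 five-point items, not the original sample.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items (13 for the P-CAT)
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative check: 13 Likert-type items (1-5) sharing a common factor,
# with n = 220 respondents as in the original study. Simulated data only.
rng = np.random.default_rng(0)
common = rng.normal(size=(220, 1))             # shared trait per respondent
noise = rng.normal(size=(220, 13))             # item-specific error
raw = common + noise
scores = np.clip(np.round(raw * 0.8 + 3), 1, 5)  # map onto a 1-5 ordinal scale
alpha = cronbach_alpha(scores)
print(round(alpha, 2))  # high for this deliberately correlated simulation
```

The review notes that all of the included studies relied on Cronbach’s α; the formula above is that classical estimator (total-score variance versus summed item variances).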

Although not explicitly stated, the apparent commonality between validation studies of different versions of the P-CAT may reflect an influential decades-old validity framework that differentiates three categories: content validity, construct validity, and criterion validity [34, 35]. However, a reformulation of the validity of the P-CAT within a modern framework, which provides a different definition of validity, has not been performed.

Scale validity

Traditionally, validation is a process focused on the psychometric properties of a measurement instrument [ 36 ]. In the early 20th century, with the frequent use of standardized measurement tests in education and psychology, two definitions emerged: the first defined validity as the degree to which a test measures what it intends to measure, while the second described the validity of an instrument in terms of the correlation it presents with a variable [ 35 ].

However, over the past century, validity theory has evolved, leading to the understanding that validity should be based on specific interpretations for an intended purpose. It should not be limited to empirically obtained psychometric properties but should also be supported by the theory underlying the construct measured. A classical approach (classical test theory, CTT) is therefore specifically differentiated from a modern approach. In general, recent concepts associated with the modern view of validity are based on (a) a unitary conception of validity and (b) validity judgments based on inferences and interpretations of the scores of a measure [37, 38]. This conceptual advance led to the creation of a guiding framework for obtaining evidence to support the use and interpretation of the scores obtained by a measure [39].

This purpose is addressed by the Standards for Educational and Psychological Testing (“Standards”), a guide created by the American Educational Research Association (AERA), the American Psychological Association (APA) and the National Council on Measurement in Education (NCME) in 2014 with the aim of providing guidelines to assess the validity of the interpretations of scores of an instrument based on their intended use. Two conceptual aspects stand out in this modern view of validity: first, validity is a unitary concept centered on the construct; second, validity is defined as “the degree to which evidence and theory support the interpretations of test scores for proposed uses of tests” [37]. Thus, the “Standards” propose several sources that serve as a reference for assessing different aspects of validity. The five sources of validity evidence are as follows [37]: test content, response processes, internal structure, relations to other variables and consequences of testing. According to AERA et al. [37], test content validity refers to the relationship of the administration process, subject matter, wording and format of test items to the construct they are intended to measure; it is measured predominantly with qualitative methods, without excluding quantitative approaches. Validity based on response processes rests on analysis of respondents’ cognitive processes and interpretation of the items and is measured with qualitative methods. Internal structure validity is based on the interrelationship between the items and the construct and is measured by quantitative methods. Validity in terms of relationships with other variables is based on comparison between the variable the instrument intends to measure and other theoretically relevant external variables and is measured by quantitative methods. Finally, validity based on the consequences of testing analyzes consequences, both intended and unintended, that may be due to a source of invalidity; it is measured mainly by qualitative methods.

Thus, although validity plays a fundamental role in providing a strong scientific basis for interpretations of test scores, validation studies in the health field have traditionally focused on content validity, criterion validity and construct validity and have overlooked the interpretation and use of scores [ 34 ].

The “Standards” are considered a suitable validity theory-based procedural framework for reviewing the validity of questionnaires due to their ability to analyze sources of validity from both qualitative and quantitative approaches and their evidence-based method [35]. Nevertheless, due to a lack of knowledge or the lack of a systematic description protocol, very few instruments to date have been reviewed within the framework of the “Standards” [39].

Current study

Although the P-CAT is one of the most widely used instruments by professionals and has seven validations [ 25 , 27 , 28 , 29 , 30 , 31 , 40 ], no analysis has been conducted of its validity within the framework of the “Standards”. That is, empirical evidence of the validity of the P-CAT has not been obtained in a way that helps to develop a judgment based on a synthesis of the available information.

A review of this type is critical given that some methodological issues seem to have not been resolved in the P-CAT. For example, although the multidimensionality of the P-CAT was identified in the study that introduced it, Bru-Luna et al. [ 32 ] recently stated that in adaptations of the P-CAT [ 25 , 27 , 28 , 29 , 30 , 40 ], the total score is used for interpretation and multidimensionality is disregarded. Thus, the multidimensionality of the original study was apparently not replicated. Bru-Luna et al. [ 32 ] also indicated that the internal structure validity of the P-CAT is usually underreported due to a lack of sufficiently rigorous approaches to establish with certainty how its scores are calculated.

The validity of the P-CAT, specifically its internal structure, appears to be unresolved. Nevertheless, substantive research and professional practice point to this measure as relevant to assessing PCC. This perception is contestable and judgment-based and may not be sufficient to assess the validity of the P-CAT from a cumulative and synthetic angle based on preceding validation studies. An adequate assessment of validity requires a model to conceptualize validity followed by a review of previous studies of the validity of the P-CAT using this model.

Therefore, the main purpose of this study was to conduct a systematic review of the evidence provided by P-CAT validation studies while taking the “Standards” as a framework.

The present study comprises two distinct but interconnected procedures. First, a systematic literature review was conducted following the PRISMA method ( [ 41 ]; Additional file 1; Additional file 2) with the aim of collecting all validations of the P-CAT that have been developed. Second, a systematic description of the validity evidence for each of the P-CAT validations found in the systematic review was developed following the “Standards” framework [ 37 ]. The work of Hawkins et al. [ 39 ], the first study to review validity sources according to the guidelines proposed by the “Standards”, was also used as a reference. Both provided conceptual and pragmatic guidance for organizing and classifying validity evidence for the P-CAT.

The procedure conducted in the systematic review is described below, followed by the procedure for examining the validity studies.

Systematic review

Search strategy and information sources

Initially, the Cochrane database was searched with the aim of identifying systematic reviews of the P-CAT. When no such reviews were found, subsequent preliminary searches were performed in the Web of Science (WoS), Scopus and PubMed databases. These databases play a fundamental role in recent scientific literature, as they are the main sources of published articles that undergo high-quality content and editorial review processes [42]. The search strategy was as follows: the original P-CAT article [21] was located, after which all articles citing it through 2021 were identified and analyzed. This approach ensured the inclusion of all validations. No articles were excluded on the basis of language, to avoid language bias [43]. Moreover, to reduce the effects of publication bias, a complementary search was performed in Google Scholar to allow the inclusion of “gray” literature [44]. Finally, a manual search was performed by reviewing the references of the included articles to identify articles that met the search criteria but were not present in any of the aforementioned databases.

This process was conducted by one of the authors and corroborated by another using the Covidence tool [ 45 ]. A third author was consulted in case of doubt.

Eligibility criteria and selection process

The protocol was registered in PROSPERO under identification code CRD42022335866, and the search was conducted according to the criteria described below.

The articles had to meet the following criteria for inclusion in the systematic review: (a) a methodological approach to P-CAT validation, (b) experimental or quasi-experimental studies, (c) studies with any type of sample, and (d) studies in any language. We discarded studies that met at least one of the following exclusion criteria: (a) systematic reviews, bibliometric reviews of the instrument or meta-analyses or (b) studies published after 2021.

Data collection process

After the articles were selected, the most relevant information was extracted from each article. Fundamental data were recorded in an Excel spreadsheet for each of the sections: introduction, methodology, results and discussion. Information was also recorded about the limitations mentioned in each article as well as the practical implications and suggestions for future research.

Given the aim of the study, information was collected about the sources of validity of each study, including test content (judges’ evaluation, literature review and translation), response processes, internal structure (factor analysis, design, estimator, factor extraction method, factors and items, interfactor R, internal replication, effect of the method, and factor loadings), relationships with other variables (convergent, divergent, concurrent and predictive validity), and consequences of measurement.

Description of the validity study

To assess the validity of the studies, an Excel table was used. Information was recorded for the seven articles included in the systematic review. The data were extracted directly from the texts of the articles and included information about the authors, the year of publication, the country where each P-CAT validation was produced and each of the five standards proposed in the “Standards” [ 37 ].

The validity source related to internal structure was divided into three sections to record information about dimensionality (e.g., factor analysis, design, estimator, factor extraction method, factors and items, interfactor R, internal replication, effect of the method, and factor loadings), reliability expression (i.e., internal consistency and test-retest) and the study of factorial invariance according to the groups into which it was divided (e.g., sex, age, profession) and the level of study (i.e., metric, intercepts). This approach allowed much more information to be obtained than relying solely on source validity based on internal structure. This division was performed by the same researcher who performed the previous processes.

Study selection and study characteristics

The systematic review process was developed according to the PRISMA methodology [ 41 ].

The WoS, Scopus, PubMed and Google Scholar databases were searched on February 12, 2022, yielding a total of 485 articles: 111 in WoS, 114 in Scopus, 43 in PubMed and 217 in Google Scholar. In the first phase, the titles and abstracts of all the articles were read. In this first screening, 457 articles were eliminated because they did not include studies with a methodological approach to P-CAT validation, and one article was excluded because it was the original P-CAT article. This left a total of 27 articles, 19 of which were duplicated across databases and, in the case of Google Scholar, within the same database. This process yielded a total of eight articles that were evaluated for eligibility through a complete reading of the text. In this step, one article was excluded due to a lack of access to the full text of the study [31] (although the original manuscript was found, it was impossible to access its complete content; the authors were contacted, but no reply was received). Finally, a manual search was performed by reviewing the references of the seven remaining studies, but none were considered suitable for inclusion. Thus, the review was conducted with a total of seven articles.
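The screening arithmetic above can be verified in a few lines. This is a bookkeeping sketch of the counts reported in the text, not part of the review’s actual tooling:

```python
# PRISMA screening counts as reported in the text; a quick consistency check.
identified = {"WoS": 111, "Scopus": 114, "PubMed": 43, "Google Scholar": 217}
total = sum(identified.values())
assert total == 485  # records retrieved on February 12, 2022

after_title_abstract = total - 457 - 1  # 457 off-topic + the original P-CAT article
assert after_title_abstract == 27

after_deduplication = after_title_abstract - 19  # 19 duplicates across databases
assert after_deduplication == 8                  # full texts assessed for eligibility

included = after_deduplication - 1  # Italian validation [31]: full text unavailable
assert included == 7                # studies included in the review
print(included)
```

Each subtraction mirrors one PRISMA stage (identification, screening, eligibility, inclusion), so the assertions fail if any reported count is inconsistent.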

Of the seven studies, six were original validations in other languages. These included Norwegian [ 27 ], Swedish [ 28 ], Chinese (which has two validations [ 29 , 40 ]), Spanish [ 25 ], and Korean [ 30 ]. The study by Selan et al. [ 46 ] included a modification of the Swedish version of the P-CAT and explored the psychometric properties of both versions (i.e., the original Swedish version and the modified version).

The item selection and screening process is illustrated in detail in Fig. 1.

Figure 1. PRISMA 2020 flow diagram for new systematic reviews including database searches

Validity analysis

To provide a clear overview of the validity analyses, Table 1 shows the percentage of articles providing information on each of the five standards proposed by the “Standards” guide [37].

The table shows a high number of validity sources related to test content and to internal structure in terms of dimensionality and internal consistency, followed by a moderate number of sources for test-retest reliability and relationships with other variables. A rate of 0% is observed for validity sources related to response processes, invariance and test consequences. The sections below present the information for each standard in more detail.

Evidence based on test content

The first standard, which focuses on test content, was met by all articles (100%). Translation, which refers to the equivalence of content between the original language and the target language, was addressed in the six articles that conducted validation in another language and/or culture. These studies reported that the validations were translated by bilingual experts and/or experts in the area of care. In addition, three studies [25, 29, 40] reported that the translation process followed International Test Commission guidelines, such as those of Beaton et al. [47], Guillemin [48], Hambleton et al. [49], and Muñiz et al. [50]. Evaluation by judges, which addressed the relevance, clarity and importance of the content, was divided into two categories: expert evaluation (a panel of expert judges for each of the areas to consider in the evaluation instrument) and experiential evaluation (potential participants testing the test). The first type of evaluation occurred in three of the articles [28, 29, 46], while the second occurred in two [25, 40]. Only one of the articles [29] reported that the scale contained items reflecting the dimensions described in the literature. The validity evidence related to test content presented in each article can be found in Table 2.

Evidence based on response processes

The second standard, related to the validity of response processes, is described in the “Standards” as deriving from the analysis of individual responses: “questioning test takers about their performance strategies or response to particular items (…), maintaining records that monitor the development of a response to a writing task (…), documentation of other aspects of performance, like eye movement or response times…” [37] (p. 15). None of the articles provided this type of evidence.

Evidence based on internal structure

The third standard, validity related to internal structure, was divided into three sections. First, the dimensionality of each study was examined in terms of factor analysis, design, estimator, factor extraction method, factors and items, interfactor R, internal replication, effect of the method, and factor loadings. Le et al. [40] conducted an exploratory-confirmatory design, while Sjögren et al. [28] conducted a confirmatory-exploratory design to assess construct validity using confirmatory factor analysis (CFA) and then investigated it further using exploratory factor analysis (EFA). The remaining articles employed only a single form of factor analysis: three employed EFA, and two employed CFA. Only three of the articles reported the factor extraction method used; the methods included Kaiser’s eigenvalue criterion, the scree plot test, parallel analysis and Velicer’s MAP test. Instrument validations yielded a total of two factors in five of the seven articles, while one yielded a single dimension [25] and another yielded three dimensions [29], as in the original instrument. The interfactor R was reported only in the study by Zhong and Lou [29], although in the study by Martínez et al. [25] it could be trivially obtained since the scale consisted of only one dimension. Internal replication was also calculated in the Spanish validation by randomly splitting the sample in two to test the correlations between factors. Method effects were not reported in any of the articles. This information is presented in Table 3, along with a summary of the factor loadings.
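Two of the factor-retention rules named above, Kaiser’s eigenvalue criterion and parallel analysis, can be sketched with plain numpy. The example below applies both to simulated 13-item data with two built-in independent factors; it illustrates the retention rules themselves and is not a reproduction of any reviewed study’s analysis:

```python
import numpy as np

def retained_factors(data: np.ndarray, n_sims: int = 100, seed: int = 0):
    """Number of factors retained by two rules on a correlation matrix:
    Kaiser's criterion (eigenvalues > 1) and parallel analysis
    (eigenvalues above the mean eigenvalues of same-sized random data)."""
    n, k = data.shape
    # Eigenvalues of the observed correlation matrix, largest first.
    eigvals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    kaiser = int(np.sum(eigvals > 1.0))

    # Parallel analysis: average eigenvalue spectrum of uncorrelated data.
    rng = np.random.default_rng(seed)
    rand_eigs = np.empty((n_sims, k))
    for i in range(n_sims):
        rand = rng.normal(size=(n, k))
        rand_eigs[i] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    parallel = int(np.sum(eigvals > rand_eigs.mean(axis=0)))
    return kaiser, parallel

# Simulated 13-item data with two independent underlying factors.
rng = np.random.default_rng(1)
f = rng.normal(size=(300, 2))
loadings = np.zeros((13, 2))
loadings[:7, 0] = 0.7   # items 1-7 load on factor 1
loadings[7:, 1] = 0.7   # items 8-13 load on factor 2
data = f @ loadings.T + rng.normal(scale=0.7, size=(300, 13))
print(retained_factors(data))
```

With cleanly separated simulated factors like these, both rules typically recover two factors; on real P-CAT data the reviewed studies disagreed (one to three dimensions), which is precisely the dimensionality problem discussed above.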

The second section examined reliability. All the studies reported internal consistency, in every case using Cronbach’s α coefficient for both the total scale and the subscales; McDonald’s ω coefficient was not used in any of them. Four of the seven articles performed a test-retest analysis. Martínez et al. [25] conducted the retest after a period of seven days, Le et al. [40] and Rokstad et al. [27] performed it between one and two weeks later, and Sjögren et al. [28] allowed approximately two weeks to pass after the initial test.
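Test-retest reliability of the kind reported here is simply the correlation between scores from two administrations separated by an interval. A minimal sketch with simulated total scores; the variance figures are arbitrary assumptions, not study data:

```python
import numpy as np

# Test-retest reliability as the Pearson correlation between total scores from
# two administrations one to two weeks apart. All numbers are simulated.
rng = np.random.default_rng(2)
true_score = rng.normal(loc=45, scale=6, size=150)   # stable trait per respondent
time1 = true_score + rng.normal(scale=4, size=150)   # first administration
time2 = true_score + rng.normal(scale=4, size=150)   # retest after the interval
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 2))  # expectation is 36 / (36 + 16) = 0.69 for these variances
```

The ratio of trait variance to total variance sets the expected coefficient, which is why a value such as the original study’s r = .66 implies substantial but imperfect score stability over the interval.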

The third section examined measurement invariance, which was not reported in any of the studies.

Evidence based on relationships with other variables

For the fourth standard, validity based on relationships with other variables, the articles that reported such evidence provided only convergent validity (i.e., the hypothesis that variables related to the construct measured by the test, in this case person-centeredness, are positively or negatively related to another construct). Discriminant validity, by contrast, hypothesizes that variables related to the PCC construct are uncorrelated with theoretically unrelated variables. No article (0%) reported discriminant evidence, while four (57%) reported convergent evidence [25, 29, 30, 46]. Convergent validity was assessed through comparisons with instruments such as the Person-Centered Climate Questionnaire–Staff Version (PCQ-S), the Staff-Based Measures of Individualized Care for Institutionalized Persons with Dementia (IC), the Caregiver Psychological Elder Abuse Behavior Scale (CPEAB), the Organizational Climate scale (CLIOR) and the Maslach Burnout Inventory (MBI). In the case of Selan et al. [46], convergent validity was assessed against two items the authors considered “crude measures of person-centered care (i.e., external constructs) giving an indication of the instruments’ ability to measure PCC” (p. 4). Concurrent validity (the degree to which the results of one test resemble those of another test administered at roughly the same time to the same participants) and predictive validity (which allows predictions about behavior based on comparison between instrument values and a criterion) were not reported in any of the studies.

Evidence based on the consequences of testing

The fifth and final standard was related to the consequences of the test. It analyzed the consequences, both intended and unintended, of applying the test to a given sample. None of the articles presented explicit or implicit evidence of this.

The last two sources of validity can be seen in Table  4 .

Table  5 shows the results of the set of validity tests for each study according to the described standards.

The main purpose of this article is to analyze the evidence of validity in different validation studies of the P-CAT. To gather all existing validations, a systematic review of all literature citing this instrument was conducted.

Validation studies of the P-CAT have appeared steadily over the years. Since the publication of the original instrument in 2010, seven validations have been published in other languages (counting the Italian version by Brugnolli et al. [ 31 ], which could not be included in this study), as well as a modification of one of these versions. The very unequal distribution of validations between languages and countries is striking. A recent systematic review [ 51 ] revealed that in Europe, the countries where the PCC approach is most widely used are the United Kingdom, Sweden, the Netherlands, Northern Ireland, and Norway. It has also been shown that neighboring countries seem to influence one another due to proximity [ 52 ], such that they tend to organize healthcare in a similar way, as is the case for the Scandinavian countries. This favors the expansion of PCC and explains the numerous validations found in this geographical area.

Although this approach is conceived as an essential element of healthcare for most governments [ 53 ], PCC varies according to the different definitions and interpretations attributed to it, which can cause confusion in its application (e.g., between Norway and the United Kingdom [ 54 ]). Moreover, facilitators of or barriers to implementation depend on the context and level of development of each country, and financial support remains one of the main factors in this regard [ 53 ]. This fact explains why PCC is not globally widespread among all territories. In countries where access to healthcare for all remains out of reach for economic reasons, the application of this approach takes a back seat, as does the validation of its assessment tools. In contrast, in a large part of Europe or in countries such as China or South Korea that have experienced decades of rapid economic development, patients are willing to be involved in their medical treatment and enjoy more satisfying and efficient medical experiences and environments [ 55 ], which facilitates the expansion of validations of instruments such as the P-CAT.

Regarding validity testing, the guidelines proposed by the “Standards” [ 37 ] were followed. According to the analysis of the different validations of the P-CAT instrument, none of the studies used a structured validity theory-based procedural framework for conducting validation. The most frequently reported validity tests were on the content of the test and two of the sections into which the internal structure was divided (i.e., dimensionality and internal consistency).

In the present article, the most cited source of validity in the studies was the content of the test because most of the articles were validations of the P-CAT in other languages, and the authors reported that the translation procedure was conducted by experts in all cases. In addition, several of the studies employed International Test Commission guidelines, such as those by Beaton et al. [ 47 ], Guillemin [ 48 ], Hambleton et al. [ 49 ], and Muñiz et al. [ 50 ]. Several studies also assessed the relevance, clarity and importance of the content.

The third source of validity, internal structure, was the next most often reported, although it appeared unevenly among the three sections into which this evidence was divided. Dimensionality and internal consistency were reported in all studies, followed by test-retest consistency. In relation to the first section, factor analysis, a total of five EFAs and four CFAs were presented in the validations. Traditionally, EFA has been used in research to assess dimensionality and identify key psychological constructs, although this approach involves a number of inconveniences, such as difficulty testing measurement invariance and incorporating latent factors into subsequent analyses [ 56 ] or the major problem of factor loading matrix rotation [ 57 ]. Studies eventually began to employ CFA, a technique that overcame some of these obstacles [ 56 ] but had other drawbacks; for example, the strict requirement of zero cross-loadings often does not fit the data well, and misspecification of zero loadings tends to produce distorted factors [ 57 ]. Recently, exploratory structural equation modeling (ESEM) has been proposed. This technique is widely recommended both conceptually and empirically to assess the internal structure of psychological tools [ 58 ] since it overcomes the limitations of EFA and CFA in estimating their parameters [ 56 , 57 ].
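As a minimal illustration of the dimensionality question discussed above, the sketch below applies one common EFA heuristic, the Kaiser criterion (retain factors whose correlation-matrix eigenvalues exceed 1), to simulated item responses with a known two-factor structure. The data, item counts, and seed are assumptions for the example only; the reviewed validations used full EFA/CFA procedures, not this heuristic alone.

```python
import numpy as np

def kaiser_dimensionality(data):
    """Return the number of factors suggested by eigenvalues > 1 of the
    item correlation matrix (the Kaiser criterion), plus the eigenvalues."""
    corr = np.corrcoef(np.asarray(data, float), rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)
    return int(np.sum(eigvals > 1.0)), np.sort(eigvals)[::-1]

# Hypothetical responses: 6 items, 3 driven by each of 2 latent factors.
rng = np.random.default_rng(1)
f1, f2 = rng.normal(size=(2, 300))
items = np.column_stack(
    [f1 + rng.normal(scale=0.5, size=300) for _ in range(3)]
    + [f2 + rng.normal(scale=0.5, size=300) for _ in range(3)]
)
n_factors, eigvals = kaiser_dimensionality(items)
print(n_factors)  # recovers 2 factors for this simulated structure
```

CFA and ESEM go further by testing a hypothesized loading pattern against the data rather than merely counting large eigenvalues, which is why the literature cited above recommends them for validation work.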

The next section concerns reliability (internal consistency), which all of the studies reported for the full item set using Cronbach’s α coefficient. Reliability is defined as a combination of systematic and random influences that determine the observed scores on a psychological test. Reporting a reliability measure ensures that item-based scores are consistent, that the tool’s responses are replicable and that they are not driven solely by random noise [ 59 , 60 ]. Currently, the most commonly employed reliability coefficient in studies with a multi-item measurement scale (MIMS) is Cronbach’s α [ 60 , 61 ].

Cronbach’s α [ 62 ] rests on numerous strict assumptions (e.g., the test must be unidimensional, factor loadings must be equal for all items, and item errors must not covary) to estimate internal consistency. These assumptions are difficult to meet, and their violation may produce small reliability estimates [ 60 ]. One alternative to α increasingly recommended in the scientific literature is McDonald’s ω [ 63 ], a composite reliability measure. This coefficient is recommended for congeneric scales, in which tau equivalence is not assumed, and it has several advantages: estimates of ω are usually robust when the estimated model contains more factors than the true model, even with small samples, and skewed univariate item distributions bias ω less than α [ 59 ].
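The contrast between the two coefficients can be sketched directly. Below, α is computed from item and total-score variances, and ω (total) from a one-factor principal-axis solution; the congeneric data (one latent trait, unequal loadings) are simulated for illustration only and do not come from any reviewed study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    X = np.asarray(items, float)
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

def mcdonald_omega(items, n_iter=200):
    """McDonald's omega (total) from a one-factor principal-axis factoring
    of the item correlation matrix."""
    R = np.corrcoef(np.asarray(items, float), rowvar=False)
    comm = 1 - 1 / np.diag(np.linalg.inv(R))  # start from squared multiple correlations
    for _ in range(n_iter):                   # iterate communalities to convergence
        Rr = R.copy()
        np.fill_diagonal(Rr, comm)
        vals, vecs = np.linalg.eigh(Rr)
        lam = vecs[:, -1] * np.sqrt(max(vals[-1], 0.0))  # first-factor loadings
        comm = lam ** 2
    lam = np.abs(lam)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

# Hypothetical congeneric items: one latent trait, unequal loadings,
# which violates the tau-equivalence assumption behind alpha.
rng = np.random.default_rng(2)
trait = rng.normal(size=400)
loadings = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
X = trait[:, None] * loadings + rng.normal(size=(400, 5)) * np.sqrt(1 - loadings ** 2)
print(round(cronbach_alpha(X), 2), round(mcdonald_omega(X), 2))
```

For unequal loadings such as these, α slightly understates reliability relative to ω, which is the pattern motivating the recommendation of ω for congeneric scales.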

The test-retest method was the next most commonly reported internal structure section in these studies. This type of reliability considers the consistency of test scores between two measurements separated by a period of time [ 64 ]. It is striking that test-retest reliability is not reported as often as internal consistency since, unlike internal consistency, it can be assessed for practically all types of patient-reported outcomes. Some measurement experts even consider it more relevant to report than internal consistency, since it plays a fundamental role in the calculation of parameters for health measures [ 64 ]. However, the literature provides little guidance on assessing this type of reliability.
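Test-retest reliability for continuous scores is commonly quantified with an intraclass correlation coefficient; the sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single measure, in the Shrout–Fleiss taxonomy) from scratch and applies it to simulated test and retest totals. The sample sizes and error variances are assumptions for the example, not values from the reviewed studies.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measure,
    for an (n_subjects, k_occasions) score matrix (e.g., test and retest)."""
    Y = np.asarray(scores, float)
    n, k = Y.shape
    grand = Y.mean()
    subj = Y.mean(axis=1, keepdims=True)   # subject means
    occ = Y.mean(axis=0, keepdims=True)    # occasion (test/retest) means
    msr = k * ((subj - grand) ** 2).sum() / (n - 1)   # between-subjects MS
    msc = n * ((occ - grand) ** 2).sum() / (k - 1)    # between-occasions MS
    mse = ((Y - subj - occ + grand) ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical test and retest total scores for the same 60 respondents.
rng = np.random.default_rng(3)
true_score = rng.normal(50, 10, size=60)
test = true_score + rng.normal(0, 3, size=60)
retest = true_score + rng.normal(0, 3, size=60)
print(round(icc_2_1(np.column_stack([test, retest])), 2))
```

Values near 1 indicate that scores are stable across the two administrations; perfect agreement yields an ICC of exactly 1.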

The internal structure section that was least frequently reported in the studies in this review was invariance. A lack of invariance refers to a difference between scores on a test that is not explained by group differences in the structure it is intended to measure [ 65 ]. The invariance of the measure should be emphasized as a prerequisite in comparisons between groups since “if scale invariance is not examined, item bias may not be fully recognized and this may lead to a distorted interpretation of the bias in a particular psychological measure” [ 65 ].

Evidence related to other variables was the next most reported source of validity in the studies included in this review. Specifically, the four studies that reported this evidence did so according to convergent validity and cited several instruments. None of the studies included evidence of discriminant validity, although this may be because there are currently several obstacles related to the measurement of this type of validity [ 66 ]. On the one hand, different definitions are used in the applied literature, which makes its evaluation difficult; on the other hand, the literature on discriminant validity focuses on techniques that require the use of multiple measurement methods, which often seem to have been introduced without sufficient evidence or are applied randomly.

Validity related to response processes was not reported by any of the studies. There are several methods to analyze this validity. These methods can be divided into two groups: “those that directly access the psychological processes or cognitive operations (think aloud, focus group, and interviews), compared to those which provide indirect indicators which in turn require additional inference (eye tracking and response times)” [ 38 ]. However, this validity evidence has traditionally been reported less frequently than others in most studies, perhaps because there are fewer clear and accepted practices on how to design or report these studies [ 67 ].

Finally, the consequences of testing were not reported in any of the studies. There is debate regarding this source of validity, with two main opposing streams of thought. On the one hand, some authors [ 68 , 69 ] suggest that consequences that appear after the application of a test should not derive from any source of test invalidity and that “adverse consequences only undermine the validity of an assessment if they can be attributed to a problem of fit between the test and the construct” (p. 6). In contrast, Cronbach [ 69 , 70 ] notes that adverse social consequences that may result from the application of a test may call into question the validity of the test. However, the potential risks that may arise from the application of a test should be minimized in any case, especially in regard to health assessments. To this end, it is essential that this aspect be assessed by instrument developers and that the experiences of respondents be protected through the development of comprehensive and informed practices [ 39 ].

This work is not without limitations. First, not all published validation studies of the P-CAT, such as the Italian version by Brugnolli et al. [ 31 ], were available. These studies could have provided relevant information. Second, many sources of validity could not be analyzed because the studies provided scant or no data, such as response processes [ 25 , 27 , 28 , 29 , 30 , 40 , 46 ], relationships with other variables [ 27 , 28 , 40 ], consequences of testing [ 25 , 27 , 28 , 29 , 30 , 40 , 46 ], or invariance [ 25 , 27 , 28 , 29 , 30 , 40 , 46 ] in the case of internal structure and interfactor R [ 27 , 28 , 30 , 40 , 46 ], internal replication [ 27 , 28 , 29 , 30 , 40 , 46 ] or the effect of the method [ 25 , 27 , 28 , 29 , 30 , 40 , 46 ] in the case of dimensionality. In the future, it is hoped that authors will become aware of the importance of validity, as shown in this article and many others, and provide data on unreported sources so that comprehensive validity studies can be performed.

The present work also has several strengths. The search was extensive, and many studies were obtained using three different databases, including WoS, one of the most widely used and authoritative databases in the world. WoS includes a large number and variety of articles, and its selection process is curated by a human team rather than being fully automated [ 71 , 72 , 73 ]. In addition, to prevent publication bias, gray literature search engines such as Google Scholar were used to avoid the exclusion of unpublished research [ 44 ]. Finally, linguistic bias was prevented by not limiting the search to articles published in only one or two languages, thus avoiding the overrepresentation of studies in one language and the underrepresentation of studies in others [ 43 ].

Conclusions

Validity is understood as the degree to which tests and theory support the interpretations of instrument scores for their intended use [ 37 ]. From this perspective, the various validations of the P-CAT did not follow a structured, validity theory-based procedural framework such as the “Standards”. After integrating and analyzing the results, we observed that these validation reports offer many sources of validity related to test content and to internal structure in terms of dimensionality and internal consistency; a moderate number related to internal structure in terms of test-retest reliability and to relationships with other variables; and very few related to response processes, internal structure in terms of invariance, and the consequences of testing.

Validity plays a fundamental role in ensuring a sound scientific basis for test interpretations because it provides evidence of the extent to which the data provided by the test are valid for the intended purpose. This can affect clinical practice as people’s health may depend on it. In this sense, the “Standards” are considered a suitable and valid theory-based procedural framework for studying this modern conception of questionnaire validity, which should be taken into account in future research in this area.

Although the P-CAT is one of the most widely used instruments for assessing PCC, this study shows that its validity has rarely been examined comprehensively. The developers of measurement tests applied to the healthcare setting, on which the health and quality of life of many people may depend, should use this validity framework so that measurements reflect a clear purpose. This is important because the fairness of decisions made by healthcare professionals in daily clinical practice may depend on these sources of validity. Through a more extensive study of validity that includes the interpretation of scores in terms of their intended use, the applicability of the P-CAT, an instrument initially developed for long-term care homes for elderly people, could be expanded to other care settings. However, the findings of this study show that validation studies continue to focus on traditionally studied types of validity and overlook the interpretation of scores in terms of their intended use.

Data availability

All data relevant to the study were included in the article or uploaded as additional files. Additional template data extraction forms are available from the corresponding author upon reasonable request.

Abbreviations

AERA: American Educational Research Association

APA: American Psychological Association

CFA: Confirmatory factor analysis

CLIOR: Organizational Climate scale

CPEAB: Caregiver Psychological Elder Abuse Behavior Scale

EFA: Exploratory factor analysis

ESEM: Exploratory structural equation modeling

IC: Staff-Based Measures of Individualized Care for Institutionalized Persons with Dementia

MBI: Maslach Burnout Inventory

MIMS: Multi-item measurement scale

ML: Maximum likelihood

NCME: National Council on Measurement in Education

P-CAT: Person-Centered Care Assessment Tool

PCC: Person-centered care

PCQ-S: Person-Centered Climate Questionnaire–Staff Version

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

PROSPERO: International Register of Systematic Review Protocols

Standards: Standards for Educational and Psychological Testing

WLSMV: Weighted least square mean and variance adjusted

WoS: Web of Science

Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy; 2001.

International Alliance of Patients’ Organizations. What is patient-centred healthcare? A review of definitions and principles. 2nd ed. London, UK: International Alliance of Patients’ Organizations; 2007.

World Health Organization. WHO global strategy on people-centred and integrated health services: interim report. Geneva, Switzerland: World Health Organization; 2015.

Britten N, Ekman I, Naldemirci Ö, Javinger M, Hedman H, Wolf A. Learning from Gothenburg model of person centred healthcare. BMJ. 2020;370:m2738.

Van Diepen C, Fors A, Ekman I, Hensing G. Association between person-centred care and healthcare providers’ job satisfaction and work-related health: a scoping review. BMJ Open. 2020;10:e042658.

Ekman N, Taft C, Moons P, Mäkitalo Å, Boström E, Fors A. A state-of-the-art review of direct observation tools for assessing competency in person-centred care. Int J Nurs Stud. 2020;109:103634.

American Geriatrics Society Expert Panel on Person-Centered Care. Person-centered care: a definition and essential elements. J Am Geriatr Soc. 2016;64:15–8.

McCormack B, McCance TV. Development of a framework for person-centred nursing. J Adv Nurs. 2006;56:472–9.

McCormack B, McCance T. Person-centred practice in nursing and health care: theory and practice. Chichester, England: Wiley; 2016.

Nolan MR, Davies S, Brown J, Keady J, Nolan J. Beyond person-centred care: a new vision for gerontological nursing. J Clin Nurs. 2004;13:45–53.

McCormack B, McCance T. Person-centred nursing: theory, models and methods. Oxford, UK: Wiley-Blackwell; 2010.

Abraha I, Rimland JM, Trotta FM, Dell’Aquila G, Cruz-Jentoft A, Petrovic M, et al. Systematic review of systematic reviews of non-pharmacological interventions to treat behavioural disturbances in older patients with dementia. The SENATOR-OnTop series. BMJ Open. 2017;7:e012759.

Anderson K, Blair A. Why we need to care about the care: a longitudinal study linking the quality of residential dementia care to residents’ quality of life. Arch Gerontol Geriatr. 2020;91:104226.

Bauer M, Fetherstonhaugh D, Haesler E, Beattie E, Hill KD, Poulos CJ. The impact of nurse and care staff education on the functional ability and quality of life of people living with dementia in aged care: a systematic review. Nurse Educ Today. 2018;67:27–45.

Smythe A, Jenkins C, Galant-Miecznikowska M, Dyer J, Downs M, Bentham P, et al. A qualitative study exploring nursing home nurses’ experiences of training in person centred dementia care on burnout. Nurse Educ Pract. 2020;44:102745.

McCormack B, Borg M, Cardiff S, Dewing J, Jacobs G, Janes N, et al. Person-centredness– the ‘state’ of the art. Int Pract Dev J. 2015;5:1–15.

Wilberforce M, Challis D, Davies L, Kelly MP, Roberts C, Loynes N. Person-centredness in the care of older adults: a systematic review of questionnaire-based scales and their measurement properties. BMC Geriatr. 2016;16:63.

Rathert C, Wyrwich MD, Boren SA. Patient-centered care and outcomes: a systematic review of the literature. Med Care Res Rev. 2013;70:351–79.

Sharma T, Bamford M, Dodman D. Person-centred care: an overview of reviews. Contemp Nurse. 2016;51:107–20.

Ahmed S, Djurkovic A, Manalili K, Sahota B, Santana MJ. A qualitative study on measuring patient-centered care: perspectives from clinician-scientists and quality improvement experts. Health Sci Rep. 2019;2:e140.

Edvardsson D, Fetherstonhaugh D, Nay R, Gibson S. Development and initial testing of the person-centered Care Assessment Tool (P-CAT). Int Psychogeriatr. 2010;22:101–8.

Tamagawa R, Groff S, Anderson J, Champ S, Deiure A, Looyis J, et al. Effects of a provincial-wide implementation of screening for distress on healthcare professionals’ confidence and understanding of person-centered care in oncology. J Natl Compr Canc Netw. 2016;14:1259–66.

Degl’ Innocenti A, Wijk H, Kullgren A, Alexiou E. The influence of evidence-based design on staff perceptions of a supportive environment for person-centered care in forensic psychiatry. J Forensic Nurs. 2020;16:E23–30.

Hulin CL. A psychometric theory of evaluations of item and scale translations: fidelity across languages. J Cross Cult Psychol. 1987;18:115–42.

Martínez T, Suárez-Álvarez J, Yanguas J, Muñiz J. Spanish validation of the person-centered Care Assessment Tool (P-CAT). Aging Ment Health. 2016;20:550–8.

Martínez T, Martínez-Loredo V, Cuesta M, Muñiz J. Assessment of person-centered care in gerontology services: a new tool for healthcare professionals. Int J Clin Health Psychol. 2020;20:62–70.

Rokstad AM, Engedal K, Edvardsson D, Selbaek G. Psychometric evaluation of the Norwegian version of the person-centred Care Assessment Tool. Int J Nurs Pract. 2012;18:99–105.

Sjögren K, Lindkvist M, Sandman PO, Zingmark K, Edvardsson D. Psychometric evaluation of the Swedish version of the person-centered Care Assessment Tool (P-CAT). Int Psychogeriatr. 2012;24:406–15.

Zhong XB, Lou VW. Person-centered care in Chinese residential care facilities: a preliminary measure. Aging Ment Health. 2013;17:952–8.

Tak YR, Woo HY, You SY, Kim JH. Validity and reliability of the person-centered Care Assessment Tool in long-term care facilities in Korea. J Korean Acad Nurs. 2015;45:412–9.

Brugnolli A, Debiasi M, Zenere A, Zanolin ME, Baggia M. The person-centered Care Assessment Tool in nursing homes: psychometric evaluation of the Italian version. J Nurs Meas. 2020;28:555–63.

Bru-Luna LM, Martí-Vilar M, Merino-Soto C, Livia J. Reliability generalization study of the person-centered Care Assessment Tool. Front Psychol. 2021;12:712582.

Edvardsson D, Innes A. Measuring person-centered care: a critical comparative review of published tools. Gerontologist. 2010;50:834–46.

Hawkins M, Elsworth GR, Nolte S, Osborne RH. Validity arguments for patient-reported outcomes: justifying the intended interpretation and use of data. J Patient Rep Outcomes. 2021;5:64.

Sireci SG. On the validity of useless tests. Assess Educ Princ Policy Pract. 2016;23:226–35.

Hawkins M, Elsworth GR, Osborne RH. Questionnaire validation practice: a protocol for a systematic descriptive literature review of health literacy assessments. BMJ Open. 2019;9:e030753.

American Educational Research Association, American Psychological Association, National Council on Measurement in Education. Standards for educational and psychological testing. Washington, DC: American Educational Research Association; 2014.

Padilla JL, Benítez I. Validity evidence based on response processes. Psicothema. 2014;26:136–44.

Hawkins M, Elsworth GR, Hoban E, Osborne RH. Questionnaire validation practice within a theoretical framework: a systematic descriptive literature review of health literacy assessments. BMJ Open. 2020;10:e035974.

Le C, Ma K, Tang P, Edvardsson D, Behm L, Zhang J, et al. Psychometric evaluation of the Chinese version of the person-centred Care Assessment Tool. BMJ Open. 2020;10:e031580.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Int J Surg. 2021;88:105906.

Falagas ME, Pitsouni EI, Malietzis GA, Pappas G. Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses. FASEB J. 2008;22:338–42.

Grégoire G, Derderian F, Le Lorier J. Selecting the language of the publications included in a meta-analysis: is there a tower of Babel bias? J Clin Epidemiol. 1995;48:159–63.

Arias MM. Aspectos metodológicos Del metaanálisis (1). Pediatr Aten Primaria. 2018;20:297–302.

Covidence. Covidence systematic review software. Veritas Health Innovation, Australia. 2014. https://www.covidence.org/ . Accessed 28 Feb 2022.

Selan D, Jakobsson U, Condelius A. The Swedish P-CAT: modification and exploration of psychometric properties of two different versions. Scand J Caring Sci. 2017;31:527–35.

Beaton DE, Bombardier C, Guillemin F, Ferraz MB. Guidelines for the process of cross-cultural adaptation of self-report measures. Spine (Phila Pa 1976). 2000;25:3186–91.

Guillemin F. Cross-cultural adaptation and validation of health status measures. Scand J Rheumatol. 1995;24:61–3.

Hambleton R, Merenda P, Spielberger C. Adapting educational and psychological tests for cross-cultural assessment. Mahwah, NJ: Lawrence Erlbaum Associates; 2005.

Muñiz J, Elosua P, Hambleton RK. International test commission guidelines for test translation and adaptation: second edition. Psicothema. 2013;25:151–7.

Rosengren K, Brannefors P, Carlstrom E. Adoption of the concept of person-centred care into discourse in Europe: a systematic literature review. J Health Organ Manag. 2021;35:265–80.

Alharbi T, Olsson LE, Ekman I, Carlström E. The impact of organizational culture on the outcome of hospital care: after the implementation of person-centred care. Scand J Public Health. 2014;42:104–10.

Bensbih S, Souadka A, Diez AG, Bouksour O. Patient centered care: focus on low and middle income countries and proposition of new conceptual model. J Med Surg Res. 2020;7:755–63.

Stranz A, Sörensdotter R. Interpretations of person-centered dementia care: same rhetoric, different practices? A comparative study of nursing homes in England and Sweden. J Aging Stud. 2016;38:70–80.

Zhou LM, Xu RH, Xu YH, Chang JH, Wang D. Inpatients’ perception of patient-centered care in Guangdong province, China: a cross-sectional study. Inquiry. 2021. https://doi.org/10.1177/00469580211059482 .

Marsh HW, Morin AJ, Parker PD, Kaur G. Exploratory structural equation modeling: an integration of the best features of exploratory and confirmatory factor analysis. Annu Rev Clin Psychol. 2014;10:85–110.

Asparouhov T, Muthén B. Exploratory structural equation modeling. Struct Equ Model Multidiscip J. 2009;16:397–438.

Cabedo-Peris J, Martí-Vilar M, Merino-Soto C, Ortiz-Morán M. Basic empathy scale: a systematic review and reliability generalization meta-analysis. Healthc (Basel). 2022;10:29–62.

Flora DB. Your coefficient alpha is probably wrong, but which coefficient omega is right? A tutorial on using R to obtain better reliability estimates. Adv Methods Pract Psychol Sci. 2020;3:484–501.

McNeish D. Thanks coefficient alpha, we’ll take it from here. Psychol Methods. 2018;23:412–33.

Hayes AF, Coutts JJ. Use omega rather than Cronbach’s alpha for estimating reliability. But… Commun Methods Meas. 2020;14:1–24.

Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297–334.

McDonald R. Test theory: a unified approach. Mahwah, NJ: Erlbaum; 1999.

Polit DF. Getting serious about test-retest reliability: a critique of retest research and some recommendations. Qual Life Res. 2014;23:1713–20.

Ceylan D, Çizel B, Karakaş H. Testing destination image scale invariance for intergroup comparison. Tour Anal. 2020;25:239–51.

Rönkkö M, Cho E. An updated guideline for assessing discriminant validity. Organ Res Methods. 2022;25:6–14.

Hubley A, Zumbo B. Response processes in the context of validity: setting the stage. In: Zumbo B, Hubley A, editors. Understanding and investigating response processes in validation research. Cham, Switzerland: Springer; 2017. pp. 1–12.

Messick S. Validity of performance assessments. In: Philips G, editor. Technical issues in large-scale performance assessment. Washington, DC: Department of Education, National Center for Education Statistics; 1996. pp. 1–18.

Moss PA. The role of consequences in validity theory. Educ Meas Issues Pract. 1998;17:6–12.

Cronbach L. Five perspectives on validity argument. In: Wainer H, editor. Test validity. Hillsdale, NJ: Erlbaum; 1988. pp. 3–17.

Birkle C, Pendlebury DA, Schnell J, Adams J. Web of Science as a data source for research on scientific and scholarly activity. Quant Sci Stud. 2020;1:363–76.

Bramer WM, Rethlefsen ML, Kleijnen J, Franco OH. Optimal database combinations for literature searches in systematic reviews: a prospective exploratory study. Syst Rev. 2017;6:245.

Web of Science Group. Editorial selection process. Clarivate. 2024. https://clarivate.com/webofsciencegroup/solutions/%20editorial-selection-process/ . Accessed 12 Sept 2022.

Acknowledgements

The authors thank the casual helpers for their aid in information processing and searching.

This work is one of the results of research project HIM/2015/017/SSA.1207, “Effects of mindfulness training on psychological distress and quality of life of the family caregiver”. Main researcher: Filiberto Toledano-Toledano Ph.D. The present research was funded by federal funds for health research and was approved by the Commissions of Research, Ethics and Biosafety (Comisiones de Investigación, Ética y Bioseguridad), Hospital Infantil de México Federico Gómez, National Institute of Health. The source of federal funds did not control the study design, data collection, analysis, or interpretation, or decisions regarding publication.

Author information

Authors and affiliations

Departamento de Educación, Facultad de Ciencias Sociales, Universidad Europea de Valencia, 46010, Valencia, Spain

Lluna Maria Bru-Luna

Departamento de Psicología Básica, Universitat de València, Blasco Ibáñez Avenue, 21, 46010, Valencia, Spain

Manuel Martí-Vilar

Departamento de Psicología, Instituto de Investigación de Psicología, Universidad de San Martín de Porres, Tomás Marsano Avenue 242, Lima 34, Perú

César Merino-Soto

Instituto Central de Gestión de la Investigación, Universidad Nacional Federico Villarreal, Carlos Gonzalez Avenue 285, 15088, San Miguel, Perú

José Livia-Segovia

Unidad de Investigación en Medicina Basada en Evidencias, Hospital Infantil de México Federico Gómez Instituto Nacional de Salud, Dr. Márquez 162, 06720, Doctores, Cuauhtémoc, Mexico

Juan Garduño-Espinosa & Filiberto Toledano-Toledano

Unidad de Investigación Multidisciplinaria en Salud, Instituto Nacional de Rehabilitación Luis Guillermo Ibarra Ibarra, México-Xochimilco 289, Arenal de Guadalupe, 14389, Tlalpan, Mexico City, Mexico

Filiberto Toledano-Toledano

Dirección de Investigación y Diseminación del Conocimiento, Instituto Nacional de Ciencias e Innovación para la Formación de Comunidad Científica, INDEHUS, Periférico Sur 4860, Arenal de Guadalupe, 14389, Tlalpan, Mexico City, Mexico

Contributions

L.M.B.L. conceptualized the study, collected the data, performed the formal analysis, wrote the original draft, and reviewed and edited the subsequent drafts. M.M.V. conceptualized the study, collected the data, and reviewed and edited the subsequent drafts. C.M.S. collected the data, performed the formal analysis, wrote the original draft, and reviewed and edited the subsequent drafts. J.L.S. collected the data, wrote the original draft, and reviewed and edited the subsequent drafts. J.G.E. collected the data and reviewed and edited the subsequent drafts. F.T.T. conceptualized the study; provided resources, software, and supervision; wrote the original draft; and reviewed and edited the subsequent drafts.

Corresponding author

Correspondence to Filiberto Toledano-Toledano.

Ethics declarations

Ethics approval and consent to participate

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Commissions of Research, Ethics and Biosafety (Comisiones de Investigación, Ética y Bioseguridad), Hospital Infantil de México Federico Gómez, National Institute of Health. HIM/2015/017/SSA.1207, “Effects of mindfulness training on psychological distress and quality of life of the family caregiver”.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Bru-Luna, L.M., Martí-Vilar, M., Merino-Soto, C. et al. Person-centered care assessment tool with a focus on quality healthcare: a systematic review of psychometric properties. BMC Psychol 12, 217 (2024). https://doi.org/10.1186/s40359-024-01716-7

Download citation

Received : 17 May 2023

Accepted : 07 April 2024

Published : 19 April 2024

DOI : https://doi.org/10.1186/s40359-024-01716-7


BMC Psychology

ISSN: 2050-7283


A New Home for Sundance? Festival Organizers Say It’s Possible.

A 13-year contract with Park City, Utah, is set to expire in 2026, and the film festival is beginning a review process to see if it should move.

A look down a street in Park City, Utah, with snowy mountains in the backdrop.

By Nicole Sperling

For the past 40 years, the Sundance Film Festival, the influential annual gathering focused on independent film, has been indelibly linked with the snowy mountain town of Park City, Utah.

That may change.

The Sundance Institute, the nonprofit organization that runs the festival, announced Wednesday that with its 13-year contract with the city set to expire in 2026, it would begin a review process to determine whether it should move.

It will now start seeking information from other U.S. cities interested in hosting the festival, which traditionally takes place in January. Beginning in May, the institute will request proposals from cities selected to move forward. The institute expects to announce its decision around the end of the year or the beginning of 2025.

The festival will remain in Park City for 2025 and 2026. Park City, an hour outside Salt Lake City, often becomes snarled with traffic during the 10-day festival. Rental prices explode during the event, and the snowy climate often complicates the process of attending the screenings for many patrons.

“This exploration allows us to responsibly consider how we best continue sustainably serving our community while maintaining the essence of the Festival experience,” Eugene Hernandez, director of the Sundance Film Festival and Public Programming, said in a statement. “We are looking forward to conversations that center supporting artists and serving audiences as part of our mission and work at Sundance Institute, and are motivated by our commitment to insure that the festival continues to thrive culturally, operationally and financially as it has for four decades.”

Sundance, which Robert Redford founded in 1981 and moved to Park City in 1985, continues to be the dominant festival for independent film. For the 2024 edition, the festival received a record number of submissions, over 17,000 from 153 countries. Mr. Redford remains on the institute’s board, which will be part of the review process. His daughter Amy is on the task force that will review proposals.

Park City officials said the city planned to issue a proposal to keep the festival.

“We appreciate our partnership with Sundance, and we want the festival to remain here for another 40 years,” Nann Worel, Park City’s mayor, said in a statement.

Nicole Sperling covers Hollywood and the streaming industry. She has been a reporter for more than two decades. More about Nicole Sperling

Councilwoman chosen as new Fort Wayne mayor, its 1st Black leader, in caucus to replace late mayor

Fort Wayne Mayor-elect Sharon Tucker is hugged by Acting Mayor Karl Bandemer on Saturday, April 20, 2024, after Tucker was chosen as the city’s new mayor during a caucus to replace Fort Wayne’s late mayor, who died in March. Tucker will be sworn in next week as the first Black mayor of the northeastern Indiana city. (Madelyn Kidd/The Journal-Gazette via AP)

Sharon Tucker speaks at podium on Saturday, April 20, 2024 in Fort Wayne, Ind. Tucker, a Democrat, becomes the first Black mayor of the northeastern Indiana city, The Journal Gazette reported, Saturday. She was elected in the second round of voting during a Democratic caucus when she met the requirement of 50% of the votes plus one.(Devan Filchak /The Journal-Gazette via AP)


FORT WAYNE, Ind. (AP) — A Fort Wayne city councilwoman was chosen Saturday as the new mayor of Indiana’s second most populous city, and its first Black leader, during a caucus to replace its late mayor, who died in March.

Councilwoman Sharon Tucker, a Democrat, will also become the second woman to serve as mayor of the northeastern Indiana city. She was elected in the second round of voting during a Democratic caucus when she met the requirement of 50% of the votes plus one, The Journal Gazette reported.

Derek Camp, chairman of the Allen County Democratic Party, said Tucker would be sworn in as mayor early next week. He said he would not reveal how many votes Tucker received in the caucus so that the party can unite behind Fort Wayne’s new mayor.

The local Democratic party said in a statement that it was excited to have “Mayor Tucker at the helm leading Fort Wayne into the future.”

“Today, Mayor Tucker proved that she has the energy and support of our party, and we can’t wait to support her as she works to continue moving our community forward together,” the statement adds.

Seven candidates, including Indiana Democratic House leader state Rep. Phil GiaQuinta, ran in Saturday’s party caucus, where 92 precinct committee members cast votes.

Former 6th District City Councilwoman Sharon Tucker is sworn in as mayor at the Clyde Theatre on Tuesday morning, April 23, 2024, in Fort Wayne, Ind. (Stan Sussina/The Journal-Gazette via AP)

The mayor’s office became vacant when Mayor Tom Henry died March 28 after experiencing a medical emergency related to his stomach cancer. He was 72.

Henry, a Democrat, was elected in November to his fifth term as mayor of the city with about 270,000 residents. He announced his diagnosis of late-stage stomach cancer during a news conference Feb. 26 and had started chemotherapy at the beginning of March.

Tucker will serve the remainder of Henry’s mayoral term, which runs through Dec. 31, 2027, Camp said.

He said that in addition to becoming the first Black person to serve as Fort Wayne mayor, Tucker will also become its second female mayor. The first was Cosette Simon, who served 11 days as mayor in 1985 after fellow Democrat Winfield Moses resigned as part of a plea agreement involving alleged campaign finance violations.

Her term ended when Moses won a party mayoral caucus to replace himself, The Journal Gazette reported.

Turk J Urol. v.39(Suppl 1); 2013 Sep

How to write a review article?

In the medical sciences, the importance of review articles is rising. When clinicians want to update their knowledge and generate guidelines about a topic, they frequently use reviews as a starting point. The value of a review is associated with what has been done, what has been found and how these findings are presented. Before asking ‘how,’ the question of ‘why’ is more important when starting to write a review. The main and fundamental purpose of writing a review is to create a readable synthesis of the best resources available in the literature for an important research question or a current area of research. Although the idea of writing a review is attractive, it is important to spend time identifying the important questions. Good review methods are critical because they provide an unbiased point of view for the reader regarding the current literature. There is a consensus that a review should be written in a systematic fashion, although this notion is not always followed in practice. In a systematic review with a focused question, the research methods must be clearly described. A ‘methodological filter’ is the best method for identifying the best working style for a research question, and this method reduces the workload when surveying the literature. An essential part of the review process is differentiating good research from bad and leaning on the results of the better studies. The ideal way to synthesize studies is to perform a meta-analysis. In conclusion, when writing a review, it is best to rid yourself of fixed ideas, to use a procedural and critical approach to the literature and to express your findings in an attractive way.

The importance of review articles in the health sciences is increasing day by day. Clinicians frequently rely on review articles to update their knowledge in their field of specialization, and use these articles as a starting point for formulating guidelines. [ 1 , 2 ] Institutions that provide financial support for further investigations consult these reviews to establish the need for such research. [ 3 ] As with all other research, the value of a review article is related to what was achieved, what was found, and how that information is communicated. A few studies have evaluated the quality of review articles. Mulrow evaluated 50 review articles published in 1985 and 1986, and revealed that none of them complied with clear-cut scientific criteria. [ 4 ] In 1996, an international group analyzed the aspects of review articles and meta-analyses that did not comply with scientific criteria, and developed the QUOROM (QUality Of Reporting Of Meta-analyses) statement, which focused on meta-analyses of randomized controlled studies. [ 5 ] This guideline was later updated and renamed PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). [ 6 ]

Review articles are divided into two categories: narrative reviews and systematic reviews. Narrative reviews are written in an easily readable format and allow consideration of the subject matter within a broad spectrum. In a systematic review, by contrast, a very detailed and comprehensive literature survey is performed on the selected topic. [ 7 , 8 ] Because it results from a more detailed literature survey with relatively less author bias, the systematic review is considered the gold standard. Systematic reviews can be divided into qualitative and quantitative reviews. In both, a detailed literature survey is performed; in quantitative reviews, however, study data are also collected and statistically evaluated (i.e., in a meta-analysis). [ 8 ]

Before asking how to prepare a review article, it is more logical to examine the motivation behind writing it. The fundamental rationale for writing a review article is to produce a readable synthesis of the best literature sources on an important research question or topic. This simple definition of a review article contains the following key elements:

  • The question(s) to be addressed
  • The methods used to find and select the best-quality studies to answer these questions
  • A synthesis of the available, but often quite heterogeneous, studies

To specify the important questions to be answered, the number of literature references to be consulted should be roughly determined. Discussions should be held with colleagues working in the same area of interest, and time should be set aside for resolving the problem(s). Although starting to write the review article promptly may seem alluring, the time you spend determining the important issues will not be wasted. [ 9 ]

The PRISMA statement, [ 6 ] developed to guide the writing of well-designed review articles, contains a 27-item checklist ( Table 1 ). It is reasonable to fulfill these items when preparing a review article or meta-analysis; doing so makes it feasible to produce a comprehensible article with high-quality scientific content.

PRISMA statement: A 27-item checklist

Contents and format

Important differences exist between systematic and non-systematic reviews, arising especially from the methodologies used to identify literature sources. A non-systematic review draws on articles collected over the years on the recommendations of colleagues, whereas a systematic review is based on a deliberate effort to search for and find the best possible studies that will answer the questions predetermined at the start of the review.

Although a consensus has been reached about the systematic design of review articles, studies reveal that most are not written in a systematic format. McAlister et al. analyzed review articles in six medical journals and disclosed that fewer than one fourth of the review articles described their methods of identifying, evaluating, or synthesizing evidence; one third focused on a clinical topic; and only half provided quantitative data about the extent of the potential benefits. [ 10 ]

The use of proper methodology in review articles is important so that readers can take an objective view of up-to-date information. Two problems can arise when using data from research studies to answer specific questions. First, we may be prejudiced in selecting research articles, or the articles themselves may be biased. To minimize this risk, the methodologies used in our reviews should allow us to identify and use the studies with the lowest degree of bias. Second, most studies are performed with small sample sizes; the statistical methods of meta-analysis combine the available studies to increase the statistical power. The problematic aspect of a non-systematic review is our tendency to give biased answers to the questions, in other words, to select the studies with known or favored results rather than the best-quality investigations.

As is the case with many research articles, the general format of a systematic review on a single subject includes the sections Introduction, Methods, Results, and Discussion ( Table 2 ).

Structure of a systematic review

Preparation of the review article

The steps and targets of constructing a good review article are listed in Table 3 ; to write a good review article, they should be implemented step by step. [ 11 – 13 ]

Steps of a systematic review

The research question

It may be helpful to divide the research question into components. The format most commonly used for treatment-related questions is PICO (P: patient, problem, or population; I: intervention; C: comparison; O: outcome measures). For example: in female patients with stress urinary incontinence (P), how does transobturator compare with retropubic midurethral tension-free band surgery (I, C) with respect to patient satisfaction (O)?
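The PICO decomposition lends itself to a simple structured record. The sketch below is purely illustrative (the class and field names are our own, not part of any standard) and shows how the example question above can be assembled from its four components:

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    """One research question broken into its PICO components."""
    population: str    # P: patient, problem, or population
    intervention: str  # I: intervention being evaluated
    comparison: str    # C: comparator
    outcome: str       # O: outcome measure

    def as_question(self) -> str:
        # Reassemble the components into one focused, answerable question.
        return (f"In {self.population}, does {self.intervention} "
                f"compared with {self.comparison} improve {self.outcome}?")

q = PICOQuestion(
    population="female patients with stress urinary incontinence",
    intervention="transobturator midurethral tension-free band surgery",
    comparison="retropubic midurethral tension-free band surgery",
    outcome="patient satisfaction",
)
print(q.as_question())
```

Keeping the components separate also pays off later, since the P and I elements feed directly into the database search strategy.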

Finding Studies

In a systematic review of a focused question, the methods of investigation should be clearly specified.

Ideally, the research methods, the databases investigated, and the keywords should be described in the final report. Different databases are used depending on the topic analyzed. For most clinical topics, Medline should be surveyed; however, searching Embase and CINAHL may also be appropriate.

When determining appropriate search terms, the PICO elements of the issue may guide the process. Since we are generally interested in more than one outcome, P and I are usually the key elements. We should therefore identify synonyms for the P and I elements, combine the synonyms within each element with OR, and join the resulting groups with AND.
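The usual convention is OR within a concept's synonym list and AND across concepts. A minimal, hypothetical helper sketches this (the function name and example terms are ours; a real strategy would also use controlled vocabulary such as MeSH terms and field tags):

```python
def build_query(*concepts):
    """Boolean search string: OR within a concept, AND across concepts."""
    groups = ["(" + " OR ".join(terms) + ")" for terms in concepts]
    return " AND ".join(groups)

query = build_query(
    ["stress urinary incontinence", "SUI"],              # P synonyms
    ["midurethral sling", "tension-free vaginal tape"],  # I synonyms
)
print(query)
# (stress urinary incontinence OR SUI) AND (midurethral sling OR tension-free vaginal tape)
```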

One method that can reduce the workload of the search process is the “methodological filter,” which aims to find the best study design for each type of research question. A good example of this method can be found in the PubMed interface of Medline: the Clinical Queries tool offers empirically developed filters for five different question types, namely etiology, diagnosis, treatment, prognosis, and clinical prediction.

Evaluation of the Quality of the Study

An indispensable component of the review process is discriminating between good- and poor-quality studies, and the outcomes should be based on the better-quality studies as far as possible. To achieve this goal, you should know the best possible type of evidence for each type of question. The first component of quality is the general planning/design of the study, which varies among study types such as cohort studies and case series.

A hierarchy of evidence for different research questions is presented in Table 4 . This hierarchy, however, is only a first step. Once you find good-quality research articles, you need not read all of the remaining articles, which saves considerable time. [ 14 ]

Determination of levels of evidence based on the type of the research question

Formulating a Synthesis

Rarely do all studies arrive at the same conclusion; when they do not, a solution must be found. It is risky, however, to decide by simple majority vote, because a well-performed large-scale study and a weakly designed one would then carry the same weight. Ideally, a meta-analysis should therefore be performed to resolve apparent differences: focus first on the largest, highest-quality study, then compare the other studies against it.
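The weighting idea can be made concrete with inverse-variance (fixed-effect) pooling, the simplest meta-analytic model: each study's effect estimate is weighted by the inverse of its variance, so a large, precise study dominates a small, noisy one. This is a minimal sketch under that assumption, not the author's own procedure, and the numbers are invented for illustration:

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance pooled estimate and its standard error.

    Low-variance (large, well-designed) studies receive proportionally
    more weight than high-variance (small) ones.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# A large precise study (variance 0.01) vs. a small noisy one (variance 0.25):
pooled, se = fixed_effect_pool(effects=[0.50, 0.90], variances=[0.01, 0.25])
print(round(pooled, 3))  # 0.515 -- the pooled estimate sits near the large study
```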

Conclusions

In conclusion, the procedure for writing a review article can be summarized as follows: 1) rid yourself of fixed ideas and obsessions, and view the subject from a broad perspective; 2) approach the research articles in the literature with a methodological and critical attitude; and 3) present the data in an attractive way.

DOE Announces New Federal Permitting Rule To Slash Transmission Review Timelines in Half While Maintaining Integrity of the Environmental Review Process, New $331 Million Investment to Add More Than 2,000 Megawatts Across the Western U.S. Grid  

WASHINGTON, D.C. — In a continued commitment to bolster the U.S. power grid, today the Biden-Harris Administration announced a final transmission permitting reform rule and a new commitment for up to $331 million aimed at adding more than 2,000 megawatts (MW) of additional grid capacity throughout the Western United States – the equivalent of powering 2.5 million homes and creating more than 300 new, high quality and union construction jobs. By improving Federal transmission permitting processes and investing in transmission build out and grid upgrades, the Biden-Harris Administration is deploying a multifaceted approach to ensuring that Americans have clean, reliable, and affordable power when and where they need it. These efforts advance the Biden-Harris Administration’s historic climate agenda, strengthen energy security and grid resilience, and reduce energy costs by bringing low-cost clean electricity to more families and businesses.

The Department of Energy (DOE) is issuing a final rule to establish the Coordinated Interagency Transmission Authorizations and Permits (CITAP) Program , which aims to significantly improve Federal environmental reviews and permitting processes for qualifying transmission projects. Under the CITAP Program, DOE will coordinate a Federal integrated interagency process to consolidate Federal environmental reviews and authorizations within a standard two-year schedule while ensuring meaningful engagement with Tribes, local communities, and other stakeholders. This final rule, initiated and completed in under a year, implements a May 2023 interagency Memorandum of Understanding (MOU) to expedite the siting, permitting, and construction of electric transmission infrastructure in the United States. This rule is the latest action the Biden-Harris Administration has taken to accelerate permitting and environmental reviews. 

 DOE is also announcing up to $331 million through President Biden’s Bipartisan Infrastructure Law to support a new transmission line from Idaho to Nevada that will be built with union labor—the latest investment from the $30 billion that the Administration is deploying from the President’s Investing in America agenda to strengthen electric grid infrastructure.  

“The Biden-Harris Administration is doing everything everywhere to get more power to more people, in more places,” said U.S. Secretary of Energy Jennifer M. Granholm. “We are acting with the urgency the American people deserve to realize a historic rework of the permitting process that slashes times for new transmission lines, puts more Americans to work and meets the energy needs of today and the future.”

“In order to reach our clean energy and climate goals, we've got to build out transmission as fast as possible to get clean power from where it's produced to where it's needed," said John Podesta, Senior Advisor to the President for International Climate Policy. “As today’s announcements demonstrate, the Biden-Harris administration is committed to using every tool at our disposal to accelerate progress on transmission permitting and financing and build a clean energy future.” 

“As the Federal government’s largest land manager, the Department of the Interior is working to review, approve and connect clean energy projects on hundreds of miles across the American West,” said U.S. Secretary of the Interior Deb Haaland.  “As we continue to surpass our clean energy goals, we are committed to working with our interagency partners to improve permitting efficiency for transmission projects, and ensuring that states, Tribes, local leaders and communities have a seat at the table as we consider proposals.”  

Expanding electric transmission capacity in the United States is essential to meet growing demand for electricity, ensure reliable and resilient electric service, and deliver new low-cost clean energy to customers when and where they need it. But over the past decade, transmission lines in the United States have been built at half the rate of the previous three decades, often due to permitting and financing challenges. The Biden-Harris Administration is tackling those challenges head-on, with today’s new CITAP Program and transmission investment announcement as the latest steps in broad efforts to take on climate change, lower energy costs, and strengthen energy security and grid reliability.  

Federal Permitting Reform  

Today DOE released a final rule that will significantly improve Federal environmental reviews and permitting processes for qualifying onshore electric transmission facilities, while ensuring meaningful engagement with Tribes, states, local communities, and other stakeholders. Consistent with the Fiscal Responsibility Act of 2023, the rule establishes the Coordinated Interagency Transmission Authorizations and Permits (CITAP) Program to better coordinate Federal permitting processes and establish a two-year deadline for completion of Federal authorizations and permits for electric transmission.  

The CITAP Program helps transmission developers navigate the Federal review process by providing:

  • Improved Permitting Review with Two-Year Timelines: DOE will serve as the lead coordinator for environmental review and permitting activities between all participating Federal agencies and project developers, ultimately making the Federal permitting process for transmission projects more efficient. DOE will lead an interagency pre-application process to ensure that developer submissions for Federal authorizations are ready for review on binding two-year timelines, without compromising critical National Environmental Policy Act (NEPA) requirements. This will significantly improve the efficiency of the permitting process for project developers by collecting information necessary for required Federal authorizations to site a transmission facility before starting the permitting process.    
  • Sustained Integrity in Environmental Review Process : DOE will work with the relevant agencies to prepare a single NEPA environmental review document to support each relevant Federal agency’s permit decision making, reducing duplication of work. Further, state siting authorities may participate in the CITAP Program alongside Federal agencies and take advantage of the efficiencies and resources DOE is offering through the program, including the single environmental review document, as a basis for their own decision-making.   
  • Transparent Transmission Permitting : The CITAP Program will require a comprehensive public participation plan that helps project developers identify community impacts from proposed lines at the outset of the project and encourages early engagement by potential applicants with communities and Tribes. The CITAP Program will allow potential applicants and agencies to coordinate via an online portal, which will allow project developers to directly upload relevant information and necessary documentation and will offer a one-stop-shop for their Federal permitting communications. The online portal will also allow participating Federal agencies to view and provide input during the initial document collection process and during Federal environmental reviews. 

“The Permitting Council is excited to have CITAP as a partner as we work together to bring clarity, transparency and efficiency to the federal permitting process for crucial transmission projects,” says Eric Beightel, Permitting Council Executive Director. “The ambitious clean energy goals of the Biden-Harris administration cannot be achieved without the transmission infrastructure needed to deliver renewable energy to consumers. This rule is a significant step forward in bringing coordination and accountability into the permitting review of these vital projects, and a perfect complement to our FAST-41 permitting assistance program, enabling us to deliver clean and affordable energy to homes across the nation." A public webinar will be held on May 15. Register now . 

Increasing Investor Confidence  

Today, DOE also announced the selection of one additional conditional project from the first round of capacity contract applications through the Transmission Facilitation Program (TFP) . 

Thanks to an investment of $331 million from President Biden’s Investing in America agenda, the Southwest Intertie Project (SWIP-North) will bolster resource adequacy in the West by bringing wind energy from Idaho to Southern Nevada and to customers in California, and providing a pathway for solar resources to meet evolving reliability needs in the Pacific Northwest. With construction anticipated to start in 2025, the proposed, 285-mile line will bring more than 2,000 MW of needed transmission capacity to the region and create over 300 new, high quality and union construction jobs. The SWIP-N line will also help increase grid resilience by providing an alternate route to deliver power supplies during wildfires or other system disruptions. This project will also upgrade a key substation in Nevada, unlocking an additional 1,000 MW of capacity along the existing One Nevada Line, a major transmission corridor in Southern Nevada. 

The National Transmission Needs Study , released in October 2023, estimates that by 2035 there will be a need for 3.3 gigawatts of new transfer capacity between the Mountain and Northwest regions to unlock the power sector emissions savings enabled by the Investing in America agenda. The SWIP-N project contributes 58% of this interregional transmission need. 

Funded by the President’s Bipartisan Infrastructure Law, the Transmission Facilitation Program is a $2.5 billion revolving fund to help overcome the financial hurdles associated with building new, large-scale transmission lines and upgrading existing transmission lines. Under the program, DOE is authorized to borrow up to $2.5 billion to purchase a percentage of the total proposed capacity of the eligible transmission line. By offering capacity contracts, DOE increases the confidence of investors, encourages additional customers to purchase transmission line capacity, and reduces the overall risk for project developers. 

Learn more about the Grid Deployment Office . 


  11. A Field Guide for the Review Process: Writing and Responding to Peer

    The peer review process--both writing reviews for academic journals and re-sponding to reviews of one's own work--is fundamental to building scientific knowledge. In this article, we explain why you should invest time reviewing, how to write a constructive review, and how to respond effectively to reviews of your own work.

  12. Everything You Need to Know About Peer Review

    The peer review process generally results in articles appearing in the Journal in better condition than when they were first submitted; apart from improvements in clarity of writing and enhanced presentation, relevant literature and limitations of methodology may be better acknowledged, and over-reaching conclusions moderated [10].

  13. Understanding the peer-review process

    The peer-review process is used to assess scholarly articles. Experts in a discipline similar to the author critique an article's methodology, findings, and reasoning to evaluate it for possible publication in a scholarly journal. Editors of scholarly journals use the peer-review process to decide which articles to publish, and the academic ...

  14. How to Write an Article Review (with Sample Reviews)

    Identify the article. Start your review by referring to the title and author of the article, the title of the journal, and the year of publication in the first paragraph. For example: The article, "Condom use will increase the spread of AIDS," was written by Anthony Zimmerman, a Catholic priest. 4.

  15. Peer-review process

    Peer review is a positive process. Peer review is an integral part of scientific publishing that confirms the validity of the science reported. Peer reviewers are experts who volunteer their time to help improve the journal manuscripts they review—they offer authors free advice. Through the peer-review process, manuscripts should become:

  16. How to write a good scientific review article

    Although the peer-review process is not usually as rigorous as for a research article [], there are some common features; for example, reviewers will consider whether the article cites the most relevant work in support of its analysis and conclusions. Journal editors will also carefully read the first draft and, depending on the journal ...

  17. Writing a Scientific Review Article: Comprehensive Insights for

    Writing a review article is a skill that needs to be learned; it is a rigorous but rewarding endeavour as it can provide a useful platform to project the emerging researcher or postgraduate student into the gratifying world of publishing. ... It is also important that the review process must be focused on the literature and not on the authors ...

  18. Understanding peer review

    The purpose of peer review is to evaluate the paper's quality and suitability for publication. As well as peer review acting as a form of quality control for academic journals, it is a very useful source of feedback for you. The feedback can be used to improve your paper before it is published. So at its best, peer review is a collaborative ...

  19. Review process

    Flow uses a single-anonymous model of peer review.The author does not know the identity of the reviewers, but the reviewers know the identity of the author. Articles (Research Articles, Flow Rapids, Case Studies, and Review Articles) are assigned to the Editor-in-Chief or an Associate Editor who will give rapid feedback to the authors initially (usually within 10 days).

  20. Peer Review at Science Journals

    As a peer reviewer for a Science j ournal, you are part of a valued community.Scientific progress depends on the trustworthiness of communicat ed information, and the peer-review process is a vital means to that end.. Only some of the papers submit ted to a Science journal are reviewed in depth. For in-depth review, at least two outside referees are consulted.

  21. Writing a good review article

    Tips for writing a good review article. Here are a few practices that can make the time-consuming process of writing a review article easier: Define your question: Take your time to identify the research question and carefully articulate the topic of your review paper. A good review should also add something new to the field in terms of a ...

  22. Peer Review in Scientific Publications: Benefits, Critiques, & A

    HISTORY OF PEER REVIEW. The concept of peer review was developed long before the scholarly journal. In fact, the peer review process is thought to have been used as a method of evaluating written work since ancient Greece ().The peer review process was first described by a physician named Ishaq bin Ali al-Rahwi of Syria, who lived from 854-931 CE, in his book Ethics of the Physician ().

  23. Labor Department's PERM Program Failing Immigrants And Employers

    To better understand the process, I interviewed Krystal Alanis, who responded in writing. Alanis is a partner at Reddy Neumann Brown PC, managing the firm's PERM Labor Certification Department.

  24. The 3-Stage Process That Makes Universities Prime Innovators

    Summary. While calls for cross-sector collaborations to tackle complex societal issues abound, in practice, only few succeed. Those that do often have a collaboration intermediary, which can bring ...

  25. Biden Title IX rules set to protect trans students, sexual abuse

    The final regulation makes a number of significant changes to a Trump-era system for handling sexual assault complaints, discarding some of the rules that bolstered due process rights of the accused.

  26. Person-centered care assessment tool with a focus on quality healthcare

    The person-centered care (PCC) approach plays a fundamental role in ensuring quality healthcare. The Person-Centered Care Assessment Tool (P-CAT) is one of the shortest and simplest tools currently available for measuring PCC. The objective of this study was to conduct a systematic review of the evidence in validation studies of the P-CAT, taking the "Standards" as a frame of reference.

  27. Sundance Organizers Consider New Home for Film Festival After 2026

    A 13-year contract with Park City, Utah, is set to expire in 2026 and the film festival is beginning a review process to see if it should move. Share full article The Sundance Film Festival has ...

  28. Councilwoman chosen as new Fort Wayne mayor, its 1st Black leader, in

    FORT WAYNE, Ind. (AP) — A Fort Wayne city councilwoman was chosen Saturday as the new mayor of Indiana's second most populous city, and its first Black leader, during a caucus to replace its late mayor, who died in March.. Councilwoman Sharon Tucker, a Democrat, will also become the second woman to serve as mayor of the northeastern Indiana city.

  29. How to write a review article?

    In conclusion, during writing process of a review article, the procedures to be achieved can be indicated as follows: 1) Get rid of fixed ideas, and obsessions from your head, and view the subject from a large perspective. 2) Research articles in the literature should be approached with a methodological, and critical attitude and 3) finally ...

  30. Biden-Harris Administration Announces Final Transmission Permitting

    WASHINGTON, D.C.— In a continued commitment to bolster the U.S. power grid, today the Biden-Harris Administration announced a final transmission permitting reform rule and a new commitment for up to $331 million aimed at adding more than 2,000 megawatts (MW) of additional grid capacity throughout the Western United States - the equivalent to powering 2.5 million homes and creating more ...