Reliability, Validity and Ethics

  • Lindy Woodrow

This chapter is about writing about the procedure of the research. This includes a discussion of reliability, validity and the ethics of research and writing. The level of detail about these issues varies across texts, but the reliability and validity of the study must feature in the text. Sometimes these issues are evident from the research instruments and analysis, and sometimes they are referred to explicitly. This chapter includes the following sections:

Technical information

Reliability of a measure

Internal validity

External validity

Research ethics

Reporting on reliability

Writing about validity

Reporting on ethics

Writing about research procedure


Further reading

Dörnyei, Z. (2007). Research methods in applied linguistics: Quantitative, qualitative and mixed methodologies. Oxford: Oxford University Press.

Paltridge, B., & Phakiti, A. (Eds.). (2010). Continuum companion to research methods in applied linguistics. London: Continuum.

Sources of examples

Lee, J.-A. (2009). Teachers’ sense of efficacy in teaching English, perceived English proficiency and attitudes toward English language: A case of Korean public elementary teachers. PhD, Ohio State University.

Levine, G. S. (2003). Student and instructor beliefs and attitudes about target language use, first language use and anxiety: Report of a questionnaire study. Modern Language Journal, 87(3), 343–364. doi: 10.1111/1540-4781.00194.

Lin, H., Chen, T., & Dwyer, F. (2006). Effects of static visuals and computer-generated animations in facilitating immediate and delayed achievement in the EFL classroom. Foreign Language Annals, 39(2), 203–219. doi: 10.1111/j.1944-9720.2006.tb02262.x.

Mills, N. (2011). Teaching assistants’ self-efficacy in teaching literature: Sources, personal assessments, and consequences. Modern Language Journal, 95(1), 61–80. doi: 10.1111/j.1540-4781.2010.01145.x.

Rai, M. K., Loschky, L. C., Harris, R. J., Peck, N. R., & Cook, L. G. (2011). Effects of stress and working memory capacity on foreign language readers’ inferential processing during comprehension. Language Learning, 61(1), 187–218. doi: 10.1111/j.1467-9922.2010.00592.x.

Rose, H. (2010). Kanji learning of Japanese language learners on a year-long study exchange at a Japanese university: An investigation of strategy use, motivation control and self-regulation. PhD, University of Sydney.

Author information

University of Sydney, Australia

Lindy Woodrow

Copyright information

© 2014 Lindy Woodrow

About this chapter

Woodrow, L. (2014). Reliability, Validity and Ethics. In: Writing about Quantitative Research in Applied Linguistics. Palgrave Macmillan, London.

Publisher Name: Palgrave Macmillan, London

Print ISBN: 978-0-230-36997-9

Online ISBN: 978-0-230-36995-5


Measuring the Validity and Reliability of Research Instruments

Kahirol Mohd Salleh

2015, Procedia - Social and Behavioral Sciences

Related Papers

Procedia - Social and Behavioral Sciences

Othman Jaafar


Journal of Counseling and Educational Technology

Izwah Ismail

Questionnaire II (Student) was developed to obtain feedback from respondents, the students, on the Program Implementation Evaluation of the Diploma in Mechatronics Engineering at the Polytechnic towards industrial requirements in Malaysia. This study was conducted to produce empirical evidence about the validity and reliability of Questionnaire II (Student) using the Rasch Measurement Model. A pilot study was conducted at the Department of Mechanical Engineering, Polytechnic Kota Kinabalu, Sabah, on 38 students in the final semester of a diploma program in Mechatronic Engineering. Validity and reliability of Questionnaire II (Student) were measured using the Winsteps implementation of the Rasch Measurement Model. The Rasch analysis showed a respondent reliability index of 0.97 and an item reliability index of 0.91. In terms of item polarity, each item can contribute to the measurement because the PTMEA CORR of each item is above 0.30, ranging from 0.30 to 0.81. Appropriateness test shows...
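Rasch reliability indices like those quoted above are conventionally computed as the share of observed measure variance that is not error variance. A minimal sketch with invented toy data (the function mirrors the standard Winsteps-style approximation, not this study's actual computation):

```python
import numpy as np

def rasch_reliability(measures, std_errors):
    """Rasch separation reliability, R = (SD_obs^2 - mean(SE^2)) / SD_obs^2:
    the proportion of observed measure variance that is not error variance."""
    measures = np.asarray(measures, dtype=float)
    errors = np.asarray(std_errors, dtype=float)
    obs_var = measures.var(ddof=1)     # observed variance of logit measures
    err_var = np.mean(errors ** 2)     # mean square standard error
    return (obs_var - err_var) / obs_var

# Toy logit measures and their standard errors for six respondents
measures = [-1.2, -0.4, 0.1, 0.6, 1.3, 2.0]
std_errors = [0.35, 0.30, 0.28, 0.30, 0.33, 0.40]
print(round(rasch_reliability(measures, std_errors), 2))  # prints 0.92
```

Applying the same formula to item measures and their standard errors gives the item reliability index.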

Journal of Educational Research and Evaluation

Habib M M Adi


Abdul Rahim

This study was conducted to analyze the test instrument used to measure students' ability on the odd-semester final exam in mathematics. Sampling used a purposive sampling technique; the sample consisted of 67 students. The questions given were 40 multiple-choice items related to the odd-semester final exam material. The data analysis technique used quantitative descriptive analysis. The Rasch model was used to obtain fit items, with the help of Winsteps 3.73 software. From the output of the Winsteps program, 35 items fit the Rasch model, with average Outfit MNSQ values for persons and items of 1.09 and 1.09, respectively, and Outfit ZSTD values for persons and items of -0.1 and -0.2, respectively. The instrument reliability, expressed as Cronbach's alpha, is 0.77.
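Cronbach's alpha, the reliability statistic quoted above, can be computed directly from a respondents-by-items score matrix. A minimal sketch with invented toy data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 5 respondents answering 4 items on a 5-point scale
data = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(data), 2))  # prints 0.93
```

Values above roughly 0.7, such as the 0.77 reported here, are commonly treated as acceptable internal consistency.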

Ahmad Zainudin

Dr. Mamun Albnaa

Measurement theories are important to practice in educational measurement because they provide a background for addressing measurement problems. One of the most important problems is dealing with measurement errors. A good theory can help in understanding the role errors play in measurement: (a) in evaluating an examinee's ability while minimizing errors and (b) in estimating correlations between variables. Two theories address measurement problems such as test construction and the identification of biased test items: Classical Test Theory (CTT) and Item Response Theory (IRT). As a result of a number of problems associated with classical test theory that cause inaccuracy in results, a need emerged to develop methods of measuring behavior in a manner consistent with physical measurement methods, based on a philosophy and assumptions that secure the quality of these methods and acceptance of their results with a high degree of confidence. Many studies by professionals interested in behavioral measures aimed to overcome some of these problems of behavioral measurement, and they resulted in the emergence of Item Response Theory. Item response theory is a statistical theory about items, test performance and the abilities that are measured by items. Item responses can be discrete or continuous and can be dichotomous, and the item score categories can be ranked or non-ranked. There can be one ability underlying a test, and there are many models in which the relationship between item responses and the underlying ability can be specified. Within IRT many models have been applied to test data, but the most famous among them is the Rasch model. In this paper, both theories, Classical Test Theory and Item Response Theory, are described in relation to approaches to measuring validity and reliability. The intent of this module is to provide a comparison of classical test theory and item response theory.
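The dichotomous Rasch model referred to above specifies the probability of a correct response as a logistic function of the gap between person ability and item difficulty, both on the same logit scale. A minimal illustration with invented parameter values:

```python
import math

def rasch_probability(theta, b):
    """Dichotomous Rasch model:
    P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b)),
    where theta is person ability and b is item difficulty (in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person whose ability equals the item difficulty succeeds 50% of the time
print(rasch_probability(theta=0.0, b=0.0))   # prints 0.5
# An easier item (lower b) raises the success probability for the same person
print(round(rasch_probability(theta=0.0, b=-1.0), 3))  # prints 0.731
```

Unlike classical test theory, where a score depends on the particular test form, these person and item parameters are estimated on a common scale, which is what allows fit statistics and reliability indices to be reported per item and per person.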

Asia Proceedings of Social Sciences

Mazlili Suhaini

Malaysia is considered one of the developing countries undergoing rapid economic development over the past five decades. As a developing country with a rapidly growing population, providing citizens with comprehensive and up-to-date knowledge is crucial, particularly in vocational training. A number of vocational and technical training programs have been developed. However, the success of vocational education relies on the capability of instructors and the teaching approaches used to achieve its goals. It is important to create appropriate methods that take students’ learning styles into consideration to get better outcomes. Therefore, the purpose of this paper is to develop a vocational learning styles instrument. Empirical evidence on the validity and reliability of the modified items is presented. A survey was distributed to 57 Electrical Technology students. The Rasch measurement model was used to examine the functioning of the items and to determine item and respondent reliability and index...

Zuhaira Zain

Exams have been used extensively as an assessment tool to measure students’ academic performance in most higher institutions in KSA. A good-quality set of constructed items/questions on mid-term and final exams would be able to measure both students’ academic performance and their cognitive skills. We adopt the Rasch model to evaluate the reliability and quality of the first mid-term exam questions for an Object-Oriented Design course. The results showed that the reliability and quality of the constructed exam questions were relatively good and calibrated with students’ learned ability. Key-Words: Rasch Model, Item Construction, Reliability, Quality, Students’ Academic Performance, Information Systems, Bloom’s Taxonomy

Education Research International

Amir Mohamed Talib

This paper describes a measurement model used to measure student performance in the final examination of the Information Technology (IT) Fundamentals (IT280) course in the Information Technology Department, College of Computer & Information Sciences (CCIS), Al-Imam Mohammad Ibn Saud Islamic University (IMSIU). The assessment model is developed from the final exam mark entries of second-year IT students, which are compiled and tabulated for evaluation using the Rasch Measurement Model, and it can be used to measure students’ performance on the final examination of the course. A study of 150 second-year students (male = 52; female = 98) was conducted to measure students’ knowledge and understanding of the IT280 course according to three levels of Bloom’s Taxonomy. The results concluded that students can be categorized as poor (10%), moderate (42%), good (18%), and successful (24%) in achieving Level 3 of Bloom’s Taxonomy. This study shows that...




Open Access


Research Article

The development and structural validity testing of the Person-centred Practice Inventory–Care (PCPI-C)

Contributed equally to this work with: Brendan George McCormack, Paul F. Slater, Fiona Gilmour, Denise Edgar, Stefan Gschwenter, Sonyia McFadden, Ciara Hughes, Val Wilson, Tanya McCance

Roles Conceptualization, Data curation, Formal analysis, Methodology, Project administration, Validation, Writing – original draft, Writing – review & editing

* E-mail: [email protected]

Affiliation Faculty of Medicine and Health, Susan Wakil School of Nursing and Midwifery/Sydney Nursing School, The University of Sydney, Camperdown Campus, New South Wales, Australia


Roles Formal analysis, Methodology, Writing – original draft, Writing – review & editing

Affiliation Institute of Nursing and Health Research, Ulster University, Belfast, Northern Ireland

Roles Data curation, Investigation, Methodology, Writing – review & editing

Affiliation Division of Nursing, Queen Margaret University, Edinburgh, Scotland

Roles Data curation, Formal analysis, Writing – review & editing

Affiliation Nursing and Midwifery Directorate, Illawarra Shoalhaven Local Health District, New South Wales, Australia

Roles Data curation, Methodology, Validation, Writing – review & editing

Affiliation Division of Nursing Science with Focus on Person-Centred Care Research, Karl Landsteiner University of Health Sciences, Krems, Austria

Roles Data curation, Investigation, Validation, Writing – review & editing

Affiliation Prince of Wales Hospital, South East Sydney Local Health District, New South Wales, Australia

Roles Conceptualization, Formal analysis, Methodology, Validation, Writing – original draft, Writing – review & editing

  • Brendan George McCormack, 
  • Paul F. Slater, 
  • Fiona Gilmour, 
  • Denise Edgar, 
  • Stefan Gschwenter, 
  • Sonyia McFadden, 
  • Ciara Hughes, 
  • Val Wilson, 
  • Tanya McCance


  • Published: May 10, 2024

Person-centred healthcare focuses on placing the beliefs and values of service users at the centre of decision-making and creating the context for practitioners to do this effectively. Measuring the outcomes arising from person-centred practices is complex and challenging and often adopts multiple perspectives and approaches. Few measurement frameworks are grounded in an explicit person-centred theoretical framework.

In the study reported in this paper, the aim was to develop a valid and reliable instrument to measure the experience of person-centred care by service users (patients): the Person-centred Practice Inventory-Care (PCPI-C).

Based on the ‘person-centred processes’ construct of an established Person-centred Practice Framework (PCPF), a service user instrument was developed to complement existing instruments informed by the same theoretical framework, the PCPF. An exploratory sequential mixed methods design was used to construct and test the instrument, working with international partners and service users in Scotland, Northern Ireland, Australia and Austria. A three-phase approach was adopted to the development and testing of the PCPI-C. Phase 1, Item Selection: following an iterative process, a list of 20 items was agreed upon by the research team for use in phase 2 of the project. Phase 2, Instrument Development and Refinement: development of the PCPI-C was undertaken through two stages. Stage 1 involved three sequential rounds of data collection using focus groups in Scotland, Australia and Northern Ireland; Stage 2 involved distributing the instrument to members of a global community of practice for person-centred practice for review and feedback, as well as refinement and translation through one-to-one interviews in Austria. Phase 3, Testing Structural Validity of the PCPI-C: a sample of 452 participants took part in this phase of the study. Service users participating in existing cancer research in the UK, Malta, Poland and Portugal, as well as care homes research in Austria, completed the draft PCPI-C. Data were collected over a 14-month period (January 2021 to March 2022). Descriptive statistics and measures of dispersion were generated for all items to help inform subsequent analysis. Confirmatory factor analysis was conducted using maximum likelihood robust extraction to test the 5-factor model of the PCPI-C.

The testing of the PCPI-C resulted in a final 18-item instrument. The results demonstrate that the PCPI-C is a psychometrically sound instrument, supporting a five-factor model that examines the service user’s perspective of what constitutes person-centred care.

Conclusion and implications

This new instrument is generic in nature and so can be used to evaluate how person-centredness is perceived by service users in different healthcare contexts and at different levels of an organisation. Thus, it brings a service user perspective to an organisation-wide evaluation framework.

Citation: McCormack BG, Slater PF, Gilmour F, Edgar D, Gschwenter S, McFadden S, et al. (2024) The development and structural validity testing of the Person-centred Practice Inventory–Care (PCPI-C). PLoS ONE 19(5): e0303158.

Editor: Nabeel Al-Yateem, University of Sharjah, UNITED ARAB EMIRATES

Received: January 26, 2023; Accepted: April 20, 2024; Published: May 10, 2024

Copyright: © 2024 McCormack et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: Data cannot be shared publicly for ethical reasons. Data are available from the Ulster University Institutional Data Access / Ethics Committee (contact via email on [email protected]) for researchers who meet the criteria for access to confidential data.

Funding: The author(s) received no specific funding for this work.

Competing interests: The authors have declared that no competing interests exist.


Person-centred healthcare focuses on placing the beliefs and values of service users at the centre of decision-making and creating the context for practitioners to do this effectively. Person-centred healthcare goes beyond other models of shared decision-making as it requires practitioners to work with service users (patients) as actively engaged partners in care [ 1 ]. It is widely agreed that person-centred practice has a positive influence on the care experiences of all people associated with healthcare, service users and staff alike. International evidence shows that person-centred practice has the capacity to have a positive effect on the health and social care experiences of service users and staff [ 1 – 4 ]. Person-centred practice is a complex health care process and exists in the presence of respectful relationships, attitudes and behaviours [ 5 ]. Fundamentally, person-centred healthcare can be seen as a move away from neo-liberal models towards the humanising of healthcare delivery, with a focus on the development of individualised approaches to care and interventions, rather than seeing people as ‘products’ that need to be moved through the system in an efficient and cost-effective way [ 6 ].

Person-centred healthcare is underpinned by philosophical and theoretical constructs that frame all aspects of healthcare delivery, from the macro-perspective of policy and organisational practices to the micro-perspective of person-to-person interaction and experience of healthcare (whether as professional or service user) and so is promoted as a core attribute of the healthcare workforce [ 1 , 7 ]. However, Dewing and McCormack [ 8 ] highlighted the problems of the diverse application of concepts, theories and models all under the label of person-centredness, leading to a perception of person-centred healthcare being poorly defined, non-specific and overly generalised. Whilst person-centredness has become a well-used term globally, it is often used interchangeably with other terms such as ’woman-centredness’ [ 9 ], ’child-centredness’ [ 10 ], ’family-centredness’ [ 11 ], ’client-centredness’ [ 12 ] and ’patient-centredness’ [ 13 ]. In their review of person-centred care, Harding et al [ 14 ] identified three fundamental ‘stances’ that encompass person-centred care— Person-centred care as an overarching grouping of concepts : includes care based on shared-decision making, care planning, integrated care, patient information and self-management support; Person-centred care emphasising personhood : people being immersed in their own context and a person as a discrete human being; Person-centred care as partnership : care imbued with mutuality, trust, collaboration for care, and a therapeutic relationship.

Harding et al. adopt the narrow focus of ’care’ in their review, and others contend that for person-centred care to be operationalised there is a need to understand it from an inclusive whole-systems perspective [ 15 ] and as a philosophy to be applied to all persons. This inclusive approach has enabled the principles of person-centredness to be integrated at different levels of healthcare organisations and thus enable its embeddedness in health systems [ 16 – 19 ]. This inclusive approach is significant as person-centred care is impossible to sustain if person-centred cultures do not exist in healthcare organisations [ 20 , 21 ].

McCance and McCormack [ 5 ] developed the Person-centred Practice Framework (PCPF) to highlight the factors that affect the delivery of person-centred practices. McCormack and McCance published the original person-centred nursing framework in 2006. The Framework has evolved over two decades of research and development activity into a transdisciplinary framework and has made a significant contribution to the landscape of person-centredness globally. Not only does it enable the articulation of the dynamic nature of person-centredness, recognising complexity at different levels in healthcare systems, but it offers a common language and a shared understanding of person-centred practice. The Person-centred Practice Framework is underpinned by the following definition of person-centredness:

[A]n approach to practice established through the formation and fostering of healthful relationships between all care providers, service users and others significant to them in their lives. It is underpinned by values of respect for persons, individual right to self-determination, mutual respect and understanding. It is enabled by cultures of empowerment that foster continuous approaches to practice development [ 16 ].

The Person-centred Practice Framework ( Fig 1 ) comprises five domains: the macro context reflects the factors that are strategic and political in nature that influence the development of person-centred cultures; prerequisites focus on the attributes of staff; the practice environment focuses on the context in which healthcare is experienced; the person-centred processes focus on ways of engaging that are necessary to create connections between persons; and the outcome , which is the result of effective person-centred practice. The relationships between the five domains of the Person-centred Practice Framework are represented pictorially, that being, to reach the centre of the framework, strategic and policy frames of reference need to be attended to, then the attributes of staff must be considered as a prerequisite to managing the practice environment and to engaging effectively through the person-centred processes. This ordering ultimately leads to the achievement of the outcome–the central component of the framework. It is also important to recognise that there are relationships and there is overlap between the constructs within each domain.



In 2015, Slater et al. [ 22 ] developed an instrument for staff to use to measure person-centred practice: the Person-centred Practice Inventory-Staff (PCPI-S). The PCPI-S is a 59-item, self-report measure of health professionals’ perceptions of their person-centred practice. The items in the PCPI-S relate to seventeen constructs across three domains of the PCPF (prerequisites, practice environment and person-centred processes). The PCPI-S has been widely used, translated into multiple languages and has undergone extensive psychometric testing [ 23 – 28 ].

No instrument exists to measure service users’ perspectives of person-centred care that is based on an established person-centred theoretical framework or that is designed to compare with service providers’ perceptions of it. In an attempt to address this gap in the evidence base, this study set out to develop such a valid and reliable instrument. The PCPI-C focuses on the person-centred processes domain, with the intention of measuring service users’ experiences of person-centred care. The person-centred processes are the components of care that directly affect service users’ experiences; they enable person-centred care outcomes to be achieved and include working with the person’s beliefs and values, sharing decision-making, engaging authentically, being sympathetically present and working holistically. Based on the ‘person-centred processes’ construct of the PCPF and relevant items from the PCPI-S, a version for service users was developed.

This paper describes the processes used to develop and test the instrument–The Person-centred Practice Inventory-Care (PCPI-C). The PCPI-C has the potential to enable healthcare services to understand service users’ experience of care and how they align with those of healthcare providers.

Materials and methods

The aim of this research was to develop and test the face validity of a service users’ version of the person-centred practice inventory: the Person-centred Practice Inventory-Care (PCPI-C).

The development and testing of the instrument was guided by the instrument development principles of Boateng et al [ 29 ] ( Fig 2 ) and reported in line with the COSMIN guidelines for instrument testing [ 30 , 31 ]. An exploratory sequential mixed methods design was used to construct and test the instrument [ 29 , 30 ] working with international partners and service users. A three-phase approach was adopted to the development and testing of the PCPI-C. As phases 1 and 2 intentionally informed phase 3 (the testing phase), these two phases are included here in our description of methods.


Ethical approval

Ethics approval was sought and gained for each phase of the study and across each of the participating sites. For phase 2 of the study, a generic research protocol was developed and adapted for use by the Scottish, Australian and Northern Irish teams to apply for local ethical approval. In Scotland, ethics approval was gained from Queen Margaret University Edinburgh Divisional Research Ethics Committee; in Australia, ethics approval was gained from The University of Wollongong and in Northern Ireland ethics approval was gained from the Research Governance Filter Committee, Nursing and Health Research, Ulster University. For phase 3 of the study, secondary analysis of an existing data set was undertaken. For the original study from which this data was derived (see phase 3 for details), ethical approval was granted by the UK Office of Research Ethics Committee Northern Ireland (ORECNI Ref: FCNUR-21-019) and Ulster University Research Ethics Committee. Additional local approvals were obtained for each partner site as required. In addition, a data sharing agreement was generated to facilitate sharing of study data between European Union (EU) sites and the United Kingdom (UK).

Phase 1 –Item selection

An initial item pool for the PCPI-C was identified by <author initials to be added after peer-review> by selecting items from the ‘person-centred processes’ sub-scale of the PCPI-S ( Table 1 ). Sixteen items were extracted, and the wording of the statements was adjusted to reflect a service-user perspective. Additional items (n = 4) were identified to fully represent the construct from a service-user perspective. A final list of 20 items was agreed upon, and this 20-item questionnaire was used in Phase 2 of the instrument development.


Phase 2 –Instrument development and refinement

Testing the validity of PCPI-C was undertaken through three sequential rounds of data collection using focus groups in Scotland, Australia and Northern Ireland. The purpose of these focus groups was to work with service users to share and compare understandings and views of their experiences of healthcare and to consider these experiences in the context of the initial set of PCPI-C items generated in phase 1 of the study. These countries were selected as the lead researchers had established relationships with healthcare partners who were willing to host the research. The inclusion of multiple countries provided different perspectives from service users who used different health services. In Scotland, a convenience sample of service users (n = 11) attending a palliative care day centre of a local hospice was selected. In Australia a cancer support group for people living with a cancer diagnosis (n = 9) was selected and in Northern Ireland, people with lived experience who were attending a community group hosted by a Cancer Charity (n = 9) were selected. All service users were current users of healthcare and so the challenge of memory recall was avoided. The type of conditions/health problems of participants was not the primary concern. Instead, we targeted persons who had recent experiences of the health system. The three centres selected were known to the researchers in those geographical areas and relationships were already established, which helped with gaining access to potential participants. Whilst the research team had potential access to other centres in each country, it was evident at focus group 3 that no significant new issues were being identified from the participants and thus we agreed to not do further rounds of refinement.

A Focus Group guide was developed ( Fig 3 ). Participants were invited to draw on their experiences as a user of the service; particularly remembering what they saw, the way they felt and what they imagined was happening [ 32 ]. The participants were invited to independently complete the PCPI-C and the purpose of the exercise was reiterated i.e. to think about how each question of the PCPI-C reflected their own experiences and their answers to the questions. Following completion of the questionnaire, participants were asked to comment on each question in the PCPI-C (20 questions), with a specific focus on their understanding of the question, what they thought about when they read the question, and any suggestions to improve readability. The focus group was concluded with a discussion on the overall usability of the PCPI-C. Each focus group was audiotaped and the audio recordings were transcribed in full. The facilitators of the focus group then listened to the audio recordings, alongside the transcripts, and identified the common issues that arose from the discussions and noted against each of the questions in the draft PCPI-C. Revisions were made to the questions in accordance with the comments and recommendations of the participants. At the end of the analysis phase of each focus group, a table of comments and recommendations mapped to the questions in the instrument was compiled and sent to the whole research team for review and consideration. The comments and recommendations were reviewed by the research team and amendments made to the draft PCPI-C. The amended draft was then used in the next focus group until a final version was agreed. Focus group 1 was held in Scotland, focus group 2 in Australia and focus group 3 in Northern Ireland. Table 2 presents a summary of the feedback from the final focus group.



A final stage of development involved distributing the agreed version of the PCPI-C to members of ‘The International Community of Practice for Person-centred Practice’ (PcP-ICoP) for review and feedback. The PcP-ICoP is an international community of higher education, health and care organisations and individuals who are committed to advancing knowledge in the field of person-centredness. No significant changes to the distributed version were suggested by the PcP-ICoP members, but several members requested permission to translate the instrument into their national language. PcP-ICoP members at the University of Vienna, who were leading a large research project with nursing homes in the region of Lower Austria, agreed to undertake a parallel translation project as a priority so they could use the PCPI-C in their research project. The instrument was culturally and linguistically adapted to the nursing home setting in an iterative process by the Austrian research team in collaboration with the international research team. Data were collected through face-to-face interviews by trained research staff. Residents of five nursing homes for older persons in Lower Austria were included. All residents who had no cognitive impairment and were physically able to complete the questionnaire (n = 235) were included; 71% of these residents (n = 167) completed the questionnaire. Whilst formal ethical approval for non-intervention studies is not required in Austria, the team sought informed consent from participants. Particular attention was paid throughout the interviews to assuring the ongoing consent of residents through carefully guided conversations.

Phase 3: Testing structural validity of the PCPI-C

The aim of this phase was to test the structural validity of the PCPI-C using confirmatory factor analysis with an international sample of service users. The PCPI-C comprises 20 items measured on a 5-point scale ranging from ‘strongly disagree’ to ‘strongly agree’. The 20 items represent the 5 constructs comprising the final model to be tested, which is outlined in Table 3 .
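The measurement structure described above can be expressed as a simple data model. The sketch below is illustrative only: the construct names are taken from the person-centred processes domain of the PCPF as described in the wider literature, and the even split of four items per construct is an assumption for demonstration; the actual item-to-construct mapping is the one given in Table 3.

```python
# Illustrative sketch of the PCPI-C measurement model: 20 items on a
# 5-point agreement scale, grouped under 5 constructs.
# ASSUMPTION: construct names and the 4-items-per-construct grouping
# are placeholders; Table 3 in the paper holds the real mapping.

LIKERT = {1: "strongly disagree", 2: "disagree", 3: "neutral",
          4: "agree", 5: "strongly agree"}

CONSTRUCTS = {
    "working_with_beliefs_and_values": ["v1", "v2", "v3", "v4"],
    "shared_decision_making":          ["v5", "v6", "v7", "v8"],
    "engaging_authentically":          ["v9", "v10", "v11", "v12"],
    "being_sympathetically_present":   ["v13", "v14", "v15", "v16"],
    "providing_holistic_care":         ["v17", "v18", "v19", "v20"],
}

def validate_response(response):
    """Check a completed PCPI-C response: report unanswered items and
    answers outside the 1-5 scale."""
    items = [i for vs in CONSTRUCTS.values() for i in vs]
    missing = [i for i in items if i not in response]
    invalid = [i for i in items if i in response and response[i] not in LIKERT]
    return missing, invalid

# The model comprises exactly 20 items.
assert sum(len(v) for v in CONSTRUCTS.values()) == 20
```

A response would then be stored as a dict of item scores, validated before analysis.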


A sample of 452 participants was selected for this phase of the study. The sample selected comprised two groups. Group 1 (n = 285) were service users with cancer (breast, urological and other) receiving radiotherapy in four cancer treatment centres in four European countries: the UK, Malta, Poland and Portugal. These service users were participants in the wider SAFE EUROPE project exploring the education and professional migration of therapeutic radiographers in the European Union. In the UK, a study information poster with a link to the PCPI-C via a Qualtrics© survey was disseminated via UK cancer charity social media websites. Service user information and consent were embedded in the online survey and presented to the participant following the study link. At the non-UK sites, hard-copy English versions of the surveys were available in clinical departments, where a convenience sampling approach was used, inviting everyone in their final few days of radiotherapy to participate. The ‘DeepL Translator’ software (DeepL GmbH, Cologne, Germany) was used to make the necessary terminology adaptations for both the questionnaire and the participant information sheet across the various countries. Fluent speakers based at the participating sites, who were members of the SAFE EUROPE project team, verified this process by checking the translated version against the original English version. Participants were provided with study information and had at least 24 hours to decide if they wished to participate. Willing participants were then invited to provide written informed consent by the local study researcher. The study researcher provided the hard-copy survey to the service user but did not engage with or assist them during completion. Service users were informed they could take the survey home for completion if they wished. Completed surveys were returned to a drop box in the department or returned by post (data collected May 2021-March 2022).
Group 2 were residents in nursing homes in Lower Austria (n = 125). No participating residents had a cognitive impairment, and all were physically able to complete the questionnaire. Data were collected through face-to-face interviews by trained research staff (data collected January 2021-March 2021).

Statistical analysis

Descriptive statistics and measures of dispersion were generated for all items to help inform subsequent analysis. The appropriateness of the data for factor analysis was assessed using the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Bartlett's test of sphericity. Inter-item correlations were generated to examine for collinearity prior to full analysis. Confirmatory factor analysis was conducted using robust maximum likelihood extraction to test the 5-factor model.
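The two suitability checks named above can be sketched as follows. This is a minimal illustrative implementation in numpy, not the statistical software used in the study; the formulas are the standard ones (Bartlett's chi-square from the determinant of the correlation matrix, and the overall KMO from squared correlations versus squared partial correlations).

```python
import numpy as np

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: chi-square statistic and degrees of
    freedom for H0: the correlation matrix is the identity."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df

def kmo(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(R)
    # Anti-image (partial) correlations: q_ij = -inv_ij / sqrt(inv_ii * inv_jj)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    Q = -inv / d
    np.fill_diagonal(Q, 0.0)
    R0 = R.copy()
    np.fill_diagonal(R0, 0.0)
    return (R0 ** 2).sum() / ((R0 ** 2).sum() + (Q ** 2).sum())
```

On data with a strong common factor, the KMO statistic approaches 1 and Bartlett's chi-square is large, as observed for the PCPI-C items in the Results.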

Acceptable fit statistics were set at a Root Mean Square Error of Approximation (RMSEA) of 0.05 or below, an upper bound of the 90% RMSEA confidence interval below 0.08, a Comparative Fit Index (CFI) of 0.95 or higher and a standardised root mean square residual (SRMR) below 0.05 [ 33 – 35 ]. Internal consistency was measured using Cronbach's alpha scores for factors in the accepted factor model.
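These criteria can be sketched in code: a standard Cronbach's alpha computation for internal consistency, and a check applying the fit thresholds stated above. This is an illustrative sketch; the fit indices themselves would come from the CFA software's output.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def fit_acceptable(rmsea, rmsea_ci_upper, cfi, srmr):
    """Apply the thresholds used in this study: RMSEA <= 0.05, 90% CI
    upper bound < 0.08, CFI >= 0.95, SRMR < 0.05."""
    return (rmsea <= 0.05 and rmsea_ci_upper < 0.08
            and cfi >= 0.95 and srmr < 0.05)
```

For example, a model with RMSEA = 0.04 (90% CI upper bound 0.07), CFI = 0.96 and SRMR = 0.04 would meet all four criteria.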

The model was re-specified using the modification indices provided in the statistical output until an acceptable fit and statistically significant relationships were identified. All re-specifications of the model were guided by principles of (1) meaningfulness (a clear theoretical rationale); (2) transitivity (if A is correlated with B, and B with C, then A should correlate with C); and (3) generality (if there is a reason for correlating one pair of errors, then all pairs to which that reason applies should also be correlated) [ 36 ].
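The transitivity principle (2) lends itself to a mechanical check. The sketch below, an illustrative helper rather than part of the study's actual workflow, takes a set of correlated-error pairs and reports any triples that violate transitivity.

```python
from itertools import combinations

def transitivity_violations(pairs):
    """Principle (2): if the errors of A and B are correlated, and those of
    B and C are correlated, then A and C should also be correlated.
    Returns triples (a, b, c) where (a, b) and (b, c) are correlated
    but (a, c) is not."""
    edges = {frozenset(p) for p in pairs}
    nodes = sorted({x for p in pairs for x in p})
    violations = []
    for a, b, c in combinations(nodes, 3):
        for mid in (a, b, c):
            ends = [x for x in (a, b, c) if x != mid]
            if (frozenset((mid, ends[0])) in edges and
                    frozenset((mid, ends[1])) in edges and
                    frozenset(ends) not in edges):
                violations.append((ends[0], mid, ends[1]))
    return violations
```

For instance, correlating the errors of items 1-2 and 2-3 without also correlating 1-3 would be flagged as a violation.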

Modifications were accepted according to the following criteria:

  • Items were first fitted to their first-order factors.
  • Correlated error variances were permitted where the items measured the same unidimensional construct.
  • Only statistically significant relationships were retained, to produce as parsimonious a model as possible.
  • Factor loadings had to exceed 0.40 to provide a strong emergent factor structure.

Factor loadings were interpreted using Comrey and Lee's [ 37 ] guidelines (>.71 = excellent, >.63 = very good, >.55 = good, >.45 = fair and >.32 = poor), and the acceptable factor loading given the sample size (n = 452) was set at >0.3 [ 33 , 38 ].
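The interpretation bands can be sketched directly from the thresholds stated above. The "below threshold" label for loadings of .32 or less is our addition for completeness; Comrey and Lee's bands stop at "poor".

```python
def comrey_lee_rating(loading):
    """Classify a factor loading using Comrey and Lee's guidelines:
    >.71 excellent, >.63 very good, >.55 good, >.45 fair, >.32 poor.
    ASSUMPTION: loadings of .32 or below are labelled "below threshold"."""
    a = abs(loading)
    if a > 0.71:
        return "excellent"
    if a > 0.63:
        return "very good"
    if a > 0.55:
        return "good"
    if a > 0.45:
        return "fair"
    if a > 0.32:
        return "poor"
    return "below threshold"

def acceptable_for_sample(loading, threshold=0.3):
    """Minimum loading retained in this study given n = 452 (>0.3)."""
    return abs(loading) > threshold
```

Under this scheme a loading of 0.75 is "excellent" while 0.35 is "poor" yet still above the 0.3 retention threshold used for this sample size.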

Results and discussion

Demographic details

The sample of 452 participants represented an international sample of respondents drawn from across five countries: UK (14.6%, n = 66), Portugal (47.8%, n = 216), Austria (27.7%, n = 125), Malta (6.6%, n = 30) and Poland (3.3%, n = 15). Table 4 outlines the demographic characteristics of the sample. The final sample of 452 participants provides an acceptable respondent-to-item ratio of 22:1 [ 33 ].


The mean scores indicate that respondents scored the items neutrally. The measures of skewness and kurtosis were acceptable and satisfied the conditions of normality of distribution for further psychometric testing. Examination of the Kaiser-Meyer-Olkin statistic (0.947) and Bartlett's test of sphericity (4431.68, df = 190, p < 0.001) indicated the acceptability of performing factor analysis on the items. Cronbach's alpha scores for each of the constructs confirm the acceptability and unidimensionality of each construct.

Examination of the correlation matrix between items shows correlations ranging from 0.144 to 0.740, indicating breadth in the areas of care the questionnaire items address, as well as no issues of collinearity. The original measurement model was examined using maximum likelihood extraction and had mixed fit statistics. All factor loadings except those for items 11 and 13 were above the threshold of 0.4 ( Table 3 ). Six further modifications were introduced into the original model, based on the highest-scoring modification indices, until the fit statistics were deemed acceptable ( Table 5 for model fit statistics and Fig 4 for item correlated errors). Two of the correlated-error modifications were within factors and four were between factors. The accepted model factor structure is displayed in Fig 4 .



Measuring person-centred care is a complex and challenging endeavour [ 39 ]. In a review of existing measures of person-centred care, DeSilva [ 39 ] identified that whilst there are many tools available to measure person-centred care, there was no agreement about which tools were most worthwhile. The complexity of measurement is further reinforced by the multiplicity of terms used to imply that a person-centred approach is being adopted, without the meaning of the term being explicitly set out. Further, person-centred care is multifaceted and comprises a multitude of methods that are held together by a common philosophy of care and organisational goals that focus on service users having the best possible (personalised) experience of care. As DeSilva suggested, "it is a priority to understand what 'person-centred' means. Until we know what we want to achieve, it is difficult to know the most appropriate way to measure it" (p. 3). However, it remains the case that many of the methods adopted are poorly specified and not embedded in clear conceptual or theoretical frameworks [ 40 , 41 ]. A clear advantage of the study reported here is that the PCPI-C is embedded in a theoretical framework of person-centredness (the PCPF) that clearly defines what we mean by person-centred practice. The PCPI-C is explicitly informed by the ‘person-centred processes’ domain of the PCPF, which has an explicit focus on the care processes used by healthcare workers in providing healthcare to service users.

In the development of the PCPI-C, initial items were selected from the Person-centred Practice Inventory-Staff (PCPI-S), and these items are directly connected with the person-centred processes domain of the PCPF. The PCPI-S has been translated, validated and adopted internationally [ 23 – 28 ] and so provided a robust, theoretically informed starting point for the development of the PCPI-C. This starting point contributed to the initial acceptability of the instrument to participants in the focus groups. Like DeSilva [ 39 ], McCormack et al [ 42 ] and McCormack [ 41 ] have argued that measuring person-centred care in isolation from the evaluation of the impact of contextual factors on the care experienced is a limited exercise. As McCormack [ 41 ] suggests, "Evaluating person-centred care as a specific intervention or group of interventions, without understanding the impact of these cultural and contextual factors, does little to inform the quality of a service" (p. 1). Using the PCPI-C alongside other instruments such as the PCPI-S helps to generate contrasting perspectives from healthcare providers and healthcare service users, informed by clear definitions of terms, that can be integrated in quality improvement and practice development programmes. The development of the PCPI-C was conducted in line with good practice guidelines in instrument development [ 29 ] and underpinned by an internationally recognised person-centred practice theoretical framework, the PCPF [ 5 ]. The PCPI-C provides a psychometrically robust tool to measure service users' perspectives of person-centred care as part of an integrated and multifaceted approach to evaluating person-centredness more generally in healthcare organisations.

With the advancement of Patient Reported Outcome Measures (PROMs) [ 43 , 44 ] and Patient Reported Experience Measures (PREMs) [ 45 ], and the World Health Organization (WHO) [ 15 ] emphasis on the development of people-centred and integrated health systems, greater emphasis has been placed on developing measures to determine the person-centredness of care experienced by service users. Several instruments have been developed to measure the effectiveness of person-centred care in specific services, such as mental health [ 45 ], primary care [ 46 , 47 ], aged care [ 48 , 49 ] and community care [ 50 ]. However, only one other instrument adopts a generic approach to evaluating service users' experiences of person-centred care [ 51 ]. The work of Fridberg et al (the Generic Person-centred Care Questionnaire (GPCCQ)) is located in the Gothenburg Centre for Person-centred Care (GPCC) concept of person-centredness: patient narrative, partnership and documentation. Whilst there are clear connections between the GPCCQ and the PCPI-C, a strength of the PCPI-C is that it is set in a broader system of evaluation that views person-centredness as a whole-system issue, with all parts of the system needing to be consistent in the concepts used, definitions of terms and approaches to evaluation. Whilst the PCPI-S evaluates how person-centredness is perceived at different levels of the organisation, using the same theoretical framework and the same definition of terms, the PCPI-C brings a service user perspective to an organisation-wide evaluation framework.

A clear strength of this study lies in the methods employed in phase 2. Capturing service user experiences of healthcare has become an important part of the evaluation of effectiveness. Service user experience evaluation methodologies adopt a variety of methods that aim to capture key transferable themes across patient populations, supported by granular detail of individual specific experience [ 43 ]. This kind of service evaluation depends on systematically capturing a variety of experiences across different service-user groups. In the research reported here, service users from a variety of services, including palliative care and cancer services, in three countries engaged in the focus group discussions and were able to discuss their experiences of care freely and to consider them in the context of the questionnaire items. The use of focus groups in three different countries enabled different cultural perspectives to be considered in the way participants engaged with discussions and judged the relevance of items and their wording. The sequential approach allowed three rounds of refinement of the items, so that the most relevant wording could be achieved. The range of comments and depth of feedback prevented ‘knee-jerk’ changes being made on the basis of one-off comments; instead, it was possible to compare and contrast the comments and feedback and achieve a more considered outcome. The cultural relevance of the instrument was reinforced through its translation into German in Austria, as few changes were made to the original wording in the translation process. This approach combined the capturing of individual lived experience with the systematic generation of key themes that can assist with the systematic evaluation of healthcare services. Further, adopting this approach provides a degree of confidence to users of the PCPI-C that it represents real service-user experiences.

The factorial validity of the instrument was supported by the findings of the study. The modified model's fit indices suggest a good model fit for the sample [ 31 , 34 , 35 ]. The Comparative Fit Index (CFI) falls short of the 0.95 threshold; however, it is above 0.93, which is considered an acceptable level of fit [ 52 ]. Examination of the alpha scores confirms the reliability (internal consistency) of each construct [ 53 ]. All factor loadings were statistically significant and above the criterion of 0.3 recommended for the sample size [ 38 ]. All but two of the loadings (v11, ‘Staff don't assume they know what is best for me’, and v13, ‘My family are included in decisions about my care only when I want them to be’) were in the range considered good to excellent [ 37 ]. At the level of construct, previous research by McCance et al [ 54 ] showed that all five constructs of the person-centred processes domain of the Person-centred Practice Framework carried equal significance in shaping how person-centred practice is delivered, and this is borne out by the confirmation of a 5-factor model in this study. However, it is also probable that there is a degree of overlap between items across the constructs, reflected in the two items with lower loadings. Other items in the PCPI-C address perspectives on shared decision-making and family engagement, and it was therefore concluded, on the basis of the theoretical model and the statistical analysis, that these two items could be removed without compromising the comprehensiveness of the scale, resulting in a final 18-item version of the PCPI-C (available on request).

Whilst a systematic approach to the development of the PCPI-C was adopted, and we engaged with service users in several care settings in different countries, further research is required to test the instrument psychometrically across differing conditions and settings and with culturally diverse samples. Whilst the sample provides an acceptable respondent-to-item ratio and contains international respondents, the model structure was not examined separately across international settings. Likewise, further research is required across service users with differing conditions and clinical settings. Whilst this is a limitation of the study reported here, the psychometric testing of an instrument is a continuous process and further testing of the PCPI-C is welcomed.


This paper has presented the systematic approach adopted to develop and test a theoretically informed instrument for measuring service users’ perspectives of person-centred care. The instrument is one of the first that is generic and theory-informed, enabling it to be applied as part of a comprehensive and integrated framework of evaluation at different levels of healthcare organisations. Whilst the instrument has good statistical properties, ongoing testing is recommended.


The authors of this paper acknowledge the significant contributions of all the service users who participated in this study.

  • 2. Institute of Medicine Committee on Quality of Health Care in America (2001) Crossing the Quality Chasm: A New Health System for the 21st Century. Washington: National Academies Press. (Accessed 20/1/2023).
  • 5. McCance T. and McCormack B. (2021) The Person-centred Practice Framework, in McCormack B., McCance T., Martin S., McMillan A. and Bulley C. (Eds.) Fundamentals of Person-centred Healthcare Practice. Oxford: Wiley. pp. 23–32.
  • 7. Nursing and Midwifery Council (2018) Future Nurse: Standards of Proficiency for Registered Nurses. London: Nursing and Midwifery Council. (Accessed 20/1/2023).
  • 14. Harding E., Wait S. and Scrutton J. (2015) The State of Play in Person-centred Care: A Pragmatic Review of how Person-centred Care is Defined, Applied and Measured. London: The Health Policy Partnership.
  • 16. McCormack B. and McCance T. (2017) Person-centred Practice in Nursing and Health Care: Theory and Practice. Chichester, UK: Wiley-Blackwell.
  • 17. Buetow S. (2016) Person-centred Healthcare: Balancing the Welfare of Clinicians and Patients. Oxford: Routledge.
  • 32. Kruger R. A. and Casey M. A. (2000) Focus Groups: A Practical Guide for Applied Research. 3rd ed. Thousand Oaks, CA: Sage Publications.
  • 33. Kline P. (2014) An Easy Guide to Factor Analysis. Oxfordshire: Routledge.
  • 34. Byrne B. M. (2013) Structural Equation Modeling with Mplus: Basic Concepts, Applications, and Programming. Oxfordshire: Routledge.
  • 35. Wang J. and Wang X. (2019) Structural Equation Modeling: Applications Using Mplus. New Jersey: John Wiley & Sons.
  • 37. Comrey A. L. and Lee H. B. (2013) A First Course in Factor Analysis. New York: Psychology Press.
  • 38. Hair J. F., Black W. C., Babin B. J., Anderson R. E. and Tatham R. L. (2018) Multivariate Data Analysis. 8th ed. New Jersey: Pearson Prentice Hall.
  • 39. DeSilva (2014) Helping Measure Person-centred Care: A Review of Evidence about Commonly Used Approaches and Tools Used to Help Measure Person-centred Care. London: The Health Foundation.
  • 42. McCormack B., McCance T. and Maben J. (2013) Outcome Evaluation in the Development of Person-Centred Practice, in McCormack B., Manley K. and Titchen A. (Eds.) Practice Development in Nursing (Vol 2). Oxford: Wiley-Blackwell Publishing. pp. 190–211.
  • 43. Irish Platform for Patients Organisations Science & Industry (IPPOSI) (2018) Patient-centred Outcome Measures in Research & Healthcare: IPPOSI Outcome Report. Dublin, Ireland: IPPOSI. (Accessed 20/1/2023).

