• Research article
  • Open access
  • Published: 08 June 2021

Comparing formative and summative simulation-based assessment in undergraduate nursing students: nursing competency acquisition and clinical simulation satisfaction

  • Oscar Arrogante 1 ,
  • Gracia María González-Romero 1 ,
  • Eva María López-Torre 1 ,
  • Laura Carrión-García 1 &
  • Alberto Polo 1  

BMC Nursing volume 20, Article number: 92 (2021)

Formative and summative evaluation are widely employed in simulation-based assessment. The aims of our study were to evaluate the acquisition of nursing competencies through clinical simulation in undergraduate nursing students and to compare their satisfaction with this methodology under these two evaluation strategies.

Two hundred eighteen undergraduate nursing students participated in a cross-sectional study using a mixed-methods design. MAES© (self-learning methodology in simulated environments) sessions were developed to assess students by formative evaluation. Objective Structured Clinical Examination sessions were conducted to assess students by summative evaluation. Simulated scenarios recreated clinical cases of critical patients. Students' performance in all simulated scenarios was assessed using checklists. A validated questionnaire was used to evaluate satisfaction with clinical simulation. Quantitative data were analysed using the IBM SPSS Statistics version 24.0 software, whereas qualitative data were analysed using the ATLAS-ti version 8.0 software.

Most nursing students showed adequate clinical competence. Satisfaction with clinical simulation was higher when students were assessed using formative evaluation. The students' main complaints about summative evaluation concerned the reduced time for performing the simulated scenarios and the increased anxiety during their clinical performance.

The best solution to address students' complaints about summative evaluation is to orient them to the simulated environment. We recommend combining both evaluation strategies in simulation-based assessment, providing students with feedback in summative evaluation and evaluating their achievement of learning outcomes in formative evaluation.

Clinical simulation methodology has grown exponentially over the last few years and has gained acceptance in nursing education. Simulation-based education (SBE) is considered an effective educational methodology for nursing students to achieve the competencies needed for their professional future [ 1 – 5 ]. In addition, simulation-based educational programs have been shown to be more useful than traditional teaching methodologies [ 4 , 6 ]. As a result, most nursing faculties are integrating this methodology into their study plans [ 7 ]. SBE has the potential to shorten the learning curve for students, increase the fusion between theoretical knowledge and clinical practice, identify students' deficient areas, foster the acquisition of communication and technical skills, improve patient safety, standardise the curriculum and teaching contents, and offer observations of real-time clinical decision making [ 5 , 6 , 8 , 9 ].

SBE offers an excellent opportunity to perform not only observed competency-based teaching, but also the assessment of these competencies. Simulation-based assessment (SBA) is aimed at evaluating various professional skills, including knowledge, technical and clinical skills, communication, and decision-making, as well as higher-order competencies such as patient safety and teamwork skills [ 1 – 4 , 10 ]. Compared with other traditional assessment methods (i.e. written or oral tests), SBA offers the opportunity to evaluate actual performance in an environment similar to 'real' clinical practice, assess multidimensional professional competencies, and present standard clinical scenarios to all students [ 1 – 4 , 10 ].

The main SBA strategies are formative and summative evaluation. Formative evaluation is conducted to establish students' progression during the course [ 11 ]. This evaluation strategy helps educators improve students' deficient areas and test their knowledge [ 12 ]. Employing this evaluation strategy, educators give students feedback about their performance. Subsequently, students self-reflect to evaluate their learning and determine their deficient areas. In this sense, formative evaluation includes an ideal phase to achieve the purposes of this strategy: the debriefing [ 13 ]. The International Nursing Association for Clinical Simulation and Learning (INACSL) defines debriefing as a reflective process immediately following the simulation-based experience where ‘participants explore their emotions and question, reflect, and provide feedback to one another’. Its aim is ‘to move toward assimilation and accommodation to transfer learning to future situations’ [ 14 ]. Therefore, debriefing is a basic component for learning to be effective after the simulation [ 15 , 16 ]. Furthermore, MAES© (according to its Spanish initials for self-learning methodology in simulated environments) is a clinical simulation methodology created to perform formative evaluations [ 17 ]. MAES© specifically allows the evaluation of nursing competencies acquired by several nursing students at the same time. MAES© is structured through the union of other active learning methodologies such as self-directed learning, problem-based learning, peer education and simulation-based learning. Specifically, students acquire and develop competencies through self-directed learning, as they voluntarily choose the competencies to learn. Furthermore, this methodology encourages students to be the protagonists of their learning process, since they can choose the case they want to study, design the clinical simulation scenario and, finally, actively participate during the debriefing phase [ 17 ]. This methodology meets all the requirements defined by the INACSL Standards of Best Practice [ 18 ]. Compared to traditional simulation-based learning (where simulated clinical scenarios are designed by the teaching team and led by facilitators), the MAES© methodology (where simulated clinical scenarios are designed and led by students) provides nursing students with a better learning process and clinical performance [ 19 ]. Currently, the MAES© methodology is used in clinical simulation sessions with nursing students in some universities, not only in Spain but also in Norway, Portugal and Brazil [ 20 ].

In contrast, summative evaluation is used to establish the learning outcomes achieved by students at the end of the course [ 11 ]. This evaluation strategy helps educators evaluate students' learning, the competencies they have acquired and their academic achievement [ 12 ]. This assessment is essential in the education process to determine readiness and competence for certification and accreditation [ 10 , 21 ]. Accordingly, the Objective Structured Clinical Examination (OSCE) is commonly conducted in SBA as a summative evaluation of students' clinical competence [ 22 ]. Consequently, the OSCE has been used by educational institutions as a valid and reliable method of assessment. An OSCE most commonly consists of a ‘round-robin’ of multiple short testing stations, in each of which students must demonstrate defined clinical competencies while educators evaluate their performance according to predetermined criteria using a standardized marking scheme, such as checklists. Students rotate through these stations, where educators assess their performance in clinical examination, technical skills, clinical judgment and decision-making skills during the nursing process [ 22 , 23 ]. This strategy of summative evaluation incorporates actors performing as simulated patients. Therefore, the OSCE allows students' clinical competence to be assessed in a real-life simulated clinical environment. After the simulated scenarios, this evaluation strategy provides educators with an opportunity to give students constructive feedback according to the results achieved on the checklist [ 10 , 21 – 23 ].

Although both evaluation strategies are widely employed in SBA, there is scarce evidence about possible differences in satisfaction with clinical simulation when nursing students are assessed using formative versus summative evaluation. Considering the high satisfaction with formative evaluation perceived by our students during the implementation of the MAES© methodology, we wondered whether this satisfaction would be similar if the same simulated clinical scenarios were used in a summative evaluation. Additionally, we wondered why this satisfaction might differ between the two SBA strategies. Therefore, the aims of our study were to evaluate the acquisition of nursing competencies through clinical simulation methodology in undergraduate nursing students, as well as to compare their satisfaction with this methodology under the two SBA strategies, formative and summative evaluation. In this sense, our research hypothesis is that both SBA strategies are effective for acquiring nursing competencies, but that student satisfaction with formative evaluation is higher than with summative evaluation.

Study design and setting

This was a descriptive cross-sectional study using a mixed-methods approach, analysing both quantitative and qualitative data. The study was conducted from September 2018 to May 2019 in a University Centre of Health Sciences in Madrid (Spain). This centre offers Physiotherapy and Nursing Degrees.

Participants

The study included 3rd-year undergraduate students (106 students participated in MAES© sessions within the subject ‘Nursing care for critical patients’) and 4th-year undergraduate students (112 students participated in OSCE sessions within the subject ‘Supervised clinical placements – Advanced level’) of the Nursing Degree. It should be noted that the 4th-year undergraduate students had completed all their clinical placements and had to pass the OSCE sessions to achieve their certification.

Clinical simulation sessions

To assess the clinical performance of 3rd-year undergraduate students using formative evaluation, MAES© sessions were conducted. This methodology consists of 6 elements delivered in a minimum of two sessions [ 17 ]: (1) team selection and creation of a group identity (students are grouped into teams and create their own identity); (2) voluntary choice of the subject of study (each team freely chooses a topic that will serve as inspiration for the design of a simulation scenario); (3) establishment of a baseline and programming of the skills to be acquired through brainstorming (the students, by teams, decide what they know about the subject and then what they want to learn from it, as well as the clinical and non-technical skills they would like to acquire with the case they have chosen); (4) design of a clinical simulation scenario in which the students practice the skills to be acquired (each team commits to designing a scenario in the simulation room); (5) execution of the simulated clinical experience (another team, different from the one that designed the case, enters the high-fidelity simulation room and has a simulation experience); and finally (6) debriefing and presentation of the acquired skills (in addition to analysing the performance of the participants in the scenario, the students explain what they learned during the design of the case and look for evidence supporting the learning objectives).

Alternatively, OSCE sessions were developed to assess the clinical performance of 4th-year undergraduate students using summative evaluation. Both MAES© and OSCE sessions recreated critically ill patients with diagnoses of exacerbation of Chronic Obstructive Pulmonary Disease (COPD), acute coronary syndrome, haemorrhage in a postsurgical patient, and severe traumatic brain injury.

It should be noted that the implementation of all MAES© and OSCE sessions followed the Standards of Best Practice recommended by the INACSL [ 14 , 24 – 26 ]. Accordingly, all the stages included in a high-fidelity session were completed: pre-briefing, briefing, simulated scenario, and debriefing. Specifically, a session with all nursing students was carried out 1 week before the performance of the OSCE stations to establish a psychologically safe learning environment and familiarize students with this summative evaluation. In this pre-briefing phase, we implemented several activities based on practices recommended by the INACSL Standards Committee [ 24 , 25 ] and by Rudolph, Raemer, and Simon [ 27 ] for establishing a psychologically safe context. Although traditional OSCEs do not usually include a debriefing phase, we decided to include this phase in all OSCEs carried out in our university centre, since we consider it highly relevant to nursing students' learning process and their imminent professional career.

The critically ill patient's role was played by an advanced simulator mannequin (NursingAnne® by Laerdal Medical AS) in all simulated scenarios. A confederate (a health professional who acts in a simulated scenario) performed the role of a registered nurse or a physician who could help students as required. Occasionally, this confederate performed the role of a relative of the critically ill patient. Nursing students formed work teams of 2–3 students in all MAES© and OSCE sessions. Specifically, each work team formed in the MAES© sessions received a brief description of the simulated scenario 2 months in advance, and students had to propose 3 NIC (Nursing Interventions Classification) interventions [ 28 ], with 5 related nursing activities for each of them, to resolve the critical situation. In contrast, the critical situation was presented to each work team formed in the OSCE sessions 2 min before entering the simulated scenario. During all simulated experiences, professors monitored and controlled the simulation with a sophisticated computer program in a dedicated control room. All simulated scenarios lasted 10 min.

After each simulated clinical scenario concluded, a debriefing was carried out to give students feedback about their performance. Debriefings in MAES© sessions were conducted according to the Gather, Analyse, and Summarise (GAS) method, a structured debriefing model developed by Phrampus and O'Donnell [ 29 ]. According to this method, the debriefing questions used were: What went well during your performance? What did not go so well during your performance? How can you do better next time? Additionally, MAES© includes an expository phase in debriefings, where the students who performed the simulated scenario present the contributions of scientific evidence regarding its resolution [ 17 ]. Each debriefing lasted 20 min in MAES© sessions. In contrast, debriefings in OSCE sessions lasted 10 min and were carried out according to the Plus-Delta debriefing tool [ 30 ], a technique recommended when time is limited. Consequently, the debriefing questions were reduced to two: What went well during your performance? What did not go so well during your performance? Within these debriefings, professors communicated to students the total score obtained on the corresponding checklist. After all debriefings, students completed the questionnaires to evaluate their satisfaction with clinical simulation. In OSCE sessions, students had to report their satisfaction only with the scenario performed, which formed part of a series of clinical stations.

In summary, Table  1 shows the required elements for formative and summative evaluation according to the Standards of Best Practice for participant evaluation recommended by the INACSL [ 18 ]. It should be noted that our MAES© and OSCE sessions accomplished these required elements.

Instruments

Clinical performance

Professors assessed students' clinical performance using checklists (‘Yes’/‘No’). In MAES© sessions, checklists were based on the 5 most important nursing activities included in the NIC [ 28 ] selected by nursing students. Table  2 shows the checklist of the most important NIC interventions and their related nursing activities selected by nursing students in the Exacerbation of Chronic Obstructive Pulmonary Disease (COPD) simulated scenario. In contrast, checklists for evaluating OSCE sessions were based on nursing activities selected by consensus among professors, registered nurses, and clinical placement mentors. Nursing activities were divided into 5 categories: nursing assessment, clinical judgment/decision-making, clinical management/nursing care, communication/interpersonal relationships, and teamwork. Table  3 shows the checklist of nursing activities that nursing students had to perform in the COPD simulated scenario. During the execution of all simulated scenarios, professors checked whether or not the participants performed the selected nursing activities.

Clinical simulation satisfaction

To determine the satisfaction with clinical simulation perceived by nursing students, the Satisfaction Scale Questionnaire with High-Fidelity Clinical Simulation [ 31 ] was used after each clinical simulation session. This questionnaire consists of 33 items with a 5-point Likert scale ranging from ‘strongly disagree’ to ‘totally agree’. These items are divided into 8 scales: simulation utility, characteristics of cases and applications, communication, self-reflection on performance, increased self-confidence, relation between theory and practice, facilities and equipment, and negative aspects of simulation. Cronbach's α values for each scale ranged from .914 to .918, and the total scale presents satisfactory internal consistency (Cronbach's α = .920). This questionnaire includes a final question about any opinion or suggestion that participating students wish to express after the simulation experience.
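
For readers who want to verify this kind of internal-consistency figure on their own data, the following Python sketch computes Cronbach's α from a matrix of item responses. It is illustrative only: the responses are hypothetical and are not the study's data, and the original analysis was performed in SPSS, not Python.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                                 # number of items in the scale
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 5-point Likert responses: 6 students x 4 items of one satisfaction scale
responses = np.array([
    [5, 4, 5, 4],
    [4, 4, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 5],
    [4, 3, 4, 4],
    [2, 3, 2, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```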

Data analysis

Quantitative data were analysed using IBM SPSS Statistics version 24.0 software for Windows (IBM Corp., Armonk, NY, USA). Descriptive statistics were calculated to interpret the results obtained for demographic data, clinical performance, and satisfaction with clinical simulation. Differences between the two groups in the dependent variables were analysed using independent t-tests, and Cohen's d was calculated to estimate the effect size for each t-test. Statistical tests were two-sided, and statistical significance was set at α = 0.05. Subsequently, all students' opinions and comments were analysed using the ATLAS-ti version 8.0 software (Scientific Software Development GmbH, Berlin, Germany). All the information contained in these qualitative data was stored, managed, classified and organized with this software. Reiterated words, sentences or ideas were grouped into themes using a thematic analysis [ 32 ]. It should be noted that the students' opinions and comments were preceded by the letter ‘S’ (student) and numerically labelled.
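
As a hedged illustration of the group comparison described above (independent-samples t-test plus Cohen's d), the sketch below uses Python and SciPy with invented satisfaction scores; it reproduces the type of test reported in the paper, not the authors' SPSS analysis or data.

```python
import numpy as np
from scipy import stats

def cohens_d(group1, group2) -> float:
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    g1, g2 = np.asarray(group1, dtype=float), np.asarray(group2, dtype=float)
    n1, n2 = len(g1), len(g2)
    pooled_sd = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2))
    return (g1.mean() - g2.mean()) / pooled_sd

# Hypothetical mean satisfaction scores (1-5 Likert) for one questionnaire scale
formative = np.array([4.8, 4.6, 4.9, 4.7, 4.5, 4.8, 4.4, 4.9])
summative = np.array([4.1, 4.3, 3.9, 4.2, 4.0, 4.4, 3.8, 4.2])

t_stat, p_value = stats.ttest_ind(formative, summative)   # two-sided independent t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, Cohen's d = {cohens_d(formative, summative):.2f}")
```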

A total of 218 nursing students participated in the study (106 students were trained through MAES© sessions, whereas 112 students were assessed through OSCE sessions). The age of students ranged from 20 to 43 years (mean = 23.28; SD = 4.376). Most students were women ( n  = 184; 84.4%).

In the formative evaluation, professors verified that 93.2% of students adequately selected both the NIC interventions and their related nursing activities for the resolution of the simulated clinical scenario. Subsequently, these professors verified that 85.6% of the students who participated in each simulated scenario performed the nursing activities they had previously selected. In the summative evaluation, students obtained total scores ranging from 65 to 95 points (mean = 7.43; SD = .408).

Descriptive data for each scale of the satisfaction with clinical simulation questionnaire, t-tests, and effect sizes (d) of the differences between the two evaluation strategies are shown in Table  4 . Statistically significant differences were found between the two evaluation strategies for all scales of the satisfaction with clinical simulation questionnaire. Students' satisfaction with clinical simulation was higher on all scales of the questionnaire when they were assessed using formative evaluation, including the ‘negative aspects of simulation’ scale, where students perceived fewer negative aspects. The effect size of these differences was large (including for the total score of the questionnaire) (Cohen's d values > .8), except for the ‘facilities and equipment’ scale, whose effect size was medium (Cohen's d value > .5) [ 33 ].

Table  5 specifically shows descriptive data, t-tests, and effect sizes (d) of the differences between both evaluation strategies for each item of the clinical simulation satisfaction questionnaire. Statistically significant differences were found between the two evaluation strategies for all items of the questionnaire, except for the items ‘I have improved communication with the family’, ‘I have improved communication with the patient’, and ‘I lost calm during any of the cases’. Students' satisfaction with clinical simulation was higher in formative evaluation sessions for most items, except for the item ‘simulation has made me more aware/worried about clinical practice’, for which students reported being more aware and worried in summative evaluation sessions. Most effect sizes of these differences were small or medium (Cohen's d values ranged from .238 to .709) [ 33 ]. The largest effect sizes were obtained for the items ‘timing for each simulation case has been adequate’ (d = 1.107), ‘overall satisfaction of sessions’ (d = .953), and ‘simulation has made me more aware/worried about clinical practice’ (d = -.947). In contrast, the smallest effect sizes were obtained for the items ‘simulation allows us to plan the patient care effectively’ (d = .238) and ‘the degree of cases difficulty was appropriate to my knowledge’ (d = .257).

In addition, participating students provided 74 opinions or suggestions expressed through short comments. After the thematic analysis, most students' comments related to 3 main themes: the utility of the clinical simulation methodology (S45: ‘it has been a useful activity and it helped us to recognize our mistakes and fixing knowledge’, S94: ‘to link theory to practice is essential’), spending more time on this methodology (S113: ‘I would ask for more practices of this type‘, S178: ‘I feel very happy, but it should be done more frequently’), and its integration into other subjects (S21: ‘I consider this activity should be implemented in more subjects’, S64: ‘I wish there were more simulations in more subjects’). Finally, students' comments about summative evaluation sessions included 2 other main themes: the limited time of the simulation experience (S134: ‘time is short’, S197: ‘there is no time to perform activities and assess properly’) and students' anxiety (S123: ‘I was very nervous because people were evaluating me around’, S187: ‘I was more nervous than in a real situation’).

The most significant results obtained in our study are the acquisition of nursing competencies through clinical simulation by nursing students and the different levels of satisfaction with this methodology depending on the evaluation strategy employed.

Firstly, professors in this study verified that most students acquired the nursing competencies needed to resolve each clinical situation: most nursing students performed the majority of the nursing activities required for the resolution of each MAES© session and OSCE station. This result confirms the findings of other studies that have demonstrated nursing competency acquisition by nursing students through clinical simulation [ 34 , 35 ], and specifically of nursing competencies related to critical patient management [ 9 , 36 ].

Secondly, students' satisfaction under both evaluation strategies can be considered high for most items of the questionnaire, given their mean scores (quite close to the maximum score on the response scale of the satisfaction questionnaire). The high level of satisfaction with clinical simulation expressed by nursing students in this study is also congruent with the empirical evidence, which confirms that this methodology is a useful tool for their learning process [ 6 , 31 , 37 – 40 ].

However, satisfaction with clinical simulation was higher when students were assessed using formative evaluation. The students' main complaints about summative evaluation concerned the reduced time for performing the simulated scenarios and the increased anxiety during their clinical performance. Reduced time is a frequent complaint of students in OSCEs [ 23 , 41 ] and in clinical simulation methodology [ 5 , 6 , 10 ]. In this study, professors, registered nurses, and clinical placement mentors tested all simulated scenarios and their checklists, and they verified that the time was sufficient for their resolution. Another criticism of summative evaluation is increased anxiety. Several studies have demonstrated that students' anxiety increases during clinical simulation [ 42 , 43 ], and it is considered the main disadvantage of clinical simulation [ 1 – 10 ]. In this sense, anxiety may negatively influence students' learning process [ 42 , 43 ]. Although current simulation methodology can mimic the real medical environment to a great degree, it might still be questionable whether students' performance in the testing environment really represents their true ability. Test anxiety might increase in an unfamiliar testing environment; difficulty handling unfamiliar technology (i.e., a monitor, defibrillator, or other devices that may be different from the ones used in the examinee's specific clinical environment) or even the need to ‘act as if’ in an artificial scenario (i.e., talking to a simulator, examining a ‘patient’ knowing he/she is an actor or a mannequin) might all compromise examinees' performance. The best solution to reduce these complaints is the orientation of students to the simulated environment [ 10 , 21 – 23 ].

Nevertheless, it should be noted that the difference in satisfaction scores obtained in our study may be explained not by the choice of assessment strategy itself, but precisely by the different purposes of formative and summative assessment. In this sense, there is a component of anxiety that is intrinsic to summative assessment, which must certify the acquisition of competencies [ 10 – 12 , 21 ]. In contrast, this aspect is not present in formative assessment, which is intended to help students understand the distance to the expected level of competence, without penalty effects [ 10 – 12 ].

Both SBA strategies allow educators to evaluate students' knowledge and its application in a clinical setting. However, formative evaluation is identified as ‘assessment for learning’ and summative evaluation as ‘assessment of learning’ [ 44 ]. Using formative evaluation, educators' responsibility is to ensure not only what students are learning in the classroom, but also the outcomes of their learning process [ 45 ]. In this sense, formative assessment by itself is not enough to determine educational outcomes [ 46 ]. Consequently, a checklist for evaluating students' clinical performance was included in the MAES© sessions. Alternatively, educators cannot make any corrections to students' performance using summative evaluation [ 45 ]. Gavriel [ 44 ] suggests providing students with feedback in this SBA strategy. Therefore, a debriefing phase was included after each OSCE session in our study. The significance of debriefing recognised by the nursing students in our study is also congruent with most of the evidence found [ 13 , 15 , 16 , 47 ]. Nursing students appreciate feedback about their performance during the simulation experience and, consequently, consider debriefing the most rewarding phase of clinical simulation [ 5 , 6 , 48 ]. In addition, nursing students in our study expressed that they could learn from their mistakes during debriefing. Learning from error is one of the main advantages of clinical simulation shown in several studies [ 5 , 6 , 49 ], and mistakes should be considered learning opportunities rather than sources of embarrassment or punitive consequences [ 50 ].

Furthermore, the nursing students who participated in our study considered the practical utility of clinical simulation another advantage of this teaching methodology. This result is congruent with previous studies [ 5 , 6 ]. Specifically, our students indicated that this methodology is useful for bridging the gap between theory and practice [ 51 , 52 ]. In this sense, clinical simulation has proven to reduce this gap and, consequently, to shorten the distance between the classroom and clinical practice [ 5 , 6 , 51 , 52 ]. Therefore, as this teaching methodology relates theory to practice, it helps nursing students prepare for their clinical placements and future careers. According to Benner's model of skill acquisition in nursing [ 53 ], nursing students become competent nurses through this learning process, acquiring a degree of safety and clinical experience before their professional careers [ 54 ]. Although our research indicates that clinical simulation is a useful methodology for acquiring and learning competencies mainly related to the adequate management and nursing care of critically ill patients, this acquisition and learning process could be extended to most nursing care settings and their required nursing competencies.

Limitations and future research

Although checklists employed in OSCEs have been criticized for their subjective construction [ 10 , 21 – 23 ], ours were constructed with the expert consensus of nursing professors, registered nurses and clinical placement mentors. In addition, the self-reported questionnaire used to evaluate clinical simulation satisfaction has strong validity. All simulated scenarios were similar in MAES© and OSCE sessions (same clinical situations, patients, actors and number of participating students), although the debriefing method employed after them was different. This difference was due to the reduced time available in OSCE sessions. Furthermore, it should be pointed out that the two groups of students involved in our study were from different course years and were exposed to different SBA strategies. In this sense, future studies should compare nursing students' satisfaction with both SBA strategies in the same group of students and using the same debriefing method. Finally, future research should combine formative and summative evaluation for assessing the clinical performance of undergraduate nursing students in simulated scenarios.

Students need to be given feedback about their clinical performance when they are assessed using summative evaluation. Furthermore, their achievement of learning outcomes needs to be evaluated when they are assessed using formative evaluation. Consequently, we recommend combining both evaluation strategies in SBA. Although students expressed high satisfaction with the clinical simulation methodology, they perceived reduced time and increased anxiety when they were assessed by summative evaluation. The best solution is the orientation of students to the simulated environment.

Availability of data and materials

The datasets analysed during the current study are available from the corresponding author on reasonable request.

References

Martins J, Baptista R, Coutinho V, Fernandes M, Fernandes A. Simulation in nursing and midwifery education. Copenhagen: World Health Organization Regional Office for Europe; 2018.

Cant RP, Cooper SJ. Simulation-based learning in nurse education: systematic review. J Adv Nurs. 2010;66:3–15.

Chernikova O, Heitzmann N, Stadler M, Holzberger D, Seidel T, Fischer F. Simulation-based learning in higher education: a meta-analysis. Rev Educ Res. 2020;90:499–541.

Kim J, Park JH, Shin S. Effectiveness of simulation-based nursing education depending on fidelity: a meta-analysis. BMC Med Educ. 2016;16:152.

Ricketts B. The role of simulation for learning within pre-registration nursing education—a literature review. Nurse Educ Today. 2011;31:650–4.

Shin S, Park JH, Kim JH. Effectiveness of patient simulation in nursing education: meta-analysis. Nurse Educ Today. 2015;35:176–82.

Bagnasco A, Pagnucci N, Tolotti A, Rosa F, Torre G, Sasso L. The role of simulation in developing communication and gestural skills in medical students. BMC Med Educ. 2014;14:106.

Oh PJ, Jeon KD, Koh MS. The effects of simulation-based learning using standardized patients in nursing students: a meta-analysis. Nurse Educ Today. 2015;35:e6–e15.

Stayt LC, Merriman C, Ricketts B, Morton S, Simpson T. Recognizing and managing a deteriorating patient: a randomized controlled trial investigating the effectiveness of clinical simulation in improving clinical performance in undergraduate nursing students. J Adv Nurs. 2015;71:2563–74.

Ryall T, Judd BK, Gordon CJ. Simulation-based assessments in health professional education: a systematic review. J Multidiscip Healthc. 2016;9:69–82.

Billings DM, Halstead JA. Teaching in nursing: a guide for faculty. 4th ed. St. Louis: Elsevier; 2012.

Nichols PD, Meyers JL, Burling KS. A framework for evaluating and planning assessments intended to improve student achievement. Educ Meas Issues Pract. 2009;28:14–23.

Cant RP, Cooper SJ. The benefits of debriefing as formative feedback in nurse education. Aust J Adv Nurs. 2011;29:37–47.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM Simulation Glossary. Clin Simul Nurs. 2016;12:S39–47.

Dufrene C, Young A. Successful debriefing-best methods to achieve positive learning outcomes: a literature review. Nurse Educ Today. 2014;34:372–6.

Levett-Jones T, Lapkin S. A systematic review of the effectiveness of simulation debriefing in health professional education. Nurse Educ Today. 2014;34:e58–63.

Díaz JL, Leal C, García JA, Hernández E, Adánez MG, Sáez A. Self-learning methodology in simulated environments (MAES©): elements and characteristics. Clin Simul Nurs. 2016;12:268–74.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM : Participant Evaluation. Clin Simul Nurs. 2016;12:S26–9.

Díaz Agea JL, Megías Nicolás A, García Méndez JA, Adánez Martínez MG, Leal CC. Improving simulation performance through self-learning methodology in simulated environments (MAES©). Nurse Educ Today. 2019;76:62–7.

Díaz Agea JL, Ramos-Morcillo AJ, Amo Setien FJ, Ruzafa-Martínez M, Hueso-Montoro C, Leal-Costa C. Perceptions about the self-learning methodology in simulated environments in nursing students: a mixed study. Int J Environ Res Public Health. 2019;16:4646.

Oermann MH, Kardong-Edgren S, Rizzolo MA. Summative simulated-based assessment in nursing programs. J Nurs Educ. 2016;55:323–8.

Harden RM, Gleeson FA. Assessment of clinical competence using an objective structured clinical examination (OSCE). Med Educ. 1979;13:41–54.

Mitchell ML, Henderson A, Groves M, Dalton M, Nulty D. The objective structured clinical examination (OSCE): optimising its value in the undergraduate nursing curriculum. Nurse Educ Today. 2009;29:394–404.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM Simulation Design. Clin Simul Nurs. 2016;12:S5–S12.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM Facilitation. Clin Simul Nurs. 2016;12:S16–20.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation SM Debriefing. Clin Simul Nurs. 2016;12:S21–5.

Rudolph JW, Raemer D, Simon R. Establishing a safe container for learning in simulation: the role of the presimulation briefing. Simul Healthc. 2014;9:339–49.

Butcher HK, Bulechek GM, Dochterman JMM, Wagner C. Nursing Interventions Classification (NIC). 7th ed. St. Louis: Elsevier; 2018.

Phrampus PE, O’Donnell JM. Debriefing using a structured and supported approach. In: AI AIL, De Maria JS, Schwartz AD, Sim AJ, editors. The comprehensive textbook of healthcare simulation. New York: Springer; 2013. p. 73–84.

Decker S, Fey M, Sideras S, Caballero S, Rockstraw L, Boese T, et al. Standards of best practice: simulation standard VI: the debriefing process. Clin Simul Nurs. 2013;9:S26–9.

Alconero-Camarero AR, Gualdrón-Romero A, Sarabia-Cobo CM, Martínez-Arce A. Clinical simulation as a learning tool in undergraduate nursing: validation of a questionnaire. Nurse Educ Today. 2016;39:128–34.

Mayan M. Essentials of qualitative inquiry. Walnut Creek: Left Coast Press, Inc.; 2009.

Cohen L, Manion L, Morrison K. Research methods in education. 7th ed. London: Routledge; 2011.

Lapkin S, Levett-Jones T, Bellchambers H, Fernandez R. Effectiveness of patient simulation manikins in teaching clinical reasoning skills to undergraduate nursing students: a systematic review. Clin Simul Nurs. 2010;6:207–22.

McGaghie WC, Issenberg SB, Petrusa ER, Scalese RJ. Revisiting “a critical review of simulation-based medical education research: 2003-2009”. Med Educ. 2016;50:986–91.

Abelsson A, Bisholt B. Nurse students learning acute care by simulation - focus on observation and debriefing. Nurse Educ Pract. 2017;24:6–13.

Bland AJ, Topping A, Wood BA. Concept analysis of simulation as a learning strategy in the education of undergraduate nursing students. Nurse Educ Today. 2011;31:664–70.

Franklin AE, Burns P, Lee CS. Psychometric testing on the NLN student satisfaction and self-confidence in learning, design scale simulation, and educational practices questionnaire using a sample of pre-licensure novice nurses. Nurse Educ Today. 2014;34:1298–304.

Levett-Jones T, McCoy M, Lapkin S, Noble D, Hoffman K, Dempsey J, et al. The development and psychometric testing of the satisfaction with simulation experience scale. Nurse Educ Today. 2011;31:705–10.

Zapko KA, Ferranto MLG, Blasiman R, Shelestak D. Evaluating best educational practices, student satisfaction, and self-confidence in simulation: a descriptive study. Nurse Educ Today. 2018;60:28–34.

Kelly MA, Mitchell ML, Henderson A, Jeffrey CA, Groves M, Nulty DD, et al. OSCE best practice guidelines-applicability for nursing simulations. Adv Simul. 2016;1:10.

Cantrell ML, Meyer SL, Mosack V. Effects of simulation on nursing student stress: an integrative review. J Nurs Educ. 2017;56:139–44.

Nielsen B, Harder N. Causes of student anxiety during simulation: what the literature says. Clin Simul Nurs. 2013;9:e507–12.

Gavriel J. Assessment for learning: a wider (classroom-researched) perspective is important for formative assessment and self-directed learning in general practice. Educ Prim Care. 2013;24:93–6.

Taras M. Summative and formative assessment. Act Learn High Educ. 2008;9:172–82.

Wunder LL, Glymph DC, Newman J, Gonzalez V, Gonzalez JE, Groom JA. Objective structured clinical examination as an educational initiative for summative simulation competency evaluation of first-year student registered nurse anesthetists’ clinical skills. AANA J. 2014;82:419–25.

Neill MA, Wotton K. High-fidelity simulation debriefing in nursing education: a literature review. Clin Simul Nurs. 2011;7:e161–8.

Norman J. Systematic review of the literature on simulation in nursing education. ABNF J. 2012;23:24–8.

King A, Holder MGJr, Ahmed RA. Error as allies: error management training in health professions education. BMJ Qual Saf. 2013;22:516–9.

Higgins M, Ishimaru A, Holcombe R, Fowler A. Examining organizational learning in schools: the role of psychological safety, experimentation, and leadership that reinforces learning. J Educ Change. 2012;13:67–94.

Hope A, Garside J, Prescott S. Rethinking theory and practice: Pre-registration student nurses experiences of simulation teaching and learning in the acquisition of clinical skills in preparation for practice. Nurse Educ Today. 2011;31:711–7.

Lisko SA, O’Dell V. Integration of theory and practice: experiential learning theory and nursing education. Nurs Educ Perspect. 2010;31:106–8.

Benner P. From novice to expert: excellence and power in clinical nursing practice. Menlo Park: Addison-Wesley Publishing; 1984.

Nickless LJ. The use of simulation to address the acute care skills deficit in pre-registration nursing students: a clinical skill perspective. Nurse Educ Pract. 2011;11:199–205.

Acknowledgements

The authors appreciate the collaboration of nursing students who participated in the study.

STROBE statement

All methods were carried out in accordance with the 22-item STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) checklist for cross-sectional studies.

Funding

The authors have no sources of funding to declare.

Author information

Authors and affiliations.

Fundación San Juan de Dios, Centro de Ciencias de la Salud San Rafael, Universidad de Nebrija, Paseo de La Habana, 70, 28036, Madrid, Spain

Oscar Arrogante, Gracia María González-Romero, Eva María López-Torre, Laura Carrión-García & Alberto Polo

Contributions

OA: Conceptualization, Data Collection, Formal Analysis, Writing – Original Draft, Writing - Review & Editing, Supervision; GMGR: Conceptualization, Data Collection, Writing - Review & Editing; EMLT: Conceptualization, Writing - Review & Editing; LCG: Conceptualization, Data Collection, Writing - Review & Editing; AP: Conceptualization, Data Collection, Formal Analysis, Writing - Review & Editing, Supervision. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Oscar Arrogante .

Ethics declarations

Ethics approval and consent to participate.

The research committee of the Centro Universitario de Ciencias de la Salud San Rafael-Nebrija approved the study (P_2018_012). In accordance with ethical standards, all participants provided written informed consent and received written information about the study and its goals. Additionally, written informed consent for audio-video recording was obtained from all participants.

Consent for publication

Not applicable.

Competing interests

The authors declare they have no competing interests.

About this article

Cite this article.

Arrogante, O., González-Romero, G.M., López-Torre, E.M. et al. Comparing formative and summative simulation-based assessment in undergraduate nursing students: nursing competency acquisition and clinical simulation satisfaction. BMC Nurs 20 , 92 (2021). https://doi.org/10.1186/s12912-021-00614-2

Received: 09 February 2021

Accepted: 17 May 2021

Published: 08 June 2021

DOI: https://doi.org/10.1186/s12912-021-00614-2

Keywords

  • Clinical competence
  • High Fidelity simulation training
  • Nursing students

Assessment and Evaluation in Nursing Education: A Simulation Perspective

  • First Online: 29 February 2024

  • Loretta Garvey 7 &
  • Debra Kiegaldie 8  

Part of the book series: Comprehensive Healthcare Simulation (CHS)

Assessment and evaluation are used extensively in nursing education. These terms are often used interchangeably, which can create confusion, yet key differences exist between them.

Assessment in undergraduate nursing education is designed to ascertain whether students have achieved their potential and have acquired the knowledge, skills, and abilities set out within their course. Assessment aims to understand and improve student learning and must be at the forefront of curriculum planning to ensure assessments are well aligned with learning outcomes. In the past, the focus of assessment has often been on a single assessment. However, it is now understood that we must examine the whole system or program of assessment within a course of study to ensure integration and recognition of all assessment elements to holistically achieve overall course aims and objectives. Simulation is emerging as a safe and effective assessment tool that is increasingly used in undergraduate nursing.

Evaluation, however, is more summative in that it evaluates student attainment of course outcomes and their views on the learning process to achieve those outcomes. Program evaluation takes assessment of learning a step further in that it is a systematic method to assess the design, implementation, improvement, or outcomes of a program. According to Frye and Hemmer, student assessments (measurements) can be important to the evaluation process, but evaluation measurements come from various sources (Frye and Hemmer. Med Teacher 34:e288-e99, 2012). Essentially, program evaluation is concerned with the utility of its process and results (Alkin and King. Am J Evalu 37:568–79, 2016). The evaluation of simulation as a distinct program of learning is an important consideration when designing and implementing simulation into undergraduate nursing. This chapter will examine assessment and program evaluation from the simulation perspective in undergraduate nursing to explain the important principles, components, best practice approaches, and practical applications that must be considered.

References

Masters GN. Reforming Education Assessment: Imperatives, principles, and challenges. Camberwell: ACER Press; 2013.

MacLellan E. Assessment for Learning: the differing perceptions of tutors and students. Assess Eval High Educ. 2001;26(4):307–18.

Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9):S63–7.

Alinier G. Nursing students’ and lecturers’ perspectives of objective structured clinical examination incorporating simulation. Nurse Educ Today. 2003;23(6):419–26.

Norcini J, Anderson MB, Bollela V, Burch V, Costa MJ, Duvivier R, et al. 2018 Consensus framework for good assessment. Med Teach. 2018;40(11):1102–9.

Biggs J. Constructive alignment in university teaching: HERDSA. Rev High Educ. 2014;1:5–22.

Hamdy H. Blueprinting for the assessment of health care professionals. Clin Teach. 2006;3(3):175–9.

Welch S. Program evaluation: a concept analysis. Teach Learn Nurs. 2021;16(1):81–4.

Frye AW, Hemmer PA. Program evaluation models and related theories: AMEE Guide No. 67. Med Teach. 2012;34(5):e288–e99.

Johnston S, Coyer FM, Nash R. Kirkpatrick's evaluation of simulation and debriefing in health care education: a systematic review. J Nurs Educ. 2018;57(7):393–8.

ACGME. Glossary of Terms: Accreditation Council for Graduate Medical Education; 2020. https://www.acgme.org/globalassets/pdfs/ab_acgmeglossary.pdf .

Shadish WR, Luellen JK. History of evaluation. In: Mathison S, editor. Encyclopedia of evaluation. Sage; 2005. p. 183–6.

Lewallen LP. Practical strategies for nursing education program evaluation. J Prof Nurs. 2015;31(2):133–40.

Kirkpatrick DL. Evaluation of training. In: Craig RL, Bittel LR, editors. New York: McGraw Hill; 1967.

Cahapay M. Kirkpatrick model: its limitations as used in higher education evaluation. Int J Assess Tools Educ. 2021;8(1):135–44.

Yardley S, Dornan T. Kirkpatrick's levels and education 'evidence'. Med Educ. 2012;46(1):97–106.

Kirkpatrick J, Kirkpatrick W. An introduction to the new world Kirkpatrick model. Kirkpatrick Partners; 2021.

Bhatia M, Stewart AE, Wallace A, Kumar A, Malhotra A. Evaluation of an in-situ neonatal resuscitation simulation program using the new world Kirkpatrick model. Clin Simul Nurs. 2021;50:27–37.

Lippe M, Carter P. Using the CIPP model to assess nursing education program quality and merit. Teach Learn Nurs. 2018;13(1):9–13.

Kardong-Edgren S, Adamson KA, Fitzgerald C. A review of currently published evaluation instruments for human patient simulation. Clin Simul Nurs. 2010;6(1):e25–35.

Solutions S. Reliability and Validity; 2022

Rauta S, Salanterä S, Vahlberg T, Junttila K. The criterion validity, reliability, and feasibility of an instrument for assessing the nursing intensity in perioperative settings. Nurs Res Pract. 2017;2017:1048052.

Jeffries PR, Rizzolo MA. Designing and implementing models for the innovative use of simulation to teach nursing care of ill adults and children: a national, multi-site, multi-method study (summary report). Sci Res. 2006;

Unver V, Basak T, Watts P, Gaioso V, Moss J, Tastan S, et al. The reliability and validity of three questionnaires: The Student Satisfaction and Self-Confidence in Learning Scale, Simulation Design Scale, and Educational Practices Questionnaire. Contemp Nurse. 2017;53(1):60–74.

Franklin AE, Burns P, Lee CS. Psychometric testing on the NLN Student Satisfaction and Self-Confidence in Learning, Simulation Design Scale, and Educational Practices Questionnaire using a sample of pre-licensure novice nurses. Nurse Educ Today. 2014;34(10):1298–304.

Guise J-M, Deering SH, Kanki BG, Osterweil P, Li H, Mori M, et al. Validation of a tool to measure and promote clinical teamwork. Simul Healthc. 2008;3(4)

Millward LJ, Jeffries N. The team survey: a tool for health care team development. J Adv Nurs. 2001;35(2):276–87.

Author information

Authors and affiliations.

Federation University Australia, University Dr, Mount Helen, VIC, Australia

Loretta Garvey

Holmesglen Institute, Healthscope Hospitals, Monash University, Mount Helen, VIC, Australia

Debra Kiegaldie

Corresponding author

Correspondence to Loretta Garvey .

Editor information

Editors and affiliations.

Emergency Medicine, Icahn School of Medicine at Mount Sinai, Director of Emergency Medicine Simulation, Mount Sinai Hospital, New York, NY, USA

Jared M. Kutzin

School of Nursing, University of California San Francisco, San Francisco, CA, USA

Perinatal Patient Safety, Kaiser Permanente, Pleasanton, CA, USA

Connie M. Lopez

Eastern Health Clinical School, Faculty of Medicine, Nursing & Health Sciences, Monash University, Melbourne, VIC, Australia

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Garvey, L., Kiegaldie, D. (2023). Assessment and Evaluation in Nursing Education: A Simulation Perspective. In: Kutzin, J.M., Waxman, K., Lopez, C.M., Kiegaldie, D. (eds) Comprehensive Healthcare Simulation: Nursing. Comprehensive Healthcare Simulation. Springer, Cham. https://doi.org/10.1007/978-3-031-31090-4_14

DOI: https://doi.org/10.1007/978-3-031-31090-4_14

Published: 29 February 2024

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-31089-8

Online ISBN: 978-3-031-31090-4

eBook Packages: Medicine (R0)

Formative Assessment Strategies for Healthcare Educators

Formative assessments are those lower-stakes assessments that are delivered during instruction in some way, or 'along the way' so to speak. As an educator, it was always a challenge to identify whether or what my students were understanding, what skills they had acquired, and if or how I should adjust my teaching strategy to help improve their learning. I’m guessing I am not alone in this. In medical education, the pace is so fast that many instructors feel they do not have the time to spare on giving assessments ‘along the way’, and would rather focus on teaching everything students need for the higher-stakes exams. With medical education being incredibly intense and fast, this is completely understandable. However, there must be a reason so much research supports the effectiveness of administering formative assessments… along the way.

One reason formative assessments prove so useful is that they provide meaningful and actionable feedback: feedback that can be used by both the instructor and students.

Results from formative assessments should relate directly to the learning objectives established by the instructor, and because of this, the results provide trusted feedback for both the instructor and student. This is incredibly important. For instructors, it allows them to make immediate adjustments to their teaching strategy, and for students, it helps them develop a more reliable self-awareness of their own learning. These two things alone are very useful, but when combined, they can result in improved student outcomes.

Here are 5 teaching strategies for delivering formative assessments that provide useful feedback opportunities.  

1. Pre-Assessment:

Provides an assessment of students' prior knowledge, helps identify prior misconceptions, and allows instructors to adjust their approach or target certain areas.

  • When instructors have feedback from student assessments prior to class, it is easier to tailor the lesson to student needs.
  • Posing questions prior to class can help students focus on what the instructor thinks is important.
  • By assessing students before class, it helps ensure students are more prepared for what learning will take place in class.
  • Pre-assessments can provide more ‘in-class’ time flexibility: knowing ahead of time which knowledge gaps students may have allows the instructor to use class time more flexibly, with fewer ‘surprises’.

2. Frequent class assessments:

Provides students with feedback for learning during class, and focuses students on important topics, which helps increase learning gains.

  • Adding more formative assessments during class increases student retention.
  • Frequent formative assessments help students stay focused by giving them natural ‘breaks’ from either a lecture or the activity.
  • Multiple formative assessments can provide students with a “road-map” to what the instructor feels is important (i.e. what will appear on summative assessments).
  • By using frequent assessments, the instructor can naturally help students with topic or content transitions during a lecture or activity.
  • The data/feedback from the assessments can help instructors better understand which instructional methods are most effective; in other words, what works and what doesn’t.

3. Guided Study assessments (group or tutorial):

Provides students with opportunities to acquire the information needed to complete the assessment, for example through research or group work, and increases students’ self-awareness of their own knowledge gaps


  • Assessments where students are expected to engage in research allow them to develop and use higher-level thinking skills.
  • Guided assessments engage students in active learning either independently or through collaboration with a group.
  • Small group assessments encourage students to articulate their thinking and reasoning, and help them develop self-awareness about what they do and do not yet understand.
  • Tutorial assessments can provide the instructor real-time feedback for student misconceptions and overall understanding- allowing them to make important decisions about how to teach particular topics.

4. Take-Home assessments:

Allows students to preview the instructor’s assessment style; these assessments are low-stakes and self-paced so students can engage with the material, and they provide the instructor with formative feedback

  • Assessments that students can engage in outside of class give them a ‘preview’ of the information they will likely need to retrieve again on a summative exam.
  • When students take an assessment at home, the instructor can receive feedback with enough time to adjust the classroom instruction to address knowledge gaps or misconceptions.
  • Take home assessments can help students develop self-awareness of their own misunderstandings or knowledge gaps.


5. “Bedside” observation:

Informs students in clinical settings of their level of competence and learning, and may improve motivation and participation in clinical activities.

  • Real-time formative assessments can provide students with critical feedback related to the skills that are necessary for practicing medicine.
  • On-the-fly assessments can help clinical instructors learn more about student understanding as well as any adjustments they can make in their instruction.
  • Formative assessments in a clinical setting can equip clinical instructors with a valuable tool to help them make informed decisions around their teaching and student learning.
  • Bedside assessments provide a standardized way of formatively assessing students in a very unpredictable learning environment.

The challenge for many instructors is often the “how” of delivering formative assessments. Thankfully, educational technology can make it much easier to improve teaching and learning through formative assessments (and feedback). DaVinci Education’s Leo platform provides multiple ways to deliver formative assessments. With Leo’s exam feature you can:

  • Assign pre-class, in-class or take-home quizzes
  • Deliver IRATs used during TBL exercises to assess student individual readiness
  • Deliver GRATs used during TBL exercises by using Leo’s digital scratch-off tool to encourage collaboration and assess group readiness
  • Monitor student performance in real-time using Leo’s Monitor Exam feature
  • Customize student feedback options during or following an assessment



Academic staff perspectives of formative assessment in nurse education

Affiliation.

  • 1 Thames Valley University, Faculty of Health and Human Sciences, Paragon House, Boston Manor Road, Brentford, Middx TW8 9GA, UK. [email protected]
  • PMID: 19818688
  • DOI: 10.1016/j.nepr.2009.08.007

High quality formative assessment has been linked to positive benefits on learning while good feedback can make a considerable difference to the quality of learning. It is proposed that formative assessment and feedback is intricately linked to enhancement of learning and has to be interactive. Underlying this proposition is the recognition of the importance of staff perspectives of formative assessment and their influence on assessment practice. However, there appears to be a paucity of literature exploring this area relevant to nurse education. The aim of the research was to explore the perspectives of twenty teachers of nurse education on formative assessment and feedback of theoretical assessment. A qualitative approach using semi-structured interviews was adopted. The interview data were analysed and the following themes identified: purposes of formative assessment, involvement of peers in the assessment process, ambivalence of timing of assessment, types of formative assessment and quality of good feedback. The findings offer suggestions which may be of value to teachers facilitating formative assessment. The conclusion is that teachers require changes to the practice of formative assessment and feedback by believing that learning is central to the purposes of formative assessment and regarding students as partners in this process.

Copyright 2009 Elsevier Ltd. All rights reserved.

  • Education, Nursing / standards*
  • Educational Measurement / methods*
  • Faculty, Nursing / standards*
  • Feedback, Psychological
  • Qualitative Research
  • United Kingdom


Improve Learning With Formative Assessment Tools

formative assessment nursing education

Assessments are more than just tests and grades. When developed and delivered thoughtfully, assessments can shape the educational experience by offering insights into students’ progress and paving the way for personalized instruction. One of the more powerful types of assessment is a formative assessment. Let’s delve into this essential assessment type and explore how it can enhance the learning experience for higher educators and students alike.

What are Formative Assessments?

Formative assessments are educational evaluations designed to monitor students’ learning progress throughout a course or instructional period. Unlike summative assessments, which typically occur at the end of a unit or course to measure overall comprehension, formative assessments are ongoing and provide continuous feedback to both students and instructors. In higher education, these assessments serve as checkpoints, allowing educators to adjust their teaching strategies in real-time based on students’ performance and understanding.

Formative assessment tools are essential to enhancing student learning by providing tangible evidence of student progress, informing instructional planning and decision making. By carefully aligning formative assessments to learning outcomes, instructors can ensure their assessment strategy is helping students get closer to achieving their learning goals.

What is a Common Formative Assessment?

Formative assessments come in various shapes and sizes, tailored to fit different subjects and instructional styles. Some common formative assessments include a quick quiz at the beginning of a class to gauge prior knowledge, peer evaluations during group activities, and open-ended class discussions. Many institutions are incorporating an innovative element into their formative assessment strategies.


How Can Video be Used in Formative Assessments?

A study of preservice teachers showed that when woven into formative assessments, student videos facilitate reflection, support inquiry into success and failure, and influence plans for self-improvement. For example, if students learn a demonstrable skill in your course, instructors could ask students to record themselves performing it: presentations, speeches, violin solos, nursing skills check-offs, counseling sessions, student teaching observations, and more. After performing the specific skill, students can review their own performance and receive feedback from instructors. 


Video brings a new dimension to formative assessments, offering a rich multimedia experience that caters to diverse learning preferences. Educators can leverage video content to simulate real-world scenarios, encourage self-reflection, and foster critical thinking skills. With video and feedback solutions like GoReact, instructors can annotate directly on the video, providing targeted feedback that enhances students’ comprehension and retention.

By using formative assessment tools like video, educators can create a more dynamic and engaging learning environment. Whether it’s analyzing a lab experiment, critiquing a performance, or practicing language skills, video assessments offer endless possibilities for active learning. In this recent webinar, a panel of experts dives even deeper into the details of ways your institution can start effectively integrating videos into your formative assessment strategy.

Try Video Assessment With GoReact

Ready to add a new formative assessment tool to your strategy? Request a demo of GoReact today and discover how adding video to your assessment strategy can transform your institution’s learning experience. From real-time feedback to insightful analytics, GoReact empowers educators to elevate student learning outcomes through effective formative video assessments.


  • Open access
  • Published: 02 May 2024

The Ottawa resident observation form for nurses (O-RON): evaluation of an assessment tool’s psychometric properties in different specialties

  • Hedva Chiu 1 ,
  • Timothy J. Wood 2 ,
  • Adam Garber 3 ,
  • Samantha Halman 4 ,
  • Janelle Rekman 5 ,
  • Wade Gofton 6 &
  • Nancy Dudek 7  

BMC Medical Education volume  24 , Article number:  487 ( 2024 ) Cite this article

173 Accesses

1 Altmetric

Metrics details

Workplace-based assessment (WBA) used in post-graduate medical education relies on physician supervisors’ feedback. However, in a training environment where supervisors are unavailable to assess certain aspects of a resident’s performance, nurses are well-positioned to do so. The Ottawa Resident Observation Form for Nurses (O-RON) was developed to capture nurses’ assessment of trainee performance and results have demonstrated strong evidence for validity in Orthopedic Surgery. However, different clinical settings may impact a tool’s performance. This project studied the use of the O-RON in three different specialties at the University of Ottawa.

O-RON forms were distributed on Internal Medicine, General Surgery, and Obstetrical wards at the University of Ottawa over nine months. Validity evidence related to quantitative data was collected. Exit interviews with nurse managers were performed and content was thematically analyzed.

179 O-RONs were completed on 30 residents. With four forms per resident, the O-RON’s reliability was 0.82. Global judgement responses and frequency of concerns were correlated ( r  = 0.627, P  < 0.001).

Conclusions

Consistent with the original study, the findings demonstrated strong evidence for validity. However, the number of forms collected was less than expected. Exit interviews identified factors impacting form completion, which included clinical workloads and interprofessional dynamics.

Peer Review reports

As the practice of medicine evolves, medical educators strive to refine the teaching curriculum and find innovative ways to train physicians who can adapt to and thrive within this changing landscape. In 2015, The Royal College of Physicians & Surgeons of Canada published the updated CanMEDS competency framework [ 1 ], which emphasizes the importance of intrinsic roles in addition to the skills needed to be a medical expert. These intrinsic roles are important in developing well-rounded physicians, but are less tangible and can be challenging to integrate into traditional assessment formats [ 2 , 3 , 4 ]. Knowing this, medical educators are given the task of developing new ways to assess these skills in resident physicians.

Another innovation in medical education is the shift from a traditional time-based curriculum to a competency-based curriculum (or competency-based medical education, “CBME”). This shift allows for an increased focus on a resident’s learning needs and achievements. It encourages a culture of frequent observed formative assessments [ 5 ]. This shift calls for assessment tools that accurately reflect a resident’s competence and can be feasibly administered in the training environment.

Workplace-based assessments (WBA) are considered one of the best methods to assess professional competence in the post-graduate medical education curriculum because they can be feasibly administered in the clinical setting [ 6 , 7 ]. Most WBA relies on physician supervisors making observations of residents. However, restraints of a complex and busy training environment mean that supervisors are not always available to observe some aspects of a resident’s performance. For example, when a resident rounds on patients independently or attends to on-call scenarios in the middle of the night, the physician supervisor may not be present. Physician supervisors may also not be present during multi-disciplinary team meetings where residents participate in the co-management of patients with other health professionals.

On a hospital ward, the health professional that most often interacts with a resident is a nurse. Given this, it makes sense to consider obtaining assessment information from a nurse’s viewpoint. This has the potential to be valuable for several reasons. First, they may provide authentic information about resident performance because residents may perform differently when they know that they are not being directly observed by their physician supervisors [ 8 ]. Second, nurses play an integral role in patient care, and often serve as a liaison between patients, their families and physicians regarding daily care needs and changes to clinical conditions. This liaison role provides nurses with a unique perspective on the intrinsic roles of physician competence in patient management, communication, and leadership skills that would also improve collaboration between nurses and physicians [ 9 ]. As such, using a WBA tool that incorporates nursing-identified elements of physician competence to assess a resident’s ability to demonstrate those elements in their workplace is important in training future physicians.

Although assessment of resident performance by nurses is captured with multi-source feedback (MSF) tools, there are some concerns if relying solely on this approach, as MSF tools generally present the data as an aggregate score regardless of individual rater roles. This convergence of ratings may not be helpful in feedback settings because it disregards how behaviour can change in different contexts (i.e., the specific situation and the relationship of the rater with the one being rated) [ 10 ]. Furthermore, there is evidence that different groups of health professionals rate the same individuals differently; more specifically, there is evidence to suggest that nursing perspectives often differ from those of other health professionals and physician supervisors [ 11 , 12 , 13 , 14 , 15 , 16 ]. When the groups are combined, the perspective of one group can be lost. It is not a weakness that different groups have different perspectives, but it needs to be documented to provide more useful formative feedback. Therefore, there is a need for a tool that uniquely captures the nurses’ perspective of resident performance.

To address this issue, Dudek et al. (2021) developed The Ottawa Resident Observation Form for Nurses (O-RON), a tool that captures nurses’ assessment of resident performance in a hospital ward environment (Fig.  1 ). This tool allows nurses to identify concerning behaviours in resident performance. The tool was implemented and studied in the Orthopedic Surgery Residency Program at the University of Ottawa, Canada. Nurses voluntarily completed the O-RON and indicated that it was easy to use. Validity evidence related to internal processes was gathered by calculating the reliability of the scale using a generalizability analysis and decision study. The results showed that with eight forms per resident the reliability of the O-RON was 0.8 and with three forms per resident, the reliability was 0.59. A reliability of 0.8 is considered acceptable for summative assessments [ 17 ]. These results suggest that the O-RON could be a promising WBA tool that provides residents and training programs with important feedback on aspects of residents’ performance on a hospital ward through the eyes of the nurses.

figure 1

The Ottawa resident observation form for nurses (O-RON)

The O-RON garnered international interest. Busch et al. translated the O-RON into Spanish and implemented it in two cardiology centres in Buenos Aires [ 18 ]. Their findings also demonstrated strong evidence for validity, although they required a higher number of forms ( n  = 60) to achieve high reliability (G coefficient = 0.72).

The demonstrated psychometric characteristics of the tool for these two studies were determined in single specialties. Local assessment culture, clinical setting, interprofessional dynamics and rater experience are some of the factors that can affect how a nurse may complete the O-RON [ 19 , 20 , 21 , 22 , 23 ]. These external factors can lead to measurement errors, which in turn would impact the generalizability and validity of the O-RON. Therefore, further testing is vital to determine whether the O-RON will perform consistently in other environments [ 24 , 25 ].

The primary objective of this project was to collect additional validity evidence related to the O-RON by implementing it in multiple residency programs including both surgical and medical specialties, which represent different assessment cultures and clinical contexts. However, it became evident throughout the data collection period that the number of completed forms was lower than anticipated. As such, there needed to be a shift in focus to also explore challenges surrounding the implementation of a new assessment tool in different programs. Therefore, the secondary objective of this study was to better understand the barriers to the implementation of the O-RON.

This study sought to assess the psychometric properties of the O-RON in three specialties at the University of Ottawa, Canada, using modern validity theory as a framework to guide the evaluation of the O-RON [ 25 ]. The O-RON was used in the Core Internal Medicine, General Surgery, and Obstetrics and Gynecology residency programs at the University of Ottawa. These programs did not have an assessment tool completed exclusively by nurses to evaluate their residents prior to the start of the project. They agreed to provide the research team with the anonymized data from this tool to study its psychometric properties. Ethics approval was granted by the Ottawa Health Science Network Research Ethics Board.

Dudek et al. (2021) developed the O-RON through a nominal group technique where nurses identified dimensions of performance that they perceived as reflective of high-quality physician performance on a hospital ward. These were included as items, of which there were 15, on the O-RON. Each item is rated on a 3-point frequency scale (no concerns, minor concerns, major concerns) with a fourth option of “unable to assess”. There is an additional “yes/no” question regarding whether the nurse would want to work with the resident as a team member (“global assessment question”) and a space for comments.
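
The form structure described above can be represented concretely. The following is a minimal sketch (not the authors' schema) of one completed O-RON form as a data structure; the class and field names are illustrative assumptions made for this example.

```python
# A sketch of one completed O-RON form, based on the description above:
# 15 items on a 3-point frequency scale plus "unable to assess", a yes/no
# global assessment question, and free-text comments. Names are illustrative.
from dataclasses import dataclass, field
from typing import Optional

RATINGS = ("no concerns", "minor concerns", "major concerns", "unable to assess")

@dataclass
class OronForm:
    resident_code: str                                  # anonymized resident identifier
    nurse_code: str                                     # anonymized rater identifier
    item_ratings: list = field(default_factory=list)    # 15 entries drawn from RATINGS, or None if blank
    would_want_on_team: Optional[bool] = None           # global assessment question
    comments: str = ""

# Example form with no concerns on all 15 items
form = OronForm("R01", "N07", ["no concerns"] * 15, would_want_on_team=True)
```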

Residents from the three residency programs were provided a briefing by their program director on the use of the O-RON prior to the start of the project. Nurses on the internal medicine, general surgery, and obstetrics wards at two hospital campuses were asked to complete the O-RON for the residents on rotation. Nurse managers reviewed the form with the nurses at the start of the project and were available for questions. This was consistent with how the tool was used in the original study. At the end of each four-week rotation, 10 O-RON forms per resident were distributed to the nurse manager, who then distributed them to their nurses. Nurses were assigned a code by the nurse manager so that they could anonymously complete the forms. Any nurse who wished to provide an assessment of a resident received a form to complete and returned it to the nurse manager within two weeks. The completed forms were collected by the research assistant at the two-week mark, who collated the data for each resident and provided a summary sheet to their program director. The research assistant assigned a code for each resident and recorded the anonymized O-RON data for the study analysis.

Sample size

In the original study [ 26 ] of the O-RON the results demonstrated a strong reliability coefficient (0.80) with a sample of eight forms per resident. Using the procedure described by Streiner and Norman [ 24 ], an estimate of 256 forms in total was needed to achieve a desired reliability of 0.80 with a 95% confidence interval of +/- 10%. Typically, there were 16 residents ranging from PGY1-3 participating in a general internal medicine ward, 16 residents ranging from PGY 1–5 participating in a general surgery ward, and eight residents ranging from PGY1-5 participating in a labour and delivery ward at any time. To obtain at least 256 forms per specialty, and considering that nurses were unlikely to complete 10 forms on each resident each time and that fluctuations in resident numbers between rotations were expected, a collection period of six months was established.

Response to low participation rate

The completion rate was closely monitored throughout the collection period. There was a low rate of participation after six rounds of collection. In response, we initiated improvement processes including (a) displaying photos of the residents with their names in the nursing office, (b) displaying a poster about the project as a reminder for the nurses in the nursing office, and (c) reaching out to nurse managers to review the project. We also extended the collection period by an additional three rotations, for a total of nine rotations, to allow time for the improvement processes to work.

At the end of the extended collection period, we conducted semi-structured interviews with each nurse manager individually at each of the O-RON collection sites to further explore reasons behind low participation rate.

Quantitative analyses

Analyses were conducted using SPSS v27 statistical software. Rating response frequencies were calculated across scale items and “yes/no” frequencies were calculated for the global assessment question. Chi-square tests were conducted on each item against the global assessment response to determine the effect of concerns on the global assessment. Total O-RON score was calculated for the purposes of data analysis by counting the number of items that had a minor or major rating and dividing by the number of items that had a valid rating. A higher score indicated more concerns. Invalid rating items with either “unable to assess” as a response or left blank were excluded from this analysis. Tests of between-subjects effects were conducted between total O-RON score and the global assessment rating.
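
As a concrete illustration of the scoring rule just described, here is a minimal sketch in Python (not the authors' SPSS syntax); the rating labels and function name are assumptions made for illustration.

```python
# Minimal sketch of the total O-RON score described above (not the authors' code).
# Assumes each completed form is a list of 15 item ratings, where a rating is one of
# "no concerns", "minor concerns", "major concerns", "unable to assess", or None (blank).

VALID = {"no concerns", "minor concerns", "major concerns"}
CONCERN = {"minor concerns", "major concerns"}

def total_oron_score(item_ratings):
    """Proportion of validly rated items flagged with a minor or major concern.

    Items rated "unable to assess" or left blank are excluded, matching the
    exclusion rule described in the Methods. A higher score means more concerns.
    """
    valid = [r for r in item_ratings if r in VALID]
    if not valid:
        return None  # no valid ratings on this form
    concerns = sum(1 for r in valid if r in CONCERN)
    return concerns / len(valid)

# Example: a form with one minor concern, one "unable to assess", and one blank item
form = ["no concerns"] * 12 + ["minor concerns", "unable to assess", None]
print(total_oron_score(form))  # 1 concern / 13 valid items ≈ 0.077
```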

The reliability of the O-RON was calculated using a generalizability analysis (g-study) and the number of forms required for an acceptable level of reliability was determined through a decision study. These outcomes contributed to validity evidence related to internal processes.

A g-study calculates variance components, which can be used to derive the reliability of the O-RON. Variance components are associated with each facet used in the analysis and reflect the degree to which overall variance in scores is attributed to each facet. For this study, this was calculated using the mean total scores, which were analyzed using a between subjects ANOVA with round as a grouping facet, and people and forms as nested facets. Using the results from the generalizability analysis, a decision study derives estimates of reliability based on varying the facets used in the analysis. For our study, we varied the number of forms per resident to understand its impact on the reliability of the O-RON.
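
To make the decision-study logic concrete, the following sketch projects reliability from variance components under a simplified person-by-form design. This is an illustration under stated assumptions, not the authors' analysis; the variance components are hypothetical values chosen only to be roughly consistent with the proportions reported in the Results.

```python
# Illustrative decision-study projection (a sketch, not the authors' analysis).
# Assumes a simple design in which residents (persons) are the object of measurement
# and forms are a random facet, so the error term is the residual variance divided by
# the number of forms averaged per resident.

def projected_reliability(var_person, var_residual, n_forms):
    """G coefficient when averaging over n_forms forms per resident."""
    return var_person / (var_person + var_residual / n_forms)

# Hypothetical variance components chosen only to illustrate the calculation
var_person, var_residual = 0.54, 0.46
for n in (1, 3, 4, 8):
    print(n, round(projected_reliability(var_person, var_residual, n), 2))
```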

Qualitative analyses

Semi-structured exit interviews were conducted by the study principal investigator (HC) with each nurse manager. They were voice-recorded and transcribed into text documents. Using conventional content analysis, interview content was thematically analysed and coded by two of the study’s co-investigators (HC and ND) independently. The codes were compared between the two researchers and consensus was reached. This coding structure was then used to code all six interviews.

Quantitative

180 O-RONs were completed on 30 residents over the study period with an average of six forms per resident (range = 1–34). The large range is due to some residents being assessed on more than one rotation. One form was excluded from analysis because it had a value of “could not assess” for every item. A total of 179 O-RONs were included for analysis.

The Obstetrics units had the highest frequency of O-RONs completed (74.3%), followed by General Surgery (16.2%), and Internal Medicine (9.5%). Due to the small numbers within each specialty, subsequent analysis was done on the aggregate data.

Across forms and items, the frequency of reported rating in descending order was “no concerns” (80.7%), “minor concerns” (11.5%), “unable to assess” (3.0%), and “major concerns” (1.9%). Blank items accounted for 2.9% of responses. For the global assessment rating, 92.3% of valid responses were “yes” for whether they wanted this physician on their team (Table  1 ).

In terms of item-level analysis, nurses reported the least concern for item 13 (“acts with honesty and integrity”) (90.5% - no concerns). They reported the most major concerns for item 1 (“basic medical knowledge is appropriate to his/her stage of training”) (4.5% - major concerns), and the most overall concerns for item 8 (“Accepts feedback/expertise from nurses appropriately”) (21.8% - minor + major concerns). The raters were most frequently unable to assess item 15 (“advocates for patients without disrupting, discrediting, other HCP”) at 7.8%.

2 × 2 comparison tests were used to assess the presence of concern as a function of the response to the global assessment question (Table  2 ). Since there was only a small number of major concerns for each item, minor and major concerns were combined (“any concerns”). All items except four (items 10, 12, 13 and 14) showed a statistically significant difference ( P  < 0.01). Tests of between-subjects effects were used to compare total O-RON score and response to the global assessment question, which showed a correlation between global response and frequency of concerns ( r  = 0.627, P  < 0.001).
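
For readers unfamiliar with the item-level comparison described above, here is a brief sketch of a 2 × 2 chi-square test; the contingency table counts are invented for illustration only (the study's actual counts are in Table 2).

```python
# A sketch of an item-level 2 x 2 comparison (hypothetical counts, not study data).
# Rows: global assessment response (yes / no to "would want on team");
# columns: whether the item received any concern (no / yes).
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [150, 12],   # global "yes"
    [5, 10],     # global "no"
])
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
```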

The g-study results showed that people (object of measurement) accounted for 54% of the variance. Rotation did not account for any variance indicating that ratings were similar across all nine rotations. The decision study results showed that with three forms per resident, the reliability was 0.78 and with four forms, the reliability was 0.82.

Qualitative

Factors impacting the implementation of the O-RON

Five themes were identified as factors that had an impact, whether positive or negative, on the implementation of the O-RON (Table  3 ).

Strong project lead on the unit

Units where clinical managers described strong involvement of a lead person (usually themselves), who was persistent in reminding nurses to complete O-RONs and passionate about using the tool, had higher O-RON completion rates. Conversely, if there was not such a strong lead, there was a much lower O-RON completion rate.

“If I was to step away from this position and it was a different manager coming in, would they do the same that I would do in this process, I don’t know. So[…]I know it works okay for me because […] I don’t see it as a huge investment of time[…]but if I’m off or I’m not here[…]it’s finding a nurse who would be responsible to do it.” (Participant 2) . “[…]from the leadership perspective, we talk about it, but we don’t own it […] The feedback doesn’t change anything to me as a leader, as a manager. […] Not that I don’t concentrate on the O-RON, I do talk about it, but I’m not passionate about it.” (Participant 4) .

Familiarity with residents

Clinical managers expressed the importance of having collegial relationships with the residents. This was usually facilitated by having a smaller number of residents or having in-person ward rounds. Because of this, the nurses knew the residents better, had more time to work with them personally, and were able to match their faces to their names more frequently. Conversely, if a unit employed virtual rounds, had a lot of residents, or mainly used technology to communicate with residents, the nurses were unfamiliar with the residents and felt they were not able to comment as easily on resident performance.

“So with our group, […] our […] residents, is tiny. There’s two of them on at a time in a month. Maybe only one. So, […] they’re here 24 hours, with our nurses, working, they get to know each other quite well, so, that could be a contributing factor potentially.” (Participant 2) . “Where before we used to have rounds and the residents would come and the staff would come, so we could have that connection with the resident. We could put a face to them, a name to them. We knew who they were. Where, with EPIC [electronic medical record system], first of all the nurses don’t attend EPIC rounds. We don’t see the residents, we don’t see the staff. Like I have no idea, who […] is because I don’t see him. So, it’s very difficult for me to do an evaluation on someone I have not met, not seen, and only see through EPIC. A lot of the conversations the nurses have are also through EPIC, they’ll send an EPIC chat. The resident will email back. So, you know, it’s missing that piece.” (Participant 1) .

Nursing workload

Clinical managers mentioned that completing the O-RON was an additional item on top of their existing full workload. This was largely driven by an overall shortage of staff and a large number of new nurses joining the units. The new nurses were trying to learn new protocols and clinical skills and had little capacity to do extra work.

“I mean every day we are working short, right? We’re missing one or two nurses. I have nurses from other units, I have nurses that have never been here. So yes, I could see how that would have contributed to having a lower response.” (Participant 1) .

“I’m going to say about 60% of our staff have less than one year experience and we’ve also re-introduced RPNs to the unit. And so the unit right now is really burdened with new staff. But it’s not only new staff, but it’s new staff whose skillset are not as advanced as what they potentially would have been five years ago. And so the staff are really concentrating on beefing up their skillset, just really integrating into the unit. And so, there is really not a lot of thought or concentration necessarily on trying to do the extras, such as doing the surveys.” (Participant 4) .

Work experience of nurses

In addition to new nursing staff having less time for non-essential tasks, clinical managers also pointed out that newer nurses tended to be more hesitant to comment on a resident’s performance compared to a more experienced nurse.

“A lot of junior staff that I don’t know if they would take that initiative to […] put some feedback on a piece of paper for a resident even though it’s almost untraceable to them. You know, a little bit more timid and shy.” (Participant 6) .

“Most of them [those who filled out the form] were the […]mid-career nurses. So, right now, my mid-career nurses have been around for five to ten years. […] And so those nurses are the ones who are still very engaged, wanting to do different projects. Those were the nurses that were doing it, it was not the newer hires, and it was not the nurses who have been here for, you know, 20 + years.” (Participant 4) .

Culture of assessment

All clinical managers interviewed noted that there was not a strong culture of nurses providing any feedback or assessment of residents prior to the implementation of the O-RON. There may have been informal discussions and feedback, but there was no formal process or tool.

Suggestions for improvement

Four suggested areas for improvement of the implementation of the O-RON were identified (Table  3 ).

Mixed leadership roles

Clinical managers suggested that having physicians promote the O-RON in addition to themselves may be helpful.

“But I’m even thinking, like if it didn’t just come from me, if the staff [doctor] would come around and say, “Hey guys, I would really appreciate it.” […] say if it came just from me, from oh the manager is asking for us to fill out another sheet, or something to that effect. It may help a little bit.” (Participant 1) . “I think at the huddle, if one of you can come (Staff physician), although we mention it, but I think it would be important, even if it’s only once a month, you know. […] Or you know, come on the unit anytime and just you know, remind the nurses.” (Participant 3) .

Increase familiarity between nurses and residents

Clinical managers suggested increasing familiarity between nurses and residents by having more in-person rounds where residents regularly attend and involving the residents in the distribution of O-RONs.

“My recommendation would be to bring back rounds, in-person rounds. Also, it would be nice if we would have like an introduction. ‘This is the resident for Team C,’ you know something to that effect. I know they come around and they sit, and they look at EPIC and they chat, but we sometimes don’t make the connection of who is this resident, you know, what team is he part of.” (Participant 1) .

“I guess maybe a suggestion would be to have the residents go around, and not every single day, but maybe once a week, prioritise 30 minutes and take their own surveys and go up to the nursing staff and say, “Hey, I’m looking for your feedback, will you complete this survey for me?” And then hand the nurse the survey that relates directly to that particular resident.” (Participant 4) .

Transparent feedback procedure

Clinical managers highlighted the importance of having a clear loop back procedure that allows the nurses to know that their feedback is being reviewed and shared with the residents. They felt that this is very important for maintaining nursing participation in resident assessment.

“I guess the one question is, they fill this in, but now we’re getting to a point of, how do we know that information or how is that information getting to the residents? What sort of structure is that? So that at least I can have a conversation explaining that yeah, when you fill this in, this is the next steps that happen of how it loops back with the individuals. So I think the further along we get into this and not having that closed loop on it, we may start to lose some engagement because then their maybe not going to see a worth or value to doing it.” (Participant 2) .

Format of the O-RON

Some clinical managers felt having different formats of the O-RON available for use (paper and digital) may increase engagement. They pointed out that some nurses really like the option of a digital version of surveys that they have used in different projects. On the other hand, others pointed out that some of their staff preferred a paper form.

WBAs that rely on observations by physician supervisors are a predominant method used to assess professional competency in the post-graduate medical education curriculum [ 7 ]. However, in a complex training environment where supervisors are unavailable to observe certain aspects of a trainee’s performance, nurses are well-positioned to do so. The O-RON was developed to capture nurses’ feedback, which is critical in identifying and fostering the development of physician characteristics that improve collaboration between nurses and physicians [ 9 ]. Our study assessed the use of the O-RON in three different residency programs at the University of Ottawa to gather more validity evidence and allow us to generalize results to multiple contexts.

As in the original study, our findings demonstrated strong validity evidence for internal processes, shown by the calculation of reliability using the generalizability analysis and decision study. With only four forms per resident, the O-RON had a reliability of 0.82, and with three forms, the O-RON had a reliability of 0.78. A reliability range of 0.8–0.89 is considered acceptable for moderate stakes summative assessments and a reliability range of 0.7–0.79 is considered acceptable for formative assessments [ 17 ]. The results of the 2 × 2 comparison tests highlighted the correlation between global assessment and presence of concern, which reflected that nurses would be more likely to want to work with a physician who showed no concerning behaviour on the O-RON items. This further supports the consistency of the tool in identifying concerning behaviour through the eyes of nurses.

However, in our study we had substantially fewer forms completed than in the original study (180 forms for 30 residents over nine months versus 1079 forms for 38 residents over 11 months) and less than the intended sample size of 256 forms per specialty. Because of that, we were only able to analyze the data as an aggregate rather than per specialty and were not able to make comparisons between specialty groups. Nonetheless, there was a sufficient number of submitted forms to perform the generalizability analysis and the dependability analysis allowed us to estimate the reliability of the O-RON with a range of submitted forms. Furthermore, the resulting reliability was greater than was obtained in the original study [ 26 ].

To better understand the reasons behind this difference, we conducted semi-structured interviews with the clinical managers on each unit individually. Five major themes were identified that had an impact on the implementation of the O-RON. Better implementation occurred when there was strong leadership for the implementation of the tool, there were a higher number of experienced nurses, and the nurses knew the residents. When these factors were absent, uptake of the tool was limited. Additionally, heavy clinical workloads related to staffing shortages caused both by the COVID pandemic and the current nursing staffing crisis in Canada had a significant negative impact. Furthermore, certain COVID protocols and the implementation of the electronic health record made a lot of nurse-resident interaction more virtual instead of in-person. It is also worth noting that the wards in the original study had an established culture of feedback collected by the clinical managers who reported it on a regular basis using their own form to the residency program director. This may also have contributed to the more successful implementation of the O-RON in the original study.

The barriers to implementation we identified in our study are consistent with the literature on challenges facing implementation of new assessment tools. Local assessment culture, clinical setting, interprofessional dynamics, leadership engagement and time constraint issues have all been previously identified [ 27 , 28 , 29 ]. Our study was able to additionally highlight nursing suggestions to address these barriers, which include mixed leadership roles, ways to improve collegial familiarity, and feedback transparency (Table  3 ).

Despite the challenges identified, clinical managers were appreciative of the O-RON as an avenue for nurses to be assessors and felt that it was a valuable tool. That, in combination with its growing evidence for validity, suggests that future work should be targeted towards addressing the barriers prior to implementation of the O-RON. Our study participants offered several suggestions for this. They also emphasized the importance of ensuring that nurses are made aware of how their assessments will be provided and followed up on with residents.

Our study has limitations. First, there was a relatively smaller number of completed O-RONs compared to what we had anticipated. Because of that, we needed to aggregate the data between all specialties for further analysis rather than analyse them separately. This also led us to pursue the qualitative portion of our study, which characterized why this was the case. This new information may be beneficial for future work. Second, this study was performed in a single university and three specific specialties. To generate further evidence for validity of the O-RON as an assessment tool, implementing the O-RON at different institutions and specialties should be considered.

The O-RON is a useful tool to capture nurses’ assessment of resident performance. The findings of our study demonstrated reliable results in various clinical settings thus adding to the validity of the results. However, understanding the assessment environment and ensuring it has the capacity to perform this assessment is crucial for its successful implementation. Future research should focus on how we can create conditions whereby implementing this tool is feasible from the perspective of nurses.

Data availability

The datasets used and/or analysed during the study are available from the corresponding author on reasonable request.

Abbreviations

  • CBME: Competency-based medical education
  • O-RON: Ottawa Resident Observation Form for Nurses
  • WBA: Workplace-based assessment

Snell L, Frank JR, Sherbino J. CanMEDS 2015 Physician Competency Framework. Royal College of Physicians & Surgeons of Canada; 2015. https://books.google.ca/books?id=1-iAjgEACAAJ .

McConnell M, Gu A, Arshad A, Mokhtari A, Azzam K. An innovative approach to identifying learning needs for intrinsic CanMEDS roles in continuing professional development. Med Educ Online. 2018;23(1):1497374.


Binnendyk J, Pack R, Field E, Watling C. Not wanted on the voyage: highlighting intrinsic CanMEDS gaps in competence by design curricula. Can Med Educ J. 2021;12(4):39–47.


Rida TZ, Dubois D, Hui Y, Ghatalia J, McConnell M, LaDonna K. Assessment of CanMEDS Competencies in Work-Based Assessment: Challenges and Lessons Learned. In: 2020 CAS Annual Meeting. 2020. p. 4.

Hawkins RE, Welcher CM, Holmboe ES, Kirk LM, Norcini JJ, Simons KB, et al. Implementation of competency-based medical education: are we addressing the concerns and challenges? Med Educ. 2015;49(11):1086–102.

Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA. 2002;287(2):226–35.

Prentice S, Benson J, Kirkpatrick E, Schuwirth L. Workplace-based assessments in postgraduate medical education: a hermeneutic review. Med Educ. 2020;54(11):981–92.

LaDonna KA, Hatala R, Lingard L, Voyer S, Watling C. Staging a performance: learners’ perceptions about direct observation during residency. Med Educ. 2017;51(5):498–510.

Bhat C, LaDonna KA, Dewhirst S, Halman S, Scowcroft K, Bhat S, et al. Unobserved observers: nurses’ perspectives about sharing feedback on the performance of Resident Physicians. Acad Med. 2022;97(2):271.

Batista-Foguet JM, Saris W, Boyatzis RE, Serlavós R, Velasco Moreno F. Multisource Assessment for Development Purposes: Revisiting the Methodology of Data Analysis. Front Psychol. 2019 Jan 4 [cited 2021 Jan 19];9. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6328456/ .

Allerup P, Aspegren K, Ejlersen E, Jørgensen G, Malchow-Møller A, Møller MK, et al. Use of 360-degree assessment of residents in internal medicine in a Danish setting: a feasibility study. Med Teach. 2007;29(2–3):166–70.

Ogunyemi D, Gonzalez G, Fong A, Alexander C, Finke D, Donnon T, et al. From the eye of the nurses: 360-degree evaluation of residents. J Contin Educ Health Prof. 2009;29(2):105–10.

Bullock AD, Hassell A, Markham WA, Wall DW, Whitehouse AB. How ratings vary by staff group in multi-source feedback assessment of junior doctors. Med Educ. 2009;43(6):516–20.

Castonguay V, Lavoie P, Karazivan P, Morris J, Gagnon R. P030: Multisource feedback for emergency medicine residents: different, relevant and useful information. Can J Emerg Med. 2017;19(S1):S88–88.

Jong M, Elliott N, Nguyen M, Goyke T, Johnson S, Cook M, et al. Assessment of Emergency Medicine Resident performance in an adult Simulation using a Multisource Feedback Approach. West J Emerg Med. 2019;20(1):64–70.

Bharwani A, Swystun D, Paolucci EO, Ball CG, Mack LA, Kassam A. Assessing leadership in junior resident physicians: using a new multisource feedback tool to measure Learning by Evaluation from All-inclusive 360 Degree Engagement of Residents (LEADER). BMJ Leader. 2020.

Downing SM. Reliability: on the reproducibility of assessment data. Med Educ. 2004;38(9):1006–12.

Busch G, Rodríguez Borda MV, Morales PI, Weiss M, Ciambrone G, Costabel JP, et al. Validation of a form for assessing the professional performance of residents in cardiology by nurses. J Educ Health Promot. 2023;12:127.

Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR. The role of assessment in competency-based medical education. Med Teach. 2010;32(8):676–82.

Govaerts MJB, Schuwirth LWT, Van der Vleuten CPM, Muijtjens AMM. Workplace-based assessment: effects of rater expertise. Adv Health Sci Educ Theory Pract. 2011;16(2):151–65.

Yeates P, O’Neill P, Mann K, Eva K. Seeing the same thing differently: mechanisms that contribute to assessor differences in directly-observed performance assessments. Adv Health Sci Educ Theory Pract. 2013;18(3):325–41.

Briesch AM, Swaminathan H, Welsh M, Chafouleas SM. Generalizability theory: a practical guide to study design, implementation, and interpretation. J Sch Psychol. 2014;52(1):13–35.

Kogan JR, Conforti LN, Iobst WF, Holmboe ES. Reconceptualizing variable rater assessments as both an educational and clinical care problem. Acad Med. 2014;89(5):721–7.

Streiner DL, Norman GR. Health Measurement Scales. Oxford University Press; 2008 [cited 2021 Sep 4]. https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780199231881.001.0001/acprof-9780199231881 .

American Educational Research Association. Standards for educational and psychological testing. Washington, DC: American Educational Research Association; 2014.

Dudek N, Duffy MC, Wood TJ, Gofton W. The Ottawa Resident Observation Form for Nurses (O-RON): Assessment of Resident Performance through the Eyes of the Nurses. Journal of Surgical Education. 2021 Jun 3 [cited 2021 Sep 4]; https://www.sciencedirect.com/science/article/pii/S1931720421000672 .

Dudek NL, Papp S, Gofton WT. Going Paperless? Issues in converting a Surgical Assessment Tool to an Electronic Version. Teach Learn Med. 2015;27(3):274–9.

Hess LM, Foradori DM, Singhal G, Hicks PJ, Turner TL. PLEASE complete your evaluations! Strategies to Engage Faculty in Competency-based assessments. Acad Pediatr. 2021;21(2):196–200.

Young JQ, Sugarman R, Schwartz J, O’Sullivan PS. Faculty and Resident Engagement with a workplace-based Assessment Tool: use of implementation science to explore enablers and barriers. Acad Med. 2020;95(12):1937–44.


Acknowledgements

The authors would like to acknowledge Katherine Scowcroft and Amanda Pace for their assistance in managing this project.

This study was funded by a Physicians’ Services Incorporated Foundation Resident Research Grant – Number R22-09, and the University of Ottawa Department of Innovation in Medical Education Health Professions Education Research Grant – Number 603978-152399-2001.

Author information

Authors and affiliations.

Department of Medicine, Division of Physical Medicine & Rehabilitation, University of Ottawa, Ottawa, Canada

Hedva Chiu

Department of Innovation in Medical Education, University of Ottawa, Ottawa, Canada

Timothy J. Wood

Department of Obstetrics and Gynecology, University of Ottawa, Ottawa, Canada

Adam Garber

Department of Medicine, Division of General Internal Medicine, University of Ottawa, Ottawa, Canada

Samantha Halman

Department of Surgery, Division of General Surgery, University of Ottawa, Ottawa, Canada

Janelle Rekman

Department of Surgery, Division of Orthopedic Surgery, University of Ottawa, Ottawa, Canada

Wade Gofton

Department of Medicine, Division of Physical Medicine & Rehabilitation, The Ottawa Hospital, University of Ottawa, Ottawa, ON, Canada

Nancy Dudek


Contributions

All authors made contributions to the conception of this project. H.C., T.W. and N.D. made contributions to the data analysis and interpretation. All authors made contributions to the critical revision of the article and final approval of the version to be published.

Corresponding author

Correspondence to Hedva Chiu .

Ethics declarations

Ethics approval and consent to participate.

Ethics approval was granted by the Ottawa Health Science Network Research Ethics Board. All research was carried out in accordance with the Declaration of Helsinki. Informed consent was obtained from all participants of the study.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Chiu, H., Wood, T.J., Garber, A. et al. The Ottawa resident observation form for nurses (O-RON): evaluation of an assessment tool’s psychometric properties in different specialties. BMC Med Educ 24 , 487 (2024). https://doi.org/10.1186/s12909-024-05476-1


Received : 18 October 2023

Accepted : 26 April 2024

Published : 02 May 2024

DOI : https://doi.org/10.1186/s12909-024-05476-1


  • Post-graduate medical education
  • Inter-professional assessment
  • Professionalism

BMC Medical Education

ISSN: 1472-6920


Formative assessment

This page contains information on formative assessment and the Formative Assessment in the Year Before School Project.

Formative assessment refers to educators’ collection of formal or informal information during children’s learning experiences to inform teaching strategies and learning experiences for improved learning outcomes.

Formative assessment is an ongoing and integral part of the assessment and planning cycle used by teachers and educators in their everyday practice under the National Quality Standard, Quality Area 1 (Standard 1.3), and the Early Years Learning Framework V2.0.

It is important that teachers and educators understand effective formative assessment and how to implement it in their context.

Benefits of formative assessment tools

Benefits for early childhood teachers and educators.

Formative assessment tools support teachers and educators to implement the assessment and planning cycle by providing a framework to:

  • understand, interpret and document a child’s learning, wellbeing and development
  • inform pedagogy and practice
  • support the development, implementation and evaluation of educational programs.

Formative assessment tools can be used throughout the year to complement teacher and educators’ existing practices.

Benefits for children

Formative assessment can document the different pathways that children take in their learning journeys and make their own learning and development visible to them. Rich and meaningful information can be documented that depicts children’s learning in context: their voices heard, their beliefs understood and valued, their sense of self affirmed, and their knowledge of themselves deepened.

Benefits for parents and carers

Formative assessment can support the inclusion of parents and carers in the assessment and planning cycle, providing them with opportunities to reflect on and share information with teachers and educators about their child’s learning and development.

Formative Assessment in the Year Before School Project

In 2024, the NSW Department of Education is working with eligible early childhood education and care (ECEC) services to explore the use of formative assessment tools to understand how they may be used in NSW.

The project will inform the department’s approach to the 2025 national trial of the Preschool Outcomes Measure, which is a key reform of the Preschool Reform Agreement. This includes determining a formative assessment tool that complements existing teaching and learning practices in NSW ECEC services to support all children’s learning, development and wellbeing outcomes.

To participate, approved services must meet the eligibility criteria outlined in the project guidelines.

Services selected to be part of the project will be allocated formative assessment tools to explore in their service and will receive professional learning and up to $8,000.

For more information on the formative assessment in the year before school project, including financial support, read the project guidelines .


Outreach Skills Clinic for Assessment (OSCA): Cardiac Assessment (inc ECG) - Theory, Practice and Assessment

University of Chester (Marris House)

About this event

What is the Outreach Skills Clinic for Assessment (OSCA)?

OUTREACH- Outreach Skills Clinics for Assessment (OSCA) provides clinics at a range of sites/venues across Cheshire and Merseyside. These include both practice and university sites to offer flexibility to learners so they can book when and where they choose to access OSCA.

SKILLS- OSCA provides opportunities for learners to engage in theoretical and simulated practice learning opportunities to support learning and assessment across a range of skills and procedures. These include: cannulation, venepuncture, nasogastric tube insertion, chest auscultation and catheterisation.

CLINIC- OSCA enables learners to access a range of sessions that they can attend by using an online booking system. Learners have the opportunity to book theoretical and practice learning opportunities which can fit around their university and life commitments.

ASSESSMENT- During the sessions, learners will have the opportunity to book and complete an assessment (both formative and summative) regarding a specific skill. On successful completion of the assessment, learners can be signed off (where appropriate to the discipline) the relevant proficiency/skill within their practice assessment documents.

Full details of OSCA and what it provides are available at the link below; copy and paste it into your internet browser to access it. If you experience challenges accessing the link, please email: [email protected].

https://sway.office.com/js21831P53Jpr0mv?ref=Link

Who can access an OSCA skills clinic?

OSCA is a Health Education England funded project that currently supports non-medical learners across Cheshire and Merseyside.

  • Currently, OSCA clinics are available to pre-registration nursing students. This will soon be extended to other non-medical students as part of a phased approach (Trainee Nursing Associates, Student Midwives and AHPs).
  • To attend OSCA, you must be an enrolled learner at one of the four Cheshire and Merseyside universities (University of Chester, Liverpool John Moores, Edge Hill University and University of Liverpool).
  • Learners accessing OSCA must be at an appropriate stage in their education in order to access certain skills, which will be cited on the ticket (Year Two and Year Three).

What will my session at OSCA look like and how will I be assessed?

Please see the link below regarding the assessment for the blood transfusion session (you will need to copy and paste it into your internet browser). If you have challenges accessing the link, please contact: [email protected]

Using simulation, you will be assessed performing the procedure in a safe manner and demonstrating your knowledge of

***DISCLAIMER- EVEN IF YOU ACHIEVE THIS PROFICIENCY DURING THIS SESSION, PLEASE NOTE THAT THIS DOES NOT PERMIT YOU TO CONDUCT THIS IN PRACTICE. ALL STUDENTS MUST ALWAYS FOLLOW LOCAL POLICY, AND ANY SKILLS MUST BE CARRIED OUT UNDER DIRECT SUPERVISION.

PART TWO: * 24 Undertakes an effective cardiac assessment and demonstrates the ability to undertake an ECG and interpret findings.

** Students do not require a theory session beforehand, as this will be provided on the day; however, we advise reading any resources and local policy available to you prior to the session.

Site: University Centre Birkenhead (Marris House)

Address: Hamilton Street, Birkenhead, CH41 5AL

Room: HMA209

Time: 09.30-16.30

This session is available for pre-registration nursing students in Year (Part) 2 or 3.

Additional information

In accordance with practice and university regulations, you are required to wear your student uniform, as this is classed as a practice activity/placement.

You will not be permitted to attend without a valid Eventbrite booking due to room capacity.

You will be assessed by a member of the OSCA team who is a practice assessor, using direct observation against an agreed set of criteria that has been peer-reviewed to ensure that the skill/proficiency has been fully covered.

