Research Project Evaluation—Learnings from the PATHWAYS Project Experience

Aleksander Galas

1 Epidemiology and Preventive Medicine, Jagiellonian University Medical College, 31-034 Krakow, Poland; [email protected] (A.G.); [email protected] (A.P.)

Aleksandra Pilat

Matilde Leonardi

2 Fondazione IRCCS, Neurological Institute Carlo Besta, 20-133 Milano, Italy; [email protected]

Beata Tobiasz-Adamczyk

Background: Every research project faces challenges regarding how to achieve its goals in a timely and effective manner. The purpose of this paper is to present the project evaluation methodology developed during the implementation of the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector project (the EU PATHWAYS Project). The PATHWAYS project involved multiple countries and addressed multi-cultural aspects of re/integrating chronically ill patients into labor markets in different countries. This paper describes key project evaluation issues including: (1) purposes, (2) advisability, (3) tools, (4) implementation, and (5) possible benefits, and presents the advantages of continuous monitoring. Methods: A project evaluation tool was developed to assess structure and resources; process, management and communication; and achievements and outcomes. The project used a mixed evaluation approach that included an analysis of Strengths (S), Weaknesses (W), Opportunities (O), and Threats (T) (SWOT). Results: A methodology for the evaluation of longitudinal EU projects is described. The evaluation process made it possible to highlight strengths and weaknesses; it showed good coordination and communication between project partners and identified some key issues, such as the need for a shared glossary covering the areas investigated by the project, problems related to the involvement of stakeholders from outside the project, and issues with timing. Numerical SWOT analysis showed improvement in project performance over time. The proportion of project partners participating in the evaluation varied from 100% to 83.3%. Conclusions: There is a need for the implementation of a structured evaluation process in multidisciplinary projects involving different stakeholders in diverse socio-environmental and political conditions. Based on the PATHWAYS experience, a clear monitoring methodology is suggested as essential in every multidisciplinary research project.

1. Introduction

Over the last few decades, a strong discussion on the role of the evaluation process in research has developed, especially in interdisciplinary or multidimensional research [ 1 , 2 , 3 , 4 , 5 ]. Despite existing concepts and definitions, the importance of evaluation is often underestimated. These dismissive attitudes towards the evaluation process, along with a lack of real knowledge in this area, underline the need to show why research evaluation is needed and how it can improve the quality of research. Having firm definitions of ‘evaluation’ makes it possible to link the purpose of research, general questions associated with methodological issues, expected results, and the implementation of results to specific strategies or practices.

Attention paid to projects’ evaluation shows two concurrent lines of thought in this area. The first is strongly associated with total quality management practices and operational performance; the second focuses on the evaluation processes needed for public health research and interventions [ 6 , 7 ].

The design and implementation of process evaluations in fields other than public health have been described as multidimensional. According to Baranowski and Stables, process evaluation consists of eleven components [ 8 ]:

  • recruitment (attracting potential participants for the corresponding parts of the program);
  • maintenance (keeping participants involved in the program and data collection);
  • context (aspects of the environment of the intervention);
  • resources (the materials necessary to attain project goals);
  • implementation (the extent to which the program is implemented as designed);
  • reach (the extent to which contacts are received by the targeted group);
  • barriers (problems encountered in reaching participants);
  • exposure (the extent to which participants view or read the material);
  • initial use (the extent to which a participant conducts the activities specified in the materials);
  • continued use (the extent to which a participant continues to do any of the activities);
  • contamination (the extent to which participants receive interventions from outside the program and the extent to which the control group receives the treatment).

Two main factors shape the evaluation process: (1) what is evaluated (whether the evaluation revolves around the project itself or around outcomes external to the project), and (2) who the evaluator is (whether the evaluator is internal or external to the project team and program). Although there are several gaps in current knowledge about the evaluation of external outcomes, a formal evaluation process applied to a research project itself is used very rarely.

To define a clear evaluation and monitoring methodology, we performed several steps. The purpose of this article is to present experiences from the project evaluation process implemented in the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector project (the EU PATHWAYS project). The manuscript describes key project evaluation issues: (1) purposes, (2) advisability, (3) tools, (4) implementation, and (5) possible benefits. The PATHWAYS project can be understood as a specific case study, presented through a multidimensional approach; based on the experience associated with its evaluation, patterns of good practice can be developed and used in other projects.

1.1. Theoretical Framework

The first step was to define clearly what an evaluation strategy or methodology is. The term evaluation is defined by the Cambridge Dictionary as the process of judging something’s quality, importance, or value, or a report that includes this information [ 9 ], and similarly by the Oxford Dictionary as the making of a judgment about the amount, number, or value of something [ 10 ]; in practice, it is frequently understood as an assessment associated with the end of an activity rather than with the process. Stufflebeam, in his monograph, defines evaluation as a study designed and conducted to assist some audience to assess an object’s merit and worth. Considering this definition, there are four categories of evaluation approaches: (1) pseudo-evaluation; (2) questions- and/or methods-oriented evaluation; (3) improvement/accountability evaluation; (4) social agenda/advocacy evaluation [ 11 ].

In brief, considering Stufflebeam’s classification, pseudo-evaluations promote invalid or incomplete findings. This happens when findings are selectively released or falsified. There are two pseudo-evaluation types proposed by Stufflebeam: (1) public relations-inspired studies (studies which do not seek truth but gather information to solicit positive impressions of a program), and (2) politically controlled studies (studies which seek the truth but inappropriately control the release of findings to right-to-know audiences).

The questions- and/or methods-oriented approach uses rather narrow questions oriented towards the operational objectives of the project. Questions-oriented evaluations use specific questions that are of interest because of accountability requirements or an expert’s opinion of what is important, while methods-oriented evaluations favor the technical qualities of the program/process. The general concept behind both is that it is better to ask a few pointed questions well in order to obtain information on program merit and worth [ 11 ]. In this group, one may find the following evaluation types:

  • (a) objectives-based studies: typically focus on whether the program objectives have been achieved, from an internal perspective (by project executors);
  • (b) accountability, particularly payment-by-results studies: stress the importance of obtaining an external, impartial perspective;
  • (c) objective testing programs: use standardized, multiple-choice, norm-referenced tests;
  • (d) outcome evaluation as value-added assessment: a recurrent evaluation linked with hierarchical gain-score analysis;
  • (e) performance testing: incorporates the assessment of performance (by written or spoken answers, or psychomotor presentations) and skills;
  • (f) experimental studies: program evaluators perform a controlled experiment and contrast the outcomes observed;
  • (g) management information systems: provide the information managers need to conduct their programs;
  • (h) benefit-cost analysis approach: mainly sets of quantitative procedures to assess the full cost of a program and its returns;
  • (i) clarification hearing: a trial-like evaluation in which role-playing evaluators competitively implement both a damning prosecution of a program (arguing that it failed) and a defense of the program (arguing that it succeeded); a judge then hears arguments within the framework of a jury trial and controls the proceedings according to advance agreements on rules of evidence and trial procedures;
  • (j) case study evaluation: a focused, in-depth description, analysis, and synthesis of a particular program;
  • (k) criticism and connoisseurship: experts in a given area carry out an in-depth analysis and evaluation that could not be done in any other way;
  • (l) program theory-based evaluation: begins with a validated theory of how programs of a certain type operate within similar settings to produce outcomes (e.g., the Health Belief Model; the PRECEDE-PROCEED model proposed by L. W. Green, i.e., Predisposing, Reinforcing and Enabling Constructs in Educational Diagnosis and Evaluation, and Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development; or the Stages of Change theory by Prochaska);
  • (m) mixed-method studies: include different qualitative and quantitative methods.

The third group of methods considered in evaluation theory comprises improvement/accountability-oriented evaluation approaches. Among these are: (a) decision/accountability-oriented studies: emphasize that evaluation should be used proactively to help improve a program and retroactively to assess its merit and worth; (b) consumer-oriented studies: the evaluator acts as a surrogate consumer who draws direct conclusions about the evaluated program; (c) the accreditation/certification approach: an accreditation study to verify whether certification requirements have been, or are being, fulfilled.

Finally, the social agenda/advocacy evaluation approach focuses on assessing the difference the program is, or was, intended to make. The evaluation process in this type of approach works in a loop, starting with an independent evaluator who provides counsel and advice towards understanding, judging, and improving programs, so that the evaluation serves the client’s needs. In this group there are: (a) client-centered studies (or responsive evaluation): evaluators work with, and for the support of, diverse client groups; (b) constructivist evaluation: evaluators are authorized and expected to maneuver the evaluation to emancipate and empower involved and affected disenfranchised people; (c) deliberative democratic evaluation: evaluators work within an explicit democratic framework and uphold democratic principles in reaching defensible conclusions; (d) utilization-focused evaluation: explicitly geared to ensure that program evaluations make an impact.

1.2. Implementation of the Evaluation Process in the EU PATHWAYS Project

The idea of including the evaluation process as an integrated goal of the PATHWAYS project was determined by several factors relating to the main goal of the project, defined as a specific intervention addressing existing attitudes towards occupational mobility and the reintegration into the labor market of working-age people suffering from specific chronic conditions in 12 European countries. Participating countries had different cultural and social backgrounds and different prevailing attitudes towards people suffering from chronic conditions.

The components of evaluation processes discussed above proved helpful when planning the PATHWAYS evaluation, especially in relation to different aspects of environmental contexts. The PATHWAYS project focused on chronic conditions including mental health issues, neurological diseases, metabolic disorders, musculoskeletal disorders, respiratory diseases, cardiovascular diseases, and cancer. Within this group, the project recognized a hierarchy of patients and of social and medical statuses defined by the nature of their health conditions.

According to the project’s monitoring and evaluation plan, the evaluation process followed specific challenges defined by the project’s broad and specific goals and monitored the progress of implementing key components by assessing the effectiveness of consecutive steps and identifying conditions supporting contextual effectiveness. Another significant aim of the evaluation component of the PATHWAYS project was to recognize the value and effectiveness of using a purposely developed methodology consisting of a wide set of quantitative and qualitative methods. The triangulation of methods was very useful and provided the opportunity to develop a multidimensional approach to the project [ 12 ].

Within the theoretical framework, special attention was paid to explaining the medical, cultural, social and institutional barriers influencing the employment chances of chronically ill persons, in relation to the characteristics of the participating countries.

Levels of satisfaction with project participation, as well as with expected or achieved results and with coping with challenges at local-community and macro-social levels, were another source of evaluation.

In the PATHWAYS project, the evaluation was implemented for an unusual purpose. A quasi-experimental design was developed to assess different aspects of the multidimensional project, which used a variety of methods (a systematic literature review, content analysis of existing documents, acts, data and reports, surveys at different country levels, and in-depth interviews) in the different phases of its three years. The evaluation monitored each stage of the project and focused on process implementation, with the goal of improving every step of the project. The evaluation process allowed critical assessment and in-depth analysis of the benefits and shortcomings of each specific phase of the project.

The purpose of the evaluation was to monitor the main steps of the project, including the expectations associated with the multidimensional, methodological approach used by PATHWAYS partners, and to improve communication between partners from different professional and methodological backgrounds involved in all phases of the project, so as to avoid errors in understanding the specific steps as well as the main goals.

2. Materials and Methods

This paper describes the methodology and results gathered during the implementation of Work Package 3, Evaluation, of the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector (PATHWAYS) project. The work package was intended to maintain internal control over the course of the project, so that all project partners fulfilled tasks, milestones, and purposes in a timely manner.

2.1. Participants

The project consortium involved 12 partners from 10 different European countries. These included academics (representing cross-disciplinary research on socio-environmental determinants of health, as well as clinicians), institutions actively working for the integration of people with chronic and mental health problems and disability, educational bodies (working in the area of disability and focusing on inclusive education), national health institutes (for the rehabilitation of patients with functional and workplace impairments), an institution for inter-professional rehabilitation at a country level (coordinating medical, social, educational, pre-vocational and vocational rehabilitation), and a company providing patient-centered services (in neurorehabilitation). All partners had extensive knowledge and high-level expertise in the area of interest, and all endorsed the World Health Organization’s (WHO) International Classification of Functioning, Disability and Health (ICF) and the biopsychosocial model of health and functioning. The consortium was created based on the following criteria:

  • vision, mission, and activities in the area of project purposes,
  • high level of experience in the area (supported by publications) and in doing research (being involved in international projects, collaboration with the coordinator and/or other partners in the past),
  • being able to get broad geographical, cultural and socio-political representation from EU countries,
  • representation of different stakeholder types in the area.

2.2. Project Evaluation Tool

The tool development process involved the following steps:

  • (1) Review definitions of ‘evaluation’ and adopt the one which best fits the reality of the public health research area;
  • (2) Review evaluation approaches and decide on the content applicable to public health research;
  • (3) Create items to be used in the evaluation tool;
  • (4) Decide on implementation timing.

According to the PATHWAYS project protocol, an evaluation tool for the internal project evaluation was required to collect information about: (1) structure and resources; (2) process, management and communication; (3) achievements and/or outcomes and (4) SWOT analysis. A mixed methods approach was chosen. The specific evaluation process purpose and approach are presented in Table 1 .

Table 1. Evaluation purposes and approaches adopted in the PATHWAYS project.

* Open-ended questions are not counted here.

The tool was prepared in several steps. The section assessing structure and resources contained questions about the number of partners, professional competences, assigned roles, human, financial and time resources, defined activities and tasks, and the communication plan. The second section, process, management and communication, collected information about the coordination process, the level of consensus, the quality of communication among coordinators, work package leaders, and partners, whether the project was being carried out according to plan, the involvement of target groups, the usefulness of developed materials, and any difficulties in project realization. Finally, the achievements and outcomes section gathered information about project-specific activities such as public awareness-raising, stakeholder participation and involvement, whether planned outcomes (e.g., milestones) were achieved, dissemination activities, and opinions on whether project outcomes met the needs of the target groups. Additionally, it was decided to implement a SWOT analysis as part of the evaluation process. SWOT analysis derives its name from the evaluation of Strengths (S), Weaknesses (W), Opportunities (O), and Threats (T) faced by a company, industry or, in this case, a project consortium. SWOT analysis comes from the business world and was developed in the 1960s at Harvard Business School as a tool for improving management strategies in companies, institutions, and organizations [ 13 , 14 ]. In recent years, however, SWOT analysis has also been adapted in the research context to improve programs or projects.

For a better understanding of SWOT analysis, it is important to highlight that Strengths and Weaknesses are internal features and are considered controllable. Strengths refer to what works inside the project, such as the capabilities and competences of partners, whereas weaknesses refer to aspects that need improvement, such as resources. Conversely, Opportunities and Threats are considered external, uncontrollable factors [ 15 ]. Opportunities are maximized to fit the organization’s values and resources, and threats are the factors that the organization is not well equipped to deal with [ 9 ].

The PATHWAYS project partners participated in SWOT analyses every three months. They answered four open questions about the strengths, weaknesses, opportunities, and threats identified in the evaluated period (the last three months) and were then asked to rate these dimensions on a 10-point scale. The sample included results from nine evaluated periods from partners from ten different countries.
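To make the procedure concrete, the sketch below (in Python, with illustrative field names that are not taken from the actual questionnaire) shows how one partner's quarterly SWOT response, the open answers plus the four 0-10 ratings, could be recorded, and how the ratings could be averaged across partners within a wave.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import Dict, List

@dataclass
class SwotResponse:
    """One partner's SWOT answers for a single three-month evaluation wave (illustrative)."""
    partner: str
    wave: int
    strengths: List[str] = field(default_factory=list)      # open answers
    weaknesses: List[str] = field(default_factory=list)
    opportunities: List[str] = field(default_factory=list)
    threats: List[str] = field(default_factory=list)
    s_score: int = 0   # 0 = no strengths at all ... 10 = a lot of strengths
    w_score: int = 0
    o_score: int = 0
    t_score: int = 0

def wave_means(responses: List[SwotResponse]) -> Dict[str, float]:
    """Average the four numerical SWOT ratings over all partners in one wave."""
    return {
        "S": mean(r.s_score for r in responses),
        "W": mean(r.w_score for r in responses),
        "O": mean(r.o_score for r in responses),
        "T": mean(r.t_score for r in responses),
    }
```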

The tool for the internal evaluation of the PATHWAYS project is presented in Appendix A .

2.3. Tool Implementation and Data Collection

The PATHWAYS on-going evaluation took place at three-month intervals. It consisted of on-line surveys, and every partner assigned a representative who was expected to have good knowledge of the project’s progress. The structure and resources module was assessed only twice, at the beginning (3rd month) and at the end (36th month) of the project. The process, management, and communication questions, as well as the SWOT analysis questions, were asked every three months. The achievements and outcomes questions started after the first year of implementation (i.e., after the 15th month), and some items in this section (results achieved, whether project outcomes met the needs of the target groups, and regular publications) were implemented only at the end of the project (36th month).
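Purely as an illustration, the schedule described above can be expressed as a small helper; treating wave 1 as month 3, waves at three-month intervals, and month 36 as the final wave are assumptions of this sketch, not details taken from the project protocol.

```python
# A sketch of the wave schedule described above (wave 1 = month 3, then every
# 3 months; treating month 36 as the final wave is an assumption of this sketch).
def modules_for_wave(wave: int, final_wave: int = 12) -> list:
    month = 3 * wave
    modules = ["process_management_communication", "swot"]    # asked every wave
    if wave == 1 or wave == final_wave:
        modules.append("structure_and_resources")             # months 3 and 36 only
    if month > 15:
        modules.append("achievements_and_outcomes")           # after the 15th month
    if wave == final_wave:
        modules.append("end_of_project_items")                # e.g., results achieved
    return modules

print(modules_for_wave(1))   # month 3: process/communication, SWOT, structure & resources
print(modules_for_wave(6))   # month 18: process/communication, SWOT, achievements & outcomes
```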

2.4. Evaluation Team

The evaluation team was created from professionals with different backgrounds and extensive experience in research methodology, sociology, social research methods and public health.

The project started in 2015 and was carried out for 36 months. There were 12 partners in the PATHWAYS project, representing Austria, Belgium, the Czech Republic, Germany, Greece, Italy, Norway, Poland, Slovenia and Spain, as well as a European organization. The on-line questionnaire was sent to all partners one week after the specified period ended, and project partners had at least two weeks to complete the survey. Eleven rounds of the survey were performed.

The participation in the consecutive evaluation surveys was 11 (91.7%), 12 (100%), 12 (100%), 11 (91.7%), 10 (83.3%), 11 (91.7%), 11 (91.7%), 10 (83.3%), and 11 (91.7%) partners until the end of the project. Overall, the surveys rarely covered the whole group, which may have resulted from the lack of a coercive mechanism at the project level to compel answers to the evaluation questions.
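The percentages quoted above follow directly from the consortium size of 12 partners; a quick arithmetic check:

```python
# Response rates reported above: n responding partners out of the 12-partner consortium.
for n in (12, 11, 10):
    print(f"{n}/12 = {n / 12:.1%}")   # prints 100.0%, 91.7%, 83.3%
```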

3. Results

3.1. Evaluation Results Considering Structure and Resources (3rd Month Only)

A total of 11 out of 12 project partners participated in the first evaluation survey. The structure and resources of the project were not assessed by the project coordinator, and as such the results represent the opinions of the other 10 participating partners. The majority of respondents rated the project consortium as having at least adequate professional competencies. In total, eight to nine project partners found the human, financial and time resources ‘just right’ and the communication plan ‘clear’. More concerns were observed regarding the clarity of tasks, what was expected from each partner, and how specific project activities should be or were assigned.

3.2. Evaluation Results Considering Process, Management and Communication

Project coordination and communication processes (with the coordinator, between WP leaders, and between individual partners/researchers) were assessed as ‘good’ or ‘very good’ throughout the whole period. There were some issues, however, when it came to the realization of specific goals, deliverables, or milestones of the project.

Given the broad scope of the project and of the participating partner countries, we created a glossary to unify the common terms used in the project. This was a challenge, as during project implementation there were several discussions and inconsistencies in the concepts provided ( Figure 1 ).

Figure 1. Partners’ opinions about the consensus around terms (shared glossary) in the project consortium across evaluation waves (W1—after 3-month realization period, and at 3-month intervals thereafter).

Other issues that appeared during project implementation were the recruitment of, involvement with, and cooperation with stakeholders. A range of groups had to be contacted and investigated during the project, including individual patients suffering from chronic conditions, patients’ advocacy groups, national governmental organizations, policy makers, employers, and international organizations. During the project, the interest and involvement of these groups turned out to be quite low and difficult to achieve, which led to some delays in project implementation ( Figure 2 ). This was the main cause of the lower percentages reported for what was expected to be done in the designated periods of project realization time. The issue was monitored and eliminated by intensifying activities in this area ( Figure 3 ).

Figure 2. Partners’ reports on whether the project had been carried out according to the plan ( a ) and the experience of any problems in the process of project realization ( b ) (W1—after 3-month realization period, and at 3-month intervals thereafter).

Figure 3. Partners’ reports on an approximate estimation (in percent) of the project plan implementation (what has been done according to the plan) ( a ) and the involvement of target groups ( b ) (W1—after 3-month realization period, and at 3-month intervals thereafter).

3.3. Evaluation Results Considering Achievements and Outcomes

The evaluation process was designed to monitor project milestones and deliverables. One of the PATHWAYS project goals was to raise public awareness of the reintegration of chronically ill people into the labor market. This was assessed subjectively by cooperating partners, and only half of them (six) felt they had achieved complete success on that measure. The evaluation process monitored planned outcomes with respect to: (1) the determination of strategies for awareness-raising activities, (2) the assessment of employment-related needs, and (3) the development of guidelines (which were planned by the project). The majority of partners completely fulfilled these tasks. Furthermore, the dissemination process was also carried out according to plan.

3.4. Evaluation Results from SWOT

3.4.1. Strengths

Amongst the key issues identified across all nine evaluated periods ( Figure 4 ), the “strong consortium” was highlighted as the most important strength of the PATHWAYS project. The most common arguments for this assessment were the coordinator’s experience in international projects, involvement of interdisciplinary experts who could guarantee a holistic approach to the subject, and a highly motivated team. This was followed by the uniqueness of the topic. Project implementers pointed to the relevance of the analyzed issues, which are consistent with social needs. They also highlighted that this topic concerned an unexplored area in employment policy. The interdisciplinary and international approach was also emphasized. According to the project implementers, the international approach allowed mapping of vocational and prevocational processes among patients with chronic conditions and disability throughout Europe. The interdisciplinary approach, on the other hand, enabled researchers to create a holistic framework that stimulates innovation by thinking across boundaries of particular disciplines—especially as the PATHWAYS project brings together health scientists from diverse fields (physicians, psychologists, medical sociologists, etc.) from ten European countries. This interdisciplinary approach is also supported by the methodology, which is based on a mixed-method approach (qualitative and quantitative data). The involvement of an advocacy group was another strength identified by the project implementers. It was stressed that the involvement of different types of stakeholders increased validity and social triangulation. It was also assumed that it would allow for the integration of relevant stakeholders. The last strength, the usefulness of results, was identified only in the last two evaluation waves, when the first results had been measured.

Figure 4. SWOT Analysis—a summary of main issues reported by PATHWAYS project partners.

3.4.2. Weaknesses

The survey respondents agreed that the main weaknesses of the project were time and human resources. The subject of the PATHWAYS project turned out to be very broad, and therefore the implementers pointed to insufficient human resources and inadequate time for the implementation of individual tasks, as well as of the project overall. This was related to the broad categories of chronic diseases chosen for analysis in the project. On the one hand, the implementers complained about the insufficient number of chronic diseases taken into account in the project. On the other hand, they admitted that it was not possible to cover all chronic diseases in detail. The scope of the project was reported as another weakness. In the successive waves of evaluation, the implementers increasingly pointed out that it was hard to cover all relevant topics.

Nevertheless, some of the major weaknesses reported during the project evaluation were methodological problems. Respondents pointed to problems with the implementation of tasks on a regular basis. For example, survey respondents highlighted the need for more open questions in the survey, noted that the questionnaire was too long or too complicated, and reported that the tools were not adjusted for relevance in the national context. Another issue was that the working language was English, but all tools and survey questionnaires needed to be translated into different languages, and this was not always considered by the Commission in terms of timing and resources. This lesson could prove useful for further projects, as well as for future collaborations.

Difficulties in involving stakeholders were reported, especially during tasks which required their active commitment, like participation in in-depth interviews or online questionnaires. Interestingly, the international approach was considered both a strength and a weakness of the project. The implementers highlighted the complexity of making comparisons between health care and/or social care in different countries. The budget was also identified as a weakness by the project implementers. More funds from the partners could have helped PATHWAYS enhance dissemination and stakeholders’ participation.

3.4.3. Opportunities

A list of seven issues within the opportunities category reflects the positive outlook of survey respondents from the beginning of the project to its final stage. Social utility was ranked as the top opportunity. The implementers emphasized that the project could fill a gap between existing solutions and the real needs of people with chronic diseases and mental disorders. The implementers also highlighted the role of future recommendations, which would consist of proposed solutions for professionals, employees, employers, and politicians. These advantages are strongly associated with increasing awareness of the employment situation of people with chronic diseases in Europe and the relevance of the problem. Alignment with policies, strategies, and stakeholders’ interests was also identified as an opportunity. The topic is actively discussed at the European and national levels, and labor market and employment issues are increasingly emphasized in the public discourse. More relevantly, the European Commission considers the issue crucial, and the results of the project are in line with its requests for the future. The implementers also observed increasing interest from the stakeholders, which is very important for the future of the project. Without doubt, the social network of project implementers provides a huge opportunity for the sustainability of results and the implementation of recommendations.

3.4.4. Threats

Insufficient response from stakeholders was the top perceived threat selected by survey respondents. The implementers indicated that insufficient involvement of stakeholders resulted in low response rates in the research phase, which posed a huge threat for the project. The interdisciplinary nature of the PATHWAYS project was highlighted as a potential threat due to differences in technical terminology and different systems of regulating the employment of persons with reduced work capacity in each country, as well as many differences in the legislation process. Insufficient funding and lack of existing data were identified as the last two threats.

One novel aspect of the evaluation process in the PATHWAYS project was the numerical SWOT analysis. Participants were asked to score strengths, weaknesses, opportunities, and threats from 0 (meaning none at all) to 10 (meaning a great many). This enabled us to obtain a subjective score of how partners perceived the PATHWAYS project itself and its performance, as well as how that perception changed over time. The data showed an increase in both strengths and opportunities and a decrease in weaknesses and threats over the course of project implementation ( Figure 5 ).
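Complementing the per-wave aggregation sketched in Section 2.2, the trend reported here can be checked by comparing the first and last waves; the numbers below are placeholders for illustration, not the project's measured values.

```python
# Placeholder per-wave mean SWOT scores (NOT the project's actual data), keyed by wave.
means_by_wave = {
    1: {"S": 6.0, "W": 5.0, "O": 6.0, "T": 4.0},   # first evaluated period
    9: {"S": 8.0, "W": 3.0, "O": 7.5, "T": 2.5},   # last evaluated period
}

first, last = means_by_wave[1], means_by_wave[9]
improved = (last["S"] > first["S"] and last["O"] > first["O"]
            and last["W"] < first["W"] and last["T"] < first["T"])
print("Perceived performance improved over time:", improved)   # True for these placeholders
```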

Figure 5. Numerical SWOT, combined, over a period of 36 months of project realization (W1—after 3-month realization period, and at 3-month intervals thereafter).

4. Discussion

The need for project evaluation was born in industry, which faced challenges regarding how to achieve market goals in a more efficient way. Nowadays, every process, including research project implementation, faces questions regarding its effectiveness and efficiency.

The challenge of research project evaluation is that the majority of research projects are described as unique; we believe, however, that many projects face issues and challenges similar to those observed in the PATHWAYS project.

The main objectives of the PATHWAYS Project were (a) to identify integration and re-integration strategies that are available in Europe and beyond for individuals with chronic diseases and mental disorders experiencing work-related problems (such as unemployment, absenteeism, reduced productivity, stigmatization), (b) to determine their effectiveness, (c) to assess the specific employment-related needs of those people, and (d) to develop guidelines supporting the implementation of effective strategies of professional integration and reintegration. The broad area of investigation, partial knowledge in the field, diversity of determinants across European Union countries, and involvement with stakeholders representing different groups caused several challenges in the project, including:

  • problem : unexplored, challenging, and demanding (how to encourage stakeholders to participate and share experiences),
  • diversity : different European regions; different political, social, and cultural determinants; different public health and welfare systems; differences in law regulations; different employment policies and issues in the system,
  • multidimensionality of research : quantitative and qualitative studies, including focus groups, opinions from professionals, and small surveys in target groups (workers with chronic conditions).

The challenges to the project consequently led to several key issues, which should be taken into account during project realization:

  • partners : each with their own expertise and interests, different expectations, and different views on what is most important to focus on and highlight;
  • issues associated with unification : between different countries with different systems (law, work-related and welfare definitions, disability classification, others);
  • coordination : the multidimensionality of the project may have caused some research activities by partners to move in the wrong direction (collecting data or knowledge not needed for the project purposes), and a lack of project vision among (some) partners might postpone activities through misunderstanding;
  • exchange of information : the multidimensionality of the project, the fact that different tasks were accomplished by different centers, and obstacles to data collection required good communication methods and a smooth exchange of information.

Identified Issues and Implemented Solutions

Several issues were identified through the semi-internal evaluation process performed during the project. Those most relevant to project realization are listed in Table 2 .

Table 2. Issues identified by the evaluation process and solutions implemented.

The PATHWAYS project included diverse partners representing different areas of expertise and activity (considering broad aspects of chronic diseases, decline in functioning, disability, and their role in the labor market) in different countries and social security systems, which posed a challenge when developing a common language to achieve effective communication and a better understanding of facts and circumstances in different countries. The implementation of continuous project process monitoring, and proper adjustment, enabled the team to overcome these challenges.

The evaluation tool has several benefits. First, it covers all key areas of the research project, including its structure and available resources, the course of the process, the quality and timing of management and communication, as well as project achievements and outcomes. Continuous evaluation of all of these areas provides in-depth knowledge about project performance. Second, the implementation of the SWOT tool provided an opportunity for all project partners to share good and bad experiences, and the use of the numerical version of SWOT provided a good picture of the interrelations between strengths and weaknesses and between opportunities and threats in the project and showed the changes in their intensity over time. Additionally, numerical SWOT may verify whether the perception of a project improves over time (as was observed in the PATHWAYS project), showing an increase in strengths and opportunities and a decrease in weaknesses and threats. Third, the interval at which partners were ‘screened’ by the evaluation questionnaire seems appropriate, as it was not very demanding but was frequent enough to diagnose issues in the project process in time.

The experiences with the evaluation also revealed some limitations. There were no coercive mechanisms for participation in the evaluation questionnaires, which may explain the less than 100% response rate in some screening surveys. In practice, this was not a problem in the PATHWAYS project; in theory, however, it might leave problems unrevealed, as partners experiencing trouble might not report it. Another point is that asking the project coordinator about the quality of the consortium has little value (the consortium is created by the coordinator in the best achievable way, and it is hard to expect other comments, especially at the beginning of the project). Regarding the tool itself, the question ‘Could you give us an approximate estimation (in percent) of the project plan realization (what has been done according to the plan)?’ was intended to collect information on what had been done out of what should have been done during each evaluation period, meaning that 100% was what should have been done in a 3-month period in our project. This question, however, was slightly confusing at the beginning, as it was interpreted as the percentage of all tasks and activities planned for the whole duration of the project. Additionally, this question only works provided that precise, clear plans on the type and timing of tasks are allocated to the project partners. Lastly, there were some questions with very low variability in answers across the evaluation surveys (mainly about coordination and communication). In our opinion, if a project runs smoothly, such questions may seem useless, but in more complicated projects they may reveal potential causes of trouble.

5. Conclusions

The PATHWAYS project experience shows a need for the implementation of structured evaluation processes in multidisciplinary projects involving different stakeholders in diverse socio-environmental and political conditions. Based on the PATHWAYS experience, a clear monitoring methodology is suggested as essential in every project, and we suggest the following steps when doing multidisciplinary research:

  • Define area/s of interest (decision maker level/s; providers; beneficiaries: direct, indirect),
  • Identify 2–3 possible partners for each area (chain sampling makes this easier and provides more background knowledge; check for publications),
  • Prepare a research plan (propose, ask for supportive information, clarify, negotiate),
  • Create cross-partner groups of experts,
  • Prepare a communication strategy (communication channels, responsible individuals, timing),
  • Prepare a glossary covering all the important issues covered by the research project,
  • Monitor the project process and timing, identify concerns, troubles, causes of delays,
  • Prepare for the next steps in advance, inform project partners about the upcoming activities,
  • Summarize, show good practices, successful strategies (during project realization, to achieve better project performance).

Acknowledgments

The current study was part of the PATHWAYS project, which received funding from the European Union’s Health Program (2014–2020), Grant Agreement No. 663474.

Appendix A

The evaluation questionnaire developed for the PATHWAYS Project.

SWOT analysis:

What are strengths and weaknesses of the project? (list, please)

What are threats and opportunities? (list, please)

Visual SWOT:

Please, rate the project on the following continua:

How would you rate:

(no strengths) 0 1 2 3 4 5 6 7 8 9 10 (a lot of strengths, very strong)

(no weaknesses) 0 1 2 3 4 5 6 7 8 9 10 (a lot of weaknesses, very weak)

(no risks) 0 1 2 3 4 5 6 7 8 9 10 (several risks, inability to accomplish the task(s))

(no opportunities) 0 1 2 3 4 5 6 7 8 9 10 (project has a lot of opportunities)

Author Contributions

A.G., A.P., B.T.-A. and M.L. conceived and designed the concept; A.G., A.P. and B.T.-A. finalized the evaluation questionnaire and participated in data collection; A.G. analyzed the data; all authors contributed to writing the manuscript. All authors agreed on the content of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.


What is Project Evaluation? The Complete Guide with Templates


Project evaluation is an important part of determining the success or failure of a project. Properly evaluating a project helps you understand what worked well and what could be improved for future projects. This blog post will provide an overview of key components of project evaluation and how to conduct effective evaluations.

What is Project Evaluation?

Project evaluation is a key part of assessing the success, progress and areas for improvement of a project. It involves determining how well a project is meeting its goals and objectives. Evaluation helps determine if a project is worth continuing, needs adjustments, or should be discontinued.

A good evaluation plan is developed at the start of a project. It outlines the criteria that will be used to judge the project’s performance and success. Evaluation criteria can include things like:

  • Meeting timelines and budgets - Were milestones and deadlines met? Was the project completed within budget?
  • Delivering expected outputs and outcomes - Were the intended products, results and benefits achieved?
  • Satisfying stakeholder needs - Were customers, users and other stakeholders satisfied with the project results?
  • Achieving quality standards - Were quality metrics and standards defined and met?
  • Demonstrating effectiveness - Did the project accomplish its intended purpose?

Project evaluation provides valuable insights that can be applied to the current project and future projects. It helps organizations learn from their projects and continuously improve their processes and outcomes.
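As a minimal sketch, the criteria listed above can be turned into a simple weighted checklist; the criterion names and weights below are illustrative, not a prescribed standard.

```python
# Illustrative checklist: each criterion gets a 1-5 rating and a relative weight.
criteria_weights = {
    "timelines_and_budget": 0.25,
    "outputs_and_outcomes": 0.25,
    "stakeholder_satisfaction": 0.20,
    "quality_standards": 0.15,
    "effectiveness": 0.15,
}

def overall_score(ratings: dict) -> float:
    """Weighted average of 1-5 ratings for the criteria defined above."""
    return sum(criteria_weights[name] * ratings[name] for name in criteria_weights)

print(overall_score({
    "timelines_and_budget": 4,
    "outputs_and_outcomes": 5,
    "stakeholder_satisfaction": 3,
    "quality_standards": 4,
    "effectiveness": 5,
}))   # approximately 4.2 on a 1-5 scale
```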

Project Evaluation Templates

These templates will help you evaluate your project by providing a clear structure to assess how it was planned, how it was carried out, and what it achieved. Whether you’re managing the project, part of the team, or a stakeholder, these templates assist in gathering information systematically for a thorough evaluation.

Project Evaluation Template 1


Project Evaluation Template 2

Project Evaluation Methods

Project evaluation involves using various methods to assess the performance and impact of a project. The choice of methods depends on the nature of the project, its objectives, and the available resources. Here are some common project evaluation methods:

Pre-project evaluation

Pre-project evaluations are done before a project begins. This involves evaluating the project plan, scope, objectives, resources, and budget. This helps determine if the project is feasible and identifies any potential issues or risks upfront. It establishes a baseline for later evaluations.

Ongoing evaluation

Ongoing evaluations happen during the project lifecycle. Regular status reports track progress against the project plan, budget, and deadlines. Any deviations or issues are identified and corrective actions can be taken promptly. This allows projects to stay on track and make adjustments as needed.

Post-project evaluation

Post-project evaluations occur after a project is complete. This final assessment determines if the project objectives were achieved and customer requirements were met. Key metrics like timeliness, budget, and quality are examined. Lessons learned are documented to improve processes for future projects. Stakeholder feedback is gathered through surveys, interviews, or focus groups.

Project Evaluation Steps

When evaluating a project, there are several key steps you should follow. These steps will help you determine if the project was successful and identify areas for improvement in future initiatives.

Step 1: Set clear goals

The first step is establishing clear goals and objectives for the project before it begins. Make sure these objectives are SMART: specific, measurable, achievable, relevant and time-bound. Having clear goals from the outset provides a benchmark for measuring success later on.

Step 2: Monitor progress

Once the project is underway, the next step is monitoring progress. Check in regularly with your team to see if you’re on track to meet your objectives and deadlines. Identify and address any issues as early as possible before they become major roadblocks. Monitoring progress also allows you to course correct if needed.

Step 3: Collect data

After the project is complete, collect all relevant data and metrics. This includes quantitative data like budget information, timelines and deliverables, as well as customer feedback and qualitative data from surveys or interviews. Analyzing this data will show you how well the project performed against your original objectives.
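For instance, the quantitative side of this step often reduces to comparing actuals against the plan; the sketch below uses assumed field names and made-up figures purely for illustration.

```python
# Compare collected actuals against the original plan (field names and values are illustrative).
plan   = {"budget": 100_000, "duration_days": 90, "deliverables": 12}
actual = {"budget": 112_000, "duration_days": 104, "deliverables": 12}

budget_variance = (actual["budget"] - plan["budget"]) / plan["budget"] * 100
schedule_variance = (actual["duration_days"] - plan["duration_days"]) / plan["duration_days"] * 100

print(f"Over budget by {budget_variance:.1f}%")       # 12.0%
print(f"Over schedule by {schedule_variance:.1f}%")   # 15.6%
print("All planned deliverables produced:", actual["deliverables"] >= plan["deliverables"])
```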

Step 4: Analyze and interpret

Identify what worked well and what didn’t during the project. Highlight best practices to replicate and lessons learned to improve future initiatives. Get feedback from all stakeholders involved, including project team members, customers and management.

Step 5: Develop an action plan

Develop an action plan to apply what you’ve learned for the next project. Update processes, procedures and resource allocations based on your evaluation. Communicate changes across your organization and train employees on any new best practices. Implementing these changes will help you avoid similar issues the next time around.

Benefits of Project Evaluation

Project evaluation is a valuable tool for organizations, helping them learn, adapt, and improve their project outcomes over time. Here are some benefits of project evaluation.

  • Helps in making informed decisions by providing a clear understanding of the project’s strengths, weaknesses, and areas for improvement.
  • Holds the project team accountable for meeting goals and using resources effectively, fostering a sense of responsibility.
  • Facilitates organizational learning by capturing valuable insights and lessons from both successful and challenging aspects of the project.
  • Allows for the efficient allocation of resources by identifying areas where adjustments or reallocations may be needed.
  • Provides evidence of the project’s value by assessing its impact, cost-effectiveness, and alignment with organizational objectives.
  • Involves stakeholders in the evaluation process, fostering collaboration, and ensuring that diverse perspectives are considered.

Project Evaluation Best Practices

Follow these best practices to do a more effective and meaningful project evaluation, leading to better project outcomes and organizational learning.

  • Clear objectives : Clearly define the goals and questions you want the evaluation to answer.
  • Involve stakeholders : Include the perspectives of key stakeholders to ensure a comprehensive evaluation.
  • Use appropriate methods : Choose evaluation methods that suit your objectives and available resources.
  • Timely data collection : Collect data at relevant points in the project timeline to ensure accuracy and relevance.
  • Thorough analysis : Analyze the collected data thoroughly to draw meaningful conclusions and insights.
  • Actionable recommendations : Provide practical recommendations that can lead to tangible improvements in future projects.
  • Learn and adapt : Use evaluation findings to learn from both successes and challenges, adapting practices for continuous improvement.
  • Document lessons : Document lessons learned from the evaluation process for organizational knowledge and future reference.

How to Use Creately to Evaluate Your Projects

Use Creately’s visual collaboration platform to evaluate your projects: it helps improve communication, streamline collaboration, and provide a clear visual representation of project data.

Task tracking and assignment

Use the built-in project management tools to create, assign, and track tasks right on the canvas. Assign responsibilities, set due dates, and monitor progress with Agile Kanban boards, Gantt charts, timelines and more. Create task cards containing detailed information, descriptions, due dates, and assigned responsibilities.

Notes and attachments

Record additional details and attach documents, files, and screenshots related to your tasks and projects using the integrated per-item notes panel and custom data fields. Or easily embed files and attachments right on the workspace to centralize project information. Work together on project evaluation with teammates using full multiplayer text and visual collaboration.

Real-time collaboration

Get any number of participants on the same workspace and track their additions to the progress report in real-time. Collaborate with others in the project seamlessly with true multi-user collaboration features including synced previews, comments, and discussion threads. Use Creately’s Microsoft Teams integration to brainstorm, plan, and run projects during meetings.

Pre-made templates

Get a head start with ready-to-use progress evaluation templates and other project documentation templates available right inside the app. Explore thousands more templates and examples for various scenarios in the community.

In summary, project evaluation is like a compass for projects, helping teams understand what worked well and what can be improved. It’s a tool that guides organizations to make better decisions and succeed in future projects. By learning from the past and continuously improving, project evaluation becomes a key factor in the ongoing journey of project management, ensuring teams stay on the path of excellence and growth.



Project Evaluation Process: Definition, Methods & Steps


Managing a project with copious moving parts can be challenging to say the least, but project evaluation is designed to make the process that much easier. Every project starts with careful planning, which sets the stage for the execution phase of the project, while estimations, plans and schedules guide the project team as they complete tasks and deliverables.

But even with the project evaluation process in place, managing a project successfully is not as simple as it sounds. Project managers need to keep track of costs, tasks and time during the entire project life cycle to make sure everything goes as planned. To do so, they utilize the project evaluation process and make use of project management software to help manage their team’s work in addition to planning and evaluating project performance.

What Is Project Evaluation?

Project evaluation is the process of measuring the success of a project, program or portfolio. This is done by gathering data about the project and using an evaluation method that allows evaluators to find performance improvement opportunities. Project evaluation is also critical to keep stakeholders updated on the project status and any changes that might be required to the budget or schedule.

Every aspect of the project such as costs, scope, risks or return on investment (ROI) is measured to determine if it’s proceeding as planned. If there are road bumps, this data can inform how projects can improve. Basically, you’re asking the project a series of questions designed to discover what is working, what can be improved and whether the project is useful. Tools such as project dashboards and trackers help in the evaluation process by making key data readily available.


The project evaluation process has been around as long as projects themselves. But when it comes to the science of project management , project evaluation can be broken down into three main types or methods: pre-project evaluation, ongoing evaluation and post-project evaluation. Let’s look at the project evaluation process, what it entails and how you can improve your technique.

Project Evaluation Criteria

The specific details of the project evaluation criteria vary from one project or one organization to another. In general terms, a project evaluation process goes over the project constraints including time, cost, scope, resources, risk and quality. In addition, organizations may add their own business goals, strategic objectives and other project metrics .

Project Evaluation Methods

There are three points in a project where evaluation is most needed. While you can evaluate your project at any time, these are points where you should have the process officially scheduled.

1. Pre-Project Evaluation

In a sense, you’re pre-evaluating your project when you write your project charter to pitch to the stakeholders. You cannot effectively plan, staff and control a new project if you haven’t first evaluated it. Pre-project evaluation is the only sure way to determine the effectiveness of the project before executing it.

2. Ongoing Project Evaluation

To make sure your project is proceeding as planned and hitting all of the scheduling and budget milestones you’ve set, it’s crucial that you constantly monitor and report on your work in real-time. Only by using project metrics can you measure the success of your project and whether or not you’re meeting the project’s goals and objectives. It’s strongly recommended that you use project management dashboards and tracking tools for ongoing evaluation.
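
The guide leaves the choice of metrics open; one widely used option, shown here only as a sketch with hypothetical figures, is a quick earned value check comparing planned value, earned value and actual cost at a status date.

```python
# Minimal earned value sketch (illustrative figures, not tied to any particular tool).
planned_value = 50_000  # budgeted cost of work scheduled to date
earned_value = 42_000   # budgeted cost of work actually completed
actual_cost = 47_500    # what the completed work actually cost

spi = earned_value / planned_value  # schedule performance index (<1 means behind schedule)
cpi = earned_value / actual_cost    # cost performance index (<1 means over budget)

print(f"SPI = {spi:.2f}, CPI = {cpi:.2f}")
```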


3. Post-Project Evaluation

Think of this as a postmortem. Post-project evaluation is when you go through the project’s paperwork, interview the project team and principals and analyze all relevant data so you can understand what worked and what went wrong. Only by developing this clear picture can you resolve issues in upcoming projects.


Project Evaluation Steps

Regardless of when you choose to run a project evaluation, the process always has four phases: planning, implementation, completion and dissemination of reports.

1. Planning

The ultimate goal of this step is to create a project evaluation plan, a document that explains all details of your organization’s project evaluation process. When planning for a project evaluation, it’s important to identify the stakeholders and their short- and long-term goals. You must make sure that your goals and objectives for the project are clear, and it’s critical to have settled on criteria that will tell you whether these goals and objectives are being met.

So, you’ll want to write a series of questions to pose to the stakeholders. These queries should include subjects such as the project framework, best practices and metrics that determine success.

By including the stakeholders in your project evaluation plan, you’ll receive direction during the course of the project while simultaneously developing a relationship with the stakeholders. They will get progress reports from you throughout the project life cycle, and by building this initial relationship, you’ll likely earn their belief that you can manage the project to their satisfaction.


2. Implementation

While the project is running, you must monitor all aspects to make sure you’re meeting the schedule and budget. One of the things you should monitor during the project is the percentage completed. This is something you should do when creating status reports and meeting with your team. To make sure you’re on track, hold the team accountable for delivering timely tasks and maintain baseline dates to know when tasks are due.

Don’t forget to keep an eye on quality. It doesn’t matter if you deliver the project within the allotted time frame if the product is poor. Maintain quality reviews, and don’t delegate that responsibility. Instead, take it on yourself.

Maintaining a close relationship with the project budget is just as important as tracking the schedule and quality. Keep an eye on costs. They will fluctuate throughout the project, so don’t panic. However, be transparent if you notice a need growing for more funds. Let your steering committee know as soon as possible, so there are no surprises.

3. Completion

When you’re done with your project, you still have work to do. You’ll want to take the data you gathered in the evaluation and learn from it so you can fix problems that you discovered in the process. Figure out the short- and long-term impacts of what you learned in the evaluation.

4. Reporting and Disseminating

Once the evaluation is complete, you need to record the results. To do so, you’ll create a project evaluation report, a document that provides lessons for the future. Deliver your report to your stakeholders to keep them updated on the project’s progress.

How are you going to disseminate the report? There might be a protocol for this already established in your organization. Perhaps the stakeholders prefer a meeting to get the results face-to-face. Or maybe they prefer PDFs with easy-to-read charts and graphs. Make sure that you know your audience and tailor your report to them.

Benefits of Project Evaluation

Project evaluation is always advisable and it can bring a wide array of benefits to your organization. As noted above, there are many aspects that can be measured through the project evaluation process. It’s up to you and your stakeholders to decide the most critical factors to consider. Here are some of the main benefits of implementing a project evaluation process.

  • Better Project Management: Project evaluation helps you easily find areas of improvement when it comes to managing your costs, tasks, resources and time.
  • Improves Team Performance: Project evaluation allows you to keep track of your team’s performance and increases accountability.
  • Better Project Planning: Helps you compare your project baseline against actual project performance for better planning and estimating.
  • Helps with Stakeholder Management: Having a good relationship with stakeholders is key to success as a project manager. Creating a project evaluation report is very important to keep them updated.

How ProjectManager Improves the Project Evaluation Process

To take your project evaluation to the next level, you’ll want ProjectManager, an online work management tool with live dashboards that deliver real-time data so you can monitor what’s happening now as opposed to what happened yesterday.

With ProjectManager’s real-time dashboard, project evaluation is measured in real-time to keep you updated. The numbers are then displayed in colorful graphs and charts. Filter the dashboard to show just the data you want or drill down for a deeper picture. These graphs and charts can also be shared with a keystroke. You can track workload and tasks, because your team is updating their status in real-time, wherever they are and at whatever time they complete their work.


Project evaluation with ProjectManager’s real-time dashboard makes it simple to go through the evaluation process during the evolution of the project. It also provides valuable data afterward. The project evaluation process can even be fun, given the right tools. Feel free to use our automated reporting tools to quickly build traditional project reports, allowing you to improve both the accuracy and efficiency of your evaluation process.


ProjectManager is a cloud-based project management software that has a suite of powerful tools for every phase of your project, including live dashboards and reporting tools. Our software collects project data in real-time and is constantly being fed information by your team as they progress through their tasks. See how monitoring, evaluation and reporting can be streamlined by taking a free 30-day trial today!



Assessment, evaluations, and definitions of research impact: A review


Teresa Penfield, Matthew J. Baker, Rosa Scoble, Michael C. Wykes, Assessment, evaluations, and definitions of research impact: A review, Research Evaluation, Volume 23, Issue 1, January 2014, Pages 21–32, https://doi.org/10.1093/reseval/rvt021


This article aims to explore what is understood by the term ‘research impact’ and to provide a comprehensive assimilation of available literature and information, drawing on global experiences to understand the potential for methods and frameworks of impact assessment being implemented for UK impact assessment. We take a more focused look at the impact component of the UK Research Excellence Framework (REF) taking place in 2014, at some of the challenges of evaluating impact, and at the role that systems might play in the future in capturing the links between research and impact, together with the requirements we have for these systems.

When considering the impact that is generated as a result of research, a number of authors and government recommendations have advised that a clear definition of impact is required ( Duryea, Hochman, and Parfitt 2007 ; Grant et al. 2009 ; Russell Group 2009 ). From the outset, we note that the understanding of the term impact differs between users and audiences. There is a distinction between ‘academic impact’ understood as the intellectual contribution to one’s field of study within academia and ‘external socio-economic impact’ beyond academia. In the UK, evaluation of academic and broader socio-economic impact takes place separately. ‘Impact’ has become the term of choice in the UK for research influence beyond academia. This distinction is not so clear in impact assessments outside of the UK, where academic outputs and socio-economic impacts are often viewed as one, to give an overall assessment of value and change created through research.

For the purposes of the REF, impact is defined as ‘an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia’.

Impact is assessed alongside research outputs and environment to provide an evaluation of research taking place within an institution. As such research outputs, for example, knowledge generated and publications, can be translated into outcomes, for example, new products and services, and impacts or added value ( Duryea et al. 2007 ). Although some might find the distinction somewhat marginal or even confusing, this differentiation between outputs, outcomes, and impacts is important, and has been highlighted, not only for the impacts derived from university research ( Kelly and McNicol 2011 ) but also for work done in the charitable sector ( Ebrahim and Rangan, 2010 ; Berg and Månsson 2011 ; Kelly and McNicoll 2011 ). The Social Return on Investment (SROI) guide ( The SROI Network 2012 ) suggests that ‘The language varies “impact”, “returns”, “benefits”, “value” but the questions around what sort of difference and how much of a difference we are making are the same’. It is perhaps assumed here that a positive or beneficial effect will be considered as an impact but what about changes that are perceived to be negative? Wooding et al. (2007) adapted the terminology of the Payback Framework, developed for the health and biomedical sciences from ‘benefit’ to ‘impact’ when modifying the framework for the social sciences, arguing that the positive or negative nature of a change was subjective and can also change with time, as has commonly been highlighted with the drug thalidomide, which was introduced in the 1950s to help with, among other things, morning sickness but due to teratogenic effects, which resulted in birth defects, was withdrawn in the early 1960s. Thalidomide has since been found to have beneficial effects in the treatment of certain types of cancer. Clearly the impact of thalidomide would have been viewed very differently in the 1950s compared with the 1960s or today.

In viewing impact evaluations it is important to consider not only who has evaluated the work but the purpose of the evaluation to determine the limits and relevance of an assessment exercise. In this article, we draw on a broad range of examples with a focus on methods of evaluation for research impact within Higher Education Institutions (HEIs). As part of this review, we aim to explore the following questions:

What are the reasons behind trying to understand and evaluate research impact?

What are the methodologies and frameworks that have been employed globally to assess research impact and how do these compare?

What are the challenges associated with understanding and evaluating research impact?

What indicators, evidence, and impacts need to be captured within developing systems?

What are the reasons behind trying to understand and evaluate research impact? Throughout history, the activities of a university have been to provide both education and research, but the fundamental purpose of a university was perhaps described in the writings of mathematician and philosopher Alfred North Whitehead (1929) .

‘The justification for a university is that it preserves the connection between knowledge and the zest of life, by uniting the young and the old in the imaginative consideration of learning. The university imparts information, but it imparts it imaginatively. At least, this is the function which it should perform for society. A university which fails in this respect has no reason for existence. This atmosphere of excitement, arising from imaginative consideration transforms knowledge.’

In undertaking excellent research, we anticipate that great things will come and as such one of the fundamental reasons for undertaking research is that we will generate and transform knowledge that will benefit society as a whole.

One might consider that by funding excellent research, impacts (including those that are unforeseen) will follow, and traditionally, assessment of university research focused on academic quality and productivity. Aspects of impact, such as value of Intellectual Property, are currently recorded by universities in the UK through their Higher Education Business and Community Interaction Survey return to Higher Education Statistics Agency; however, as with other public and charitable sector organizations, showcasing impact is an important part of attracting and retaining donors and support ( Kelly and McNicoll 2011 ).

The reasoning behind the move towards assessing research impact is undoubtedly complex, involving both political and socio-economic factors, but, nevertheless, we can differentiate between four primary purposes.

HEIs overview. To enable research organizations including HEIs to monitor and manage their performance and understand and disseminate the contribution that they are making to local, national, and international communities.

Accountability. To demonstrate to government, stakeholders, and the wider public the value of research. There has been a drive from the UK government through Higher Education Funding Council for England (HEFCE) and the Research Councils ( HM Treasury 2004 ) to account for the spending of public money by demonstrating the value of research to tax payers, voters, and the public in terms of socio-economic benefits ( European Science Foundation 2009 ), in effect, justifying this expenditure ( Davies Nutley, and Walter 2005 ; Hanney and González-Block 2011 ).

Inform funding. To understand the socio-economic value of research and subsequently inform funding decisions. By evaluating the contribution that research makes to society and the economy, future funding can be allocated where it is perceived to bring about the desired impact. As Donovan (2011) comments, ‘Impact is a strong weapon for making an evidence based case to governments for enhanced research support’.

Understand. To understand the method and routes by which research leads to impacts to maximize on the findings that come out of research and develop better ways of delivering impact.

The growing trend for accountability within the university system is not limited to research and is mirrored in assessments of teaching quality, which now feed into evaluation of universities to ensure fee-paying students’ satisfaction. In demonstrating research impact, we can provide accountability upwards to funders and downwards to users on a project and strategic basis ( Kelly and McNicoll 2011 ). Organizations may be interested in reviewing and assessing research impact for one or more of the aforementioned purposes and this will influence the way in which evaluation is approached.

It is important to emphasize that ‘Not everyone within the higher education sector itself is convinced that evaluation of higher education activity is a worthwhile task’ ( Kelly and McNicoll 2011 ). The University and College Union ( University and College Union 2011 ) organized a petition calling on the UK funding councils to withdraw the inclusion of impact assessment from the REF proposals once plans for the new assessment of university research were released. This petition was signed by 17,570 academics (52,409 academics were returned to the 2008 Research Assessment Exercise), including Nobel laureates and Fellows of the Royal Society ( University and College Union 2011 ). Impact assessments raise concerns over the steer of research towards disciplines and topics in which impact is more easily evidenced and that provide economic impacts that could subsequently lead to a devaluation of ‘blue skies’ research. Johnston ( Johnston 1995 ) notes that by developing relationships between researchers and industry, new research strategies can be developed. This raises the questions of whether UK business and industry should not invest in the research that will deliver them impacts and who will fund basic research if not the government? Donovan (2011) asserts that there should be no disincentive for conducting basic research. By asking academics to consider the impact of the research they undertake and by reviewing and funding them accordingly, the result may be to compromise research by steering it away from the imaginative and creative quest for knowledge. Professor James Ladyman, at the University of Bristol, a vocal adversary of awarding funding based on the assessment of research impact, has been quoted as saying that ‘…inclusion of impact in the REF will create “selection pressure,” promoting academic research that has “more direct economic impact” or which is easier to explain to the public’ ( Corbyn 2009 ).

Despite the concerns raised, the broader socio-economic impacts of research will be included and count for 20% of the overall research assessment, as part of the REF in 2014. From an international perspective, this represents a step change in the comprehensive nature to which impact will be assessed within universities and research institutes, incorporating impact from across all research disciplines. Understanding what impact looks like across the various strands of research and the variety of indicators and proxies used to evidence impact will be important to developing a meaningful assessment.

What are the methodologies and frameworks that have been employed globally to evaluate research impact and how do these compare? The traditional form of evaluation of university research in the UK was based on measuring academic impact and quality through a process of peer review ( Grant 2006 ). Evidence of academic impact may be derived through various bibliometric methods, one example of which is the H index, which has incorporated factors such as the number of publications and citations. These metrics may be used in the UK to understand the benefits of research within academia and are often incorporated into the broader perspective of impact seen internationally, for example, within the Excellence in Research for Australia and using Star Metrics in the USA, in which quantitative measures are used to assess impact, for example, publications, citation, and research income. These ‘traditional’ bibliometric techniques can be regarded as giving only a partial picture of full impact ( Bornmann and Marx 2013 ) with no link to causality. Standard approaches actively used in programme evaluation such as surveys, case studies, bibliometrics, econometrics and statistical analyses, content analysis, and expert judgment are each considered by some (Vonortas and Link, 2012) to have shortcomings when used to measure impacts.
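
As a brief aside, the H index mentioned above is simple to compute from a list of per-publication citation counts; the sketch below uses invented numbers purely for illustration.

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's publications
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3 (three papers each have at least 3 citations)
```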

Incorporating assessment of the wider socio-economic impact began using metrics-based indicators such as Intellectual Property registered and commercial income generated ( Australian Research Council 2008 ). In the UK, more sophisticated assessments of impact incorporating wider socio-economic benefits were first investigated within the fields of Biomedical and Health Sciences ( Grant 2006 ), an area of research that wanted to be able to justify the significant investment it received. Frameworks for assessing impact have been designed and are employed at an organizational level addressing the specific requirements of the organization and stakeholders. As a result, numerous and widely varying models and frameworks for assessing impact exist. Here we outline a few of the most notable models that demonstrate the contrast in approaches available.

The Payback Framework is possibly the most widely used and adapted model for impact assessment ( Wooding et al. 2007 ; Nason et al. 2008 ), developed during the mid-1990s by Buxton and Hanney, working at Brunel University. It incorporates both academic outputs and wider societal benefits ( Donovan and Hanney 2011 ) to assess outcomes of health sciences research. The Payback Framework systematically links research with the associated benefits ( Scoble et al. 2010 ; Hanney and González-Block 2011 ) and can be thought of in two parts: a model that allows the research and subsequent dissemination process to be broken into specific components within which the benefits of research can be studied, and second, a multi-dimensional classification scheme into which the various outputs, outcomes, and impacts can be placed ( Hanney and Gonzalez Block 2011 ). The Payback Framework has been adopted internationally, largely within the health sector, by organizations such as the Canadian Institute of Health Research, the Dutch Public Health Authority, the Australian National Health and Medical Research Council, and the Welfare Bureau in Hong Kong ( Bernstein et al. 2006 ; Nason et al. 2008 ; CAHS 2009; Spaapen et al. n.d. ). The Payback Framework enables health and medical research and impact to be linked and the process by which impact occurs to be traced. For more extensive reviews of the Payback Framework, see Davies et al. (2005) , Wooding et al. (2007) , Nason et al. (2008) , and Hanney and González-Block (2011) .

A very different approach known as Social Impact Assessment Methods for research and funding instruments through the study of Productive Interactions (SIAMPI) was developed from the Dutch project Evaluating Research in Context and has a central theme of capturing ‘productive interactions’ between researchers and stakeholders by analysing the networks that evolve during research programmes ( Spaapen and Drooge, 2011 ; Spaapen et al. n.d. ). SIAMPI is based on the widely held assumption that interactions between researchers and stakeholder are an important pre-requisite to achieving impact ( Donovan 2011 ; Hughes and Martin 2012 ; Spaapen et al. n.d. ). This framework is intended to be used as a learning tool to develop a better understanding of how research interactions lead to social impact rather than as an assessment tool for judging, showcasing, or even linking impact to a specific piece of research. SIAMPI has been used within the Netherlands Institute for health Services Research ( SIAMPI n.d. ). ‘Productive interactions’, which can perhaps be viewed as instances of knowledge exchange, are widely valued and supported internationally as mechanisms for enabling impact and are often supported financially for example by Canada’s Social Sciences and Humanities Research Council, which aims to support knowledge exchange (financially) with a view to enabling long-term impact. In the UK, UK Department for Business, Innovation, and Skills provided funding of £150 million for knowledge exchange in 2011–12 to ‘help universities and colleges support the economic recovery and growth, and contribute to wider society’ ( Department for Business, Innovation and Skills 2012 ). While valuing and supporting knowledge exchange is important, SIAMPI perhaps takes this a step further in enabling these exchange events to be captured and analysed. One of the advantages of this method is that less input is required compared with capturing the full route from research to impact. A comprehensive assessment of impact itself is not undertaken with SIAMPI, which make it a less-suitable method where showcasing the benefits of research is desirable or where this justification of funding based on impact is required.

The first attempt globally to comprehensively capture the socio-economic impact of research across all disciplines was undertaken for the Australian Research Quality Framework (RQF), using a case study approach. The RQF was developed to demonstrate and justify public expenditure on research, and as part of this framework, a pilot assessment was undertaken by the Australian Technology Network. Researchers were asked to evidence the economic, societal, environmental, and cultural impact of their research within broad categories, which were then verified by an expert panel ( Duryea et al. 2007 ) who concluded that the researchers and case studies could provide enough qualitative and quantitative evidence for reviewers to assess the impact arising from their research ( Duryea et al. 2007 ). To evaluate impact, case studies were interrogated and verifiable indicators assessed to determine whether research had led to reciprocal engagement, adoption of research findings, or public value. The RQF pioneered the case study approach to assessing research impact; however, with a change in government in 2007, this framework was never implemented in Australia, although it has since been taken up and adapted for the UK REF.

In developing the UK REF, HEFCE commissioned a report, in 2009, from RAND to review international practice for assessing research impact and provide recommendations to inform the development of the REF. RAND selected four frameworks to represent the international arena ( Grant et al. 2009 ). One of these, the RQF, they identified as providing a ‘promising basis for developing an impact approach for the REF’ using the case study approach. HEFCE developed an initial methodology that was then tested through a pilot exercise. The case study approach, recommended by the RQF, was combined with ‘significance’ and ‘reach’ as criteria for assessment. The criteria for assessment were also supported by a model developed by Brunel for ‘measurement’ of impact that used similar measures defined as depth and spread. In the Brunel model, depth refers to the degree to which the research has influenced or caused change, whereas spread refers to the extent to which the change has occurred and influenced end users. Evaluation of impact in terms of reach and significance allows all disciplines of research and types of impact to be assessed side-by-side ( Scoble et al. 2010 ).

The range and diversity of frameworks developed reflect the variation in purpose of evaluation including the stakeholders for whom the assessment takes place, along with the type of impact and evidence anticipated. The most appropriate type of evaluation will vary according to the stakeholder whom we are wishing to inform. Studies ( Buxton, Hanney and Jones 2004 ) into the economic gains from biomedical and health sciences determined that different methodologies provide different ways of considering economic benefits. A discussion on the benefits and drawbacks of a range of evaluation tools (bibliometrics, economic rate of return, peer review, case study, logic modelling, and benchmarking) can be found in the article by Grant (2006) .

Evaluation of impact is becoming increasingly important, both within the UK and internationally, and research and development into impact evaluation continues, for example, researchers at Brunel have developed the concept of depth and spread further into the Brunel Impact Device for Evaluation, which also assesses the degree of separation between research and impact ( Scoble et al. working paper ).

Although based on the RQF, the REF did not adopt all of the suggestions held within, for example, the option of allowing research groups to opt out of impact assessment should the nature or stage of research deem it unsuitable ( Donovan 2008 ). In 2009–10, the REF team conducted a pilot study for the REF involving 29 institutions, submitting case studies to one of five units of assessment (in clinical medicine, physics, earth systems and environmental sciences, social work and social policy, and English language and literature) ( REF2014 2010 ). These case studies were reviewed by expert panels and, as with the RQF, they found that it was possible to assess impact and develop ‘impact profiles’ using the case study approach ( REF2014 2010 ).

From 2014, research within UK universities and institutions will be assessed through the REF; this will replace the Research Assessment Exercise, which has been used to assess UK research since the 1980s. Differences between these two assessments include the removal of indicators of esteem and the addition of assessment of socio-economic research impact. The REF will therefore assess three aspects of research:

  • Outputs
  • Impact
  • Environment

Research impact is assessed in two formats, first, through an impact template that describes the approach to enabling impact within a unit of assessment, and second, using impact case studies that describe the impact taking place following excellent research within a unit of assessment ( REF2014 2011a ). HEFCE indicated that impact should merit a 25% weighting within the REF ( REF2014 2011b ); however, this has been reduced for the 2014 REF to 20%, perhaps as a result of feedback and lobbying, for example, from the Russell Group and Million + group of Universities who called for impact to count for 15% ( Russell Group 2009 ; Jump 2011 ) and following guidance from the expert panels undertaking the pilot exercise who suggested that during the 2014 REF, impact assessment would be in a developmental phase and that a lower weighting for impact would be appropriate with the expectation that this would be increased in subsequent assessments ( REF2014 2010 ).
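
To see what the 20% weighting means arithmetically, the worked example below combines illustrative sub-profile scores. Only the 20% impact weighting is stated above; the 65% output and 15% environment weights are our reading of the REF 2014 weightings and should be treated as an assumption here.

```latex
% Illustrative overall quality profile; sub-profile scores of 3.2, 2.8 and 3.5 are invented.
\[
Q_{\text{overall}} = 0.65\,Q_{\text{outputs}} + 0.20\,Q_{\text{impact}} + 0.15\,Q_{\text{environment}}
\]
\[
Q_{\text{overall}} = 0.65 \times 3.2 + 0.20 \times 2.8 + 0.15 \times 3.5 = 2.08 + 0.56 + 0.525 = 3.165
\]
```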

The quality and reliability of impact indicators will vary according to the impact we are trying to describe and link to research. In the UK, evidence and research impacts will be assessed for the REF within research disciplines. Although it can be envisaged that the range of impacts derived from research of different disciplines are likely to vary, one might question whether it makes sense to compare impacts within disciplines when the range of impact can vary enormously, for example, from business development to cultural changes or saving lives? An alternative approach was suggested for the RQF in Australia, where it was proposed that types of impact be compared rather than impact from specific disciplines.

Providing advice and guidance within specific disciplines is undoubtedly helpful. It can be seen from the panel guidance produced by HEFCE to illustrate impacts and evidence that it is expected that impact and evidence will vary according to discipline ( REF2014 2012 ). Why should this be the case? Two areas of research impact, health and biomedical sciences and the social sciences, have received particular attention in the literature by comparison with, for example, the arts. Reviews and guidance on developing and evidencing impact in particular disciplines include the London School of Economics (LSE) Public Policy Group’s impact handbook (LSE n.d.), a review of the social and economic impacts arising from the arts produced by Reeves (2002), and a review by Kuruvilla et al. (2006) on the impact arising from health research. Perhaps it is time for a generic guide based on types of impact rather than research discipline?

What are the challenges associated with understanding and evaluating research impact? In endeavouring to assess or evaluate impact, a number of difficulties emerge and these may be specific to certain types of impact. Given that the type of impact we might expect varies according to research discipline, impact-specific challenges present us with the problem that an evaluation mechanism may not fairly compare impact between research disciplines.

5.1 Time lag

The time lag between research and impact varies enormously. For example, the development of a spin out can take place in a very short period, whereas it took around 30 years from the discovery of DNA before technology was developed to enable DNA fingerprinting. In development of the RQF, The Allen Consulting Group (2005) highlighted that defining a time lag between research and impact was difficult. In the UK, the Russell Group Universities responded to the REF consultation by recommending that no time lag be put on the delivery of impact from a piece of research citing examples such as the development of cardiovascular disease treatments, which take between 10 and 25 years from research to impact ( Russell Group 2009 ). To be considered for inclusion within the REF, impact must be underpinned by research that took place between 1 January 1993 and 31 December 2013, with impact occurring during an assessment window from 1 January 2008 to 31 July 2013. However, there has been recognition that this time window may be insufficient in some instances, with architecture being granted an additional 5-year period ( REF2014 2012 ); why only architecture has been granted this dispensation is not clear, when similar cases could be made for medicine, physics, or even English literature. Recommendations from the REF pilot were that the panel should be able to extend the time frame where appropriate; this, however, poses difficult decisions when submitting a case study to the REF as to what the view of the panel will be and whether if deemed inappropriate this will render the case study ‘unclassified’.
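
The assessment windows just described lend themselves to a simple eligibility check; the sketch below hard-codes the dates stated above (before any discipline-specific extension), and the function name and structure are ours.

```python
from datetime import date

# REF 2014 windows as described above: underpinning research between 1 January 1993 and
# 31 December 2013; impact occurring between 1 January 2008 and 31 July 2013.
RESEARCH_WINDOW = (date(1993, 1, 1), date(2013, 12, 31))
IMPACT_WINDOW = (date(2008, 1, 1), date(2013, 7, 31))

def eligible(research_date: date, impact_date: date) -> bool:
    """Return True if both dates fall inside their respective REF windows."""
    return (RESEARCH_WINDOW[0] <= research_date <= RESEARCH_WINDOW[1]
            and IMPACT_WINDOW[0] <= impact_date <= IMPACT_WINDOW[1])

print(eligible(date(1995, 5, 1), date(2010, 3, 15)))  # True
print(eligible(date(1990, 5, 1), date(2010, 3, 15)))  # False: the research predates the window
```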

5.2 The developmental nature of impact

Impact is not static, it will develop and change over time, and this development may be an increase or decrease in the current degree of impact. Impact can be temporary or long-lasting. The point at which assessment takes place will therefore influence the degree and significance of that impact. For example, following the discovery of a new potential drug, preclinical work is required, followed by Phase 1, 2, and 3 trials, and then regulatory approval is granted before the drug is used to deliver potential health benefits. Clearly there is the possibility that the potential new drug will fail at any one of these phases but each phase can be classed as an interim impact of the original discovery work on route to the delivery of health benefits, but the time at which an impact assessment takes place will influence the degree of impact that has taken place. If impact is short-lived and has come and gone within an assessment period, how will it be viewed and considered? Again the objective and perspective of the individuals and organizations assessing impact will be key to understanding how temporal and dissipated impact will be valued in comparison with longer-term impact.

5.3 Attribution

Impact is derived not only from targeted research but from serendipitous findings, good fortune, and complex networks interacting and translating knowledge and research. The exploitation of research to provide impact occurs through a complex variety of processes, individuals, and organizations, and therefore, attributing the contribution made by a specific individual, piece of research, funding, strategy, or organization to an impact is not straight forward. Husbands-Fealing suggests that to assist identification of causality for impact assessment, it is useful to develop a theoretical framework to map the actors, activities, linkages, outputs, and impacts within the system under evaluation, which shows how later phases result from earlier ones. Such a framework should be not linear but recursive, including elements from contextual environments that influence and/or interact with various aspects of the system. Impact is often the culmination of work within spanning research communities ( Duryea et al. 2007 ). Concerns over how to attribute impacts have been raised many times ( The Allen Consulting Group 2005 ; Duryea et al. 2007 ; Grant et al. 2009 ), and differentiating between the various major and minor contributions that lead to impact is a significant challenge.

Figure 1 , replicated from Hughes and Martin (2012) , illustrates how the ease with which impact can be attributed decreases with time, whereas the impact, or effect of complementary assets, increases, highlighting the problem that it may take a considerable amount of time for the full impact of a piece of research to develop but because of this time and the increase in complexity of the networks involved in translating the research and interim impacts, it is more difficult to attribute and link back to a contributing piece of research.

Figure 1. Time, attribution, impact. Replicated from Hughes and Martin (2012).

This presents particular difficulties in research disciplines conducting basic research, such as pure mathematics, where the impact of research is unlikely to be foreseen. Research findings will be taken up in other branches of research and developed further before socio-economic impact occurs, by which point, attribution becomes a huge challenge. If this research is to be assessed alongside more applied research, it is important that we are able to at least determine the contribution of basic research. It has been acknowledged that outstanding leaps forward in knowledge and understanding come from immersion in a background of intellectual thinking, such that ‘one is able to see further by standing on the shoulders of giants’.

5.4 Knowledge creep

It is acknowledged that one of the outcomes of developing new knowledge through research can be ‘knowledge creep’ where new data or information becomes accepted and gets absorbed over time. This is particularly recognized in the development of new government policy where findings can influence policy debate and policy change, without recognition of the contributing research ( Davies et al. 2005 ; Wooding et al. 2007 ). This is recognized as being particularly problematic within the social sciences where informing policy is a likely impact of research. In putting together evidence for the REF, impact can be attributed to a specific piece of research if it made a ‘distinctive contribution’ ( REF2014 2011a ). The difficulty then is how to determine what the contribution has been in the absence of adequate evidence and how we ensure that research that results in impacts that cannot be evidenced is valued and supported.

5.5 Gathering evidence

Gathering evidence of the links between research and impact is not only a challenge where that evidence is lacking. The introduction of impact assessments with the requirement to collate evidence retrospectively poses difficulties because evidence, measurements, and baselines have, in many cases, not been collected and may no longer be available. While looking forward, we will be able to reduce this problem in the future, identifying, capturing, and storing the evidence in such a way that it can be used in the decades to come is a difficulty that we will need to tackle.

Collating the evidence and indicators of impact is a significant task that is being undertaken within universities and institutions globally. Decker et al. (2007) surveyed researchers in the US top research institutions during 2005; the survey of more than 6000 researchers found that, on average, more than 40% of their time was spent doing administrative tasks. It is desirable that the assignation of administrative tasks to researchers is limited, and therefore, to assist the tracking and collating of impact data, systems are being developed involving numerous projects and developments internationally, including Star Metrics in the USA, the ERC (European Research Council) Research Information System, and Lattes in Brazil ( Lane 2010 ; Mugabushaka and Papazoglou 2012 ).

Ideally, systems within universities internationally would be able to share data allowing direct comparisons, accurate storage of information developed in collaborations, and transfer of comparable data as researchers move between institutions. To achieve compatible systems, a shared language is required. CERIF (Common European Research Information Format) was developed for this purpose, first released in 1991; a number of projects and systems across Europe such as the ERC Research Information System ( Mugabushaka and Papazoglou 2012 ) are being developed as CERIF-compatible.

In the UK, there have been several Jisc-funded projects in recent years to develop systems capable of storing research information, for example, MICE (Measuring Impacts Under CERIF), UK Research Information Shared Service, and Integrated Research Input and Output System, all based on the CERIF standard. To allow comparisons between institutions, identifying a comprehensive taxonomy of impact, and the evidence for it, that can be used universally is seen to be very valuable. However, the Achilles heel of any such attempt, as critics suggest, is the creation of a system that rewards what it can measure and codify, with the knock-on effect of directing research projects to deliver within the measures and categories that reward.

Attempts have been made to categorize impact evidence and data; for example, the aim of the MICE Project was to develop a set of impact indicators to enable impact to be fed into a CERIF-based system. Indicators were identified from documents produced for the REF, by Research Councils UK, in unpublished draft case studies undertaken at King’s College London or outlined in relevant publications (MICE Project n.d.). A taxonomy of impact categories was then produced onto which impact could be mapped. What emerged on testing the MICE taxonomy ( Cooke and Nadim 2011 ), by mapping impacts from case studies, was that detailed categorization of impact was found to be too prescriptive. Every piece of research results in a unique tapestry of impact and despite the MICE taxonomy having more than 100 indicators, it was found that these did not suffice. It is perhaps worth noting that the expert panels, who assessed the pilot exercise for the REF, commented that the evidence provided by research institutes to demonstrate impact was ‘a unique collection’. Where quantitative data were available, for example, audience numbers or book sales, these numbers rarely reflected the degree of impact, as no context or baseline was available. Cooke and Nadim (2011) also noted that using a linear-style taxonomy did not reflect the complex networks of impacts that are generally found. The Goldsmith report ( Cooke and Nadim 2011 ) recommended making indicators ‘value free’, enabling the value or quality to be established in an impact descriptor that could be assessed by expert panels. The Goldsmith report concluded that general categories of evidence would be more useful such that indicators could encompass dissemination and circulation, re-use and influence, collaboration and boundary work, and innovation and invention.

While defining the terminology used to understand impact and indicators will enable comparable data to be stored and shared between organizations, we would recommend that any categorization of impacts be flexible such that impacts arising from non-standard routes can be placed. It is worth considering the degree to which indicators are defined and provide broader definitions with greater flexibility.

It is possible to incorporate both metrics and narratives within systems, for example, within the Research Outcomes System and Researchfish, currently used by several of the UK research councils to allow impacts to be recorded; although recording narratives has the advantage of allowing some context to be documented, it may make the evidence less flexible for use by different stakeholder groups (which include government, funding bodies, research assessment agencies, research providers, and user communities) for whom the purpose of analysis may vary ( Davies et al. 2005 ). Any tool for impact evaluation needs to be flexible, such that it enables access to impact data for a variety of purposes (Scoble et al. n.d.). Systems need to be able to capture links between and evidence of the full pathway from research to impact, including knowledge exchange, outputs, outcomes, and interim impacts, to allow the route to impact to be traced. This database of evidence needs to establish both where impact can be directly attributed to a piece of research as well as various contributions to impact made during the pathway.
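
As a purely illustrative sketch of the kind of record such a system might hold, the fragment below links a piece of research to outputs, knowledge exchange interactions, outcomes, impacts, flexible category tags and supporting evidence; none of the field names are drawn from Researchfish, the Research Outcomes System or any other real system, and the example content is invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PathwayRecord:
    """Hypothetical record tracing the pathway from research to impact."""
    research_project: str
    outputs: List[str] = field(default_factory=list)       # e.g. publications, datasets
    interactions: List[str] = field(default_factory=list)  # knowledge exchange events
    outcomes: List[str] = field(default_factory=list)      # e.g. adoption of findings
    impacts: List[str] = field(default_factory=list)       # claimed socio-economic impacts
    categories: List[str] = field(default_factory=list)    # free, flexible tags
    evidence: List[str] = field(default_factory=list)      # metrics, testimonies, documents

record = PathwayRecord(
    research_project="Self-management study for a chronic condition",
    outputs=["journal article", "clinician toolkit"],
    interactions=["workshop with a regional health authority"],
    outcomes=["toolkit adopted by two clinics"],
    impacts=["reduced readmission rates at adopting clinics"],
    categories=["health", "public services"],
    evidence=["readmission statistics with a pre-adoption baseline", "clinic director testimony"],
)
print(record.research_project, "->", record.impacts)
```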

Baselines and controls need to be captured alongside change to demonstrate the degree of impact. In many instances, controls are not feasible as we cannot look at what impact would have occurred if a piece of research had not taken place; however, indications of the picture before and after impact are valuable and worth collecting for impact that can be predicted.

It is now possible to use data-mining tools to extract specific data from narratives or unstructured data ( Mugabushaka and Papazoglou 2012 ). This is being done for collation of academic impact and outputs, for example, Research Portfolio Online Reporting Tools, which uses PubMed and text mining to cluster research projects, and STAR Metrics in the US, which uses administrative records and research outputs and is also being implemented by the ERC using data in the public domain ( Mugabushaka and Papazoglou 2012 ). These techniques have the potential to provide a transformation in data capture and impact assessment ( Jones and Grant 2013 ). It is acknowledged in the article by Mugabushaka and Papazoglou (2012) that it will take years to fully incorporate the impacts of ERC funding. For systems to be able to capture a full range of systems, definitions and categories of impact need to be determined that can be incorporated into system development. To adequately capture interactions taking place between researchers, institutions, and stakeholders, the introduction of tools to enable this would be very valuable. If knowledge exchange events could be captured, for example, electronically as they occur or automatically if flagged from an electronic calendar or a diary, then far more of these events could be recorded with relative ease. Capturing knowledge exchange events would greatly assist the linking of research with impact.
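
For illustration only, the fragment below clusters a few invented project abstracts with off-the-shelf text-mining tools from scikit-learn; it is a toy sketch of the general approach, not a description of how STAR Metrics, the ERC system or Research Portfolio Online Reporting Tools actually work.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented abstracts standing in for unstructured narrative records
abstracts = [
    "Trial of a new drug for cardiovascular disease in older adults",
    "Workplace reintegration programmes for people with chronic illness",
    "Machine learning methods for predicting hospital readmission",
    "Policy analysis of employment support for chronically ill workers",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(abstracts)            # term-weighted document vectors

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)                         # cluster assignment for each abstract
```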

The transition to routine capture of impact data not only requires the development of tools and systems to help with implementation but also a cultural change to develop practices, currently undertaken by a few to be incorporated as standard behaviour among researchers and universities.

What indicators, evidence, and impacts need to be captured within developing systems? There is a great deal of interest in collating terms for impact and indicators of impact. Consortia for Advancing Standards in Research Administration Information, for example, has put together a data dictionary with the aim of setting the standards for terminology used to describe impact and indicators that can be incorporated into systems internationally and seems to be building a certain momentum in this area. A variety of types of indicators can be captured within systems; however, it is important that these are universally understood. Here we address types of evidence that need to be captured to enable an overview of impact to be developed. In the majority of cases, a number of types of evidence will be required to provide an overview of impact.

7.1 Metrics

Metrics have commonly been used as a measure of impact, for example, in terms of profit made, number of jobs provided, number of trained personnel recruited, number of visitors to an exhibition, number of items purchased, and so on. Metrics in themselves cannot convey the full impact; however, they are often viewed as powerful and unequivocal forms of evidence. If metrics are available as impact evidence, they should, where possible, also capture any baseline or control data. Any information on the context of the data will be valuable to understanding the degree to which impact has taken place.

The interest in Social Return on Investment (SROI) perhaps indicates the desire of some organizations to demonstrate the monetary value of investment and impact. SROI aims to provide a valuation of the broader social, environmental, and economic impacts, providing a metric that can be used to demonstrate worth. It has been used within the charitable sector ( Berg and Månsson 2011 ) and also features as evidence in the REF guidance for panel D ( REF2014 2012 ). More details on SROI can be found in 'A guide to Social Return on Investment' produced by The SROI Network (2012) .
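As a rough illustration of the underlying arithmetic, the sketch below computes a simple SROI-style ratio as the present value of an estimated benefit stream divided by the investment. All figures and the discount rate are invented, and a full SROI analysis following The SROI Network guidance would also adjust benefit values for deadweight, displacement, attribution, and drop-off before discounting.

```python
# Simplified, illustrative SROI-style calculation. All figures are invented.
# Real SROI analyses (see The SROI Network guide) apply further adjustments
# to benefit values before computing the ratio.

investment = 100_000.0        # total value of inputs (e.g., grant funding)
annual_benefit = 40_000.0     # estimated yearly social/economic value created
years = 5
discount_rate = 0.035         # assumed discount rate

# Present value of the benefit stream over the evaluation period.
present_value = sum(
    annual_benefit / (1 + discount_rate) ** year for year in range(1, years + 1)
)

sroi_ratio = present_value / investment
print(f"Present value of benefits: {present_value:,.0f}")
print(f"SROI ratio: {sroi_ratio:.2f} : 1")
```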

Although metrics can provide evidence of quantitative changes or impacts from our research, they are unable to adequately provide evidence of the qualitative impacts that take place and hence are not suitable for all of the impact we will encounter. The main risks associated with the use of standardized metrics are that:

  • The full impact will not be realized, as we focus on easily quantifiable indicators.
  • We will focus attention towards generating results that enable boxes to be ticked rather than delivering real value for money and innovative research.
  • They risk being monetized or converted into a lowest common denominator in an attempt to compare the cost of a new theatre against that of a hospital.

7.2 Narratives

Narratives can be used to describe impact: they enable a story to be told, allow the impact to be placed in context, and can make good use of qualitative information. They are often written with a reader from a particular stakeholder group in mind and will present a view of impact from a particular perspective. The risk of relying on narratives to assess impact is that they often lack the evidence required to judge whether the research and impact are linked appropriately. Where narratives are used in conjunction with metrics, a complete picture of impact can be developed, again from a particular perspective but with the evidence available to corroborate the claims made. Table 1 summarizes some of the advantages and disadvantages of the case study approach.

Table 1. The advantages and disadvantages of the case study approach

By allowing impact to be placed in context, case studies answer the 'so what?' question that can result from quantitative data analyses, but is there a risk that, in seeking to demonstrate impact in a positive light, the full picture may not be presented? Case studies are ideal for showcasing impact, but should they be used to critically evaluate impact?

7.3 Surveys and testimonies

One way in which change of opinion and user perceptions can be evidenced is by gathering of stakeholder and user testimonies or undertaking surveys. This might describe support for and development of research with end users, public engagement and evidence of knowledge exchange, or a demonstration of change in public opinion as a result of research. Collecting this type of evidence is time-consuming, and again, it can be difficult to gather the required evidence retrospectively when, for example, the appropriate user group might have dispersed.

The ability to record and log these types of data is important for establishing the path from research to impact, and the development of systems that can capture them would be very valuable.

7.4 Citations (outside of academia) and documentation

Citations (outside of academia) and documentation can be used as evidence to demonstrate the use of research findings in developing new ideas and products. This might include the citation of a piece of research in policy documents or references to a piece of research in the media. A collation of several indicators of impact may be enough to demonstrate that an impact has taken place. Even where we can evidence changes and benefits linked to our research, understanding the causal relationship may be difficult. Media coverage is a useful means of disseminating our research and ideas and may be considered alongside other evidence as contributing to, or an indicator of, impact.

The fast-moving developments in the field of altmetrics (or alternative metrics) are providing a richer understanding of how research is being used, viewed, and moved. The transfer of information electronically can be traced and reviewed to provide data on where and to whom research findings are going.

The understanding of the term impact varies considerably and as such the objectives of an impact assessment need to be thoroughly understood before evidence is collated.

While aspects of impact can be adequately interpreted using metrics, narratives, and other evidence, the mixed-method case study approach is an excellent means of pulling all available information, data, and evidence together, allowing a comprehensive summary of the impact within context. While the case study is a useful way of showcasing impact, its limitations must be understood if it is to be used for evaluation purposes. A case study presents evidence from a particular perspective and may need to be adapted for use with different stakeholders. It is time-intensive to both assimilate and review case studies, and we therefore need to ensure that the resources required for this type of evaluation are justified by the knowledge gained. The ability to write a persuasive, well-evidenced case study may itself influence the assessment of impact. Over the past year, a number of new posts dedicated to writing impact case studies have been created within universities, and a number of companies now offer this as a contract service. A key concern here is that universities which can afford to employ either consultants or impact 'administrators' will generate the best case studies.

The development of tools and systems for assisting with impact evaluation would be very valuable. We suggest that developing systems that focus on recording impact information alone will not provide all that is required to link research to ensuing events and impacts; systems need the capacity to capture interactions between researchers, the institution, and external stakeholders and to link these with research findings, outputs, or interim impacts to provide a network of data. In designing systems and tools for collating data related to impact, it is important to consider who will populate the database and to ensure that the time and capability required to capture the information are taken into account. Capturing data, interactions, and indicators as they emerge increases the chance of capturing all relevant information, and tools that enable researchers to record much of this themselves would be valuable. However, it must be remembered that, in the case of the UK REF, only impact based on research that has taken place within the institution submitting the case study is considered. It is therefore in an institution's interest to have a process by which all the necessary information is captured, so that a story can be developed even in the absence of a researcher who may have left the employment of the institution. Figure 2 demonstrates the information that systems will need to capture and link; a minimal sketch of such a record structure follows the figure.

  • Research findings, including outputs (e.g., presentations and publications)
  • Communications and interactions with stakeholders and the wider public (emails, visits, workshops, media publicity, etc.)
  • Feedback from stakeholders and communication summaries (e.g., testimonials and altmetrics)
  • Research developments (based on stakeholder input and discussions)
  • Outcomes (e.g., commercial and cultural outcomes, citations)
  • Impacts (changes, e.g., behavioural and economic)

Figure 2. Overview of the types of information that systems need to capture and link.
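The sketch below suggests one minimal way such linked records might be structured, using the categories listed in Figure 2. The field names and example entries are assumptions made for illustration; they do not describe the REF submission system or any existing institutional tool.

```python
# Minimal sketch of a record structure for linking research with ensuing
# interactions and impacts, following the categories in Figure 2.
# Field names, categories, and example entries are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Event:
    kind: str          # "output", "interaction", "feedback", "outcome", or "impact"
    description: str
    when: date
    stakeholders: List[str] = field(default_factory=list)
    evidence: List[str] = field(default_factory=list)  # links to emails, reports, testimonials

@dataclass
class ResearchRecord:
    project: str
    researchers: List[str]
    events: List[Event] = field(default_factory=list)

    def timeline(self) -> List[Event]:
        """Return all captured events in chronological order."""
        return sorted(self.events, key=lambda e: e.when)

record = ResearchRecord("River restoration study", ["A. Researcher"])
record.events.append(Event("interaction", "Workshop with catchment authority",
                           date(2015, 6, 3), ["Catchment Authority"],
                           ["workshop-agenda.pdf"]))
record.events.append(Event("impact", "Method adopted in regional management plan",
                           date(2018, 2, 1), ["Catchment Authority"],
                           ["plan-citation.pdf"]))
print([e.kind for e in record.timeline()])
```

A real system would persist these records in an institutional database and offer capture forms or calendar integrations so that interactions are logged as they happen rather than reconstructed retrospectively.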

Attempting to evaluate impact to justify expenditure, showcase our work, and inform future funding decisions will only prove to be a valuable use of time and resources if we can take measures to ensure that assessment attempts will not ultimately have a negative influence on the impact of our research. There are areas of basic research where the impacts are so far removed from the research or are impractical to demonstrate; in these cases, it might be prudent to accept the limitations of impact assessment, and provide the potential for exclusion in appropriate circumstances.

This work was supported by Jisc [DIINN10].


Research & Evaluation FAQs

What are assessment, evaluation, and research, and what distinguishes them from one another?

The terms assessment, evaluation, and research are often used interchangeably in the field of education, and there is no single accepted definition of what each term means. We know that all three terms refer to the systematic gathering, analysis, and interpretation of information (“data”), but the intended purpose of this data gathering differs.

Assessment has been defined as:

  • “The systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development.” (Palomba & Banta, 1999)
  • “The process of gathering and discussing information from multiple and diverse sources in order to develop a deep understanding of what students know, understand, and can do with their knowledge as a result of their educational experiences; the process culminates when assessment results are used to improve subsequent learning.” (Huba & Freed, 2000)

Assessment can be used to provide individual students with feedback on their own learning process. To learn more about this, check out our Teaching & Learning team’s page, Assess Student Learning.

Evaluation is often described as going beyond assessment to involve a judgment of merit, worth, or value and to inform key decisions (Scriven, 1991).

  • Evaluators use assessment information to make a judgment, often based on whether the actual outcomes of an intervention, class, or program match the intended learning outcomes (Suskie, 2010)
  • Program evaluation has been defined as “the identification, clarification, and application of defensible criteria to determine an evaluation object’s value (worth or merit) in relation to those criteria.” (Fitzpatrick, Sanders, & Worthen, 2004)

Why should I do an assessment or an evaluation?

Whether you are just starting to design a new class or program, are implementing pedagogical changes in an existing class or program, or want to evaluate the outcomes of an established class or program, you can gather and interpret data to guide your thinking and decision-making.

Assessment and evaluation can be used to

  • Identify the needs or “gaps” between a course or program’s desired and actual outcomes (needs assessment).
  • Identify the strengths and resources that can be used to support student learning and success within a given context (strengths-based or asset-based assessment).
  • Assess the effectiveness of a new educational practice, course, or program (program evaluation).
  • Identify factors that support or inhibit student learning and success in a given context.
  • Examine inequities in students’ access to high-quality educational experiences.
  • Track progress toward attaining course- and program-level outcomes.

Do I need human subjects research approval from COUHES to conduct an assessment, evaluation, or research project?

Research projects that are aimed at contributing to generalizable knowledge and that involve human participants, or data collected from human participants, will generally need to receive approval or exemption from MIT’s Committee on the Use of Humans as Experimental Subjects (COUHES).

Many assessment and evaluation projects do not need human subjects research approval from COUHES, but some do. To learn more about these definitions and guidelines, visit the COUHES definitions webpage.

It is important to determine whether you need to apply for COUHES approval before you engage in any recruitment or data collection efforts. There are several different levels of COUHES review that a project may go through, depending on the research procedures, the population, and the level of risk involved. TLL’s R&E team can work with you and COUHES to determine whether a particular project needs approval and to guide you through the process of requesting approval if necessary.

Child Care and Early Education Research Connections

Research Assessment Tools

The purpose of the quantitative and qualitative research assessment tools is to provide users with a quick and simple means to evaluate the quality of research studies. The tools describe the information that should be available in study reports and the key features of a high-quality study design. When using the tools, higher scores indicate higher-quality research.

Quantitative Research Assessment Tool  (PDF 46K)

Qualitative Research Assessment Tool  (PDF 62K)

See the following for additional information on assessing the quality of research.

Early childhood program evaluations: A decision-maker's guide

Early childhood assessment: Why, what, and how?  (PDF)

A Policymaker's Primer on Education Research, by the Education Commission of the States (ECS) and Mid-continent Research for Education and Learning (McREL)  (PDF)


Resources for Assessment in Project-Based Learning

Looking for tools and strategies for effective assessment in project-based learning? To support you, we’ve assembled this guide to helpful resources from Edutopia and beyond.


Project-based learning (PBL) demands excellent assessment practices to ensure that all learners are supported in the learning process. With good assessment practices, PBL can create a culture of excellence for all students and ensure deeper learning for all. We’ve compiled some of the best resources from Edutopia and the web to support your use of assessment in PBL, including information about strategies, advice on how to address the demands of standardized tests, and summaries of the research.

PBL Assessment Foundations

Watch this video to discover how assessment can be integrated seamlessly into project-based learning to measure student understanding from the beginning to the end of a project:

Angela Haydel DeBarger describes research-based strategies for implementing PBL projects that are rigorous and engaging: the importance of having students create products that address the driving question, providing ongoing opportunities for feedback and reflection, and presenting work to an authentic audience.

Explore responses to questions directed toward teachers in the field, in this post by Andrew Larson. You'll find strategies for reporting on content and success skills; educators also describe the use of traditional assessments like quizzes and tests.

High school teacher Katie Piper shares honest feedback about the challenges associated with assessing students fairly during the PBL process, where collaboration is central. Strategies include conducting individual assessment of team products, as well as "weighted scoring" and "role-based" assessment practices.

In this blog post, classroom teacher Matt Weyers explains how he shifted the conversation in his classroom from getting a grade to student learning. He shares his step-by-step plan and also the great results.

Read about how PBL school model Expeditionary Learning  approaches assessment within project-based learning, in this interview with Ron Berger. Berger emphasizes student ownership of the assessment process and references several videos of sample PBL project assessments. 

In this post from Michael Hernandez, find ideas for conducting multidimensional evaluation to encourage students, provide meaningful feedback, and set students up for success within project-based learning.

PBL and Formative Assessment Practices

In another blog post from Matt Weyers, find great tips on using formative assessment within the PBL process to drive student learning. Weyers explains how to use the driving question to prompt reflection and the "Need to Know" to check for understanding.

John Larmer, editor in chief for the Buck Institute for Education, shares practical strategies to ensure students submit their best work, including reflective questions for teachers to use: questions around rubrics, formative assessment, authenticity, and time for revision and reflection. These assessment practices help students improve and share exemplary work.

Writer Suzie Boss explains how formative assessment within project cycles can empower students to learn more and experience more success. Along the way, she underscores the value of framing mistakes as learning opportunities, the excitement of risk-taking, and the importance of describing clear learning goals throughout a project.

PBL and Standardized Tests

In this article for District Administration, regular Edutopia blogger Suzie Boss tells the story of how schools are meeting the challenge of standardized tests and moving past the “bubble” exam; she also highlights how educators are overcoming fear and anxiety around assessing critical thinking and content.

This Knowledge in Action research project from the University of Washington explores how well-designed PBL can meet, and in many ways, surpass what the AP exam assesses, including both content learning objectives and goals around 21st-century skills. 

Edutopia blogger Andrew Miller provides specific and practical strategies to address the demands of standardized tests while doing great PBL projects. In addition to embedding standardized tests prompts within the project, Miller suggests implementing PBL projects where they fit, targeting power standards, and examining standardized tests to see what students will need to be successful. Because these projects are powerful learning tools, there's no need to wait for testing season to get started.

PBL Assessment Research

Read about PBL assessment that supports student success in this page from Edutopia's comprehensive research review on project-based learning.

The Buck Institute for Education has compiled, as well as conducted, comprehensive research on PBL. Many of the studies shared on this page specifically address assessment practices and results.

We hope these resources on PBL Assessment help ensure that students learn not only content but also the skills they need to be "future ready." Use these ideas and tools to alleviate concerns you have around assessment and PBL and to support the design of effective PBL projects.


Research and Assessment Cycle Toolkit

The Research and Assessment Cycle Toolkit is a resource for library assessment practitioners to access information about assessment processes in libraries. The toolkit includes 23 training videos and supporting materials that:

  • Review the principles and practices of library assessment
  • Enable library workers to develop the skills necessary to assess library programs, services, resources, and spaces
  • Support the launch of assessment projects from the development of the research question through to the conclusion, including action on findings
  • Are modular, accessible to newcomers to library assessment, and easy to follow
  • Can be self-paced or used as training materials for collaborative efforts
  • Are supportive of a community of learning

The toolkit is not intended to be comprehensive and covers information that can be used by library workers at all levels of assessment experience. A team of library assessment experts led by Megan Oakleaf (Syracuse University) and supported by Emily Daly (Duke University) and Becky Croxton (The University of North Carolina at Charlotte) developed the training modules. ARL staff Kevin Borden, Angela Pappalardo, and Anam Mian also contributed to the project.

Toolkit training modules:

  • Overview
  • Identify: Discovering What You Need to Know; Articulating Why You Need to Know It; Anticipating How Knowing Will Lead to Action
  • Articulate: Identifying Research Questions; Constructing User Stories; Composing Hypotheses
  • Collect: Identifying Potential Data, Evidence, and Input; Considering Your (Methods) Options; Considering Your Sample; Conducting Interviews; Conducting Focus Groups; Conducting Participatory Research; Conducting Observational Research; Conducting Surveys
  • Organize and Analyze: Cleaning and Organizing Quantitative Data; Analyzing Quantitative Data; Using Excel and Google Sheets to Visualize Data; Using Tableau to Visualize Data
  • Act: Reflecting on Results; Communicating Results: Identifying Audiences & Messages; Communicating Results: Sharing Results with Stakeholders; Realizing Outcomes of Assessment: Decision-Making and Action Taking

1. Overview

This video provides an overview for the modules to follow, defines basic terms, and introduces the assessment cycle for library assessment projects. Supporting materials: Overview slide deck.

2. Identify

This module focuses on identifying the needs, context, and goals of library assessment projects.

  • Discovering What You Need to Know. Describes motivations for library assessment projects, including responding to imposed questions, connecting values and practices, responding to changing trends, reallocating resources, closing gaps, better understanding users, and aligning strategic priorities. Supporting materials: Discerning Institutional Areas of Interest, Priority, or Concern (worksheet); session slide deck; RLIF Project example: Strategic Plan Alignment at Syracuse University (report).
  • Articulating Why You Need to Know It. Describes key purposes and timeframes for library assessment and identifies possible obstacles to effective assessment. Supporting materials: If We Knew More, What More Good Could We Do? (worksheet); Single, Double, and Triple Loop Learning; session slide deck.
  • Anticipating How Knowing Will Lead to Action. Describes strategies for beginning library assessment projects "with the end in mind" to increase the likelihood of actionable results that lead to positive outcomes and impact. Supporting materials: Considering Active, Rather than Passive, Library Services, Resources, and Spaces (worksheet); Goldman, K. D., and Schmalz, K. J. (2006), Logic Models: The Picture Worth Ten Thousand Words, Health Promotion Practice 7(1), 8–12, https://doi.org/10.1177/1524839905283230; Logic Model template; session slide deck.

3. Articulate

This module focuses on articulating the focus of a library assessment project.

  • Identifying Research Questions. Describes strategies for structuring research questions for library assessment projects. Supporting materials: Building Research Questions (worksheet); Research Library Impact Framework research questions; session slide deck; RLIF Project examples: research questions at UT Austin (practice brief) and the University of Illinois Chicago (report).
  • Constructing User Stories. Describes strategies for structuring user stories for library assessment projects. Supporting materials: Developing User Stories (worksheet); session slide deck.
  • Composing Hypotheses. Describes strategies for structuring hypotheses for library assessment projects. Supporting materials: session slide deck.

4. Collect

This module focuses on collecting data, evidence, input, or other information for library assessment projects.

  • Identifying Potential Data, Evidence, and Input. Describes strategies for identifying possible data, evidence, or other inputs needed for library assessment projects, including existing information, information found in the professional literature, and new information that must be collected for the project at hand. Supporting materials: Library Data Audit (worksheet); Research Process (Leedy); Judging the Feasibility of a Research Project (Leedy and Ormrod, pages 126–128); Symonds, P. M. (1956), A research checklist in educational psychology, Journal of Educational Psychology 47(2), 100–109 (print only); session slide deck.
  • Considering Your (Methods) Options. Describes a variety of methods that can be used to collect information for library assessment projects. Supporting materials: Ethical Principles for Social Research; session slide deck; RLIF Project examples: mixed methods at the University of Washington (practice brief) and Outreach Toolkit overview.
  • Considering Your Sample. Describes populations and sampling for library assessment projects, including sample sizes and types (a worked sketch of the Krejcie and Morgan formula follows this module listing). Supporting materials: A Million Random Digits; How to Use a Random Number Table; Determining Sample Size (Krejcie and Morgan, 1970); session slide deck.
  • Conducting Interviews. Describes interview methods for library assessment projects, including interview types, preparation, and question strategies. Supporting materials: researcher guidance for the use of Zoom in data collection; in-depth interview method workshop by Margaret Roller (video and slides); example interview guide; session slide deck; RLIF Project examples: interviews at Johns Hopkins University, Temple University, Syracuse University, and the University of Washington.
  • Conducting Focus Groups. Describes focus group methods for library assessment projects, including focus group structures and preparation. Supporting materials: in-depth focus group workshop by Margaret Roller (video and slides); example in-person and asynchronous discussion guides; session slide deck.
  • Conducting Participatory Research. Describes participatory research methods, including examples of community-based participatory research and user-centered design research, with techniques for conducting card sorts and photovoice studies and tips for analyzing findings. Supporting materials: Participatory Research Methods: Choice Points in the Research Process (Vaughn and Jacquez, 2020); Card Sorting: Designing Usable Categories, chapter four (Spencer, 2009); How to Analyze Qualitative Data from UX Research: Thematic Analysis; Understanding the Experiences and Needs of Black Students at Duke (photovoice study report with sample recruitment emails and discussion guides); session slide deck.
  • Conducting Observational Research. Describes observational research methods, including techniques for conducting online and in-person observations and analyzing findings from this type of study. Supporting materials: session slide deck.
  • Conducting Surveys. Describes survey methods for library assessment projects, including question types and overall design. Supporting materials: web survey design workshop by Kevin Fomalont (video and slides); session slide deck; RLIF Project example: student survey at Syracuse University (report).

5. Organize and Analyze

This module focuses on organizing and analyzing data, evidence, input, or other information for library assessment projects.

  • Cleaning and Organizing Quantitative Data. Describes how to clean and organize a raw dataset so that it is ready for analysis and visualization. Supporting materials: Student Engagement and Student Information sample Excel files; session slide deck.
  • Analyzing Quantitative Data. Describes common strategies for analyzing quantitative data, particularly descriptive statistics, using Excel and Google Sheets. Supporting materials: sample data for analysis (Excel file); in-depth quantitative data analysis workshop by Kevin Fomalont (video and slides); session slide deck.
  • Using Excel and Google Sheets to Visualize Data. Describes and demonstrates how to create charts and graphs using Excel and Google Sheets, including a discussion of matching chart types to data. Supporting materials: sample Excel file; session slide deck.
  • Using Tableau to Visualize Data. Provides a basic overview of Tableau, what it is and how it can help explore trends and nuances in a dataset, and demonstrates how to connect data, create simple visualizations with filters, and combine visualizations into interactive dashboards that can be shared with others. Supporting materials: sample Excel file; in-depth visualization in Tableau workshop by Kevin Fomalont (video and slides); session slide deck.

6. Act

This module focuses on reflecting, communicating, and acting on the results of library assessment projects.

  • Reflecting on Results. Describes practices for reflecting on the results of library assessment projects. Supporting materials: session slide deck.
  • Communicating Results: Identifying Audiences & Messages. Describes strategies for identifying audiences and crafting messages to communicate results of library assessment projects. Supporting materials: Identifying Audiences and Messages (worksheet); session slide deck.
  • Communicating Results: Sharing Results with Stakeholders. Describes purposes of and techniques for communicating the results of library assessment projects to stakeholders. Supporting materials: Inclusive Language (18F Content Guide) and Inclusive Language Guidelines (American Psychological Association); in-depth reporting qualitative research workshop with Margaret Roller (video and slides); in-depth survey reporting workshop with Kevin Fomalont (video and slides); session slide deck.
  • Realizing Outcomes of Assessment: Decision-Making and Action Taking. Describes possible outcomes of the overall assessment process. Supporting materials: Making Decisions and Taking Action (worksheet); session slide deck.

This project (LG-18-19-0092) was made possible in part by the Institute of Museum and Library Services (IMLS). The views, findings, conclusions, or recommendations expressed in this project do not necessarily represent those of IMLS, the primary source of federal support for the nation’s libraries and museums. To learn more, visit www.imls.gov.
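The "Considering Your Sample" session above points to Krejcie and Morgan (1970); as a companion, the sketch below implements their widely used finite-population sample-size formula with the conventional defaults (95% confidence, an assumed population proportion of 0.5, and a 5% margin of error). The example population figure is invented for illustration.

```python
# Sketch of the Krejcie & Morgan (1970) sample-size formula referenced in the
# "Considering Your Sample" module. The parameter defaults below (95%
# confidence, p = 0.5, 5% margin of error) are the conventional choices.
import math

def krejcie_morgan(population: int,
                   chi_square: float = 3.841,  # chi-square for df = 1 at 95% confidence
                   p: float = 0.5,             # assumed population proportion
                   margin: float = 0.05) -> int:
    """Required sample size for a finite population."""
    numerator = chi_square * population * p * (1 - p)
    denominator = margin ** 2 * (population - 1) + chi_square * p * (1 - p)
    return math.ceil(numerator / denominator)

# Example: a survey of a campus community of 10,000 library users.
print(krejcie_morgan(10_000))  # roughly 370 respondents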


Perspective | Open access | Published: 04 October 2019

Engaging with research impact assessment for an environmental science case study

  • Kirstie A. Fryirs   ORCID: orcid.org/0000-0003-0541-3384 1 ,
  • Gary J. Brierley   ORCID: orcid.org/0000-0002-1310-1105 2 &
  • Thom Dixon   ORCID: orcid.org/0000-0003-4746-2301 3  

Nature Communications, volume 10, Article number: 4542 (2019)


Subjects: Environmental impact, Research management

An Author Correction to this article was published on 08 November 2019


Impact assessment is embedded in many national and international research rating systems. Most applications use the Research Impact Pathway to track the inputs, activities, outputs and outcomes of an invention or initiative in order to assess impact beyond scholarly contributions to an academic research field (i.e., benefits to environment, society, economy and culture). Existing approaches emphasise easy-to-attribute 'hard' impacts and fail to include a range of 'soft' impacts that are less easy to attribute, yet are often a dominant part of the impact mix. Here, we develop an inclusive 3-part impact mapping approach. We demonstrate its application using an environmental initiative.


Introduction

Universities around the world are increasingly required to demonstrate and measure the impact of their research beyond academia. The Times Higher Education (THE) World University Rankings now include a measure of knowledge transfer and impact as an indicator of an institution’s quality, and THE released its inaugural university impact rankings in 2019. With the global rise of impact assessment, most nations adopt a variant of the Organisation for Economic Cooperation and Development (OECD) definition of impact 1 : “the contribution that research makes to the economy, society, environment or culture, beyond the contribution to academic research.” Yet research impact mapping provides benefits beyond merely meeting the requirements for assessment 1 . It provides an opportunity for academics to reflect on and consider the impact their research can, and should, have on the environment, our social networks and wellbeing, our economic prosperity and our cultural identities. If considered at the development stage of research practices, the design and implementation of impact mapping procedures and frameworks can provide an opportunity to better plan for impact and create an environment where impact is more likely to be achieved.

Almost all impact assessments use variants of the Research Impact Pathway (Fig. 1 ) as the conceptual framework and model with which to document, measure and assess the environmental, social, economic and cultural impacts of research 1 . The Pathway starts with inputs, followed by activities; outputs and outcomes are produced, and these lead to impact. Writing for Nature Outlook: Assessing Science , Morgan 2 reported on how Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO) mapped impact using this approach. However, the literature contains very few worked examples to guide academics and co-ordinators in the process of research impact mapping. This is particularly evident for environmental initiatives and innovations 3 , 4 .

Here we provide a new, 3-part impact mapping approach that can accommodate non-linearity in the impact pathway and can more broadly include and assess both ‘hard’ impacts, those that can be directly attributed to an initiative or invention, and ‘soft’ impacts, those that can be indirectly attributed to an initiative or invention. We then present a worked example for an environmental innovation called the River Styles Framework, developed at Macquarie University, Sydney, Australia. The River Styles Framework is an approach to the analysis, interpretation and application of geomorphic insights into river landscapes as a tool to support management applications 5 , 6 . We document and map how this Framework has shaped, and continues to shape, river management practice in various parts of the world. Through mapping impact we demonstrate how the River Styles Framework has contributed to environmental, social and economic benefits at local, national and international scales. In the terms used by Cvitanovic and Hobday (2018) 3 in Nature Communications, this case study might be considered a ‘bright spot’ that sits at the environmental science-policy-practice interface and is representative of examples that are seldom documented.

Figure 1. The Research Impact Pathway (modified from ref. 2 )

This case study is presented from the perspective of the researchers who developed the River Styles Framework, and the University Impact co-ordinator who has worked with the researchers to document and measure the impact as part of ex post assessment 1 , 7 . We highlight challenges in planning for impact, as the research impact pathway evolves and entails significant lag times 8 . We discuss challenges that remain in the mapping process, particularly when trying to measure and attribute ‘soft’ impacts such as a change in practice or philosophy, an improvement in environmental condition, or a reduction in community conflict to a particular initiative or innovation 9 . We then provide a personal perspective of the challenges faced and lessons learnt in applying and mapping research impact so that others, particularly in the environmental sciences and related interdisciplinary fields, can undertake similar exercises for their own research impact assessments.

Brief background on research impact assessment and reporting

Historical reviews of research policy record long-term shifts towards the incorporation of concerns for research impact within national funding agencies. In the 1970s the focus was on ‘research utilisation’ 10 ; more recently it has been on ‘knowledge mobilisation’ 11 . The focus is always on seeking to understand the actual manner and pathways through which research becomes incorporated into policy, and through which research has an economic, social, cultural and environmental impact. Often these are far from linear circumstances, entailing multiple pathways.

Since the 1980s, higher education systems around the world have been transitioning to performance-based research funding systems (PRFS). The initial application of the PRFS in university contexts occurred as part of the first Research Assessment Exercise (RAE) in the United Kingdom in 1986 12 . PRFS systems have been designed to reward and perpetuate the highest quality research, presenting notionally rational criteria with which to support more intellectually competitive institutions 13 . The United Kingdom’s (UK) RAE was replicated in Australia as the Research Quality Framework (RQF), and more recently as the Excellence in Research for Australia (ERA) assessment. In 2010, 15 countries engaged in some form of PRFS 14 . These frameworks focus almost solely on academic research performance and productivity, rather than the contribution and impact that research makes to the economy, society, environment or culture.

In the last decade, research policy frameworks have increasingly focused on facilitating national prosperity through the transfer, translation and commercialisation of knowledge 15 , 16 , combined with the integration of research findings into government policy-making 17 . In 2009, the Higher Education Funding Council for England conducted a year-long review and consultation process regarding the structure of the Research Excellence Framework (REF) 18 . Following this review, in 2010 the Higher Education Funding Council for England (HEFCE) commissioned a series of impact pilot studies designed to produce narrative-style case studies by 29 higher education institutions. The pilot studies featured five units of assessment: clinical medicine, physics, earth systems and environmental sciences, social work and social policy, and English language and literature 12 . These pilot studies became the basis of the REF conducted in the UK in 2014 9 , 19 with research impact reporting comprising a 20% component of the overall assessment.

In Canada, in 2009 and from 2014 the Canadian Academy of Health Sciences and Manitoba Research, respectively, developed an impact framework and narrative outputs to evaluate the returns on investment in health research 20 , 21 . Similarly the UK National Institute for Health Research (NIHR) regularly produces impact synthesis case studies 22 . In Ireland, in 2012, the Science Foundation Ireland placed research impact assessment at the core of its scientific and engineering research vision, called Agenda 2020 23 . In the United States, in 2016, the National Science Foundation, National Institute of Health, US Department of Agriculture, and US Environmental Protection Authority developed a repository of data and tools for assessing the impact of federal research and development investments 24 . In 2016–2017, the European Union (EU) established a high-level group to advise on how to maximise the impact of the EU’s investment in research and innovation, focussing on the future of funding allocation and the implementation of the remaining years of Horizon 2020 25 . In New Zealand, in 2017, the Ministry of Business, Innovation and Employment released a discussion paper proposing the introduction of an impact ‘pillar’ into the science investment system 26 . In 2020, Hong Kong will include impact assessment in their Research Assessment Exercise (RAE) for the first time 27 . Other countries including Denmark, Finland and Israel have scoped the use of research impact assessments of their major research programs as part of the Small Advanced Economies Initiative 28 .

In 2017, the Australian Research Council (ARC) conducted an Engagement and Impact Assessment Pilot (EIAP) 7 . While engagement is not analogous to impact, it is an evidential mechanism that elucidates the potential beneficiaries, stakeholders, and partners of academic research 12 , 16 . In addition to piloting narrative-style impact case study reporting, the EIAP characterised and mapped patterns of academic engagement with end users that create and enable research impact. The 2017 EIAP assessed a selection of disciplines for engagement, and a selection of disciplines for impact. Environmental science was a discipline selected for the impact pilot. These pilots became the basis for the Australian Engagement and Impact (EI) assessment in 2018 7 that ran in parallel with the ERA, and from which the case study in this paper is drawn.

Research impact assessment does not just include ex post reporting that can feed into a national PRFS. A large component of academic impact assessment involves ex ante impact reporting in research funding applications. In both the UK and Australia, the perceived merit of a research funding application has been linked in part to its planning and potential for external research impact. In the UK this is labelled a ‘Pathways to Impact’ statement (used by the Research Council UK), in Australia this is an Impact statement (used by the ARC), with a national interest statement also implemented in 2018. These statements explicitly draw from the ‘pathway to impact’ model which simplifies a direct and linear relationship between research excellence, research engagement, and research impact 29 . These ex ante impact statements can be difficult for academics, especially early career researchers, if they do not understand the process, nature and timing of impact. This issue exists in ex post impact reporting and assessment as well, with many researchers finding it difficult to supply evidence that directly or indirectly links their research to impacts that may have taken decades to manifest 1 , 7 , 8 . Also, the simplified linearity of the Research Impact Pathway model makes it difficult to adequately represent the transformation of research into impact.

For research impact statements and assessments to be successful, researchers need to understand the patterns and pathways by which impact occurs prior to articulating how their own research project might achieve impact ex ante, or has had impact ex post. The quality of research impact assessment will improve if researchers and funding agencies understand the types and qualities of impact that can reasonably be expected to arise from a research project or initiative.

Given the plethora of interest in, and a growing global movement towards, both ex ante and ex post research impact assessment and reporting, it is surprising that very few published examples demonstrate how to map research impact. Even in the business, economics and corporate sectors where impact assessment and reporting is common practice 30 , 31 , 32 , very few published examples exist. This hinders prospects for researchers and co-ordinators to develop a more critical understanding of impact, inhibiting more nuanced understandings of the pathways to impact model. Mapping impact networks and recording a cartography of impact for research projects and initiatives provides an appropriate basis to conduct such tasks. This paper provides a new method by which this can be achieved.

The research impact pathway and impact mapping

Many impact assessment frameworks around the world have common characteristics, often structured around the Research Impact Pathway model (Fig. 1 ). This model can be identified in a series of 2009 and 2016 Organisation for Economic Cooperation and Development (OECD) reports that investigated the mechanisms of impact reporting 1 , 33 . The Research Impact Pathway is presented as a sequence of steps by which impact is realised. This pathway can be visualised for an innovation or initiative using an impact mapping approach. It starts with inputs that can include funding, staff, background intellectual property and support structures (e.g., administration, facilities). This is followed by activities or the ‘doing’ elements. This includes the work of discovery (i.e., research) and the translation—i.e., courses, workshops, conferences, and processes of community and stakeholder engagement.

Outputs are the results of inputs and activities. They include publications, reports, databases, new intellectual property, patents and inventions, policy briefings, media, and new courses or teaching materials. Inputs, activities and outputs can be planned and somewhat controlled by the researcher, their collaborators and their organisations (universities). Outcomes then occur under the direct influence of the researcher(s), with intended results. These may include commercial products and licences, job creation, new contracts, grants or programs, citations of work, new companies or spin-offs, and new joint ventures and collaborations.

Impacts (sometimes called benefits) tend to occur via uptake and use of an innovation or initiative by independent parties under indirect (or no) influence from the original researcher(s). Impacts can be ‘hard’ or ‘soft’ and have intended and unintended consequences. They span four main areas outside of academia, including environmental, social, economic and cultural spaces. Impacts can include improvements in environmental health, quality of life, changes in industry or agency philosophy and practice, implementation or improvement in policy, improvements in monitoring and reporting, cost-savings to the economy or industry, generation of a higher quality workforce, job creation, improvements in community knowledge, better inter-personal relationships and collaborations, beneficial transfer and use of knowledge, technologies, methods or resources, and risk-reduction in decision making.

The challenge: applying the research impact pathway to map impact for a case study

The River Styles Framework 5 , 34 aligns with UN Sustainable Development Goals of Life on Land and Clean Water and Sanitation that have a 2020 target to “ensure the conservation, restoration and sustainable use of terrestrial and inland freshwater ecosystems and their services” and a 2030 target to urgently “implement integrated water resources management at all levels” 35 .

The River Styles Framework is a catchment-scale approach to the analysis and interpretation of river geomorphology 36 . It is an open-ended, generic approach for use in any landscape or environmental setting. The Framework has four stages (see refs. 5 , 37 , 38 , 39 ): (1) Analysis of river types, behaviour and controls, (2) Assessment of river condition, (3) Forecasting of river recovery potential, and (4) Vision setting and prioritisation for decision making.

River Styles Framework development, uptake, extension and training courses have contributed to a global change in river management philosophy and practice, resulting in improved on-ground river condition, use of geomorphology in river management, and end-user professional development. Using the River Styles Framework has changed the way river management decisions are made and the level of intervention and resources required to reach environmental health targets. This has been achieved through the generation of catchment-scale and regional-level templates derived from use of the Framework 6 . These templates are integrated with other biophysical science tools and datasets to enhance planning, monitoring and forecasting of freshwater resources 6 . The Framework is based on foundation research on the form and function of streams and their interaction with the landscape through which they flow (fluvial geomorphology) 5 , 40 .

The Framework has a pioneering structure and coherence due to its open-ended and generic approach to river analysis and interpretation. Going well beyond off-the-shelf imported manuals for river management, the Framework has been adopted because of its innovative approach to geomorphic analysis of rivers. The Framework is tailored for the landscape and institutional context of any given place to produce scaffolded, coherent and consistent datasets for catchment-specific decision making. Through on-ground communication of place-based results, the application of the Framework spans local, state, national and international networks and initiatives. The quality of the underlying science has been key to generating the confidence required in industry and government to adopt geomorphology as a core scientific tool to support river management in a range of geographical, societal and scientific contexts 6 .

The impact of this case study spans conceptual use, instrumental use and capacity building 4 , defined respectively as shaping ways of thinking and alerting policy makers and practitioners to an issue; direct use of research in policy and planning decisions; and the education, training and development of end-users 4 , 41 , 42 . The River Styles Framework has led to the establishment of new decision-making processes while also changing philosophy and practice so that on-ground impacts can be realised.

Impact does not just occur at one point in time. Rather, it comes and goes, or builds and is sustained. How this is represented and measured is challenging, particularly for an environmental case study, and especially for an initiative built around a Framework where a traditional ‘product’, ‘widget’, or ‘invention’ is not produced 4 . More traditional metrics-based indicators, such as the number of lives saved or the amount of money generated, cannot be deployed for these types of case studies 4 , 9 . It is particularly challenging to unravel the commercial value and benefits of adopting and using an initiative (or Framework) that is part of a much bigger, international paradigm shift in river management philosophy and practice.

Similarly, how do you measure environmental, social, economic or cultural impacts of an initiative where the benefits can take many years (and in the case of rivers, decades) to emerge, and how do you then link and attribute those impacts directly with the design, development, use and extension of that initiative in many different places at many different times? For the River Styles Framework, on-ground impacts in terms of improved river condition and recovery are occurring 43 , but other environmental, social and economic benefits may be years or decades away. Impactful initiatives in themselves often reshape the contextual setting that then frames the next phase of science and management practices which leads to further implications for policy and institutional settings, and for societal (socio-cultural) and environmental benefits. This is currently the case in assessing the impact of the River Styles Framework.

The method: a new, 3-part impact mapping approach

Using the River Styles framework as an environmental case study, Fig. 2 presents a 3-part impact mapping approach that contains (1) a context strip, (2) an impact map, and (3) soft impact intensity strips to capture the scope of the impact and the conditions under which it has been realised. This approach provides a template that can be used or replicated by others in their own impact mapping exercises 44 .

Figure 2. The research impact map for the River Styles Framework case study. The map contains three parts: a context strip, an impact map, and soft impact intensity strips

The cartographic approach to mapping impact shown in Fig. 2 provides a mechanism to display a large amount of complex information and interactions in a style that conveys and communicates an immediate snapshot of the research impact pathway, its components and associated impacts. The map can be analysed to identify patterns and interactions between components as part of ex post assessment, and as a basis for ex ante impact forecasting.

The 3-part impact map output is produced in an interactive online environment, acknowledging that impact maps are live, open-ended documents that evolve as new impacts emerge and inputs, activities, outputs and outcomes continue. The map changes when activities, outputs or outcomes that the developers had forgotten, or considered to be peripheral, later re-appear as having been influential to a stakeholder, community or network not originally considered as an end-user. Such activities, outputs and outcomes can be inserted into a live map to broaden its base and understand the impact. Also, by clicking on each icon on the map, pop-up bubbles contain details that are specific to each component of the case study. This functionality can also be used to journal or archive important information and evidence in the ‘back-end’ of the map. Such evidence is often required, or called upon, in research impact assessments. Figure 2 only provides a static reproduction of the map output for the River Styles Framework. The fully worked, interactive, River Styles Framework impact map can be viewed at https://indd.adobe.com/view/c9e2a270–4396–4fe3-afcb-be6dd9da7a36 .
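As an illustration of how such a live map might be stored behind the scenes, the sketch below represents each hexagon as a record with its pathway stage, year, and attached evidence. The schema and the example entries are assumptions made for illustration only; they do not describe the authors' actual interactive implementation.

```python
# Illustrative sketch of how entries on an interactive impact map might be
# stored so that the evidence behind each icon can be retrieved on demand.
# The schema and the entries are placeholders, not the published map's data.
impact_map = [
    {
        "stage": "activity",   # input / activity / output / outcome / impact
        "label": "Short course on geomorphic river assessment",
        "year": 2002,
        "international": False,
        "evidence": ["course-advertisement.pdf", "participant-feedback.xlsx"],
    },
    {
        "stage": "impact",
        "label": "Framework named in a state river management policy",
        "year": 2012,
        "international": False,
        "hard": True,          # directly attributable ('hard') impact
        "evidence": ["policy-document-citation.pdf"],
    },
]

# Example query: list all evidence attached to hard impacts.
for node in impact_map:
    if node.get("hard"):
        print(node["label"], "->", node["evidence"])
```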

Context is a key driver of research impact 1 , 45 . Context can provide goals for research agendas and impact that feeds into ex ante assessments, or provide a lens through which to analyse the conditions within which certain impacts emerged and occurred as part of ex post assessment. Part 1 of our mapping approach produces a context strip that situates the case study (Fig. 2 ). This strip is used to document settings occurring outside of academia before, during and throughout the case study. Context can be local, national or global and examples can be gathered from a range of sources such as reports, the media and personal experience. For the River Styles case study only key context moments are shown. Context for this case study is the constantly changing communities of practice in global river restoration that are driven by (or inhibited by) the environmental setting (coded with a leaf symbol), policy and institutional settings (coded with a building symbol), social and cultural settings (coded with a crowd symbol), and economic settings (coded with a dollar symbol). For most case studies, these extrinsic setting categories will be similar, but others can be added to this part of the map if needed.

Part 2 of our mapping approach produces an impact map using the Research Impact Pathway (Fig. 1 ). This impact map (Fig. 2 ) documents the time-series of inputs (coded with a blue hexagon), activities (coded with a green hexagon), outputs (coded with a yellow hexagon), outcomes (coded with a red hexagon) and impacts (coded with a purple hexagon) that occurred for the case study. Heavier bordered hexagons and intensity strips represent international aspects and uptake. To start, only the primary inputs, activities, outputs and outcomes are mapped. A hexagon appears when there is evidence that an input, activity, output or outcome has occurred. Evidence includes event advertisements, reports, publications, website mentions, funding applications, awards, personnel appointments and communications products.
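
To make this concrete, the sketch below (in Python, and purely illustrative rather than the authors' actual tooling) shows one way such pathway components could be recorded so that a hexagon is only plotted once supporting evidence exists. The categories mirror the colour coding above; the field names and example entries are hypothetical.

```python
# Illustrative sketch only (not the authors' tooling): recording pathway
# components so that a hexagon is drawn only when evidence exists.
# Categories mirror the colour coding in Fig. 2; entries are hypothetical.
from dataclasses import dataclass
from typing import List

CATEGORY_COLOURS = {"input": "blue", "activity": "green", "output": "yellow",
                    "outcome": "red", "impact": "purple"}

@dataclass
class PathwayComponent:
    category: str        # one of CATEGORY_COLOURS
    year: int
    label: str
    evidence: List[str]  # reports, publications, website mentions, awards, etc.
    international: bool = False  # drawn with a heavier border on the map

def mappable(components):
    """Keep only components backed by at least one piece of evidence."""
    return [c for c in components if c.category in CATEGORY_COLOURS and c.evidence]

entries = [
    PathwayComponent("output", 2005, "River Styles book", ["Brierley & Fryirs 2005"]),
    PathwayComponent("impact", 2012, "Mention in a state river policy", []),  # no evidence yet
]
for c in mappable(entries):
    print(c.year, CATEGORY_COLOURS[c.category], c.label)
```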

However, in conducting this standard mapping exercise it soon became evident that it is difficult to map and attribute impacts, particularly for an initiative that has a wide range of both direct and indirect impacts. To address this, our approach distinguishes between ‘hard’ impacts and ‘soft’ impacts. Hard impacts can be directly attributed to an initiative or invention, whereas soft impacts can be indirectly attributed to an initiative or invention. The inclusion of soft impacts is critical as they are often an important and sometimes dominant part of the impact mix. Both quantitative and qualitative measures and evidence can be used to attribute hard or soft impacts. There is not a direct one-to-one relationship between quantitative measurement of hard impacts and qualitative appraisal of soft impacts.

Hard impacts are represented as purple hexagons in the body of the impact map. For the River Styles Framework we have only placed a purple hexagon on the impact map where the impact can be ‘named’ and for which there is ‘hard’ evidence (in the form of a report, policy, strategic plan or citation) that directly mentions and therefore attributes the impact to River Styles. Most of these are multi-year impacts and the position of the hexagons on the map is noted at the first mention.

For many case studies, particularly those that impact on the environment, society and culture, attributing impact directly to an initiative or invention is not necessarily easy or straightforward. To address this, our approach contains a third element, soft impact intensity strips (Fig. 2 ), to recognise, document, capture and map the extent and influence of impact created by an initiative or invention. This is represented as a heat intensity chart (coded as a purple bar of varying intensity) and organised under the environmental, social and economic categories that are often used to measure Triple-Bottom-Line (TBL) benefits in sustainability and research and development (R&D) reporting (e.g., refs. 7 , 46 ). Within these broad categories, soft impacts are categorised according to the dimensions of impacts of science used by the OECD 1 . These include environmental, societal, cultural, economic, policy, organisational, scientific, symbolic and training impacts. Each impact strip for soft impacts uses different levels of purple shading (to match the purple hexagon colour in the impact map) to visualise the timing and intensity of soft impacts. For the River Styles Framework, the intensity of the purple colour is used to show those impacts that have been most impactful (darker purple), the timing of initiation, growth or step-change in intensity of each impact, the rise and wane of some impacts and the longevity of others. A heavy black border is used to note the timing of internationalisation of some impacts. This heat intensity chart was constructed by quantitatively representing qualitative sentiment in testimonials, interviews, course evaluations and feedback, surveys and questionnaires, acknowledgements and recognitions, documentation of collaborations and networks, use of River Styles concepts, and reports on the development of spin-off frameworks. Quantitative representation of qualitative sentiment was achieved using time-series keyword searches and expert judgement. These are just two methods by which the level of heat intensity can be measured and assigned 9 .
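
As an illustration of the time-series keyword search idea, the following minimal Python sketch (not the method used to build Fig. 2; documents, keywords and thresholds are hypothetical) counts keyword mentions per year in a corpus of testimonials or reports and bins the counts into shading levels for a soft impact intensity strip.

```python
# Illustrative sketch only: deriving a soft impact intensity strip from
# time-series keyword counts. Documents, keywords and thresholds are
# hypothetical placeholders, not the data behind Fig. 2.
from collections import Counter

def intensity_strip(documents, keywords, start_year, end_year):
    """Count keyword mentions per year and bin them into shading levels."""
    counts = Counter()
    for year, text in documents:                 # (year, free text) pairs
        lowered = text.lower()
        counts[year] += sum(lowered.count(k.lower()) for k in keywords)
    strip = {}
    for year in range(start_year, end_year + 1):
        n = counts.get(year, 0)
        if n == 0:
            strip[year] = "none"
        elif n < 5:
            strip[year] = "light"    # lighter purple shading
        elif n < 20:
            strip[year] = "medium"
        else:
            strip[year] = "dark"     # darkest purple: most impactful years
    return strip

docs = [(2007, "The River Styles approach changed our prioritisation of reaches."),
        (2013, "River Styles training improved staff capability and networks.")]
print(intensity_strip(docs, ["River Styles", "prioritisation"], 2006, 2015))
```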

The outcome: impact of the River Styles Framework case study

Figure 2 , and its interactive online version, present the impact map for the River Styles Framework initiative and Table 1 documents the detail of the River Styles impact story from pre-1996 to post-2020. The distribution of colour-coded hexagons and the intensity of purple on the soft impact intensity strips on Fig. 2 demonstrates the development and maturation of the initiative and the emergence of the impact.

In the first phase (pre-1996–2002), blue inputs, green activities and yellow output hexagons dominate. The next phase (2002–2005) was an intensive phase of output production (yellow hexagons). It is during this phase that red outcome hexagons appear and intensify. From 2006, purple impact hexagons appear for the first time, representing hard impact outside of academia. Soft impacts also start to emerge more intensely (Fig. 2 ). 2008–2015 represents a phase of domestic consolidation of yellow outputs, red outcomes and purple impacts, and the start of international uptake. Some of this impact is under direct influence and some is independent of the developers of the River Styles Framework (Fig. 1 ). Purple impact hexagons become more numerous during the 2008–2015 period and soft impacts intensify further. 2016–2018 (and beyond) represents a phase of extension into international markets, collaborations and impact (heavier bordered hexagons and intensity strips; Fig. 2 ). The domestic impacts that emerged most intensively post-2006 continue in the background. Green activity hexagons re-appear during this period, much like the 1996–2002 phase, but in an international context. Foundational science (green activity hexagons) re-emerges, particularly internationally with new collaborations. At the same time, yellow outputs and red outcomes continue.

For the River Styles case study the challenge still remains one of how to adequately attribute, measure and provide evidence for soft impacts 4 that include:

a change in river management philosophy and practice

an improvement in river health and conservation of threatened species

the provision of an operational Framework that provides a common and consistent approach to analysis

the value of knowledge generation and databases for monitoring river health and informing river management decision-making for years to come

the integration into, and improvement in, river management policy

a change in prioritisation that reduces risk in decision-making and cost savings on-the-ground

professional development to produce a better trained, higher quality workforce and increased graduate employability

the creation of stronger networks of river professionals and a common suite of concepts that enable communication

more confident and appropriate use of geomorphic principles by river management practitioners

an improvement in citizen knowledge and reduced community conflict in river management practice

Lessons learnt by applying research impact mapping to a real case study

When applying the Research Impact Pathway and undertaking impact mapping for a case study it becomes obvious that generating and realising impact is not a linear process and it is never complete, and in many aspects it cannot be planned 8 , 9 , 29 . Rather, the pathway has many highways, secondary roads, intersections, some dead ends or cul-de-sacs and many unexpected detours of interest along the way.

Cycles of input, activity, outputs, outcomes and impact occur throughout the process. There are phases where greater emphasis is placed on inputs and activities, or phases of productivity that produce outputs and outcomes, and there are phases where the innovation or initiative gains momentum and produces a flurry of benefits and impacts. However, throughout the journey, inputs, activities, outputs and outcomes are always occurring, and the impact pathway never ends. Some impacts come and go while others are sustained.

The saying “being in the right place at the right time with the right people” has some truth. Impact can be probabilistically generated ex ante by the researcher(s) regularly placing themselves and their outputs in key locations or ‘rooms’ and in ‘moments’ where the chance of non-academic translation is high 47 . Context is also critical 45 . Economic, political, institutional, social and environmental conditions need to come together if an innovation or initiative is to ‘get off the ground’, gain traction and lead to impact (e.g., Fig. 2 ). Ongoing and sustained support is vital. An innovation funded 10 years ago may not receive funding today, or an innovation funded today may not lead to impact unless the right sets of circumstances and support are in place. This is, in part, a serendipitous process that involves the calculated creation of circumstances aligned to evoke the ‘black swan’ event of impact 48 . The ‘black swan’ effect, coined by Nassim Nicholas Taleb, is a metaphor for an unanticipated event that becomes reinterpreted through the benefit of hindsight, or alternatively, an event that exists ‘outside the model’. For example, black swans were presumed not to exist by Europeans until they were encountered in Australia and scientifically described in 1790. Such ‘black swan’ events are a useful device in ex post assessment for characterising those pivotal moments when a research program translates into research impact. While the exact nature of such events cannot be anticipated, by understanding the ways in which ‘black swan’ events take place in the context of research impact, researchers can manufacture scenarios that optimise their probability of provoking a ‘black swan’ event and therefore translating their research project into research impact, albeit in an unexpected way. One ‘black swan’ event for the River Styles Framework occurred between 1996 and 2002 (Table 1 ). Initial motivations for developing the Framework reflected inappropriate use of geomorphic principles derived elsewhere to address management concerns for distinctive river landscapes and ecosystems in Australia. Although initial applications and testing of the Framework were local (regional-scale), advice by senior-level personnel in the original funding agency, Land and Water Australia (blue input hexagon in 1997; Fig. 2 ), suggested we make the principles generic so that the Framework could be used in any landscape setting. The impact of this ‘moment’ was only apparent much later on, when the Framework was adopted to inform place-based, catchment-specific river management applications in various parts of the world.

What is often not recognised is the time lag in the research impact process 9 . Depending on the innovation or initiative, this is, at best, a decadal process. Of critical importance is setting the foundations for impact. The ‘gem of an idea’ needs to be translated into a sound program of research, testing (proof of concept), peer-review and demonstration. These foundations must generate a level of confidence in the innovation or initiative before uptake. A level of branding may be required to make the innovation or initiative stand out from the crowd. Drivers are required to incentivise academics, both internal and external to their University setting, encouraging them to go outside their comfort zone to apply and translate their research in ‘real-world’ settings. Maintaining passion, patience and persistence throughout the journey are some of the most hidden and unrecognised parts of this process.

Some impacts are not foreseeable and surprises are inevitable. Activities, outputs and outcomes that may initially have seemed like a dead end, often re-appear in a different context or in a different network. Other outputs or outcomes take off very quickly and are implemented with immediate impact. Catalytic moments are sometimes required for uptake and impact to be realised 8 . These surprises are particularly obvious when an innovation or initiative enters the independent uptake stage, called impact under indirect influence on Fig. 1 . In this phase the originating researchers, developers or inventors are often absent or peripheral to the impact process. Other people or organisations have the confidence to use the innovation or initiative (as intended, or in some cases not as intended), and find new ways of taking the impact further. The innovation or initiative generates a life of its own in a snowball effect. Independent uptake is not easily measured, but it is a critical indicator of impact. Unless the foundations are solid and sound, prospects for sustained impact are diminished.

The maturity and type of impact also vary in different places at different times. This is particularly the case for innovations and initiatives where local and domestic uptake is strong, but international impact lags. Some places may be well advanced on the uptake part of the impact journey, firmly embedding the benefits while developing new extensions, add-ons and spin-offs with inputs and activities. Elsewhere, the uptake will only have just begun, such that outputs and outcomes are the primary focus for now, with the aim of generating impact soon. In some instances, authorities and practitioners are either unaware or are yet to be convinced that the innovation or initiative is relevant and useful for their circumstances. In these places the focus is on the inputs and activity phases necessary to generating outputs and outcomes relevant to their situation and context. Managing this variability while maintaining momentum is critical to creating impact.

Future directions for the practice of impact mapping and assessment

The process of engaging with impact and undertaking impact mapping for an environmental case study has been a reflective, positive but challenging experience. Our example is typical of many of the issues that must be addressed when undertaking research impact mapping and assessments where both ‘hard’ and ‘soft’ impacts are generated. Our 3-part impact mapping approach helps deal with these challenges and provides a mechanism to visualise and enhance communication of research impact to a broad range of scientists and policy practitioners from many fields, including industry and government agencies, as well as citizens who are interested in learning about the tangible and intangible benefits that arise from investing in research.

Such impact mapping work cannot be undertaken quickly 44 , 45 . Lateral thinking is required about what research impact really means, moving beyond the perception in academia that outputs and outcomes equals impact 4 , 9 , 12 . This is not the case. The research impact journey does not end at outcomes. The real measure of research impact is when an initiative gains a ‘life of its own’ and is independently picked-up and used for environmental, social or economic benefit in the ‘real-world’. This is when an initiative exits from the original researcher(s) owning the entirety of the impact, to one where the researcher(s) have an ongoing contribution to vastly scaled-up sets of collective impacts that are no longer controlled by any one actor, community or network. Penfield et al. 9 relates this to ‘knowledge creep’ where new data, information or frameworks become accepted and get absorbed over time.

Mapping impact requires careful consideration of how an initiative is developed, emerges and is used, and of the benefits that result. This process, in its own regard, provides solid foundations for future planning and consideration of possible (or maybe unforeseen) opportunities to develop the impact further as part of ex ante impact forecasting 1 , 44 . Its value also lies in communicating and teaching others, using worked case studies, about what impact can mean, to demonstrate how it can evolve and mature, and outline the possible pathways of impact as part of ex post impact assessment 1 , 44 .

With greater emphasis being placed on impact in research policy and reporting in many parts of the world, it is timely to consider the level of ongoing support required to genuinely capture and assess impact over yearly and decadal timeframes 20 . Creation of environments and cultures in which impact can be incubated, nourished and supported aids effective planning, knowledge translation and engagement. Ongoing research is required to consider, more broadly and laterally, what is measured, what indicators are used, and the evidence required to assign attribution. This remains a challenge not just for the case study documented here, but for the process of impact assessment more generally 1 , 9 . Continuous monitoring of impacts (both intended and unintended) is needed. To do this requires support and systems to gather, archive and track data, whether quantitative or qualitative, and adequately build evidence portfolios 20 . A keen eye is needed to identify, document and archive evidence that may seem insignificant at the time, but can lead to a step-change in impact, or a re-appearance elsewhere on the pathway.

Impact reporting extends beyond traditional outreach and service roles in academia 16 , 19 . Despite the increasing recognition of the importance of impact and its permeation into academic lives, it is yet to be formally built into many academic and professional roles 9 . To date, the rewards are implicit rather than explicit 44 . Support is required if impact planning and reporting for assessment is to become a new practice for academics.

Managing the research impact process is vital, but it is also important to be open to new ideas and avenues for creating impact at different stages of the process. It is important to listen and to be attuned to developments outside of academia, and learn to live with the creative spark of uncertainty as we expect the unexpected!

Change history

08 November 2019

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

Organisation for Economic Cooperation and Development (OECD). Enhancing Research performance through Evaluation, Impact Assessment and Priority Setting  (Directorate for Science, Technology and Innovation, Paris, 2009). This is a ‘go-to’ guide for impact assessment in Research and Development, used in OECD countries .

Morgan, B. Income for outcome. Australia and New Zealand are experimenting with ways of assessing the impact of publicly funded research. Nat. Outlook 511 , S72–S75 (2014). This Nature Outlook article reports on how Australia’s Commonwealth Scientific and Research Organisation (CSIRO) mapped their research programs against impact classes using the Research Impact Pathway .

Cvitanovic, C. & Hobday, A. J. Building optimism at the environmental science-policy-practice interface through the study of bright spots. Nat. Commun. 9 , 3466 (2018). This Nature Communications paper presents a commentary on the key principles that underpin what are termed ‘bright spots’, case studies where science and research has successfully influenced and impacted on policy and practice, as a means to inspire optimism in humanity’s capacity to address environmental challenges .

Rau, H., Goggins, G. & Fahy, F. From invisibility to impact: recognising the scientific and societal relevance of interdisciplinary sustainability research. Res. Policy 47 , 266–276 (2018). This paper uses interdisciplinary sustainability research as a centrepiece for arguing the need for alternative approaches for conceptualising and measuring impact that recognise and capture the diverse forms of engagement between scientists and non-scientists, and diverse uses and uptake of knowledge at the science-policy-practice interface .

Brierley, G. J. & Fryirs, K. A. Geomorphology and River Management: Applications of the River Styles Framework . 398 (Blackwell Publications, Oxford, 2005). This book contains the full River Styles Framework set within the context of the science of fluvial geomorphology .

Brierley, G. J. et al. Geomorphology in action: linking policy with on-the-ground actions through applications of the River Styles framework. Appl. Geogr. 31 , 1132–1143 (2011).

Australian Research Council (ARC). EI 2018 Framework  (Commonwealth of Australia, Canberra, 2017). This document and associated website contains the procedures for assessing research impact as part of the Australian Research Council Engagement and Impact process, and the national report, outcomes and impact cases studies assessed in the 2018 round .

Matt, M., Gaunand, A., Joly, P.-B. & Colinet, L. Opening the black box of impact–Ideal type impact pathways in a public agricultural research organisation. Res. Policy 46 , 207–218 (2017). This article presents a metrics-based approach to impact assessment, called the Actor Network Theory approach, to systematically code variables used to measure ex-post research impact in the agricultural sector .

Penfield, T., Baker, M. J., Scoble, R. & Wykes, M. C. Assessment, evaluations, and definitions of research impact: a review. Res. Eval. 23 , 21–32 (2014). This article reviews the concepts behind research impact assessment and takes a focussed look at how impact assessment was implemented for the UK’s Research Excellence Framework (REF) .

Weiss, C. H. The many meanings of research utilization. Public Adm. Rev. 39 , 426–431 (1979).

Cooper, A. & Levin, B. Some Canadian contributions to understanding knowledge mobilisation. Evid. Policy 6 , 351–369 (2010).

Watermeyer, R. Issues in the articulation of ‘impact’: the responses of UK academics to ‘impact’ as a new measure of research assessment. Stud. High. Educ. 39 , 359–377 (2014).

Hicks, D. Overview of Models of Performance-based Research Funding Systems. In: Organisation for Economic Cooperation and Development (OECD), Performance-based Funding for Public Research in Tertiary Education Institutions: Workshop Proceedings . 23–52 (OECD Publishing, Paris, 2010). https://doi.org/10.1787/9789264094611-en (Accessed 27 Aug 2019).

Hicks, D. Performance-based university research funding systems. Res. Policy 41 , 251–261 (2012).

Etzkowitz, H. Networks of innovation: science, technology and development in the triple helix era. Int. J. Technol. Manag. Sustain. Dev. 1 , 7–20 (2002).

Perkmann, M. et al. Academic engagement and commercialisation: a review of the literature on university-industry relations. Res. Policy 42 , 423–442 (2013).

Leydesdorff, L. & Etzkowitz, H. Emergence of a Triple Helix of university—industry—government relations. Sci. Public Policy 23 , 279–286 (1996).

Higher Education Funding Council for England (HEFCE). Research Excellence Framework . Second consultation on the assessment and funding of research. London. https://www.hefce.ac.uk (Accessed 12 Aug 2019).

Smith, S., Ward, V. & House, A. ‘Impact’ in the proposals for the UK’s Research Excellence Framework: Shifting the boundaries of academic autonomy. Res. Policy 40 , 1369–1379 (2011).

Canadian Academy of Health Sciences (CAHS). Making an Impact. A Preferred Framework and Indicators to Measure Returns on Investment in Health Research  (Canadian Academy of Health Sciences, Ottawa, 2009). This report presents the approach to research impact assessment adopted by the health science industry in Canada using the Research Impact Pathway .

Research Manitoba. Impact Framework . Research Manitoba, Winnipeg, Manitoba, Canada. (2012–2019). https://researchmanitoba.ca/impacts/impact-framework/ (Accessed 3 June 2019).

United Kingdom National Institute for Health Research (UKNIHR). Research and Impact . (NIHR, London, 2019).

Science Foundation Ireland (SFI). Agenda 2020: Excellence and Impact . (SFI, Dublin, 2012).

StarMetrics. Science and Technology for America’s Reinvestment Measuring the Effects of Research on Innovation , Competitiveness and Science . Process Guide (Office of Science and Technology Policy, Washington DC, 2016).

European Commission (EU). Guidelines on Impact Assessment . (EU, Brussels, 2015).

Ministry of Business, Innovation and Employment (MBIE). The impact of science: Discussion paper . (MBIE, Wellington, 2018).

University Grants Committee. Panel-specific Guidelines on Assessment Criteria and Working Methods for RAE 2020. University Grants Committee, (Government of the Hong Kong Special Administrative Region, Hong Kong, 2018).

Harland, K. & O’Connor, H. Broadening the Scope of Impact: Defining, assessing and measuring impact of major public research programmes, with lessons from 6 small advanced economies . Public issue version: 2, Small Advanced Economies Initiative, (Department of Foreign Affairs and Trade, Dublin, 2015).

Chubb, J. & Watermeyer, R. Artifice or integrity in the marketization of research impact? Investigating the moral economy of (pathways to) impact statements within research funding proposals in the UK and Australia. Stud. High. Educ. 42 , 2360–2372 (2017).

Oliver Schwarz, J. Ex ante strategy evaluation: the case for business wargaming. Bus. Strategy Ser. 12 , 122–135 (2011).

Neugebauer, S., Forin, S. & Finkbeiner, M. From life cycle costing to economic life cycle assessment-introducing an economic impact pathway. Sustainability 8 , 428 (2016).

Legner, C., Urbach, N. & Nolte, C. Mobile business application for service and maintenance processes: Using ex post evaluation by end-users as input for iterative design. Inf. Manag. 53 , 817–831 (2016).

Organisation for Economic Cooperation and Development (OECD). Fact sheets: Approaches to Impact Assessment; Research and Innovation Process Issues; Causality Problems; What is Impact Assessment?; What is Impact Assessment? Mechanisms . (Directorate for Science, Technology and Innovation, Paris, 2016).

River Styles. https://riverstyles.com (Accessed 2 May 2019).

United Nations Sustainable Development Goals. https://sustainabledevelopment.un.org (Accessed 2 May 2019).

Kasprak, A. et al. The Blurred Line between form and process: a comparison of stream channel classification frameworks. PLoS ONE 11 , e0150293 (2016).

Fryirs, K. Developing and using geomorphic condition assessments for river rehabilitation planning, implementation and monitoring. WIREs Water 2 , 649–667 (2015).

Fryirs, K. & Brierley, G. J. Assessing the geomorphic recovery potential of rivers: forecasting future trajectories of adjustment for use in river management. WIREs Water 3 , 727–748 (2016).

Fryirs, K. A. & Brierley, G. J. What’s in a name? A naming convention for geomorphic river types using the River Styles Framework. PLoS ONE 13 , e0201909 (2018).

Fryirs, K. A. & Brierley, G. J. Geomorphic Analysis of River Systems: An Approach to Reading the Landscape . 345 (John Wiley and Sons: Chichester, 2013).

Meagher, L., Lyall, C. & Nutley, S. Flows of knowledge, expertise and influence: a method for assessing policy and practice impacts from social science research. Res. Eval. 17 , 163–173 (2008).

Meagher, L. & Lyall, C. The invisible made visible. Using impact evaluations to illuminate and inform the role of knowledge intermediaries. Evid. Policy 9 , 409–418 (2013).

Fryirs, K. A. et al. Tracking geomorphic river recovery in process-based river management. Land Degrad. Dev. 29 , 3221–3244 (2018).

Kuruvilla, S., Mays, N., Pleasant, A. & Walt, G. Describing the impact of health research: a Research Impact Framework. BMC Health Serv. Res. 6 , 134 (2006).

Barjolle, D., Midmore, P. & Schmid, O. Tracing the pathways from research to innovation: evidence from case studies. EuroChoices 17 , 11–18 (2018).

Department of Environment and Heritage (DEH). Triple bottom line reporting in Australia. A guide to reporting against environmental indicators . (Commonwealth of Australia, Canberra, 2003).

Le Heron, E., Le Heron, R. & Lewis, N. Performing Research Capability Building in New Zealand’s Social Sciences: Capacity–Capability Insights from Exploring the Work of BRCSS’s ‘sustainability’ Theme, 2004–2009. Environ. Plan. A 43 , 1400–1420 (2011).

Taleb, N. N. The Black Swan: The Impact of the Highly Improbable . 2nd edn. (Penguin, London, 2010).

Fryirs, K. A. & Brierley, G. J. Practical Applications of the River Styles Framework as a Tool for Catchment-wide River Management : A Case Study from Bega Catchment. (Macquarie University Press, Sydney, 2005).

Brierley, G. J. & Fryirs, K. A. (eds) River Futures: An Integrative Scientific Approach to River Repair . (Island Press, Washington, DC, 2008).

Fryirs, K., Wheaton, J., Bizzi, S., Williams, R. & Brierley, G. To plug-in or not to plug-in? Geomorphic analysis of rivers using the River Styles Framework in an era of big data acquisition and automation. WIREs Water . https://doi.org/10.1002/wat2.1372 (2019).

Rinaldi, M. et al. New tools for the hydromorphological assessment and monitoring of European streams. J. Environ. Manag. 202 , 363–378 (2017).

Rinaldi, M., Surian, N., Comiti, F. & Bussettini, M. A method for the assessment and analysis of the hydromorphological condition of Italian streams: The Morphological Quality Index (MQI). Geomorphology 180–181 , 96–108 (2013).

Rinaldi, M., Surian, N., Comiti, F. & Bussettini, M. A methodological framework for hydromorphological assessment, analysis and monitoring (IDRAIM) aimed at promoting integrated river management. Geomorphology 251 , 122–136 (2015).

Gurnell, A. M. et al. A multi-scale hierarchical framework for developing understanding of river behaviour to support river management. Aquat. Sci. 78 , 1–16 (2016).

Belletti, B., Rinaldi, M., Buijse, A. D., Gurnell, A. M. & Mosselman, E. A review of assessment methods for river hydromorphology. Environ. Earth Sci. 73 , 2079–2100 (2015).

Belletti, B. et al. Characterising physical habitats and fluvial hydromorphology: a new system for the survey and classification of river geomorphic units. Geomorphology 283 , 143–157 (2017).

O’Brien, G. et al. Mapping valley bottom confinement at the network scale. Earth Surf. Process. Landf. 44 , 1828–1845 (2019).

Sinha, R., Mohanta, H. A., Jain, V. & Tandon, S. K. Geomorphic diversity as a river management tool and its application to the Ganga River, India. River Res. Appl. 33 , 1156–1176 (2017).

O’Brien, G. O. & Wheaton, J. M. River Styles Report for the Middle Fork John Day Watershed, Oregon . Ecogeomorphology and Topographic Analysis Lab, Prepared for Eco Logical Research, and Bonneville Power Administration, Logan. 215 (Utah State University, Utah, 2014).

Marçal, M., Brierley, G. J. & Lima, R. Using geomorphic understanding of catchment-scale process relationships to support the management of river futures: Macaé Basin, Brazil. Appl. Geogr. 84 , 23–41 (2017).

Acknowledgements

We thank Simon Mould for building the online interactive version of the impact map for River Styles and Dr Faith Welch, Research Impact Manager at the University of Auckland for comments on the paper. The case study documented in this paper builds on over 20 years of foundation research in fluvial geomorphology and strong and lasting collaboration between researchers, scientists and managers at various universities and government agencies in many parts of the world.

Author information

Authors and Affiliations

Department of Environmental Sciences, Macquarie University, Sydney, NSW, 2109, Australia

Kirstie A. Fryirs

School of Environment, University of Auckland, Auckland, 1010, New Zealand

Gary J. Brierley

Research Services, Macquarie University, Sydney, NSW, 2109, Australia

T. Dixon

Contributions

K.F. conceived, developed and wrote this paper. G.B., T.D. contributed to, and edited, the paper. K.F., T.D. conceived, developed and produced the impact mapping toolbox.

Corresponding author

Correspondence to Kirstie A. Fryirs .

Ethics declarations

Competing interests

K.F. and G.B. are co-developers of the River Styles Framework. River Styles foundation research has been supported through competitive grant schemes and university grants. Consultancy-based River Styles short courses taught by K.F. and G.B. are administered by Macquarie University. River Styles contract research is administered by Macquarie University and University of Auckland. River Styles as a trade mark expires in May 2020. T.D. declares no conflict of interest.

Additional information

Peer review information Nature Communications thanks Barbara Belletti and Gary Goggins for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Fryirs, K.A., Brierley, G.J. & Dixon, T. Engaging with research impact assessment for an environmental science case study. Nat Commun 10 , 4542 (2019). https://doi.org/10.1038/s41467-019-12020-z

Received: 17 June 2019

Accepted: 15 August 2019

Published: 04 October 2019

DOI: https://doi.org/10.1038/s41467-019-12020-z

ACOLA (Australian Council of Learned Academies)

Research Assessment in Australia: Evidence for Modernisation  

At the request of the Australian Government, Australia’s Chief Scientist, Dr Cathy Foley AO PSM FAA FTSE, is seeking to understand how research metrics influence the diversity of Australia’s research workforce, and shape research quality, outputs, and impact. The Office of the Chief Scientist commissioned ACOLA to undertake a review of how research assessment affects the careers and publishing behaviours of Australian researchers. This review will inform the Chief Scientist’s advice to government to ensure that assessment of researchers in Australia:  

  • Recognises the valuable and essential contribution of the range of research activities, including mentorship, outreach, team science, innovation and commercialisation.  
  • Supports a diverse research workforce.  
  • Facilitates researcher mobility between research, industry and government, adequately recognising time spent working in industry or with industry partners.  
  • Accurately recognises research quality and research excellence, while supporting research integrity.  
  • Provides the right incentives for researchers and institutions to engage in high quality research, development and innovation.  

Through consultations with stakeholders, surveys that received more than 1137 responses from individuals and more than 50 research organisations, and consideration of international approaches to modern research assessment, the project reveals the reality of current research assessment practices in Australia.

Key messages:  

  • Researchers have expressed serious concerns about the current research assessment practices, noting that the assessments do not acknowledge their capabilities and contributions.  
  • The current assessment approaches do not incentivise diversity within the research workforce, hampering progress and inclusivity.  
  • Research culture and teams are being eroded as the focus is primarily on ‘publish or perish.’  
  • There is a lack of recognition for innovative and multidisciplinary research, resulting in a standardised approach that overlooks the unique nature of different research fields.  
  • Significant barriers exist for researchers in terms of career opportunities, collaboration, and mobility between sectors, hindering growth and stifling progress in research fields.  
  • Assessment practices heavily prioritise publication numbers, citations, and journal prestige, which perpetuates the status quo and hinders the success of underrepresented groups.  
  • The system has created a problematic relationship between universities, publishers, funders, and global ranking agencies, due to the pursuit of higher rankings and prioritising quantity over quality.  
  • Narrow research metrics stifle innovation and multidisciplinary research, and do not translate across different sectors.  
  • The current practices fail to recognise the value of experience outside of the research sector and hinder mobility between academia, industry, and government.  

It is crucial to find new ways to assess research careers to promote a diverse and effective research workforce, interdisciplinary collaboration, and career mobility.   

ACOLA has identified six pillars of modern research assessment to improve the practices in Australia moving forward:  

Figure: the six pillars of modern research assessment.

The findings from this report contribute to broader efforts by the Office of the Chief Scientist, the National Science and Technology Council (NSTC), and other government initiatives to support the knowledge economy and address barriers to retention and diversity in STEM fields.

Project outputs

  • Full report: Research Assessment in Australia: Evidence for Modernisation  [ PDF ] [ Word Document ]
  • Media Release: Australia’s systems for assessing research careers ‘not fit for purpose’ | Chief Scientist
  • The Modernising Research Assessment code book and data are now available; please contact us to request a copy.

Expert Working Group

Project Management

We also acknowledge the important contributions from external researchers who have assisted with this project.

Project Funding and Support

This work is funded by the Office of the Chief Scientist, via the Department of Industry, Science and Resources.

Acknowledgement of Country

ACOLA acknowledges all Aboriginal and Torres Strait Islander Traditional Custodians of Country and recognises their continuing connection to land, sea, culture and community. We pay our respect to Elders both past and present.

We acknowledge the Ngunnawal people, on whose land ACOLA’s office in Canberra is based, and the lands of the Gadigal (Sydney), Whadjuk (Perth), Turrbal (Brisbane) and Wurundjeri (Melbourne) Peoples, where ACOLA staff for this project were based.

The Essentials of Effective Project Risk Assessments

By Kate Eby | September 19, 2022

Performing risk assessments is vital to a project’s success. We’ve gathered tips from experts on doing effective risk assessments and compiled a free, downloadable risk assessment starter kit. 

Included on this page, you’ll find details on the five primary elements of risk, a comprehensive step-by-step process for assessing risk, tips on creating a risk assessment report, and editable templates and checklists to help you perform your own risk assessments.

What Is a Project Risk Assessment?

A project risk assessment is a formal effort to identify and analyze risks that a project faces. First, teams identify all possible project risks. Next, they determine the likelihood and potential impact of each risk.

During a project risk assessment, teams analyze both positive and negative risks. Negative risks are events that can derail a project or significantly hurt its chances of success. Negative risks become more dangerous when teams haven’t identified them or created a plan to deal with them.

A project risk assessment also looks at positive risks. Also called opportunities, positive risks are events that stand to benefit the project or organization. Your project team should assess those risks so they can seize on opportunities when they arise.

Your team will want to perform a project risk assessment before the project begins. They should also continually monitor for risks and update the assessment throughout the life of the project.

Some experts use the term project risk analysis to describe a project risk assessment. However, a risk analysis typically refers to the more detailed analysis of a single risk within your broader risk assessment. For expert tips and information, see this comprehensive guide to performing a project risk analysis. 

Project risk assessments are an important part of project risk management. Learn more from experts about best practices in this article on project risk management. For even more tips and resources, see this guide to creating a project risk management plan.

How Do You Assess Risk in a Project?

Teams begin project risk assessments by brainstorming possible project risks. Avoid missing important risks by reviewing events from similar past projects. Finally, analyze each risk to understand its time frame, probability, factors, and impact.  

Your team should also gather input from stakeholders and others who might have thoughts on possible risks. 

In general terms, consider these five important elements when analyzing risks (see the sketch after this list):

  • Risk Event: Identify circumstances or events that might have an impact on your project. 
  • Risk Time Frame: Determine when these events are most likely to happen. This might mean when they happen in the lifecycle of a project or during a sales season or calendar year. 
  • Probability: Estimate the likelihood of an event happening. 
  • Impact: Determine the impact on the project and your organization if the event happens. 
  • Factors: Determine the events that might happen before a risk event or that might trigger the event.
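
As a simple illustration, the sketch below (with hypothetical field names and example values, not part of any particular tool) shows how these five elements might be recorded for each risk in a lightweight register.

```python
# Hypothetical sketch of a lightweight risk register entry covering the five
# elements above; field names and example values are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskEntry:
    event: str                  # circumstance or event that could affect the project
    time_frame: str             # when in the project lifecycle it is most likely
    probability: float          # estimated likelihood, 0.0 to 1.0
    impact: str                 # what happens to the project if the event occurs
    factors: List[str] = field(default_factory=list)  # possible triggers or precursors

risk = RiskEntry(
    event="Key supplier misses a delivery",
    time_frame="Build phase, months 3-5",
    probability=0.2,
    impact="Two-week slip in integration testing",
    factors=["single-source component", "seasonal demand spike"],
)
print(risk)
```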

Project Risk Assessment Tools

Project leaders can use various tools and methodologies to help measure risks. One option is a failure mode and effects analysis (FMEA). Other options include a finite element analysis or a factor analysis of information risk.

These are some common risk assessment tools:

  • Failure Mode and Effects Analysis (FMEA): This method works through each step of a process to anticipate what could fail and how to prevent it:
    • Process Steps: Identify all steps in a process.
    • Potential Problems: Identify what could go wrong with each step.
    • Problem Sources: Identify the causes of the problem.
    • Potential Consequences: Identify the consequences of the problem or failure.
    • Solutions: Identify ways to prevent the problem from happening.
  • Finite Element Analysis (FEA): This is a computerized method for simulating and analyzing the forces on a structure and the ways that a structure could break. The method can account for many, sometimes thousands, of elements. Computer analysis then determines how each of those elements works and how often the elements won’t work. The analysis for each element is then added together to determine all possible failures and the rate of failure for the entire product.
  • Factor Analysis of Information Risk (FAIR): This framework helps teams analyze risks to information and data, such as cybersecurity risks.

How to Conduct a Project Risk Assessment

The project manager and team members will want to continually perform risk assessments for a project. Doing good risk assessments involves a number of steps. These steps include identifying all possible risks and assessing the probability of each.

Most importantly, team members must fully explore and assess all possible risks, including risks that at first might not be obvious.

“The best thing that a risk assessment process can do for any project, over time, is to be a way of bringing unrecognized assumptions to light,” says Mike Wills , a certified mentor and coach and an assistant professor at Embry-Riddle Aeronautical University’s College of Business. “We carry so many assumptions without realizing how they constrain our thinking.”

Steps in a Project Risk Assessment

Experts recommend several important steps in an effective project risk assessment. These steps include identifying potential risks, assessing their possible impact, and formulating a plan to prevent or respond to those risks.

Here are 10 important steps in a project risk assessment:

Step 1: Identify Potential Risks

Bring your team together to identify all potential risks to your project. Here are some common ways to help identify risks, with tips from experts:

  • Review Documents: Review all documents associated with the project.
  • Consider Industry-Specific Risks: Use risk prompt lists for your industry. Risk prompt lists are broad categories of risks, such as environmental or legal, that can occur in a project.
  • Consult Experts: Conduct interviews with experts within and, in some cases, outside your organization.
  • Brainstorm: Brainstorm ideas with your team. “The best scenario, which doesn't usually happen, is the whole team comes together and identifies the risks,” says Wendy Romeu.
  • Stick to Major Risks: Don’t try to identify an unrealistic or unwieldy number of risks. “You want to identify possible risks, but you want to keep the numbers manageable,” says Wills. “The more risks you identify, the longer you spend analyzing them. And the longer you’re in analysis, the fewer decisions you make.”

Step 2: Determine the Probability of Each Risk

After your team has identified possible risks, you will want to determine the probability of each risk happening. Your team can make educated guesses using some of the same methods it used to identify those risks.

Determine the probability of each identified risk with these tactics:

  • Brainstorm with your team.
  • Interview experts.
  • Review similar past projects.
  • Review other projects in the same industry.

Step 3: Determine the Impact of Each Risk

Your team will then determine the impact of each risk should it occur. Would the risk stop the project entirely or stop the development of a product? Or would the risk occurring have a relatively minor impact?

Assessing impact is important because if it’s a positive risk, Romeu says, “You want to make sure you’re doing the things to make it happen. Whereas if it's a high risk and a negative situation, you want to do the things to make sure it doesn't happen.”

There are two ways to measure impact: qualitative and quantitative. “Are we going to do just a qualitative risk assessment, where we're talking about the likelihood and the probability or the urgency of that risk?” asks Zucker. “Or are we going to do a quantitative risk assessment, where we're putting a dollar figure or a time figure to those risks?”

Most often, a team will analyze and measure risk based on qualitative impact. The team will analyze risk based on a qualitative description of what could happen, such as a project being delayed or failing. The team may judge that impact as significant but won’t put a dollar figure on it.

A quantitative risk assessment, on the other hand, estimates the impact in numbers, often measured in dollars or profits lost, should a risk happen. “Typically, for most projects, we don’t do a quantitative risk assessment,” Zucker says. “It’s usually when we’re doing engineering projects  or big, federal projects. That’s where we're doing the quantitative.”

Step 4: Determine the Risk Score of Each Event

Once your team assesses possible risks, along with the risk probability and impact, it’s time to determine a risk score for each potential event. This score allows your organization to understand the risks that need the most attention.

Often, teams will use a simple risk matrix to determine that risk score. Your team will assign a score based on the probability of each risk event. It will then assign a second score based on the impact that event would have on the organization. Multiplying those two figures gives each event or risk a risk score.

Zucker says he prefers to assign the numbers 1, 5, and 10 — for low, medium, and high — to both the likelihood of an event happening and its impact. In that scenario, an event with a low likelihood of happening (level of 1) and low impact (level of 1) would have a total risk score of 1 (1 multiplied by 1). An event with a high likelihood of happening (level of 10) and a large impact (level of 10) would have a total risk score of 100.

Zucker says he prefers using those numbers because a scale as small as one to three doesn't convey the importance of high-probability and high-impact risks. “A nine doesn't feel that bad,” he says. “But if it's 100, it's like, ‘Whoa, I really need to worry about that thing.’”
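
A minimal sketch of that scoring scheme, using the 1, 5, and 10 levels described above (the example calls are illustrative only):

```python
# Minimal sketch of the 1/5/10 risk matrix described above; example calls are illustrative.
LEVELS = {"low": 1, "medium": 5, "high": 10}

def matrix_score(likelihood: str, impact: str) -> int:
    """Risk score = likelihood level multiplied by impact level."""
    return LEVELS[likelihood] * LEVELS[impact]

print(matrix_score("low", "low"))    # 1   -> minimal concern
print(matrix_score("high", "high"))  # 100 -> needs immediate attention
```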

While these risk matrices use numbers, they are not really quantitative. Your teams are making qualitative judgments on events and assigning a rough score. In some cases, however, teams can determine a quantitative risk score.

For example, your team might determine, based on past projects or other information, that an event has a 10 percent chance of happening. If that event will diminish your manufacturing plant’s production capacity by 50 percent for one month, your team might determine that it will cost your company $400,000. In that case, the risk would have a risk score of $40,000.

At the same time, another event might have a 40 percent chance of happening. Your team might determine the cost to the business would be $10,000. In that case, the risk score is $4,000.
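
The same two examples can be written as a quantitative calculation, where the risk score is the probability of the event multiplied by its estimated cost (a minimal sketch; the figures simply restate the examples above):

```python
# Minimal sketch of the quantitative scoring in the two examples above:
# risk score = probability of the event x estimated cost if it occurs.
def expected_cost(probability: float, cost_if_occurs: float) -> float:
    return probability * cost_if_occurs

print(expected_cost(0.10, 400_000))  # $40,000 for the production-capacity risk
print(expected_cost(0.40, 10_000))   # $4,000 for the second risk
```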

“Just simple counts start to give you a quantifiable way of looking at risk,” says Wills. “A risk that is going to delay 10 percent of your production capacity is a different kind of risk than one that will delay 50 percent of it. Because you have a number, you can gather real operational data for a week or two and see how things support the argument. You can start to compare apples to apples, not apples to fish.”

Wills adds, “Humans, being very optimistic and terrible at predicting the future, will say, ‘Oh, I don't think it'll happen very often.’ Quantitative techniques help to get you away from this gambler fallacy kind of approach. They can make or break your argument to a stakeholder that says, ‘I've looked at this, and I can explain mechanically, count by the numbers like an accountant, what's going on and what might go wrong.’”

Step 5: Understand Your Risk Tolerance

As your team considers risks, it must understand the organization’s risk tolerance. Your team should know what kinds of risks that organizational leaders and stakeholders are willing to take to see a project through.

Understanding that tolerance will also help your team decide how and where to invest time and resources in order to prevent certain negative events.

Step 6: Decide How to Prioritize Risks

Once your team has determined the risk score for each risk, it will see which potential risks need the most attention. These are risks that are high impact and that your organization will want to work hard to prevent.

“You want to attack the ones that are high impact and high likelihood first,” says Romeu. 

“Some projects are just so vital to what you do and how you do it that you cannot tolerate the risk of derailment or major failure,” says Wills. “So you're willing to spend money, time, and effort to contain that risk. On other projects, you're taking a flier. You're willing to lose a little money, lose a little effort.”

“You have to decide, based on your project, based on your organization, the markets you're in, is that an ‘oh my gosh, it's gonna keep me up every night’ kind of strategic risk? Or is it one you can deal with?” he says.
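
Building on the likelihood-times-impact scoring above, here is a minimal sketch (with hypothetical example risks) of sorting a risk register so that high-likelihood, high-impact items surface first:

```python
# Minimal sketch of prioritising a risk register by likelihood x impact score,
# using the same 1/5/10 levels as above. Example risks are hypothetical.
LEVELS = {"low": 1, "medium": 5, "high": 10}

risks = [
    {"name": "Key engineer resigns",        "likelihood": "medium", "impact": "high"},
    {"name": "Minor scope change requests", "likelihood": "high",   "impact": "low"},
    {"name": "Data-centre power loss",      "likelihood": "low",    "impact": "high"},
]

for r in risks:
    r["score"] = LEVELS[r["likelihood"]] * LEVELS[r["impact"]]

# Attack the high-likelihood, high-impact risks first.
for r in sorted(risks, key=lambda x: x["score"], reverse=True):
    print(f"{r['score']:>3}  {r['name']}")
```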

Step 7: Develop Risk Response Strategies

Once your team has assessed all possible risks and ranked them by importance, you will want to dive deeper into risk response strategies. That plan should include ways to respond to both positive and negative risks.

These are the main strategies for responding to threats or negative risks:

  • Mitigate: These are actions you will take to reduce the likelihood of a risk event happening or that will reduce the impact if it does happen. “For example, if you’re building a datacenter, we might have backup power generators to mitigate the likelihood or the impact of a power loss,” says Zucker. You can learn more, including more tips from experts, about project risk mitigation.
  • Avoid: If a certain action, new product, or new service carries an unacceptably high risk, you might want to avoid it entirely. 
  • Transfer: The most common way that organizations transfer risk is by buying insurance. A common example is fire insurance for a building. Another is cybersecurity insurance that would cover your company in the event of a data breach. An additional option is to transfer certain risks to other companies that can do the work and assume those risks for your company. “It could be if you didn't want to have the risk of running a datacenter anymore, you transfer that risk to Jeff Bezos (Amazon Web Services) or to Google or whoever,” Zucker says.

These are the main strategies for responding to opportunities or positive risks:

  • Share: Your company might partner with another company to work together on achieving an opportunity, and then share in the benefits.
  • Exploit: Your company and team work hard to make sure an event happens because it will benefit your company.
  • Enhance: Your company works to improve the likelihood of something happening, with the understanding that it might not happen.

These are the main strategies for responding to both threats and opportunities, or negative and positive risks:

  • Accept: Your company simply accepts that a risk might happen but continues on because the benefits of the action are significant. “You're not ignoring the risks, but you're saying, ‘I can't do anything practical about them,’” says Wills. “So they're there. But I'm not going to spend gray matter driving myself crazy thinking about them.”
  • Escalate: This is when a project manager sees a risk as exceptionally high, impactful, and beyond their purview. The project manager should then escalate information about the risk to company leaders. They can then help decide what needs to happen. “Some project managers seem almost fearful about communicating risks to organization leaders,” Romeu says. “It drives me nuts. It's about communicating at the right level to the right people. At the executive level, it’s about communicating what risks are happening and what the impact of those risks are. If they happen, everybody knows what the plan is. And people aren't taken by surprise.”
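
One way to keep these response strategies actionable is to record, for every risk in the register, which strategy was chosen and what the planned action is. The sketch below is illustrative only; the strategy names follow the lists above, while the risks and planned actions are invented.

```python
# Illustrative sketch: tag each risk with a chosen response strategy and planned action.
from enum import Enum

class Response(Enum):
    MITIGATE = "mitigate"   # threats
    AVOID = "avoid"
    TRANSFER = "transfer"
    SHARE = "share"         # opportunities
    EXPLOIT = "exploit"
    ENHANCE = "enhance"
    ACCEPT = "accept"       # threats or opportunities
    ESCALATE = "escalate"

# Hypothetical entries; the risks and planned actions are invented for illustration
response_plan = {
    "Data center power loss": (Response.MITIGATE, "Install backup generators"),
    "Data breach liability": (Response.TRANSFER, "Buy cybersecurity insurance"),
    "Early vendor discount window": (Response.EXPLOIT, "Lock in hardware pricing this quarter"),
    "Minor currency fluctuation": (Response.ACCEPT, "No practical action; monitor quarterly"),
}

for risk, (strategy, action) in response_plan.items():
    print(f"{strategy.value:>9}  {risk}: {action}")
```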

Step 8: Monitor Your Risk Plans

Your team will want to understand how viable your organization’s risk plans are. That means you might want to monitor how the plans would work in practice, or find ways to test them.

A common example might be all-hands desktop exercises on a disaster plan. For example, how will a hospital respond to a power failure or earthquake? It’s like a fire drill, Zucker says. “Did we have a plan? Do people know what to do when the risk event occurs?”

Step 9: Perform Risk Assessments Continually

Your team will want to continually assess risks to the project. This step should happen throughout your project, from project planning to execution to closeout. 

Zucker explains the biggest mistake teams tend to make with project risk assessment: “People think it's a one-and-done event. They say, ‘I’ve put together my risk register, we’ve filed it into the documents that we needed to file, and I'm not worrying about it.’ I think that is probably the most common issue: that people don't keep it up. They don't think about it.”

Not thinking about how risks change and evolve throughout a project means project leaders won’t be ready for something when it happens. That’s why doing continual risk assessment as a primary part of risk management is vital, says Wills.

“Risk management is a process that should start before you start doing that activity. As you have that second dream about doing that project, start thinking about risk management,” he says. “And when you have completely retired that thing — you've shut down the business, you've pensioned everybody off, you’re clipping your coupons and working on your backstroke — that's when you're done with risk management. It's just a living, breathing, ongoing thing.”

Experts say project managers must learn to develop a sense for always assessing and monitoring risk. “As a PM, you should, in every single meeting you have, listen for risks,” Romeu says. “A technical person might say, ‘Well, this is going to be difficult because of X or Y or Z.’ That's a risk. They don't understand that's a risk, but as a PM, you should be aware of that.”

Step 10: Identify Lessons Learned

After your project is finished, your team should come together to identify the lessons learned during the project. Create a lessons learned document for future use. Include information about project risks in the discussion and the final document.

By keeping track of risks in a lessons learned document, you allow future leaders of similar projects to learn from your successes and failures. As a result, they can better understand the risks that could affect their project.

“Those lessons learned should feed back into the system — back into that original risk checklist,” Romeu says. “So the next software development project knows to look at these risks that you found.”

How to Write a Project Risk Assessment Report

Teams will often track risks in an online document that is accessible to all team members and organization leaders. Sometimes, a project manager will also create a separate project risk assessment report for top leaders or stakeholders.

Here are some tips for creating that report:

  • Find an Appropriate Template for Your Organization, Industry, and Project: You can find a number of templates that will help guide you in creating a risk assessment report. Find a project risk assessment report template in our project risk assessment starter kit.
  • Consider Your Audience: As you create the report, remember your audience. For example, a report for a technical team will be more detailed than a report for the CEO of your company. Some more detailed reports for project team members might include a full list of risks, which could run to 100 or more. “But don't show executives that list; they will lose their mind,” says Romeu.
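
As a small illustration of tailoring the same risk data to different audiences, the sketch below (not a Smartsheet template) filters a full register down to a short executive view. The entries, threshold, and cut-off are assumed for the example.

```python
# Illustrative sketch: one register, two reports -- full detail for the team,
# only the highest-scoring risks for executives. Entries and thresholds are assumed.
risks = [
    ("Data center power loss", 10),
    ("Key supplier misses delivery date", 12),
    ("Minor scope change requested", 5),
    ("Regulatory approval delayed", 20),
]

def executive_summary(register, top_n=3, min_score=10):
    """Return only the highest-scoring risks, so leadership never sees the full list."""
    ranked = sorted(register, key=lambda r: r[1], reverse=True)
    return [r for r in ranked if r[1] >= min_score][:top_n]

print(f"Full register for the project team: {len(risks)} risks")
print("Executive report:")
for name, score in executive_summary(risks):
    print(f"  {score:>3}  {name}")
```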

Project Risk Assessment Starter Kit


Download Project Risk Assessment Starter Kit

This starter kit includes a checklist on assessing possible project risks, a risk register template, a template for a risk impact matrix, a quantitative risk impact matrix, a project risk assessment report template, and a project risk response table. The kit will help your team better understand how to assess and continually monitor risks to a project.

In this kit, you’ll find: 

  • A risk assessment checklist (PDF and Microsoft Word) to help you identify potential risks for your project. The checklist included in the starter kit is based on a document from Alluvionic Project Management Services.
  • A project risk register template for Microsoft Excel to help you identify, analyze, and track project risks.
  • A project risk impact assessment matrix for Microsoft Excel to assess the probability and impact of various risks.
  • A quantitative project risk impact matrix for Microsoft Excel to quantify the probability and impact of various risks. 
  • A project risk assessment report template for Microsoft Excel to help you communicate your risk assessment findings and risk mitigation plans to company leadership.
  • A project risk response diagram (PDF and Microsoft Word) to better understand how to respond to various positive and negative risks.


The Straits Times


Research projects to study impact of climate change on diseases in Singapore


SINGAPORE - As climate change looks set to result in hotter temperatures here, some researchers suggest that a heat warning system could help in managing the impact of chronic diseases exacerbated by heatwaves.

During heatwaves, when elevated temperatures persist for several days, such a warning system could be used to alert people with conditions such as diabetes, hypertension or a history of heart problems – who are at greater risk of adverse health effects from the heat – to stay indoors, said Assistant Professor Borame Dickens from the National University of Singapore’s Saw Swee Hock School of Public Health.

This comes as Singapore’s third National Climate Change Study, released in January, suggested the Republic could experience more extreme weather by the end of the century, such as more frequent dry spells and hotter days.

Data from the Meteorological Service Singapore’s 2023 Annual Climate Assessment Report showed that 2023 was the fourth-warmest year on record for the Republic, with temperatures possibly getting even hotter in 2024.

A team of researchers, led by Prof Dickens, is looking at the long-term impact of climate change on such diseases here.

Their work uses agent-based modelling, in which computer simulations study the interactions of autonomous agents, to investigate the effects of climate change on health, she said.

The method could, for example, simulate the impact of increased temperatures on the number of cases of strokes and heart attacks, she noted.

“We are actually still trying to learn a lot more about how climate affects disease.”

She said the team is concerned about the interaction between air pollution and increasing temperatures, noting that this could have a significant effect on long-term respiratory illnesses such as chronic obstructive pulmonary disease.
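
For readers unfamiliar with agent-based modelling, the toy sketch below shows the general idea: simulate individual people day by day and count adverse health events as temperature rises. It is not the NUS team's model, and every parameter in it is invented purely for illustration.

```python
# Toy agent-based sketch; all parameters are invented and the model is deliberately crude.
import random

random.seed(1)

class Person:
    def __init__(self, has_chronic_condition: bool):
        self.has_chronic_condition = has_chronic_condition

    def daily_event_probability(self, temperature_c: float) -> float:
        # Assumption: baseline daily risk rises with heat, more steeply for chronic illness
        base = 0.0005
        heat_excess = max(0.0, temperature_c - 32.0)
        multiplier = 3.0 if self.has_chronic_condition else 1.0
        return base * (1.0 + 0.5 * heat_excess) * multiplier

# A synthetic population in which roughly 20% of agents have a chronic condition
population = [Person(random.random() < 0.2) for _ in range(10_000)]

def simulate(heatwave_days: int, temperature_c: float) -> int:
    """Count adverse health events over a heatwave of the given length and temperature."""
    events = 0
    for _ in range(heatwave_days):
        for person in population:
            if random.random() < person.daily_event_probability(temperature_c):
                events += 1
    return events

print("Events over 5 days at 30 deg C:", simulate(5, 30.0))
print("Events over 5 days at 36 deg C:", simulate(5, 36.0))
```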


In the longer term, the research could look at whether other measures are needed, such as closing public parks during heatwaves or redesigning Housing Board blocks to be cooler.

Meanwhile, another project is looking at the repercussions of environmental changes on vector-borne diseases, which include dengue and malaria.

Computer simulations will be used to show how climate change will impact health risks from such mosquito-borne diseases, said Assistant Professor Lim Jue Tao from Nanyang Technological University’s Lee Kong Chian School of Medicine.

The simulations will also examine the economic effects of such diseases becoming more prevalent, such as how much the average person would have to pay for dengue treatment and the productivity costs of rising absenteeism.


Prof Lim said climate change could increase the geographic range of diseases such as dengue by allowing mosquitoes to breed in more temperate regions.

And in places with multiple seasons, which typically experience such diseases in the summer, mosquitoes may be able to breed into autumn due to warming temperatures, he said.

These would increase the risk of travellers importing such diseases into Singapore, he added.

“This means that there’s a higher likelihood that diseases which are not already in Singapore have a higher potential to be imported into the country,” said Prof Lim, pointing to mosquito-borne pathogens such as the Japanese encephalitis virus and yellow fever virus, which are currently not found here.

Rough findings from most climate change scenario projections in Singapore point to a significant increase in the mosquito population here, he added.

This could require the country to expand mosquito control initiatives such as Project Wolbachia or introduce vaccination programmes for mosquito-borne diseases, he suggested.

The two projects receive grants from the National Research Foundation (NRF) as part of the $23.5 million Climate Impact Science Research Programme.

The programme is helmed by the National Environment Agency’s Centre for Climate Research Singapore, and looks at the long-term impact of climate change in Singapore.

“There is growing awareness that climate change affects human health, but research in this area is still nascent,” said NRF director for urban solutions and sustainability Ni De En.

“Working alongside other government agencies, NRF’s sustained investments in these research areas will allow scientists to observe trends and develop a better understanding of the evolving impacts of climate change on human health,” he added.


NASA

The Effects of Climate Change

The effects of human-caused global warming are happening now, are irreversible for people alive today, and will worsen as long as humans add greenhouse gases to the atmosphere.


  • We already see effects scientists predicted, such as the loss of sea ice, melting glaciers and ice sheets, sea level rise, and more intense heat waves.
  • Scientists predict global temperature increases from human-made greenhouse gases will continue. Severe weather damage will also increase and intensify.

Earth Will Continue to Warm and the Effects Will Be Profound


Global climate change is not a future problem. Changes to Earth’s climate driven by increased human emissions of heat-trapping greenhouse gases are already having widespread effects on the environment: glaciers and ice sheets are shrinking, river and lake ice is breaking up earlier, plant and animal geographic ranges are shifting, and plants and trees are blooming sooner.

Effects that scientists had long predicted would result from global climate change are now occurring, such as sea ice loss, accelerated sea level rise, and longer, more intense heat waves.

The magnitude and rate of climate change and associated risks depend strongly on near-term mitigation and adaptation actions, and projected adverse impacts and related losses and damages escalate with every increment of global warming.


Intergovernmental Panel on Climate Change

Some changes (such as droughts, wildfires, and extreme rainfall) are happening faster than scientists previously assessed. In fact, according to the Intergovernmental Panel on Climate Change (IPCC) — the United Nations body established to assess the science related to climate change — modern humans have never before seen the observed changes in our global climate, and some of these changes are irreversible over the next hundreds to thousands of years.

Scientists have high confidence that global temperatures will continue to rise for many decades, mainly due to greenhouse gases produced by human activities.

The IPCC’s Sixth Assessment report, published in 2021, found that human emissions of heat-trapping gases have already warmed the climate by nearly 2 degrees Fahrenheit (1.1 degrees Celsius) since 1850-1900 [1]. The global average temperature is expected to reach or exceed 1.5 degrees C (about 3 degrees F) of warming within the next few decades. These changes will affect all regions of Earth.

The severity of effects caused by climate change will depend on the path of future human activities. More greenhouse gas emissions will lead to more climate extremes and widespread damaging effects across our planet. However, those future effects depend on the total amount of carbon dioxide we emit. So, if we can reduce emissions, we may avoid some of the worst effects.

The scientific evidence is unequivocal: climate change is a threat to human wellbeing and the health of the planet. Any further delay in concerted global action will miss the brief, rapidly closing window to secure a liveable future.

Here are some of the expected effects of global climate change on the United States, according to the Third and Fourth National Climate Assessment Reports:

Future effects of global climate change in the United States:


U.S. Sea Level Likely to Rise 1 to 6.6 Feet by 2100

Global sea level has risen about 8 inches (0.2 meters) since reliable record-keeping began in 1880. By 2100, scientists project that it will rise at least another foot (0.3 meters), but possibly as high as 6.6 feet (2 meters) in a high-emissions scenario. Sea level is rising because of added water from melting land ice and the expansion of seawater as it warms. Image credit: Creative Commons Attribution-Share Alike 4.0


Climate Changes Will Continue Through This Century and Beyond

Global climate is projected to continue warming over this century and beyond. Image credit: Khagani Hasanov, Creative Commons Attribution-Share Alike 3.0


Hurricanes Will Become Stronger and More Intense

Scientists project that hurricane-associated storm intensity and rainfall rates will increase as the climate continues to warm. Image credit: NASA


More Droughts and Heat Waves

Droughts in the Southwest and heat waves (periods of abnormally hot weather lasting days to weeks) are projected to become more intense, and cold waves less intense and less frequent. Image credit: NOAA


Longer Wildfire Season

Warming temperatures have extended and intensified wildfire season in the West, where long-term drought in the region has heightened the risk of fires. Scientists estimate that human-caused climate change has already doubled the area of forest burned in recent decades. By around 2050, the amount of land consumed by wildfires in Western states is projected to further increase by two to six times. Even in traditionally rainy regions like the Southeast, wildfires are projected to increase by about 30%.

Changes in Precipitation Patterns

Climate change is having an uneven effect on precipitation (rain and snow) in the United States, with some locations experiencing increased precipitation and flooding, while others suffer from drought. On average, more winter and spring precipitation is projected for the northern United States, and less for the Southwest, over this century. Image credit: Marvin Nauman/FEMA


Frost-Free Season (and Growing Season) will Lengthen

The length of the frost-free season, and the corresponding growing season, has been increasing since the 1980s, with the largest increases occurring in the western United States. Across the United States, the growing season is projected to continue to lengthen, which will affect ecosystems and agriculture.


Global Temperatures Will Continue to Rise

Summer of 2023 was Earth's hottest summer on record, 0.41 degrees Fahrenheit (F) (0.23 degrees Celsius (C)) warmer than any other summer in NASA’s record and 2.1 degrees F (1.2 C) warmer than the average summer between 1951 and 1980. Image credit: NASA


Arctic Is Very Likely to Become Ice-Free

Sea ice cover in the Arctic Ocean is expected to continue decreasing, and the Arctic Ocean will very likely become essentially ice-free in late summer if current projections hold. This change is expected to occur before mid-century.

U.S. Regional Effects

Climate change is bringing different types of challenges to each region of the country. Some of the current and future impacts are summarized below. These findings are from the Third [3] and Fourth [4] National Climate Assessment Reports, released by the U.S. Global Change Research Program.

  • Northeast. Heat waves, heavy downpours, and sea level rise pose increasing challenges to many aspects of life in the Northeast. Infrastructure, agriculture, fisheries, and ecosystems will be increasingly compromised. Farmers can explore new crop options, but these adaptations are not cost- or risk-free. Moreover, adaptive capacity, which varies throughout the region, could be overwhelmed by a changing climate. Many states and cities are beginning to incorporate climate change into their planning.
  • Northwest. Changes in the timing of peak flows in rivers and streams are reducing water supplies and worsening competing demands for water. Sea level rise, erosion, flooding, risks to infrastructure, and increasing ocean acidity pose major threats. Increasing wildfire incidence and severity, heat waves, insect outbreaks, and tree diseases are causing widespread forest die-off.
  • Southeast. Sea level rise poses widespread and continuing threats to the region’s economy and environment. Extreme heat will affect health, energy, agriculture, and more. Decreased water availability will have economic and environmental impacts.
  • Midwest. Extreme heat, heavy downpours, and flooding will affect infrastructure, health, agriculture, forestry, transportation, air and water quality, and more. Climate change will also worsen a range of risks to the Great Lakes.
  • Southwest. Climate change has caused increased heat, drought, and insect outbreaks. In turn, these changes have made wildfires more numerous and severe. The warming climate has also caused a decline in water supplies, reduced agricultural yields, and triggered heat-related health impacts in cities. In coastal areas, flooding and erosion are additional concerns.

1. IPCC 2021, Climate Change 2021: The Physical Science Basis, the Working Group I contribution to the Sixth Assessment Report, Cambridge University Press, Cambridge, UK.

2. IPCC, 2013: Summary for Policymakers. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.

3. USGCRP 2014, Third National Climate Assessment.

4. USGCRP 2017, Fourth National Climate Assessment.

Related Resources


A Degree of Difference

So, the Earth's average temperature has increased about 2 degrees Fahrenheit during the 20th century. What's the big deal?


What’s the difference between climate change and global warming?

“Global warming” refers to the long-term warming of the planet. “Climate change” encompasses global warming, but refers to the broader range of changes that are happening to our planet, including rising sea levels; shrinking mountain glaciers; accelerating ice melt in Greenland, Antarctica and the Arctic; and shifts in flower/plant blooming times.


Is it too late to prevent climate change?

Humans have caused major climate changes to happen already, and we have set in motion more changes still. However, if we stopped emitting greenhouse gases today, the rise in global temperatures would begin to flatten within a few years. Temperatures would then plateau but remain well-elevated for many, many centuries.



AI in healthcare: The future of patient care and health management

Curious about artificial intelligence? Whether you're cautious or can't wait, there is a lot to consider when AI is used in a healthcare setting.


With the widespread media coverage in recent months, it’s likely that you’ve heard about artificial intelligence (AI) — technology that enables computers to do things that would otherwise require a human’s brain. In other words, machines can be given access to large amounts of information, and trained to solve problems, spot patterns and make recommendations. Common examples of AI in everyday life are virtual assistants like Alexa and Siri.

What you might not know is that AI has been and is being used for a variety of healthcare applications. Here’s a look at how AI can be helpful in healthcare, and what to watch for as it evolves.

What can AI technology in healthcare do for me?

A report from the National Academy of Medicine identified three potential benefits of AI in healthcare: improving outcomes for both patients and clinical teams, lowering healthcare costs, and benefitting population health.

From preventive screenings to diagnosis and treatment, AI is being used throughout the continuum of care today. Here are two examples:

Preventive care

Cancer screenings that use radiology, like a mammogram or lung cancer screening, can leverage AI to help produce results faster.

For example, in polycystic kidney disease (PKD), researchers discovered that the size of the kidneys — specifically, an attribute known as total kidney volume — correlated with how rapidly kidney function was going to decline in the future.

But assessing total kidney volume, though incredibly informative, involves analyzing dozens of kidney images, one slide after another — a laborious process that can take about 45 minutes per patient. With the innovations developed at the PKD Center at Mayo Clinic, researchers now use artificial intelligence (AI) to automate the process, generating results in a matter of seconds.
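
As a simplified illustration of what the automated step computes: once a model has segmented the kidneys on each image slice, total kidney volume is essentially the number of labelled voxels multiplied by the physical volume of one voxel. The sketch below is not Mayo Clinic's pipeline; the mask and voxel spacing are invented.

```python
# Simplified sketch (not Mayo Clinic's pipeline): total kidney volume from a stack of
# AI-generated segmentation masks. The mask and voxel spacing below are invented.
import numpy as np

# Pretend segmentation output: one binary mask per axial slice (slices, rows, cols)
masks = np.zeros((60, 256, 256), dtype=bool)
masks[20:40, 100:160, 90:150] = True  # fake "kidney" region for the example

# Assumed voxel spacing in millimetres: in-plane row/col spacing and slice thickness
spacing_mm = (1.5, 1.5, 3.0)
voxel_volume_ml = (spacing_mm[0] * spacing_mm[1] * spacing_mm[2]) / 1000.0  # mm^3 -> mL

total_kidney_volume_ml = masks.sum() * voxel_volume_ml
print(f"Total kidney volume: {total_kidney_volume_ml:.0f} mL")
```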

Bradley J. Erickson, M.D., Ph.D., director of Mayo Clinic’s Radiology Informatics Lab, says that AI can complete time-consuming or mundane work for radiology professionals, like tracing tumors and structures, or measuring amounts of fat and muscle. “If a computer can do that first pass, that can help us a lot,” says Dr. Erickson.

Risk assessment

In a Mayo Clinic cardiology study, AI successfully identified people at risk of left ventricular dysfunction, which is the medical name for a weak heart pump, even though the individuals had no noticeable symptoms. And that’s far from the only intersection of cardiology and AI.

“We have an AI model now that can incidentally say, ‘Hey, you’ve got a lot of coronary artery calcium, and you’re at high risk for a heart attack or a stroke in five or 10 years,’ ” says Bhavik Patel, M.D., M.B.A., the chief artificial intelligence officer at Mayo Clinic in Arizona.

How can AI technology advance medicine and public health?

When it comes to supporting the overall health of a population, AI can help people manage chronic illnesses themselves — think asthma, diabetes and high blood pressure — by connecting certain people with relevant screening and therapy, and reminding them to take steps in their care, such as taking medication.

AI also can help promote information on disease prevention online, reaching large numbers of people quickly, and even analyze text on social media to predict outbreaks. Consider how these capabilities might have supported people during the early stages of a widespread public health crisis such as COVID-19. For example, a study found that internet searches for terms related to COVID-19 were correlated with actual COVID-19 cases. Here, AI could have been used to predict where an outbreak would happen, and then help officials know how to best communicate and make decisions to help stop the spread.
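
As a minimal illustration of the search-trend idea, the sketch below checks how strongly a series of weekly search counts tracks weekly case counts; a strong correlation is what would make search activity useful as an early-warning proxy. Both series are invented, and this is not the cited study's methodology.

```python
# Illustrative sketch: does weekly search volume track weekly reported cases?
# Both series are invented; this is not the methodology of the cited study.
import numpy as np

weekly_searches = np.array([120, 180, 260, 400, 650, 900, 1400])
weekly_cases    = np.array([  5,   9,  20,  45,  90, 160,  300])

r = np.corrcoef(weekly_searches, weekly_cases)[0, 1]
print(f"Correlation between searches and cases: r = {r:.2f}")
# A strong positive r is what would make search activity useful as an early-warning proxy.
```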

How can AI solutions assist in providing superior patient care?

You might think that healthcare from a computer isn’t equal to what a human can provide. That’s true in many situations, but it isn’t always the case.

Studies have shown that in some situations, AI can do a more accurate job than humans. For example, AI has done a more accurate job than current pathology methods in predicting who will survive malignant mesothelioma, which is a type of cancer that impacts the internal organs. AI is used to identify colon polyps and has been shown to improve colonoscopy accuracy and diagnose colorectal cancer as accurately as skilled endoscopists can.

In a study of a social media forum, most people asking healthcare questions preferred responses from an AI-powered chatbot over those from physicians, ranking the chatbot’s answers higher in quality and empathy. However, the researchers conducting this study emphasize that their results only suggest the value of such chatbots in answering patients’ questions, and recommend it be followed up with a more convincing study.

How can physicians use AI and machine learning in healthcare?

One of the key things that AI may be able to do to help healthcare professionals is save them time. For example:

  • Keeping up with current advances. When physicians are actively participating in caring for people and other clinical duties, it can be challenging for them to keep pace with evolving technological advances that support care. AI can work with huge volumes of information — from medical journals to healthcare records — and highlight the most relevant pieces.
  • Taking care of tedious work. When a healthcare professional must complete tasks like writing clinical notes or filling out forms, AI could potentially complete the task faster than traditional methods, even if revision is needed to refine AI’s first pass.

Despite the potential for AI to save time for healthcare professionals, AI isn’t intended to replace humans. The American Medical Association commonly refers to “augmented intelligence,” which stresses the importance of AI assisting, rather than replacing, healthcare professionals. In the case of current AI applications and technology, healthcare professionals are still needed to provide:

  • Clinical context for the algorithms that train AI.
  • Accurate and relevant information for AI to analyze.
  • Translation of AI findings to be meaningful for patients.

A helpful comparison to reiterate the collaborative nature needed between AI and humans for healthcare is that in most cases, a human pilot is still needed to fly a plane. Although technology has enabled quite a bit of automation in flying today, people are needed to make adjustments, interpret the equipment’s data, and take over in cases of emergency.

What are the drawbacks of AI in healthcare?

Despite the many exciting possibilities for AI in healthcare, there are some risks to weigh:

  • If not properly trained, AI can lead to bias and discrimination. For example, if AI is trained on electronic health records, it learns only from people who can access healthcare and perpetuates any human bias captured within those records.
  • AI chatbots can generate medical advice that is misleading or false, which is why there’s a need for effectively regulating their use.

Where can AI solutions take the healthcare industry next?

As AI continues to evolve and play a more prominent role in healthcare, the need for effective regulation and use becomes more critical. That’s why Mayo Clinic is a member of Health AI Partnership, which is focused on helping healthcare organizations evaluate and implement AI effectively, equitably and safely.

In terms of the possibilities for healthcare professionals to further integrate AI, Mark D. Stegall, M.D., a transplant surgeon and researcher at Mayo Clinic in Minnesota, says, “I predict AI also will become an important decision-making tool for physicians.”

Mayo Clinic hopes that AI could help create new ways to diagnose, treat, predict, prevent and cure disease. This might be achieved by:

  • Selecting and matching patients with the most promising clinical trials.
  • Developing and setting up remote health-monitoring devices.
  • Detecting currently imperceptible conditions.
  • Anticipating disease-risk years in advance.


