Mehmet Pinar, Timothy J Horne, Assessing research excellence: Evaluating the Research Excellence Framework, Research Evaluation, Volume 31, Issue 2, April 2022, Pages 173–187, https://doi.org/10.1093/reseval/rvab042


Performance-based research funding systems have been used extensively around the globe to allocate funds across higher education institutes (HEIs), which has led to a growing literature examining their use. The UK’s Research Excellence Framework (REF) uses a peer-review process to evaluate the research environment, research outputs and non-academic impact of research produced by HEIs, with the aim of producing a more accountable distribution of public funds. However, carrying out such a research evaluation is costly. Given this cost, and given that the evaluation of each component has been argued to be subject to bias and other criticisms, this article uses correlation and principal component analysis to evaluate the REF’s usefulness as a composite evaluation index. Because the three elements of the evaluation (environment, impact and output) are highly and positively correlated, removing an element from the evaluation leads to relatively small shifts in the allocation of funds and in the rankings of HEIs. As a result, future evaluations may consider removing some elements of the REF, or reconsider how the different elements are evaluated so as to capture organizational rather than individual achievement.

Performance-based research funding systems (PRFS) have multiplied since the United Kingdom introduced the first ‘Research Selectivity Exercise’ in 1986. Thirty years on from this first exercise, Jonkers and Zacharewicz (2016) reported that 17 of the EU28 countries had some form of PRFS, and this had increased to 18 by 2019 ( Zacharewicz et al. 2019 ).

A widely used definition of what constitutes a PRFS is that it must meet the following criteria (Hicks 2012):

Research must be evaluated, not the quality of teaching and degree programmes;

The evaluation must be ex post, and must not be an ex ante evaluation of a research or project proposal;

The output(s) of research must be evaluated;

The distribution of funding from Government must depend upon the evaluation results;

The system must be national.

Within these relatively narrow boundaries, there is significant variation between both what is assessed in different PRFS, and how the assessment is made. With regards to ‘what’, some focus almost exclusively on research outputs, predominantly journal articles, whereas others, notably the UK’s Research Excellence Framework (REF), assess other aspects of research such as the impact of research and the research environment. With regards to ‘how’, some PRFS use exclusively or predominantly metrics such as citations whereas others use expert peer review, and others still a mix of both methods ( Zacharewicz et al. 2019 ). 1

This article focuses on the UK’s REF, which originated in the very first PRFS, the Research Selectivity Exercise in 1986. This was followed by a second exercise in 1989 and a series of Research Assessment Exercises (RAEs) in the 1990s and 2000s. Each RAE represented a relatively gentle evolution from the previous one, but there was arguably more revolution than evolution between the last RAE in 2008 and the first REF in 2014 (REF 2014), with the introduction of the assessment of research impact into the assessment framework (see e.g., Gilroy and McNamara 2009; Shattock 2012; Marques et al. 2017 for a detailed discussion of the evolution of research assessment in the UK). Three elements of research, namely research outputs, the non-academic impact of research and the research environment, were evaluated in the REF 2014 exercise. Research outputs (e.g., journal articles, books and research-based artistic works) were evaluated in terms of their ‘originality, significance and rigour’. The assessment of the non-academic impact of research was based on the submission of impact case studies describing the ‘reach and significance’ of impacts on the economy, society and/or culture that were underpinned by excellent research. The assessment of the research environment drew on both environment data and a narrative environment statement. The environment data consisted of the number of postgraduate research degree completions and the total research income generated by the submitting unit. The environment statement provided information on the research undertaken, the staffing strategy, infrastructure and facilities, staff development activities, and research collaborations and contributions to the discipline. The quality of the research environment was assessed in terms of its ‘vitality and sustainability’ on the basis of the environment data and narrative statements (see REF 2012 for further details).

There has been criticism of several aspects of the assessment of research excellence in the REF, including the cost of preparing and evaluating REF submissions, the potential lack of objectivity in assessing them, and the effect of quasi-arbitrary or opaque value judgements on the allocation of quality-related research (QR) funding (see Section 2 for details). Furthermore, the use of multiple criteria in assessing university performance, as is the case for the REF (i.e., environment, impact and outputs), has long been criticized (see e.g., Saisana, d’Hombres and Saltelli 2011; Pinar, Milla and Stengos 2019). Multidimensional indices of this kind have been questioned because some of their components may be redundant (McGillivray 1991; McGillivray and White 1993). For instance, McGillivray (1991), McGillivray and White (1993) and Bérenger and Verdier-Chouchane (2007) use correlation analysis to examine the redundancy of different components of well-being indices. The main argument of these papers is that if the index components are highly and positively correlated, then including an additional dimension adds little information beyond that provided by the other components. Furthermore, Nardo et al. (2008) point out that constructing a composite index from highly correlated components double-counts, and thus overweights, the information captured by those components. This literature therefore argues that excluding a component from the evaluation does not lead to a loss of information if the evaluation elements are highly and positively correlated. For instance, using correlation analysis, Cahill (2005) showed that excluding any single component from a composite index produces rankings and achievement levels similar to those of the full index. To overcome these drawbacks, principal component analysis (PCA) has been used to construct indices (see e.g., McGillivray 2005; Khatun 2009; Nguefack‐Tsague, Klasen and Zucchini 2011 for the use of PCA in well-being indices, and Tijssen, Yegros-Yegros and Winnink 2016 and Robinson-Garcia et al. 2019 for its use in university rankings). PCA transforms the correlated variables into a new set of uncorrelated variables, derived from their covariance matrix, whose leading components explain most of the variation in the original variables (Nardo et al. 2008).
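To make the redundancy argument concrete, the short Python sketch below (not taken from the article; all data are hypothetical) builds three deliberately correlated index components, prints their correlation matrix, and derives PCA-based weights from the first eigenvector of that matrix:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
base = rng.normal(size=200)
# three hypothetical, deliberately correlated components of a composite index
scores = pd.DataFrame({
    "environment": base + 0.3 * rng.normal(size=200),
    "impact": base + 0.3 * rng.normal(size=200),
    "output": base + 0.3 * rng.normal(size=200),
})
print(scores.corr())  # pairwise correlations between the three components

# PCA via the eigen-decomposition of the correlation matrix
eigvals, eigvecs = np.linalg.eigh(scores.corr().to_numpy())
order = np.argsort(eigvals)[::-1]  # sort components by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
print("share of variance explained by PC1:", round(eigvals[0] / eigvals.sum(), 3))

# normalizing the first eigenvector to sum to one gives candidate index weights
pc1 = np.abs(eigvecs[:, 0])
print("PCA-based weights:", dict(zip(scores.columns, (pc1 / pc1.sum()).round(3))))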

This article contributes to the literature by using correlation analysis to examine the redundancy of the three REF components and the relevance of each for the evaluation. If the three elements of the REF are highly and positively correlated, then excluding one component from the analysis should not result in major changes in the overall assessment of universities or in the funding allocated to them; this article examines whether this is the case. Furthermore, we carry out PCA to obtain weights that produce an index explaining most of the variation in the three REF elements while still providing an overall assessment of higher education institutes (HEIs) and a basis for distributing funding across them.

The remainder of this article is structured as follows. In Section 2, we provide details on how the UK’s REF operates, review the literature on the REF exercise and outline the hypotheses of the article. In Section 3, we describe the data used in this article and examine the correlation between the environment, impact and output scores. In this section, we also provide details of the QR funding formula used to allocate the funding and demonstrate the correlation between the funding distributed in the environment, impact and output pots. We also carry out PCA using the achievement scores and the funding distributed for each element. Finally, in this section, we set out alternative approaches to calculating overall REF scores and distributing QR funding based on the hypotheses of the article. Section 4 considers the effect on the distribution of QR funding for English universities 2 and on their rankings when each element is removed from the calculation one at a time and when PCA weights are used. Finally, Section 5 draws the conclusions of our analyses and their implications for how future REF assessment exercises might be structured.

Research assessment exercises have existed in the UK since the first Research Selectivity Exercise was undertaken in 1986. A subsequent exercise was held in 1989, followed by RAEs in 1996, 2001 and 2008. Each HEI’s submission to the 1986 exercise comprised a research statement in one or more of 37 subject areas, together with five research outputs per area in which a submission was made (see e.g., Hinze et al. 2019). The complexity of the submissions has increased since that first exercise; in 2014 the requirement to submit case studies and a narrative template allowing the assessment of research impact was included for the first time, and the exercise was renamed the REF.

The REF 2014 ‘Assessment Framework and Guidance on Submissions’ (REF 2011) indicated that a submission’s research environment would be assessed according to its ‘vitality and sustainability’, using the same five-point (4* down to unclassified) scale as for the other elements of the exercise. 3

Following the 2014 REF exercise, there have been many criticisms of the REF. For instance, the introduction of impact as an element of the UK’s research assessment methodology has itself been the subject of many papers and reports discussing the issues and challenges it has brought (see e.g., Smith, Ward and House 2011; Penfield et al. 2014; Manville et al. 2015; Watermeyer 2016; Pinar and Unlu 2020a). Manville et al. (2015) and Watermeyer (2016) show that academics in some fields were concerned about how their research focus would be affected by the impact agenda, fearing it would push them to produce more ‘impactful’ research rather than pursue their own research agenda. Manville et al. (2015) also demonstrate that there were problems with the peer review of the impact case studies, with reviewer panels struggling to distinguish between 2-star and 3-star and, most importantly, between 3-star and 4-star case studies. Furthermore, Pinar and Unlu (2020a) demonstrate that the inclusion of the impact agenda in REF 2014 increased the research income gap across HEIs. Similarly, the literature identifies some serious concerns with the assessment of the research environment (Taylor 2011; Wilsdon et al. 2015; Thorpe et al. 2018a, b). Taylor (2011) considered the use of metrics to assess the research environment and found evidence of bias towards more research-intensive universities in the assessment of the research environment in the 2008 RAE (see Pinar and Unlu 2020b for similar findings for REF 2014). In particular, he argued that assessors’ judgements may carry an implicit bias and be influenced by a ‘halo effect’, whereby assessors allocate relatively higher scores to departments with long-standing records of high-quality research, and showed that members of Russell Group universities benefited from such a halo effect after accounting for various important quantitative factors. Wilsdon et al. (2015), in a report for the Higher Education Funding Council for England (HEFCE), which ran the REF on behalf of the four countries of the UK, reported that those who had reviewed the narrative research environment statements in REF 2014 as members of the expert panels expressed concerns ‘that the narrative elements were hard to assess, with difficulties in separating quality in research environment from quality in writing about it.’ Thorpe et al. (2018a, b) examined environment statements submitted to REF 2014, and their work indicates that the scores given to the overall research environment were influenced by the language used in the narrative statements and by whether or not the submitting university was represented amongst the experts who reviewed the statements. Finally, a similar peer-review bias has been identified in the evaluation of research outputs (see e.g., Taylor 2011). Overall, there have been criticisms about evaluation biases in each element of the REF exercise.

Another criticism of the REF 2014 exercise concerns its cost. HEFCE commissioned a review (Farla and Simmonds 2015), which estimated the cost of the exercise at £246 million (Farla and Simmonds 2015, 6), of which £212 million was the cost of preparing the REF submissions. It can be estimated that roughly £19–27 million was spent preparing the research environment statements, 4 £55 million was spent preparing the impact case studies, and the remainder of the preparation cost can be associated with the output submissions. Overall, the cost of preparing each element was significant. Since there is good agreement between bibliometric indicators and peer-review assessments (Bertocchi et al. 2015; Pidd and Broadbent 2015), it has been argued that the cost of evaluating outputs could be reduced through the use of bibliometric information (see e.g., De Boer et al. 2015; Geuna and Piolatto 2016). Furthermore, Pinar and Unlu (2020b) found that using the ‘environment data’ alone could minimize the cost of preparing the environment part of the assessment, as the environment data (i.e., income generated by units, number of staff and postgraduate degree completions) explain a good percentage of the variation in REF environment scores across HEIs.

These criticisms sit alongside the work of Kelly (2016) and Pinar (2020), which shows that a key outcome of the REF, the distribution of ca. £1bn per annum of QR funding, is dependent upon somewhat arbitrary or opaque value judgements (e.g., the relative importance of world-leading research compared to internationally excellent research, and the relative cost of undertaking research in different disciplines). In this article, we contribute to the existing literature by using correlation analysis to examine the redundancy of each research element, and by using PCA to obtain weights for each element that address the high correlation between the three elements while explaining most of the variation in achievements and in the funding distributed to each element.

The three components of the REF are highly and positively correlated (see next section for correlation analysis), and a high and positive correlation amongst the three components would suggest that removal of one component from the REF would have only a small effect on the QR funding distribution and overall performance rankings based on the redundancy literature (e.g., McGillivray 1991 ; McGillivray and White 1993 ; Bérenger and Verdier-Chouchane 2007 ). Therefore, based on the arguments put forward in the redundancy literature, we set the hypotheses of this article as follows:

Hypothesis 1: Exclusion of one of the REF elements from the distribution of the mainstream QR funding would lead to relatively small shifts in the allocation of funds if the three REF elements are positively and highly correlated.

Hypothesis 2: Exclusion of one of the REF elements from the calculation of the overall REF grade point averages (GPAs) obtained by HEIs would result in relatively small shifts in the rankings of HEIs when the REF elements are positively and highly correlated.

Hypothesis 3: Calculating overall REF GPAs and allocating funding with PCA weights given to each REF element would result in small shifts in rankings and in the funding allocation when the three components of the REF are highly and positively correlated.

In this section, we will provide the details of the data sources for the REF results and QR funding allocation based on the REF results. We will also discuss the alternative ways of obtaining overall REF scores and QR funding allocation.

3.1 REF results data

In REF 2014, each participating UK institution submitted in one or more disciplinary areas, known as ‘units of assessment’ (UOAs). Each submission comprised three elements:

A number of research outputs. The expected number of research outputs submitted by each UOA was four times the full-time equivalent (FTE) staff included in that submission, unless one or more staff members was allowed a reduction in outputs. Each FTE staff member was expected to submit four research outputs, but reductions were allowed for staff members with individual circumstances, such as being an early career researcher, having taken maternity, paternity or adoption leave during the assessment period, or having had health problems.

A number of case studies demonstrating the impact of research undertaken within that UOA, and a narrative ‘impact template’ which included a description of the UOA’s approach to generating impact from its research. Each case study was a maximum of four pages, and the rules stipulated that the number of case studies required depended upon the number of FTEs submitted in the UOA, as did the length of the impact template. Ninety-five per cent of submissions by English universities comprised between two and seven case studies, with impact templates that were three or four pages long. 5

Information about the research environment, which comprised a narrative ‘environment statement’ describing the research environment, together with data on research income and PhD completions. As with the impact narrative, the length of the environment statement depended upon the number of FTEs submitted, with 95% of submissions from English universities comprising narratives that were between 7 and 12 pages long.

After submission, each individual component of these elements (e.g., a research output, an impact case study) was given a score by the peer reviewers on a five-point ‘star’ scale, namely 4* (world-leading), 3* (internationally excellent), 2* (internationally recognized), 1* (nationally recognized) and unclassified (for components below the 1* standard). From the scores for the individual components of each element, a profile for each element was obtained, and this was the information released by HEFCE. This profile for each element, obtained from REF (2014), gives the percentage of the research in each element (i.e., research outputs, environment and impact) that was rated 4*, 3*, 2*, 1* or unclassified. Finally, an overall research profile of the UOA was calculated, with the elements weighted 65:20:15 for outputs:impact:environment.
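Stated compactly (this is a restatement of the weighting just described, not an additional rule), writing p_k^e in the LaTeX expression below for the percentage of element e rated at star level k, the overall profile at each level is

p_k^{\mathrm{overall}} = 0.65\, p_k^{\mathrm{output}} + 0.20\, p_k^{\mathrm{impact}} + 0.15\, p_k^{\mathrm{environment}}, \qquad k \in \{4^*,\, 3^*,\, 2^*,\, 1^*,\, \mathrm{unclassified}\}.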

To test whether the quality of the research environment, impact and outputs are correlated, we obtain each individual submission’s weighted average environment, impact and output scores. 6 Table 1 provides a correlation matrix between the GPA scores of the different elements and shows that they are positively and significantly correlated with each other at the 1% level. Table 2 shows the results of PCA of the three elements when the GPA scores in each element are used. The first principal component accounts for approximately 79.0% of the variation in the three elements, and the first two principal components together account for approximately 92.5%. Clearly, the first principal component contains most of the statistical information embedded in the three elements. Moreover, the first principal component has roughly similar eigenvector loadings on the three elements, suggesting that overall GPA scores could be obtained by giving each element roughly equal weights once the eigenvector is normalized so that the weights sum to 1.

Correlation matrix between different element GPAs

Note: An asterisk (*) denotes significance at the 1% level.

Results of PCA of the three elements using GPA scores

Since all the elements are positively and significantly correlated with each other, removing one of the elements from the REF assessment, or combining the REF elements in an alternative way (via PCA weights), might have little overall effect on the distribution of QR income and on overall achievement.

3.2 QR funding allocation data based on REF results

Based on the REF results obtained by UOAs, Research England describes how it distributes QR funding in Research England (2019a) . In brief, QR funding comprises six elements: (1) mainstream QR funding; (2) QR research degree programme supervision fund; (3) QR charity support fund; (4) QR business research element; (5) QR funding for National Research Libraries; and (6) the Global Challenge Research Fund. The mainstream QR funding is the largest, comprising approximately two-thirds of the overall QR funding, and is the element which is most directly related to an institution’s performance in REF 2014. The data for the mainstream QR funding allocations across panels, UOAs and HEIs during the 2019–20 funding period are obtained from Research England (2019b) .

In calculating an institution’s mainstream QR funding, Research England follows a four-stage process:

The mainstream QR funding is separated into three elements, for outputs, impact and environment, with 65% of funding for outputs, 20% for impact and 15% for environment.

The funding for each of the three elements is distributed amongst the four ‘main subject panels’ 7 in proportion to the volume of research in each main panel which was 3* or above, weighted to reflect an assumed relative cost of research in different disciplines.

Within each main panel, mainstream QR funding is distributed to each UOA according to the volume of research at 3* or above and the cost weights (which reflect the relative cost of undertaking research in different disciplines), and with an additional multiplier of 4 being given to research rated as world-leading, i.e., 4* research, compared to internationally excellent, or 3*, research.

The mainstream QR funding for each element in each UOA is then distributed to individual HEIs according to the volume of research at 3* or above produced by that HEI, with the cost and quality weights taken into account.

Therefore, a university’s total QR mainstream funding comprises an amount for each element of outputs, impact and environment, for each UOA in which it made a submission.
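The Python sketch below illustrates this allocation logic in simplified form. It is not Research England’s code: the submissions, volumes and cost weights are hypothetical, and steps 2–4 are collapsed into a single proportional split rather than being run separately at panel, UOA and HEI level.

# simplified sketch of the mainstream QR allocation logic described above
ELEMENT_SHARES = {"output": 0.65, "impact": 0.20, "environment": 0.15}
QUALITY_MULTIPLIER = {"4*": 4.0, "3*": 1.0}  # 4* research weighted four times 3*

def weighted_volume(profile, fte, cost_weight):
    """Cost- and quality-weighted volume of research rated 3* or above.
    profile maps star levels to the percentage of the submission at that level."""
    quality = sum(QUALITY_MULTIPLIER[star] * profile.get(star, 0) / 100.0
                  for star in QUALITY_MULTIPLIER)
    return quality * fte * cost_weight

def allocate(pot, submissions):
    """Distribute one element's funding pot across submissions in proportion
    to their weighted volumes (steps 2-4 collapsed into a single split)."""
    volumes = {name: weighted_volume(s["profile"], s["fte"], s["cost_weight"])
               for name, s in submissions.items()}
    total = sum(volumes.values())
    return {name: pot * v / total for name, v in volumes.items()}

# hypothetical example: the output pot of a £100m total, two submissions in one UOA
submissions = {
    "HEI A, UOA 11": {"profile": {"4*": 30, "3*": 50}, "fte": 40, "cost_weight": 1.6},
    "HEI B, UOA 11": {"profile": {"4*": 10, "3*": 60}, "fte": 25, "cost_weight": 1.6},
}
output_pot = 100_000_000 * ELEMENT_SHARES["output"]
print({name: round(amount) for name, amount in allocate(output_pot, submissions).items()})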

Since the allocation of the mainstream QR funding in each pot (environment, impact and output) is closely related to the performance of the UOAs in the respective research element, we also find positive and significant correlation coefficients, at the 1% level, between the mainstream QR funding distributed to the UOAs in the environment, impact and output pots (see Table 3). Similarly, when we carry out PCA, we find that the first principal component accounts for approximately 97% of the variation in the components and has roughly similar eigenvector loadings (see Table 4), suggesting that equal funding could be distributed in the environment, impact and output pots.

Correlation matrix between different funding pots

Results of PCA of the three elements using funds distributed in each pot

3.3 Alternative ways of allocating QR funding and obtaining overall REF scores

Based on the arguments in the redundancy literature, we examine the effects of excluding one element of the evaluation when distributing QR funding and calculating overall REF scores. As described in Section 3.2, the mainstream QR funding is initially distributed across three pots (i.e., output, environment and impact), where 65%, 20% and 15% of the mainstream QR funding is distributed based on the performance of submissions in the output, impact and environment elements of REF 2014, respectively (i.e., step 1 of the funding formula). Similarly, the overall REF scores of units and HEIs were obtained as a weighted average of the three elements, with the output, impact and environment performances weighted 65%, 20% and 15%, respectively. If one of the elements (i.e., environment, impact or output) is excluded, its weight should be reallocated to the other two elements so that the weights again sum to 100%, both for redistributing the QR funding and for obtaining the overall REF scores. In the first scenario, we exclude the environment element and reallocate its weight to output and impact in proportion to their initial weights of 65:20, which gives 76.5% and 23.5%. 8 In the second scenario, we exclude the impact element and reallocate its weight to the environment and output elements in proportion to their initial weights of 15:65, which gives 18.75% and 81.25%. In the third scenario, we exclude the output element, and the environment and impact elements are allocated weights of 43% and 57% based on their initial weight ratio of 15:20. Finally, in the fourth scenario, motivated by the PCA results, each element is kept in the calculation of the overall GPA and the distribution of QR funding, but each is given an equal weight (i.e., 33.33%).
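A minimal sketch of this weight reallocation for Scenarios 1–3, reproducing the percentages quoted above (the function name is ours, not part of the REF or funding documentation):

OFFICIAL_WEIGHTS = {"output": 0.65, "impact": 0.20, "environment": 0.15}

def reallocate(weights, excluded):
    """Drop one element and rescale the remaining weights so they sum to 1."""
    kept = {k: v for k, v in weights.items() if k != excluded}
    total = sum(kept.values())
    return {k: v / total for k, v in kept.items()}

print(reallocate(OFFICIAL_WEIGHTS, "environment"))  # output ~0.765, impact ~0.235
print(reallocate(OFFICIAL_WEIGHTS, "impact"))       # output 0.8125, environment 0.1875
print(reallocate(OFFICIAL_WEIGHTS, "output"))       # impact ~0.571, environment ~0.429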

Based on the funding formula for the mainstream QR funding allocation (see Research England 2019a, 16–9 for details of how the mainstream QR funding is allocated, or Section 3.2 of this article for the steps), we follow the same steps to redistribute the mainstream QR funding across the different panels, UOAs and HEIs under the alternative scenarios. To obtain the overall REF scores of HEIs, the overall GPA of each unit is calculated by weighting the GPAs of the output, impact and environment elements by 65%, 20% and 15%, respectively. Under the alternative scenarios, we obtain the overall GPA of the HEIs by weighting the elements with the respective scenario weights discussed above.

4.1 Alternative way of allocating QR funding

In this subsection, we examine the effect of the mainstream QR funding distribution to different panels, UOAs and HEIs in England under Scenarios 1–4 compared to the official mainstream QR funding allocation. To provide an overall picture of the amount of mainstream QR funding distributed in the 2019–20 funding period, Table 5 shows the amount of mainstream QR funding distributed in each of the three pots with the official REF 2014 results and with the alternative scenarios proposed in this article. During the 2019–20 funding period, a total of £1,060 million (i.e., just over a billion pounds) was distributed as mainstream QR funding, with roughly £159 million, £212 million and £689 million distributed in the environment, impact and output pots across the English HEIs, respectively. 9 By contrast, under Scenarios 1, 2 and 3 no mainstream QR funding is distributed in the environment, impact and output pots, respectively, while under Scenario 4 equal amounts are distributed in each pot. With Scenario 1, £249 million and £811 million of mainstream QR funding are distributed based on the REF 2014 performances in the impact and output elements, an additional £37 million and £122 million in the impact and output pots compared to the official allocation, respectively. With Scenario 2, £199 million and £862 million are distributed in the environment and output pots, an additional £39 million and £172 million compared to the official allocation, respectively. With Scenario 3, £456 million and £605 million are distributed in the environment and impact pots, an additional £297 million and £392 million compared to the official allocation, respectively. Finally, with Scenario 4, an equal amount (i.e., £353.5 million) is distributed in each pot, so that more funding is allocated to the environment and impact pots and less to the output pot.

Distribution of mainstream QR funding across different pots based on the REF 2014 results and alternative scenarios

Table 6 shows the allocation of the mainstream QR funding to the four main panels (i.e., Panel A: Medicine, Health and Life Sciences; Panel B: Physical Sciences, Engineering and Mathematics; Panel C: Social Sciences; Panel D: Arts and Humanities) with the REF 2014 results and with the alternative scenarios. The table also shows the change in the mainstream QR funding received by the four main panels between the official allocation and the alternative scenarios, where a positive (negative) change indicates that the panel would have received more (less) funding with the alternative scenario. The results suggest that panel B would have been allocated more funds, and panels A, C and D less, under Scenarios 1 and 2 compared to the official allocation, suggesting that excluding the environment and impact elements would have benefitted panel B. Conversely, panel B would have received less, and panels A, C and D more, QR funding under the third and fourth scenarios (i.e., when the output element is excluded and when an equal amount of funds is distributed in each pot, respectively) compared to the official scenario. Overall, only 0.34%, 0.64%, 2.29% and 1.08% of the total mainstream QR funding (i.e., £3.6 million, £6.8 million, £24.3 million and £11.5 million) would have been reallocated across the four main panels under Scenarios 1, 2, 3 and 4, respectively, compared to the official allocation.

Allocation of the mainstream QR funding to four main panels with the alternative scenarios

Note: Panels A (Medicine, Health and Life Sciences), B (Physical Sciences, Engineering and Mathematics), C (Social Sciences) and D (Arts and Humanities) consist of UOAs 1–6, 7–15, 16–26 and 27–36, respectively.

Table  7 reports the official QR funding allocation and the QR funding allocation changes between alternative scenarios and official scenario in different UOAs where a positive (negative) figure suggests that the UOA received relatively more (less) QR funding with the alternative scenario compared to the official case. We find, for example, that the Computer Science and Informatics, and the Public Health, Health Services and Primary Care units would have received £2.0 million more and £1.2 million less QR funding when the environment element is excluded (Scenario 1) compared to the official scenario, respectively. On the other hand, when the impact element is excluded (Scenario 2), the Biological Sciences and Clinical Medicine units would have generated £3.0 million more and £4.3 million less than the official scenario, respectively. When the output element is excluded from the evaluation (Scenario 3), we find that the Clinical Medicine and Biological Sciences units would have generated £11.7 million more and £7.2 million less compared to the official scenario, respectively. Finally, if all three elements are weighted equally (Scenario 4), Clinical Medicine and Computer Science and Informatics units would have generated £5.1 million more and £3.5 million less than the official scenario, respectively. This evaluation clearly shows in which elements specific subjects perform better (worse) than other subject areas. Even though we observe changes in funds generated by each unit with alternative scenarios, there is a limited funding shift across units. Overall, the total amounts reallocated across different UOAs are £5.9 million, £11.5 million, £36.9 million and £17.2 million with Scenarios 1, 2, 3 and 4, which correspond to 0.55%, 1.08%, 3.48% and 1.62% of the total mainstream QR funding, respectively.

Allocation of mainstream QR funding across different UOAs and changes in funding allocation with alternative scenarios compared to the benchmark

Note: A positive (negative) figure in changes columns suggests that the UOA received relatively more (less) QR funding with the respective alternative scenario compared to the official case.

Finally, we examine the effect of the alternative QR funding allocations on the funding received by HEIs. Table 8 shows the five HEIs that would have seen the biggest increase (decrease) in mainstream QR funding under the alternative scenarios compared to the official allocation. The data show that the University of Leicester, University of Plymouth, University of East Anglia, University of Birmingham and the University of Surrey would have generated £745k, £552k, £550k, £522k and £464k more QR funding with the first scenario compared to the official scenario, whereas University College London, University of Cambridge, University of Oxford, University of Manchester and the University of Nottingham would have generated £3.4 million, £2.1 million, £2 million, £1.5 million and £1.4 million less, respectively. On the other hand, the University of Cambridge would have generated £1.9 million more if the impact element is excluded (Scenario 2), and University College London would have generated £9.8 million and £5.6 million more if the output element is excluded (Scenario 3) and if each element is weighted equally (Scenario 4), respectively. In comparison, the University of Leeds, University of Birmingham and University of Leicester would have generated £1 million, £2.4 million and £1.3 million less with Scenarios 2, 3 and 4, respectively. Overall, the total amounts reallocated across HEIs are £15.5 million, £11.1 million, £46.7 million and £25.6 million with Scenarios 1, 2, 3 and 4, corresponding to just 1.46%, 1.05%, 4.41% and 2.42% of the total mainstream QR funding, respectively. Furthermore, only a handful of universities would have experienced a significant change in their funding allocation: 6, 3, 25 and 10 HEIs saw a difference in their QR funding allocation of more than £1 million with Scenarios 1, 2, 3 and 4 compared to the official allocation, respectively (see Appendix Table A.1 for the allocation of the mainstream QR funding to HEIs in the official case and the differences between the alternative scenarios and the official allocation).

Five HEIs that would generate more (less) with the alternative scenarios compared to the official scenario

4.2 Ranking of HEIs

Since the REF exercise is used in rankings of HEIs, in this subsection we evaluate the effect of the different scenarios on the overall GPAs and rankings of HEIs. Table 9 reports the Spearman’s rank correlation coefficients between the GPA scores obtained with the official scenario and those obtained with the alternative scenarios. The GPA scores obtained with the alternative scenarios are highly and positively correlated with the official GPA scores at the 1% level. Even so, some HEIs would have been ranked in relatively higher (lower) positions with the alternative scenarios than with the official scenario. Amongst 111 HEIs, just 9, 5, 22 and 5 HEIs moved more than 10 positions in the rankings under Scenarios 1, 2, 3 and 4, respectively, compared to the official rankings. For instance, the Guildhall School of Music & Drama would have experienced a major improvement in its ranking with the third scenario, rising to 53rd position when the output element is excluded (i.e., Scenario 3) compared to 89th position with the official scenario. On the other hand, the London Business School would have been ranked 32nd with the third scenario but 7th with the official scenario (see Appendix Table A.2 for the GPA scores and respective rankings of HEIs with the official case and Scenarios 1, 2, 3 and 4). However, with very few exceptions, the differences between the rankings in the alternative scenarios and the official rankings are relatively small.
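The Python sketch below (with hypothetical GPA values, not the article’s data) shows how such a comparison can be reproduced: Spearman’s rho between official and scenario GPAs, plus the rank shift for each HEI.

from scipy.stats import spearmanr
import pandas as pd

gpas = pd.DataFrame({
    "official": [3.10, 2.85, 2.60, 2.40, 2.95],
    "scenario_3": [3.05, 2.90, 2.70, 2.35, 2.80],
}, index=["HEI A", "HEI B", "HEI C", "HEI D", "HEI E"])

rho, p_value = spearmanr(gpas["official"], gpas["scenario_3"])
print(f"Spearman's rho = {rho:.3f} (p = {p_value:.3f})")

ranks = gpas.rank(ascending=False)  # rank 1 = highest GPA
# positive values mean the HEI falls in the ranking under the scenario
print((ranks["scenario_3"] - ranks["official"]).astype(int))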

Spearman’s rank correlation coefficients between official and alternative scenario GPAs

Note: An asterisk (*) denotes significance at the 1% level.

Given concerns over possible bias in the assessment of the three elements of the REF and the cost of preparing the REF return (Farla and Simmonds 2015), we evaluated the implications of excluding different elements from the REF. Since the three components of the REF are positively and highly correlated, each element could be considered redundant; this article therefore examined the implications for the allocation of QR funding to different panels, UOAs and HEIs, and for the overall REF GPAs, when one element (environment, impact or output) of the REF is removed from the calculation. Furthermore, we used PCA to obtain weights that explain most of the variation in the funding distributed amongst the three elements, which suggested that distributing funds with equal weights explains most of the variation in the funding distribution across the three pots.

We found that excluding one element from the REF, or using equal weights, would have benefited (disadvantaged) some HEIs, but at most £46.7 million (out of over £1 billion) would have been reallocated between HEIs, in the case where the output element is excluded from the evaluation. Furthermore, when different elements are excluded from the rankings and the weight of the excluded element is redistributed between the other two (in proportion to their original weightings), the resulting rankings are highly and significantly correlated with the official rankings, suggesting that alternative ways of obtaining composite scores lead to rankings similar to the official one. Overall, the main argument of this article is that, given the high cost of preparing REF returns, the potential bias in assessing each component, and the relatively small effect that removing some elements of the REF assessment has on the distribution of QR income and on universities’ relative rankings, the removal of some elements from the assessment process may be considered for future assessment exercises.

This article does not quantify the bias involved in the evaluation of each element of the REF exercise, and therefore, we do not provide any suggestion about which element should be removed from the REF. Instead, our findings demonstrate that excluding a component from the REF evaluation does not result in significant rank reversals in overall outcomes and reallocation of funds across units and HEIs.

In addition, the assessments of outputs and impact case studies in the REF are both based on the submit-to-be-rated methodology dating from 1986, by which, in essence, the achievements of individuals, not of the organization, are summed up. Based on the definition of organizational evaluation by BetterEvaluation (2021), the impact and output evaluations of the REF therefore reflect the achievements of individuals. If the aim is to evaluate organizations, then the evaluation of the impact and output elements, which are in essence individual achievements, could be removed, and, as this article shows, their removal would not have significant effects. Therefore, if the REF aims to evaluate organizational performance, the choice of components should be further motivated by, and rely on, metrics that evaluate the organization rather than individual achievements.

Furthermore, if future evaluations include new metrics that aim to measure organizational achievement, these metrics should be carefully chosen to provide a new set of information beyond the existing indicators. Therefore, these indicators should not be highly correlated with the already existing indicator set so that new information is captured through their assessment.

There is a significant body of literature on PRFS, and for a review of these systems, the reader is directed to a number of papers and references ( Rebora and Turri 2013 ; Bertocchi et al. 2015 ; De Boer et al. 2015 ; Hicks et al. 2015 ; Dougherty et al. 2016 ; Geuna and Piolatto 2016 ; Sivertsen 2017 ; Zacharewicz et al. 2019 , amongst many others).

Education is a devolved matter in the UK, and university funding and oversight in 2014 was the responsibility of the Higher Education Funding Council for England (HEFCE) in England, the Scottish Funding Council (SFC) in Scotland, the Higher Education Funding Council for Wales (HEFCW) in Wales and the Department for Employment and Learning (DELNI) in Northern Ireland. The formulae which converted REF performance into QR funding differed between the four administrations, and this article only examines the QR distribution across English HEIs.

An environment that is conducive to producing research of world-leading quality, internationally excellent quality, internationally recognized quality or nationally recognized quality is given a 4*, 3*, 2* or 1* score, respectively. An environment that is not conducive to producing research of at least nationally recognized quality is considered unclassified.

The cost to UK HEIs of submitting to REF, excluding the impact element was estimated at £157 million ( Farla and Simmonds 2015 , 6). It is further estimated that 12% of time spent at the central level was on the environment template and 17% of time at the UOA level (see Figures 5 and 6 of Farla and Simmonds 2015 , respectively). The estimate of £19–27 million is obtained as 12–17% of the overall £157 million non-impact cost of submission. Furthermore, it was found that the panel members spent on average 533 h on panel duties, which represented an estimated cost to the sector of £23 million (see Farla and Simmonds 2015 , 40, 41).

As stated previously, because the devolved administrations of the UK used different methods to calculate QR income, this article focusses just on English institutions.

The scores for each individual output, environment or impact component are not given on the REF 2014 website, www.ref.ac.uk/2014. In other words, the ratings of each research output, research environment element and impact case study are not provided. Instead, the REF results provide the percentage of each overall research element (i.e., research output, environment and impact) that was rated 4*, 3*, 2*, 1* or unclassified. The weighted average score of a research element (i.e., output, environment or impact) is therefore obtained as follows. If 35%, 30%, 20% and 15% of a given submission’s element were rated 4*, 3*, 2* and 1*, respectively, then the weighted average score of this element would be (35 × 4 + 30 × 3 + 20 × 2 + 15 × 1)/100 = 2.85.
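In general terms (restating the worked example above), writing p_k in the LaTeX expression below for the percentage of an element rated at star level k, with unclassified research scoring zero, the element GPA used throughout the article is

\mathrm{GPA}_{\mathrm{element}} = \frac{4\, p_{4^*} + 3\, p_{3^*} + 2\, p_{2^*} + 1\, p_{1^*} + 0\, p_{\mathrm{unclassified}}}{100}.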

The four main panels are groupings of individual UOAs which broadly speaking encompass medical, and health and biological sciences (Panel A), physical sciences and engineering (Panel B), social sciences (Panel C) and humanities and arts (Panel D).

These percentage weights are obtained by (0.65/0.85)×100 and (0.2/0.85)×100, respectively.

Note that HEIs within the inner and outer London areas receive 12% and 8% additional QR funding, respectively, on top of their allocated mainstream QR funding. However, to examine the effect of the alternative scenarios on the allocation of the mainstream QR funding, we do not consider this additional, location-based funding.

We would like to thank the editor and three anonymous referees for very constructive and insightful reviews of earlier drafts of this article.

Conflict of interest statement . None declared.

Bertocchi G. , Gambardella A. , Jappelli T. , Nappi C. A. , Peracchi F. ( 2015 ) ‘ Bibliometric Evaluation vs. Informed Peer Review: Evidence from Italy ’, Research Policy , 44 : 451 – 66 .

Bérenger V. , Verdier-Chouchane A. ( 2007 ) ‘ Multidimensional Measures of Well-Being: Standard of Living and Quality of Life across Countries ’, World Development , 35 : 1259 – 76 .

BetterEvaluation ( 2021 ) Evaluating the Performance of an Organisation < https://www.betterevaluation.org/en/theme/organisational_performance > accessed 15 October 2021.

Cahill M. B. ( 2005 ) ‘ Is the Human Development Index Redundant? ’, Eastern Economic Journal , 31 : 1 – 6 .

De Boer H. , Jongbloed B. , Benneworth P. , Cremonini L. , Kolster R. , Kottmann A. , Lemmens-Krug K. , Vossensteyn H. ( 2015 ) ‘Performance-based funding and performance agreements in fourteen higher education systems’, report for the Ministry of Education, Culture and Science, The Hague: Ministry of Education, Culture and Science.

Dougherty K. J. , Jones S. M. , Lahr H. , Natow R. S. , Pheatt L. , Reddy V. ( 2016 ) Performance Funding for Higher Education , Baltimore, MD : Johns Hopkins University Press .

Farla K. , Simmonds P. ( 2015 ) ‘REF accountability review: Costs, benefits and burden’, report by Technopolis to the four UK higher education funding bodies. Technopolis Group < http://www.technopolis-group.com/report/ref-accountability-review-costs-benefits-and-burden/ > accessed 14 March 2021.

Geuna A. , Piolatto M. ( 2016 ) ‘ Research Assessment in the UK and Italy: Costly and Difficult, but Probably Worth It (at least for a while) ’, Research Policy , 45 : 260 – 71 .

Gilroy P. , McNamara O. ( 2009 ) ‘ A Critical History of Research Assessment in the United Kingdom and Its Post‐1992 Impact on Education ’, Journal of Education for Teaching , 35 : 321 – 35 .

Hicks D. ( 2012 ) ‘ Performance-Based University Research Funding Systems ’, Research Policy , 41 : 251 – 61 .

Hicks D. , Wouters P. , Waltman L. , de Rijcke S. , Rafols I. ( 2015 ) ‘ Bibliometrics: The Leiden Manifesto for Research Metrics ’, Nature , 520 : 429 – 31 .

Hinze S. , Butler L. , Donner P. , McAllister I. ( 2019 ) ‘Different Processes, Similar Results? A Comparison of Performance Assessment in Three Countries’, in Glänzel W. , Moed H. F. , Schmoch U. , Thelwall M. (eds) Springer Handbook of Science and Technology Indicators , pp 465-484. Cham : Springer .

Jonkers K. , Zacharewicz T. ( 2016 ) ‘Research performance based funding systems: A comparative assessment’, JRC Science for Policy Report, European Commission: Joint Research Centre < https://publications.jrc.ec.europa.eu/repository/bitstream/JRC101043/kj1a27837enn.pdf > accessed 15 May 2021.

Kelly A. ( 2016 ) ‘Funding in English Universities and its Relationship to the Research Excellence Framework’, British Educational Research Journal , 42 : 665 – 81 .

Khatun T. ( 2009 ) ‘ Measuring Environmental Degradation by Using Principal Component Analysis ’, Environment, Development and Sustainability , 11 : 439 – 57 .

McGillivray M. ( 1991 ) ‘ The Human Development Index: Yet Another Redundant Composite Development Indicator ’, World Development , 19 : 1461 – 8 .

McGillivray M. ( 2005 ) ‘ Measuring Non-Economic Wellbeing Achievement ’, Review of Income and Wealth , 51 : 337 – 64 .

McGillivray M. , White H. ( 1993 ) ‘ Measuring Development? The UNDP’s Human Development Index ’, Journal of International Development , 5 : 183 – 92 .

Manville C. , Guthrie S. , Henham M. L. , Garrod B. , Sousa S. , Kirtley A. , Castle-Clarke S. , Ling T. , et al.  ( 2015 ) ‘Assessing impact submissions for REF 2014: An evaluation’ < www.rand.org/content/dam/rand/pubs/research_reports/RR1000/RR1032/RAND_RR1032.pdf > accessed 7 April 2021.

Marques M. , Powell J. J. W. , Zapp M. , Biesta G. ( 2017 ) ‘ How Does Research Evaluation Impact Educational Research? Exploring Intended and Unintended Consequences of Research Assessment in the United Kingdom, 1986–2014 ’, European Educational Research Journal , 16 : 820 – 42 .

Nardo M. , Saisana M. , Saltelli A. , Tarantola S. ( 2008 ). Handbook on Constructing Composite Indicators: Methodology and User Guide , Paris : OECD Publishing < https://www.oecd.org/sdd/42495745.pdf > accessed 15 October 2021.

Nguefack‐Tsague G. , Klasen S. , Zucchini W. ( 2011 ) ‘ On Weighting the Components of the Human Development Index: A Statistical Justification ’, Journal of Human Development and Capabilities , 12 : 183 – 202 .

Penfield T. , Baker M. , Scoble R. , Wykes M. ( 2014 ) ‘ Assessment, Evaluations, and Definitions of Research Impact: A Review ’, Research Evaluation , 23 : 21 – 32 .

Pidd M. , Broadbent J. ( 2015 ) ‘ Business and Management Studies in the 2014 Research Excellence Framework ’, British Journal of Management , 26 : 569 – 81 .

Pinar M. ( 2020 ) ‘ It is Not All about Performance: Importance of the Funding Formula in the Allocation of Performance-Based Research Funding in England ’, Research Evaluation , 29 : 100 – 19 .

Pinar M. , Milla J. , Stengos T. ( 2019 ) ‘ Sensitivity of University Rankings: Implications of Stochastic Dominance Efficiency Analysis ’, Education Economics , 27 : 75 – 92 .

Pinar M. , Unlu E. ( 2020a ) ‘ Evaluating the Potential Effect of the Increased Importance of the Impact Component in the Research Excellence Framework of the UK ’, British Educational Research Journal , 46 : 140 – 60 .

Pinar M. , Unlu E. ( 2020b ) ‘ Determinants of Quality of Research Environment: An Assessment of the Environment Submissions in the UK’s Research Excellence Framework in 2014 ’, Research Evaluation , 29 : 231 – 44 .

Rebora G. , Turri M. ( 2013 ) ‘ The UK and Italian Research Assessment Exercises Face to Face ’, Research Policy , 42 : 1657 – 66 .

REF ( 2011 ) Assessment Framework and Guidance on Submissions < https://www.ref.ac.uk/2014/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/GOS%20including%20addendum.pdf > accessed 7 April 2021.

REF ( 2012 ) Panel Criteria and Working Methods < https://www.ref.ac.uk/2014/media/ref/content/pub/panelcriteriaandworkingmethods/01_12_1.pdf > accessed 21 May 2021.

REF ( 2014 ) Results and Submissions < https://results.ref.ac.uk/(S(ag0fd0kpw5wgdcjk2rh1cwxr ))/> accessed 05 May 2021.

Research England ( 2019a ) Research England: How We Fund Higher Education Institutions < https://re.ukri.org/documents/2019/research-england-how-we-fund-higher-education-institutions-pdf/ > accessed 3 July 2020.

Research England ( 2019b ) Annual Funding Allocations 2019–20 < https://re.ukri.org/finance/annual-funding-allocations/annual-funding-allocations-2019-20/ > accessed 5 July 2020.

Robinson-Garcia N. , Torres-Salinas D. , Herrera-Viedma E. , Docampo D. ( 2019 ) ‘ Mining University Rankings: Publication Output and Citation Impact as Their Basis ’, Research Evaluation , 28 : 232 – 40 .

Saisana M. , d’Hombres B. , Saltelli A. ( 2011 ) ‘ Rickety Numbers: Volatility of University Rankings and Policy Implications ’, Research Policy , 40 : 165 – 77 .

Shattock M. ( 2012 ) Making Policy in British Higher Education 1945–2011 , Berkshire : McGraw-Hill .

Sivertsen G. ( 2017 ) ‘ Unique, but Still Best Practice? The Research Excellence Framework (REF) from an International Perspective ’, Palgrave Communications , 3 : 1 – 6 .

Smith S. , Ward V. , House A. ( 2011 ) ‘“ Impact” in the Proposals for the UK’s Research Excellence Framework: Shifting the Boundaries of Academic Autonomy ’, Research Policy , 40 : 1369 – 79 .

Taylor J. ( 2011 ) ‘ The Assessment of Research Quality in UK Universities: Peer Review or Metrics? ’, British Journal of Management , 22 : 202 – 17 .

Thorpe A. , Craig R. , Hadikin G. , Batistic S. ( 2018a ) ‘ Semantic Tone of Research “Environment” Submissions in the UK’s Research Evaluation Framework 2014 ’, Research Evaluation , 27 : 53 – 62 .

Thorpe A. , Craig R. , Tourish D. , Hadikin G. , Batistic S. ( 2018b ) ‘ “Environment” Submissions in the UK’s Research Excellence Framework 2014 ’, British Journal of Management , 29 : 571 – 87 .

Tijssen R. J. W. , Yegros-Yegros A. , Winnink J. J. ( 2016 ) ‘ University–Industry R&D Linkage Metrics: Validity and Applicability in World University Rankings ’, Scientometrics , 109 : 677 – 96 .

Watermeyer R. ( 2016 ) ‘ Impact in the REF: Issues and Obstacles ’, Studies in Higher Education , 41 : 199 – 214 .

Wilsdon J., Allen, L., Belfiore, E., Campbell, P., Curry, S., Hill, S., Jones, R., Kain, R., Kerridge, S., Thelwall, M., Tinkler, J., Viney, I., Wouters, P., Hill, J., Johnson, B. ( 2015 ) ‘The metric tide: Report of the independent review of the role of metrics in research assessment and management’. DOI: 10.13140/RG.2.1.4929.1363.

Zacharewicz T. , Lepori B. , Reale E. , Jonkers K. ( 2019 ) ‘ Performance-Based Research Funding in EU Member States—A Comparative Assessment ’, Science and Public Policy , 46 : 105 – 15 .

Allocation of mainstream QR funding to HEIs with the official and alternative scenarios

Notes: Official column presents the allocation of mainstream QR funding across HEIs with the official funding allocation.

Scenario 1—Official: This column provides the differences in the mainstream QR funding allocated to the HEIs between Scenario 1 and official case.

Scenario 2—Official: This column provides the differences in the mainstream QR funding allocated to the HEIs between Scenario 2 and official case.

Scenario 3—Official: This column provides the differences in the mainstream QR funding allocated to the HEIs between Scenario 3 and official case.

Scenario 4—Official: This column provides the differences in the mainstream QR funding allocated to the HEIs between Scenario 4 and official case.

A positive (negative) figure in changes columns suggests that the HEI received relatively more (less) QR funding with the respective alternative scenario compared to the official case.

GPA scores and respective rankings of HEIs with the official case and Scenarios 1, 2, 3 and 4


Project Evaluation Process: Definition, Methods & Steps

ProjectManager

Managing a project with copious moving parts can be challenging to say the least, but project evaluation is designed to make the process that much easier. Every project starts with careful planning, and this sets the stage for the execution phase of the project, while estimations, plans and schedules guide the project team as they complete tasks and deliverables.

But even with the project evaluation process in place, managing a project successfully is not as simple as it sounds. Project managers need to keep track of costs , tasks and time during the entire project life cycle to make sure everything goes as planned. To do so, they utilize the project evaluation process and make use of project management software to help manage their team’s work in addition to planning and evaluating project performance.

What Is Project Evaluation?

Project evaluation is the process of measuring the success of a project, program or portfolio. This is done by gathering data about the project and using an evaluation method that allows evaluators to find performance improvement opportunities. Project evaluation is also critical to keep stakeholders updated on the project status and any changes that might be required to the budget or schedule.

Every aspect of the project such as costs, scope, risks or return on investment (ROI) is measured to determine if it’s proceeding as planned. If there are road bumps, this data can inform how projects can improve. Basically, you’re asking the project a series of questions designed to discover what is working, what can be improved and whether the project is useful. Tools such as project dashboards and trackers help in the evaluation process by making key data readily available.
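To make the arithmetic concrete, here is a minimal sketch, in Python with made-up figures, of two of the measures mentioned above: cost variance and return on investment. The numbers and function names are illustrative assumptions, not part of any particular tool.

```python
# Minimal sketch: two common evaluation measures computed from project figures.
# All numbers below are hypothetical.

def cost_variance(budgeted: float, actual: float) -> float:
    """Positive = under budget, negative = over budget."""
    return budgeted - actual

def roi(total_benefit: float, total_cost: float) -> float:
    """Return on investment expressed as a fraction of cost."""
    return (total_benefit - total_cost) / total_cost

print(cost_variance(budgeted=120_000, actual=134_500))           # -14500.0 -> over budget
print(round(roi(total_benefit=180_000, total_cost=134_500), 2))  # 0.34 -> roughly 34% ROI
```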



The project evaluation process has been around as long as projects themselves. But when it comes to the science of project management, project evaluation can be broken down into three main types or methods: pre-project evaluation, ongoing evaluation and post-project evaluation. Let’s look at the project evaluation process, what it entails and how you can improve your technique.

Project Evaluation Criteria

The specific details of the project evaluation criteria vary from one project or one organization to another. In general terms, a project evaluation process goes over the project constraints, including time, cost, scope, resources, risk and quality. In addition, organizations may add their own business goals, strategic objectives and other project metrics.

Project Evaluation Methods

There are three points in a project where evaluation is most needed. While you can evaluate your project at any time, these are points where you should have the process officially scheduled.

1. Pre-Project Evaluation

In a sense, you’re pre-evaluating your project when you write your project charter to pitch to the stakeholders. You cannot effectively plan, staff and control a new project if you haven’t first evaluated it. Pre-project evaluation is the only reliable way to gauge the project’s likely effectiveness before executing it.

2. Ongoing Project Evaluation

To make sure your project is proceeding as planned and hitting all of the scheduling and budget milestones you’ve set, it’s crucial that you constantly monitor and report on your work in real-time. Only by using project metrics can you measure the success of your project and whether or not you’re meeting the project’s goals and objectives. It’s strongly recommended that you use project management dashboards and tracking tools for ongoing evaluation.
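One common way to turn this kind of monitoring data into project metrics is earned value analysis. The short sketch below, using hypothetical figures, computes the schedule performance index (SPI) and cost performance index (CPI); it illustrates the general technique rather than any specific dashboard feature.

```python
# Minimal earned-value sketch: SPI and CPI from planned value (PV),
# earned value (EV) and actual cost (AC). Values above 1 suggest ahead of
# schedule / under budget; values below 1 suggest the opposite.

def spi(earned_value: float, planned_value: float) -> float:
    return earned_value / planned_value

def cpi(earned_value: float, actual_cost: float) -> float:
    return earned_value / actual_cost

# Hypothetical mid-project snapshot
pv, ev, ac = 50_000, 45_000, 52_000
print(f"SPI = {spi(ev, pv):.2f}")  # 0.90 -> behind schedule
print(f"CPI = {cpi(ev, ac):.2f}")  # 0.87 -> over budget
```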


3. Post-Project Evaluation

Think of this as a postmortem. Post-project evaluation is when you go through the project’s paperwork, interview the project team and principals and analyze all relevant data so you can understand what worked and what went wrong. Only by developing this clear picture can you resolve issues in upcoming projects.

Free Project Review Template for Word

The project review template for Word is the perfect way to evaluate your project, whether it’s an ongoing project evaluation or post-project. It takes a holistic approach to project evaluation and covers such areas as goals, risks, staffing, resources and more. Download yours today.


Project Evaluation Steps

Regardless of when you choose to run a project evaluation, the process always has four phases: planning, implementation, completion and dissemination of reports.

1. Planning

The ultimate goal of this step is to create a project evaluation plan, a document that explains all details of your organization’s project evaluation process. When planning for a project evaluation, it’s important to identify the stakeholders and what their short- and long-term goals are. You must make sure that your goals and objectives for the project are clear, and it’s critical to have settled on criteria that will tell you whether these goals and objectives are being met.

So, you’ll want to write a series of questions to pose to the stakeholders. These queries should include subjects such as the project framework, best practices and metrics that determine success.

By including the stakeholders in your project evaluation plan, you’ll receive direction during the course of the project while simultaneously developing a relationship with the stakeholders. They will get progress reports from you throughout the project life cycle, and by building this initial relationship, you’ll likely earn their belief that you can manage the project to their satisfaction.


2. Implementation

While the project is running, you must monitor all aspects to make sure you’re meeting the schedule and budget. One of the things you should monitor during the project is the percentage completed. This is something you should do when creating status reports and meeting with your team. To make sure you’re on track, hold the team accountable for delivering timely tasks and maintain baseline dates to know when tasks are due.
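As a simple illustration of tracking the percentage completed and holding tasks to their baseline dates, the sketch below walks through a made-up task list; the task names and dates are hypothetical.

```python
# Minimal sketch: percent complete and an overdue check against baseline dates.
from datetime import date

tasks = [  # hypothetical task list
    {"name": "Requirements sign-off", "baseline_due": date(2024, 3, 1),  "done": True},
    {"name": "Prototype build",       "baseline_due": date(2024, 4, 15), "done": False},
    {"name": "User testing",          "baseline_due": date(2024, 5, 10), "done": False},
]

today = date(2024, 4, 20)
percent_complete = 100 * sum(t["done"] for t in tasks) / len(tasks)
overdue = [t["name"] for t in tasks if not t["done"] and t["baseline_due"] < today]

print(f"{percent_complete:.0f}% complete")   # 33% complete
print("Overdue vs baseline:", overdue)       # ['Prototype build']
```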

Don’t forget to keep an eye on quality. It doesn’t matter if you deliver the project within the allotted time frame if the product is poor. Maintain quality reviews, and don’t delegate that responsibility. Instead, take it on yourself.

Maintaining a close relationship with the project budget is just as important as tracking the schedule and quality. Keep an eye on costs. They will fluctuate throughout the project, so don’t panic. However, be transparent if you notice a need growing for more funds. Let your steering committee know as soon as possible, so there are no surprises.

3. Completion

When you’re done with your project, you still have work to do. You’ll want to take the data you gathered in the evaluation and learn from it so you can fix problems that you discovered in the process. Figure out the short- and long-term impacts of what you learned in the evaluation.

4. Reporting and Disseminating

Once the evaluation is complete, you need to record the results. To do so, you’ll create a project evaluation report, a document that provides lessons for the future. Deliver your report to your stakeholders to keep them updated on the project’s progress.

How are you going to disseminate the report? There might be a protocol for this already established in your organization. Perhaps the stakeholders prefer a meeting to get the results face-to-face. Or maybe they prefer PDFs with easy-to-read charts and graphs. Make sure that you know your audience and tailor your report to them.

Benefits of Project Evaluation

Project evaluation is always advisable and it can bring a wide array of benefits to your organization. As noted above, there are many aspects that can be measured through the project evaluation process. It’s up to you and your stakeholders to decide the most critical factors to consider. Here are some of the main benefits of implementing a project evaluation process.

  • Better Project Management: Project evaluation helps you easily find areas of improvement when it comes to managing your costs, tasks, resources and time.
  • Improves Team Performance: Project evaluation allows you to keep track of your team’s performance and increases accountability.
  • Better Project Planning: Helps you compare your project baseline against actual project performance for better planning and estimating.
  • Helps with Stakeholder Management: Having a good relationship with stakeholders is key to success as a project manager. Creating a project evaluation report is very important to keep them updated.

How ProjectManager Improves the Project Evaluation Process

To take your project evaluation to the next level, you’ll want ProjectManager, an online work management tool with live dashboards that deliver real-time data so you can monitor what’s happening now as opposed to what happened yesterday.

With ProjectManager’s real-time dashboard, project evaluation is measured in real time to keep you updated. The numbers are then displayed in colorful graphs and charts. Filter the dashboard to show just the data you want or drill down to get a deeper picture. These graphs and charts can also be shared with a keystroke. You can track workload and tasks because your team updates their status in real time, wherever they are and whenever they complete their work.

ProjectManager’s dashboard view, which shows six key metrics on a project

Project evaluation with ProjectManager’s real-time dashboard makes it simple to go through the evaluation process as the project evolves. It also provides valuable data afterward. The project evaluation process can even be fun, given the right tools. Feel free to use our automated reporting tools to quickly build traditional project reports, allowing you to improve both the accuracy and efficiency of your evaluation process.

ProjectManager's status report filter

ProjectManager is a cloud-based project management software that has a suite of powerful tools for every phase of your project, including live dashboards and reporting tools. Our software collects project data in real-time and is constantly being fed information by your team as they progress through their tasks. See how monitoring, evaluation and reporting can be streamlined by taking a free 30-day trial today!


What is Project Evaluation? The Complete Guide with Templates


Project evaluation is an important part of determining the success or failure of a project. Properly evaluating a project helps you understand what worked well and what could be improved for future projects. This blog post will provide an overview of key components of project evaluation and how to conduct effective evaluations.

What is Project Evaluation?

Project evaluation is a key part of assessing the success, progress and areas for improvement of a project. It involves determining how well a project is meeting its goals and objectives. Evaluation helps determine if a project is worth continuing, needs adjustments, or should be discontinued.

A good evaluation plan is developed at the start of a project. It outlines the criteria that will be used to judge the project’s performance and success. Evaluation criteria can include things like:

  • Meeting timelines and budgets - Were milestones and deadlines met? Was the project completed within budget?
  • Delivering expected outputs and outcomes - Were the intended products, results and benefits achieved?
  • Satisfying stakeholder needs - Were customers, users and other stakeholders satisfied with the project results?
  • Achieving quality standards - Were quality metrics and standards defined and met?
  • Demonstrating effectiveness - Did the project accomplish its intended purpose?

Project evaluation provides valuable insights that can be applied to the current project and future projects. It helps organizations learn from their projects and continuously improve their processes and outcomes.

Project Evaluation Templates

These templates will help you evaluate your project by providing a clear structure to assess how it was planned, carried out, and what it achieved. Whether you’re managing the project, part of the team, or a stakeholder, these templates assist in gathering information systematically for a thorough evaluation.


Project Evaluation Methods

Project evaluation involves using various methods to assess the performance and impact of a project. The choice of methods depends on the nature of the project, its objectives, and the available resources. Here are some common project evaluation methods:

Pre-project evaluation

Pre-project evaluations are done before a project begins. This involves evaluating the project plan, scope, objectives, resources, and budget. This helps determine if the project is feasible and identifies any potential issues or risks upfront. It establishes a baseline for later evaluations.

Ongoing evaluation

Ongoing evaluations happen during the project lifecycle. Regular status reports track progress against the project plan, budget, and deadlines. Any deviations or issues are identified and corrective actions can be taken promptly. This allows projects to stay on track and make adjustments as needed.

Post-project evaluation

Post-project evaluations occur after a project is complete. This final assessment determines if the project objectives were achieved and customer requirements were met. Key metrics like timeliness, budget, and quality are examined. Lessons learned are documented to improve processes for future projects. Stakeholder feedback is gathered through surveys, interviews, or focus groups.

Project Evaluation Steps

When evaluating a project, there are several key steps you should follow. These steps will help you determine if the project was successful and identify areas for improvement in future initiatives.

Step 1: Set clear goals

The first step is establishing clear goals and objectives for the project before it begins. Make sure these objectives are SMART: specific, measurable, achievable, relevant and time-bound. Having clear goals from the outset provides a benchmark for measuring success later on.

Step 2: Monitor progress

Once the project is underway, the next step is monitoring progress. Check in regularly with your team to see if you’re on track to meet your objectives and deadlines. Identify and address any issues as early as possible before they become major roadblocks. Monitoring progress also allows you to course correct if needed.

Step 3: Collect data

After the project is complete, collect all relevant data and metrics. This includes both quantitative data, like budget information, timelines and deliverables, and qualitative data, such as customer feedback from surveys or interviews. Analyzing this data will show you how well the project performed against your original objectives.
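For the quantitative side of this step, even a small script can line up the collected figures against the original targets. The sketch below uses hypothetical targets and actuals purely to illustrate the comparison.

```python
# Minimal sketch: comparing collected project data against the original targets.
# Targets and actuals are hypothetical.
targets = {"budget_eur": 100_000, "duration_days": 120, "deliverables": 8}
actuals = {"budget_eur": 112_000, "duration_days": 115, "deliverables": 8}

for key in targets:
    # Budget and duration should not exceed the target; deliverables should meet it.
    met = actuals[key] >= targets[key] if key == "deliverables" else actuals[key] <= targets[key]
    print(f"{key}: target={targets[key]}, actual={actuals[key]}, met={met}")
```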

Step 4: Analyze and interpret

Identify what worked well and what didn’t during the project. Highlight best practices to replicate and lessons learned to improve future initiatives. Get feedback from all stakeholders involved, including project team members, customers and management.

Step 5: Develop an action plan

Develop an action plan to apply what you’ve learned for the next project. Update processes, procedures and resource allocations based on your evaluation. Communicate changes across your organization and train employees on any new best practices. Implementing these changes will help you avoid similar issues the next time around.

Benefits of Project Evaluation

Project evaluation is a valuable tool for organizations, helping them learn, adapt, and improve their project outcomes over time. Here are some benefits of project evaluation.

  • Helps in making informed decisions by providing a clear understanding of the project’s strengths, weaknesses, and areas for improvement.
  • Holds the project team accountable for meeting goals and using resources effectively, fostering a sense of responsibility.
  • Facilitates organizational learning by capturing valuable insights and lessons from both successful and challenging aspects of the project.
  • Allows for the efficient allocation of resources by identifying areas where adjustments or reallocations may be needed.
  • Provides evidence of the project’s value by assessing its impact, cost-effectiveness, and alignment with organizational objectives.
  • Involves stakeholders in the evaluation process, fostering collaboration, and ensuring that diverse perspectives are considered.

Project Evaluation Best Practices

Follow these best practices to do a more effective and meaningful project evaluation, leading to better project outcomes and organizational learning.

  • Clear objectives : Clearly define the goals and questions you want the evaluation to answer.
  • Involve stakeholders : Include the perspectives of key stakeholders to ensure a comprehensive evaluation.
  • Use appropriate methods : Choose evaluation methods that suit your objectives and available resources.
  • Timely data collection : Collect data at relevant points in the project timeline to ensure accuracy and relevance.
  • Thorough analysis : Analyze the collected data thoroughly to draw meaningful conclusions and insights.
  • Actionable recommendations : Provide practical recommendations that can lead to tangible improvements in future projects.
  • Learn and adapt : Use evaluation findings to learn from both successes and challenges, adapting practices for continuous improvement.
  • Document lessons : Document lessons learned from the evaluation process for organizational knowledge and future reference.

How to Use Creately to Evaluate Your Projects

Use Creately’s visual collaboration platform to evaluate your project and improve communication, streamline collaboration, and provide a visual representation of project data effectively.

Task tracking and assignment

Use the built-in project management tools to create, assign, and track tasks right on the canvas. Assign responsibilities, set due dates, and monitor progress with Agile Kanban boards, Gantt charts, timelines and more. Create task cards containing detailed information, descriptions, due dates, and assigned responsibilities.

Notes and attachments

Record additional details and attach documents, files, and screenshots related to your tasks and projects with the per-item integrated notes panel and custom data fields. Or easily embed files and attachments right on the workspace to centralize project information. Work together with teammates on project evaluation using full multiplayer text and visual collaboration.

Real-time collaboration

Get any number of participants on the same workspace and track their additions to the progress report in real time. Collaborate with others in the project seamlessly with true multi-user collaboration features, including synced previews, comments and discussion threads. Use Creately’s Microsoft Teams integration to brainstorm, plan and run projects during meetings.

Pre-made templates

Get a head start with ready-to-use progress evaluation templates and other project documentation templates available right inside the app. Explore 1000s more templates and examples for various scenarios in the community.

In summary, project evaluation is like a compass for projects, helping teams understand what worked well and what can be improved. It’s a tool that guides organizations to make better decisions and succeed in future projects. By learning from the past and continuously improving, project evaluation becomes a key factor in the ongoing journey of project management, ensuring teams stay on the path of excellence and growth.



Effective Strategies for Conducting Evaluations, Monitoring, and Research

Table of Contents

  • Understanding the Importance of Evaluations, Monitoring, and Research
  • Key Steps to Conducting Comprehensive Evaluations
  • Best Practices for Ongoing Monitoring
  • Designing and Executing Effective Research
  • Leveraging Technology in Evaluation, Monitoring, and Research
  • Analyzing and Presenting Your Findings
  • Ethical Considerations in Evaluations, Monitoring, and Research

Evaluations, monitoring, and research are all fundamental aspects of effective project management and strategic decision-making in virtually every field. Each serves a different, but crucial, purpose and offers unique insights into the performance and potential of a project, a program, a policy, or an organization.

Evaluations allow us to assess the effectiveness of an intervention, a program, or a policy. They involve a systematic collection and analysis of information to judge the merit, worth, or value of something. This can help stakeholders understand the impacts, both expected and unexpected, and identify areas for improvement.

Monitoring is an ongoing, systematic process of collecting data on specified indicators to provide management and stakeholders of an ongoing development intervention with indications of the extent of progress and achievement of objectives. Monitoring is important because it helps track progress, detects issues early, ensures resources are used efficiently, and contributes to achieving transparency and accountability.

Research is a process of steps used to collect and analyze information to increase our understanding of a topic or issue. It provides evidence-based insights, which can help in shaping strategies, formulating policies, and making informed decisions. It offers a way of examining your practice critically, to confirm what works and identify areas that need improvement.

Understanding the importance of evaluations, monitoring, and research, and employing them correctly, can significantly contribute to the success of projects and initiatives. They allow us to verify assumptions, learn, make informed decisions, improve performance, and achieve planned results.

Conducting comprehensive evaluations involves a series of crucial steps. These steps are designed to facilitate objective assessments and draw meaningful conclusions that can help improve performance and effectiveness.

1. Define the Purpose of the Evaluation: Before you start the evaluation, clearly define what you hope to achieve from it. This might include assessing the effectiveness of a program, identifying areas for improvement, or informing strategic decision-making. The purpose of the evaluation will guide the rest of the process.

2. Identify Stakeholders: Determine who has a vested interest in the evaluation’s results. Stakeholders might include team members, managers, funders, or the program’s beneficiaries. These individuals or groups should be involved in the evaluation process as much as possible to ensure their perspectives and needs are taken into account.

3. Develop Evaluation Questions: Based on the purpose of the evaluation and stakeholder input, develop specific questions that the evaluation will seek to answer. These might include questions about the program’s effectiveness, efficiency, relevance, impact, and sustainability.

4. Determine Evaluation Methodology: Decide on the best methods for gathering and analyzing data to answer your evaluation questions. This might involve quantitative methods like surveys and data analysis, qualitative methods like interviews and focus groups, or a combination of both. Consider the resources available to you and the strengths and limitations of different methodologies.

5. Collect Data: Implement your chosen methods to collect data. This might involve distributing surveys, conducting interviews, or gathering data from existing sources. Be sure to follow ethical guidelines when collecting data, particularly when human subjects are involved.

6. Analyze Data: Once you’ve collected your data, analyze it to draw conclusions. This might involve statistical analysis for quantitative data or thematic analysis for qualitative data.

7. Report Findings: Compile your findings into a comprehensive report that clearly presents the results of the evaluation, including answers to the evaluation questions and any recommendations for improvement. Share this report with stakeholders and use it to inform decision-making and strategic planning.

8. Implement Changes: An evaluation is only useful if its findings are acted upon. Use the results of your evaluation to implement changes, improve performance, and inform future strategies.

Remember, a comprehensive evaluation is not a one-time activity, but a continuous process of learning and improvement.


Monitoring is a vital component of project and program management. It involves the ongoing assessment of a project’s progress, ensuring it’s on track and aligned with its set goals. Below are some best practices for effective ongoing monitoring:

1. Establish Clear Goals and Objectives: The first step in any monitoring process is to establish what you are aiming to achieve. Set clear, measurable goals and objectives for your project or program.

2. Define Key Performance Indicators (KPIs): KPIs are measurable values that demonstrate how effectively a project or organization is achieving its objectives. Identify relevant KPIs that align with your goals and can provide quantifiable metrics to measure progress.
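For example, a KPI such as the on-time milestone delivery rate can be computed directly from monitoring records; the sketch below uses hypothetical milestone data.

```python
# Minimal sketch: one example KPI, the share of milestones delivered on time.
# The milestone records (ISO dates) are hypothetical.
milestones = [
    {"planned": "2024-02-01", "actual": "2024-02-01"},
    {"planned": "2024-03-15", "actual": "2024-03-20"},
    {"planned": "2024-05-01", "actual": "2024-04-28"},
]

# ISO-formatted dates compare correctly as strings.
on_time = sum(m["actual"] <= m["planned"] for m in milestones)
print(f"On-time delivery rate: {on_time / len(milestones):.0%}")  # 67%
```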

3. Use a Monitoring Plan: A monitoring plan provides a detailed outline of what will be monitored, how monitoring will take place, who will conduct the monitoring, and when it will happen. This can help ensure the monitoring process is systematic and comprehensive.

4. Regularly Review Progress: Monitoring is an ongoing process, and regular review of progress is essential. Determine a suitable review schedule based on the nature of the project.

5. Utilize Monitoring Tools: There are numerous monitoring tools available, both digital and traditional, that can help with data collection, analysis, and reporting. These tools can automate many aspects of the monitoring process, increasing efficiency and accuracy.

6. Collect and Analyze Data: Data collection is at the heart of monitoring. Collect data related to your KPIs and analyze it to understand the progress and performance of your project or program.

7. Involve Stakeholders: Monitoring should be participatory, involving all relevant stakeholders. This includes team members, project beneficiaries, and any external stakeholders. Their perspectives can provide valuable insights into the effectiveness of the project.

8. Respond to Monitoring Results: The purpose of monitoring is to identify any deviations from the plan and address them promptly. When issues are detected, timely action should be taken to correct the course and mitigate any risks.

9. Document Lessons Learned: As you monitor your project or program, document lessons learned and best practices. These insights can help improve future projects and contribute to the organization’s knowledge base.

By integrating these best practices into your monitoring processes, you can ensure your projects stay on track, achieve their objectives, and deliver maximum impact.

Research is a systematic inquiry to discover and interpret new knowledge. It’s crucial for making informed decisions, developing policies, and contributing to scholarly knowledge. Here are some key steps to designing and executing effective research:

1. Identify the Research Problem: The first step in the research process is to identify and clearly define the research problem. What is the specific question or issue that your research seeks to address? This should be a succinct statement that frames the purpose of your research.

2. Conduct a Literature Review: Before diving into new research, it’s essential to understand the existing knowledge on the topic. A literature review helps you understand the existing body of knowledge, identify gaps in the current research, and contextualize your study within the broader field.

3. Formulate a Hypothesis or Research Question: Based on the research problem and literature review, formulate a hypothesis or research question to guide your study. This should be a clear, focused question that your research will seek to answer.

4. Design the Research Methodology: The research methodology is the framework that guides the collection and analysis of your data. This could be qualitative (e.g., interviews, focus groups), quantitative (e.g., surveys, experiments), or a mix of both, depending on your research question. Consider factors like feasibility, reliability, and validity when designing your methodology.

5. Collect the Data: Based on your methodology, collect the data for your study. This might involve conducting interviews, distributing surveys, or collecting existing data from reliable sources.

6. Analyze the Data: After collecting your data, analyze it to uncover patterns, relationships, and insights. The exact methods will depend on the type of data you’ve collected. You might use statistical analysis for quantitative data or thematic analysis for qualitative data.
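For quantitative data, basic descriptive statistics are often enough to surface patterns. The sketch below summarizes hypothetical survey scores using Python's standard library; it is a minimal illustration, not a full analysis.

```python
# Minimal sketch: descriptive statistics for quantitative survey data
# using only the standard library. The scores (1-5 scale) are hypothetical.
import statistics

satisfaction_scores = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]

print("n      =", len(satisfaction_scores))
print("mean   =", round(statistics.mean(satisfaction_scores), 2))
print("median =", statistics.median(satisfaction_scores))
print("stdev  =", round(statistics.stdev(satisfaction_scores), 2))
```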

7. Interpret the Results: Based on your analysis, interpret the results of your study. Do the results support your hypothesis or answer your research question? Be cautious not to overgeneralize your findings beyond your specific context and sample.

8. Write the Research Report: Once your research is complete, write up your findings in a research report. This should include an introduction, literature review, methodology, results, discussion, and conclusion. Be sure to also acknowledge any limitations of your study.

9. Share Your Findings: Finally, share your findings with others, such as through publication in a peer-reviewed journal, presentation at a conference, or application in a practical context.

Effective research requires careful planning, meticulous execution, and thoughtful interpretation of the results. Always adhere to ethical standards in conducting research to maintain integrity and credibility.

In today’s digital era, technology plays a vital role in enhancing the efficiency and effectiveness of evaluation, monitoring, and research. It provides new ways to collect, analyze, and interpret data, allowing for real-time updates, greater reach, and better visualization of information. Here are some ways to leverage technology in these areas:

1. Digital Data Collection: Traditional methods of data collection like paper surveys or in-person interviews can be time-consuming and resource-intensive. Digital tools allow for quicker, more efficient data collection. Online surveys, mobile data collection apps, and web scraping tools can streamline this process, reduce errors, and enable real-time data collection.
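As one illustration, many online survey tools expose collected responses through an HTTP API. The sketch below pulls responses with the requests library; the endpoint URL, token and response structure are placeholders and assumptions, not any real service's API.

```python
# Minimal sketch: pulling survey responses from an online survey tool's HTTP API.
# The endpoint, token and JSON structure are placeholders, not a real service's API.
import requests

API_URL = "https://example.com/api/v1/surveys/123/responses"  # placeholder
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}          # placeholder

resp = requests.get(API_URL, headers=HEADERS, timeout=30)
resp.raise_for_status()
responses = resp.json()  # assumed to be a list of response records

print(f"Collected {len(responses)} responses")
```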

2. Remote Monitoring and Evaluation: Technology allows for remote monitoring and evaluation, which can be particularly useful when physical access to a site is difficult or impossible. Satellite images, GPS tracking, and internet-based communication platforms can provide valuable data from afar.

3. Advanced Data Analysis: Technology provides sophisticated tools for data analysis, from basic statistical analysis software to advanced machine learning algorithms. These tools can handle large datasets, perform complex analyses, and reveal patterns and insights that might not be evident through manual analysis.

4. Data Visualization: Data visualization tools can help to present complex data in an understandable and accessible format. Interactive dashboards, charts, and maps can make data more engaging and easier to interpret, allowing for more informed decision-making.
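As a minimal example, the sketch below turns hypothetical budget-versus-actual figures into a grouped bar chart with the matplotlib library.

```python
# Minimal sketch: a grouped bar chart of budget vs. actual cost per work package.
# The figures are hypothetical.
import matplotlib.pyplot as plt

work_packages = ["Design", "Build", "Test", "Deploy"]
budget = [20, 45, 25, 10]   # in thousands
actual = [22, 50, 23, 12]

x = range(len(work_packages))
plt.bar([i - 0.2 for i in x], budget, width=0.4, label="Budget")
plt.bar([i + 0.2 for i in x], actual, width=0.4, label="Actual")
plt.xticks(list(x), work_packages)
plt.ylabel("Cost (thousands)")
plt.legend()
plt.tight_layout()
plt.savefig("budget_vs_actual.png")
```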

5. Improved Communication and Collaboration: Technology enhances communication and collaboration among researchers, evaluators, stakeholders, and participants. Cloud-based platforms allow for real-time collaboration on data collection, analysis, and report writing.

6. Virtual Reality (VR) and Augmented Reality (AR): VR and AR technologies are being used for immersive data presentation, allowing stakeholders to experience data in a novel and engaging way. They are also used in research to simulate environments and scenarios.

7. Artificial Intelligence (AI) and Machine Learning (ML): AI and ML can help automate the analysis of large and complex datasets, predict trends, and identify patterns that humans might not notice.

While technology provides powerful tools, it’s also important to consider issues like data privacy, digital literacy, and accessibility. Additionally, technology should not replace human judgment and interpretation, but rather, serve as a tool to support these processes. Technology is most effective when it’s used strategically, with clear goals and a good understanding of its capabilities and limitations.

After the data collection and analysis phase in your evaluation, monitoring, or research process, it’s crucial to effectively analyze and present your findings. This step allows you to translate your data into meaningful insights that can inform decisions and drive action. Here’s how to go about it:

1. Review and Interpret the Data: Begin by reviewing your data. Look for patterns, trends, and key points of interest. Interpret these within the context of your research questions or objectives. Always be transparent about any limitations or uncertainties in the data.

2. Compare with Prior Expectations or Benchmarks: If you set hypotheses or benchmarks, compare your findings with these. Do the results align with what you expected, or are there surprising outcomes? Understanding these deviations can provide valuable insights.

3. Develop Key Takeaways: Summarize the most important insights from your data into a few key takeaways. These should be concise, impactful statements that encapsulate the main findings of your work.

4. Create Visual Representations: Data visualization tools, like charts, graphs, or infographics, can help illustrate your findings and make complex data easier to understand. Ensure these visualizations are clear, accurate, and effectively support your key takeaways.

5. Write a Clear and Concise Report: Document your findings, methodology, and implications in a report. Make sure it’s structured logically, clearly written, and accessible to your intended audience. Include an executive summary that provides a high-level overview of your findings.

6. Tailor the Presentation to Your Audience: Different stakeholders may have different interests and levels of familiarity with your subject. Tailor your presentation to suit your audience, focusing on the information that is most relevant to them.

7. Use Clear and Simple Language: While presenting your findings, use language that your audience will understand. Avoid jargon and technical terms as much as possible, or clearly define them if necessary.

8. Include Recommendations: If appropriate, include recommendations based on your findings. These can help guide future action or decision-making.

9. Solicit Feedback and Questions: After presenting your findings, invite feedback and questions. This can help ensure your audience understands your findings and can facilitate a more in-depth discussion.

Remember, the goal of analyzing and presenting your findings is to communicate your insights clearly and effectively, enabling stakeholders to understand and act upon them.

Ethical considerations are paramount in evaluations, monitoring, and research. They ensure the integrity of your work, protect the rights and wellbeing of participants, and build trust with stakeholders. Here are some key ethical considerations to keep in mind:

1. Informed Consent: Participants should understand what they’re agreeing to when they participate in your research. This includes the purpose of the research, what’s involved, any risks or benefits, and their right to withdraw at any time without penalty.

2. Privacy and Confidentiality: Protect participants’ privacy by keeping their data confidential. This means that individual responses should not be shared without consent, and data should be reported in aggregate to prevent identification of individuals.

3. Avoiding Harm: Your research should not cause harm to participants. This includes physical harm, but also emotional distress, inconvenience, or any other negative impacts. Always consider the potential impacts on participants and take steps to mitigate any harm.

4. Honesty and Transparency: Be open and honest about your research. This includes being transparent about any conflicts of interest, accurately reporting your findings (including negative results or limitations), and not manipulating or misrepresenting data.

5. Respect for Diversity and Inclusion: Ensure your research respects and includes diverse perspectives. This might involve including participants from diverse backgrounds, considering cultural sensitivities, and ensuring your research does not perpetuate bias or discrimination.

6. Data Integrity and Management: Collect, store, and analyze data in a way that maintains its integrity. This includes avoiding practices like data fabrication or falsification, and ensuring secure and appropriate data storage.

7. Compliance with Laws and Regulations: Ensure your research complies with all relevant laws and regulations. This might involve data protection laws, ethical review processes, or sector-specific regulations.

8. Collaboration and Respect: Treat all participants and stakeholders with respect. This includes valuing their time, acknowledging their contributions, and fostering a collaborative relationship.

Ethical considerations should be integrated throughout your research process, from planning to reporting. It’s also beneficial to stay updated on ethical guidelines in your field, as they can evolve over time.

Evaluations, monitoring, and research are indispensable tools for understanding the performance, impact, and progress of projects, policies, and programs. Effectively conducting these activities requires careful planning, thorough data collection and analysis, clear communication of findings, and careful consideration of ethical issues.

The choice of method should always be driven by the research questions and objectives. Whether using qualitative methods, quantitative methods, or a combination of both, it’s important to select a methodology that best fits the research purpose and context.

Technology is playing an increasingly prominent role, offering new ways to collect, analyze, and share data. However, it’s essential to use technology judiciously, considering factors such as data privacy, digital literacy, and accessibility.

Presenting findings in a clear, accessible, and engaging manner is crucial. Data visualization tools can help translate complex data into a format that’s easy to understand and actionable. And no matter how the data is presented, maintaining the integrity and transparency of the research process is paramount.

Ethical considerations must be at the forefront of all evaluations, monitoring, and research. Respecting participants’ rights, maintaining privacy and confidentiality, avoiding harm, and adhering to laws and regulations is vital to ensure the credibility and integrity of your work.

In conclusion, conducting effective evaluations, monitoring, and research is a multifaceted process that requires strategic planning, rigorous execution, and constant learning and adaptation. These activities offer valuable insights that can guide decision-making, inform strategy, and contribute to the broader body of knowledge.


Heliyon, Volume 8, Issue 12, December 2022

Models and methods for information systems project success evaluation – A review and directions for research

João Varajão

a ALGORITMI Research Centre/LASI, Universidade do Minho, Campus de Azurém, 4804-533 Guimarães, Portugal

João Carlos Lourenço

b CEGIST, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal

João Gomes

c MIEGSI, Universidade do Minho, Campus de Azurém, 4804-533 Guimarães, Portugal

Associated Data

No data was used for the research described in the article.

Organizations heavily rely on information systems to improve their efficiency and effectiveness. However, on the one hand, information systems projects have often been seen as problematic endeavors. On the other hand, one can ask if this perspective results from subjective perceptions or objective assessments. We cannot find a definitive answer to this question in the literature. Moreover, there is no structured information about the models and methods currently available to assess projects' success in practice. This paper aims to present the results of a literature review carried out on the extant models and methods for evaluating the success of information systems projects. Additionally, it offers models and methods from other areas that may be suitable for assessing IS projects. Results show that most models and methods found in the literature are, in their essence, theoretical exercises with only a few pieces of evidence of their use in practice, thus calling for more empirically based research.

Keywords: Project evaluation; Project assessment; Project appraisal; Project evaluation models; Project evaluation methods; Project evaluation techniques; Evaluation criteria; Success criteria; Project success; Information systems; Project management.

1. Introduction

Information Systems (IS) are crucial for contemporary organizations, being present in all business aspects ( Varajão and Carvalho, 2018 ; Kääriäinen et al., 2020 ). In today’s VUCA (Volatile, Uncertain, Complex, and Ambiguous) world, the capacity to maintain and update existing IS and create and adopt new IS features is a competitive differentiator ( Patnayakuni and Ruppel, 2010 ; Ngereja and Hussein, 2021 ). Furthermore, organizations need to innovate in products, processes, markets, and business models to remain sustainable and perform effectively ( David and Lawrence, 2010 ). Without IS projects, such as digital transformation projects, which refer to the changes in working and business offerings enabled by the adoption of Information Technologies (IT) in an organization ( Kääriäinen et al., 2020 ), that is not viable.

In the last decades, project management has gained recognition as an academic discipline because all organizations develop projects and, for this, resort to project management as a way of structuring and managing their investments. However, in the particular case of IS, projects continue to report lower levels of success ( Iriarte and Bayona, 2020 ), so it is crucial to understand the influencers of project success ( Kendra and Taplin, 2004 ) and how to evaluate it ( Varajão, 2016 , 2018 ; Pereira et al., 2022 ). Additionally, it is often thought that achieving success in these projects is challenging ( Tam et al., 2020 ).

Evaluation of success is concerned with judgments about the achievements of an endeavor ( Arviansyah et al., 2015 ; Pereira et al., 2022 ), and appropriate methods should be adopted for evaluating projects ( Pinto and Slevin, 1987 ). The success of projects has been traditionally related to the Iron Triangle, i.e., to the accomplishment of scope, cost and time. More recently, other important criteria have been considered in the evaluation of success, such as stakeholders' satisfaction or business impact ( Varajão et al., 2021 ).

On the “dark side”, there are studies that reveal high levels of failure, as is the case, for instance, of a global study of IT change initiatives covering 1471 projects, that concludes that one out of six projects ran, on average, 200% over budget and 70% over schedule ( Hoang et al., 2013 ); several authors, such as Cecez-Kecmanovic and Nagm (2008) or Iriarte and Bayona (2020) , also mention such disappointing success rates. As expected, this uncertainty of value realization troubles both practitioners and researchers.

Conversely, on the “bright side”, some researchers and practitioners (e.g., Lech (2013) ) have been questioning these numbers because the world around us is full of useful and reliable (successful) IT applications, which debunks that negative perspective of success ( Varajão et al., 2022b ). The lack of details (namely, regarding samples or criteria) of most studies also helps raise criticism regarding the reported results ( Sauer et al., 2007 ). Furthermore, in most studies, it is impossible to ascertain the methods or techniques used for evaluating and reporting the success of IS projects or whether (and how) projects are formally assessed in practice. Some studies addressing the evaluation of project success in practice support this concern ( Pereira et al., 2022 ; Varajão and Carvalho, 2018 ).

The rapid advance in IT has boosted IS project development in organizations for reorganizing businesses and improving services. Such projects help organizations create and maintain competitive advantages through fast business transactions, increasingly automated business processes, improved customer service, and adequate decision support. Considering that any organization’s sustainable success is strongly associated with IS success and, consequently, with the success of IS projects, evaluating these projects assumes critical importance in modern organizations ( Ma et al., 2013 ; Pereira et al., 2022 ).

Although “success” is a frequently discussed topic, a consensus concerning its meaning is rarely reached. This is due to the fact that project success is an intricate and elusive concept with several different meanings ( McCoy, 1986 ; Thomas and Fernández, 2008 ). In effect, the concept of success can have several interpretations because of the different perceptions it generates, leading to disagreements about what can be considered a successful project ( Baccarini, 1999 ). Since the 1950s, many authors have accepted the triple constraints (time, cost, specification) as a standard measure of success ( Oisen, 1971 ; Atkinson, 1999 ). These continue to be very important in evaluating the success of IS projects, together with other criteria such as stakeholders' satisfaction or project benefits ( Varajão and Carvalho, 2018 ). However, an IS project cannot always be seen as a complete success or a complete failure, and different stakeholders may perceive the terms “success” and “failure” differently ( Milis and Vanhoof, 2006 ). To make things even more challenging regarding evaluation, organizational effectiveness is paradoxical ( Cameron, 1986 ), and projects have priority, structural, and execution tensions ( Iivari, 2021 ).

Although there are several models, methods, and techniques to evaluate projects' success, the lack of structured information about them (e.g., characteristics, context, or results achieved in practice) may hinder their use by practitioners. Without such information, it can be quite difficult to identify which models or methods are adequate to evaluate a project’s success, considering the implementation feasibility, benefits, and limitations of each alternative. It also makes it difficult for researchers to identify opportunities for new contributions.

The evaluation of project success currently seems to be an informal and rudimentary process based on perceptions, mainly focused on the success of project management and not concerned with the success of the projects' outputs (Varajão and Carvalho, 2018; Pereira et al., 2022). In other words, success is often not formally evaluated, and even when an evaluation is carried out, it is based on an incomplete set of criteria or limited evaluation models. Not formally evaluating the success of a project may result in wasted effort and resources (Pujari and Seetharam, 2015) and in the misperception of results (Turner and Zolin, 2012).

Furthermore, as noted above, several studies report on project success, yet little is known about how that success is evaluated, namely the techniques or methods used by project managers and organizations. For instance, the Standish Group's studies (e.g., Standish (2018); StandishGroup (2020)) and other studies (e.g., Marnewick (2012)) declare the success achieved in projects, but provide no information about “whether” and “how” a formal success assessment was carried out in practice by the participants (often only a few criteria are mentioned, and the reported success is based on the participants' “perceptions”).

Aiming to fill this gap in the literature and to improve awareness of the currently available models and methods for evaluating IS projects' success, we conducted a literature review. The purpose is to summarize the extant research, providing researchers and practitioners with a framework of models and methods that identifies their characteristics, underlying techniques, contexts of application, benefits, limitations, and empirical support. We also review other project areas to obtain richer results, since some models and methods are not dependent on the project type. Moreover, the discussion includes the main insights and future directions for research.

The structure of the article is as follows: Section 2 presents the main concepts; Section 3 addresses the research questions and the research method; Section 4 presents the results; Section 5 discusses the main findings; and, finally, Section 6 addresses the contributions and limitations.

2. Main concepts

Our study focuses on models and methods for evaluating IS project success. To enable a comprehensive understanding of the subject, as depicted in Figure 1 , there are several related concepts that are important to clarify. The concepts of “Information Systems projects”, “success of projects”, and “evaluation of success” are presented in the next subsections. The models and methods for project success evaluation are presented in Section 4 and discussed in Section 5 .

Figure 1

Conceptual framework.

2.1. Information systems projects

A project is an undertaking to create something that does not yet exist, which needs to be delivered within a set time and at an agreed cost. IS projects share all the common characteristics of other projects. However, they also have particularities, such as providing a service to implement IT solutions, possibly including the assessment of the project outcome (Kutsch and Maylor, 2009).

In other words, IS projects are temporary endeavors that lead to unique outputs and outcomes related to IT adoption. These outputs and outcomes can be, for instance, changes in business processes, the renewal of the IT infrastructure, or the adoption of new software applications. Sometimes the success of the outcome/project can be assessed right after it is delivered (Varajão et al., 2022a). Often, however, a complete account of the project's success can only be obtained long after the end of the project, that is, after the impact of the outputs and outcomes on the enterprise is felt (Varajão and Carvalho, 2018).

2.2. Success of projects

One problem that frequently arises concerning project success has its roots in the definition of “success” itself. The success of a project can be understood in diverse ways by different stakeholders. On the one hand, time, cost, and scope compliance are essential elements of a project's success; on the other hand, stakeholders' satisfaction or the achievement of business benefits play a prominent role. Therefore, the main concern should be meeting the client's real needs (Paiva et al., 2011), since projects are typically designed to obtain benefits for the organization according to business objectives and value concerns (Keeys and Huemann, 2017).

Over the years, the concept of success in project management has undergone significant changes. In the 1970s, the success of a project was mainly assessed on the operational dimension; the focus on the customer was practically non-existent. Since project management began to form a body of knowledge in the mid-twentieth century, many processes, techniques, and tools have been developed (Davis, 2014). Today, they cover various aspects of the project lifecycle and have made it possible to increase its efficiency and effectiveness (Varajão, 2016). According to Kerzner (2017), a more modern perspective assesses success in terms of primary aspects (on time, within budget, and with the expected quality) and secondary aspects (customer acceptance and the customer's agreement to be used as a reference). For De Wit (1988), any discussion of success must distinguish between project success and project management success, bearing in mind that good project management can contribute to project success. According to Baccarini (1999), project management success is mainly related to the achievement of the project regarding scope, time, and cost, which indicate the efficiency and effectiveness of project execution. Usually, project management success can be evaluated at the end of a project. The success of the output is related to the impacts of the project's resulting products/services on the business (e.g., business benefits), and its evaluation may only be possible later, in the post-project phase.

Cuellar (2010) states that project success can be considered objective when represented by measurable constructs such as time, cost, and scope, or subjective if evaluated based on stakeholders' opinions.

2.3. Evaluation of success

Different meanings of assessment have been presented throughout the years. For instance, APHA (1960) characterized assessment as “the process of determining the value or amount of success in achieving a predetermined objective”. Scriven (1991: 139) defines assessment as “the process of determining the merit, worth or value of something”. DAC (2002) characterized assessment as “a precise and target appraisal of a continuous or finished task, program or approach, its plan, execution, and results”. Patton (1996: 14) describes program assessment as “the systematic collection of information about the activities, characteristics, and outcomes of programs for use by specific people to reduce uncertainties, improve effectiveness, and make decisions concerning what those programs are doing and affecting”. These definitions reflect ex-ante, ongoing, mid-term, and final assessments.

We can also add the ex-post assessment, which can be described as an assessment made after an intervention has finished. In other words, an ex-post assessment is conducted after a specific period following the completion of an undertaking, with emphasis on its adequacy and sustainability (Zidane et al., 2016).

Project success is a multi-dimensional concept that requires appropriate evaluation models and methods, which can be defined as practical tools used to measure the success of a project ( Silvius and Schipper, 2015 ).

3. Research method

The research method is presented in the following subsections.

3.1. Literature review

Literature reviews aim to address some problems by identifying, critically evaluating, and integrating the findings of all relevant, high-quality individual studies addressing one or more research questions. A literature review might achieve all or most of the following objectives ( Baumeister and Leary, 1997 ): establish to what extent existing research has progressed towards clarifying a particular problem; identify connections, contradictions, gaps, and inconsistencies in the literature, as well as explore reasons for these; formulate general statements or an overarching conceptualization ( Sternberg, 1991 ); comment on, evaluate, extend, or develop theory; describe directions for future research. By doing this, implications for future practice and policy should be provided.

The literature review process is defined as an examination of a clearly formulated question (or questions) that uses systematic and explicit methods to identify, select, and critically appraise relevant research and to collect and analyze data from the studies that are included in the review ( Cochrane, 2005 ).

The research started with problem formulation, by defining the research questions. This was followed by the definition of the data sources and search strategy. Then, a literature search was carried out in the selected database. After obtaining the results, an eligibility test was performed to identify candidate publications. The final set of publications was then selected after a quality assessment considering the inclusion and exclusion criteria. Once all the relevant publications had been obtained, the final steps involved data extraction, analysis, and interpretation. The following subsections describe the research process in detail.

3.2. Research questions

A look at the literature makes it easy to understand the difficulty of evaluating the success of a project, not only due to the subjective nature of the definition of success but also due to the different characteristics, context, or complexity of projects and the different ways of evaluating them. For each type of project, several evaluation models and methods can be applied. Therefore, it can be quite challenging to identify which models or methods are adequate for evaluating a project's success.

The application of a model or method to evaluate success should follow a well-justified process, considering the type and characteristics of each particular project and the purpose of the evaluation. Even though several studies in the literature focus on various aspects of project success, few address the evaluation process (Varajão and Trigo, 2016; Varajão, 2018; Pereira et al., 2022) and the respective models and methods.

To find out what is the state-of-the-art and create a framework of models and methods for evaluating the success of IS projects, we formulated the following primary research question:

What are the models and methods for evaluating the success of IS projects currently available in the literature?

Since the models and methods used in other project areas may also be suitable to be used in IS projects, we formulated a secondary question:

What are the models and methods for evaluating the success of non-IS projects currently available in the literature?

3.3. Data sources, search strategy and article selection

We decided to concentrate the search on the well-known database Elsevier’s Scopus ( www.scopus.com ) due to its wide coverage of scientific outlets. We are well aware that there are other databases or search engines that may contain relevant articles. However, the selected database includes major journals and conferences from the IS and project management areas.

Since the terminology used in published studies is highly diverse, synonyms and word variations were required. Hence, the following terms and synonyms were used when conducting the literature search:

  • Evaluat∗ (evaluation and evaluating), assess∗ (assessment and assessing), apprais∗ (appraisal, appraising), valuat∗ (valuation and valuating), estimat∗ (estimation and estimating), calculat∗ (calculation and calculating);
  • Performance, success, attainment, accomplishment, achievement, realization, realisation;
  • Method∗ (method and methods), technique∗ (technique and techniques), system∗ (system and systems), procedure∗ (procedure and procedures), process∗ (process and processes);
  • Project∗ (project and projects);
  • Information system∗ (information system and information systems), information technolog∗ (information technology and information technologies), information and communications technolog∗ (information and communications technology and information and communications technologies), IT/IS, IS/IT, ICT.

The search strings were formulated using logical expressions created from these terms. The total number of logically different expressions was 15. The search strings are listed in the appendix. Each search string’s logical structure was written according to the specific query format of the search engine.
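As a hypothetical illustration only of how such logical expressions can be assembled (the term groups mirror the list above; the TITLE-ABS-KEY field syntax is Scopus-style, and the actual 15 strings used in the study are those listed in the appendix), a query could be built programmatically as follows:

# Illustrative sketch: building a Scopus-style boolean search string from
# the term groups listed above. The grouping and field code are assumptions
# for illustration; the study's actual expressions are in the appendix.
groups = [
    ["evaluat*", "assess*", "apprais*", "valuat*", "estimat*", "calculat*"],
    ["performance", "success", "attainment", "accomplishment",
     "achievement", "realization", "realisation"],
    ["method*", "technique*", "system*", "procedure*", "process*"],
    ["project*"],
    ["information system*", "information technolog*",
     "information and communications technolog*", "IT/IS", "IS/IT", "ICT"],
]

def build_query(term_groups):
    """Join synonyms with OR inside each group and join the groups with AND."""
    clauses = ["(" + " OR ".join(f'"{t}"' for t in group) + ")"
               for group in term_groups]
    return "TITLE-ABS-KEY(" + " AND ".join(clauses) + ")"

print(build_query(groups))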

We carried out several searches, from more open to more restrictive ones, to ensure as far as possible that every relevant study was identified. Figure 2 synthesizes the searches performed and the article selection results, including the search results and candidate publications per search expression (exp n). The respective search expressions can be found in the appendix. It is important to note that, in some searches, we deliberately did not confine the scope to IS projects, in order to answer the secondary research question.

Figure 2

Systematic Literature Review search and selection breakdown.

Searches were completed by the end of January 2021. For each search, a CSV file with all the results was downloaded. These files were then compiled into a single file to remove duplicates and identify candidate publications that could be related to the subject. The searches resulted in the identification of 755 unique references.
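A minimal sketch of this compilation step is shown below. It assumes the per-search CSV exports share common column names ("Title", "DOI"); the file layout and deduplication key are illustrative assumptions, not taken from the study's actual export files.

import glob
import pandas as pd

# Merge the per-search CSV exports and drop duplicate records.
# Paths and column names ("Title", "DOI") are hypothetical.
frames = [pd.read_csv(path) for path in glob.glob("searches/exp*.csv")]
merged = pd.concat(frames, ignore_index=True)

# Deduplicate on DOI where available, falling back to a normalized title.
merged["dedup_key"] = merged["DOI"].fillna(
    merged["Title"].str.lower().str.strip())
unique_refs = merged.drop_duplicates(subset="dedup_key")

print(f"{len(unique_refs)} unique references")  # 755 in the reported study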

Articles were selected for additional analysis mainly based on their title, abstract, and keywords. When the title explicitly mentioned models or methods for project success evaluation, the abstract was read to verify the article's relevance. When the abstract did not provide enough information, the full article was read. Some articles were later excluded because their content did not correspond to what was described in the title or abstract.

The inclusion criteria were as follows: the source presents a model or a method for evaluating the success of projects; it was published in an academic journal or a conference; and it is written in English. In turn, the exclusion criteria were the following: not written in English; published as a preface, editorial, article summary, interview, workshop, panel, or poster. To be considered, a source had to comply with all the inclusion criteria and none of the exclusion criteria.

As in Savolainen et al. (2012), the first selection was performed by one author and randomly checked by the others, resulting in 159 articles that could contain relevant information for the study. The second selection, made by two authors, was based on the full reading of the articles resulting from the first selection and led to the final set of 34 articles, which are the only articles related to techniques for evaluating projects (in IS and other areas). The significant difference between the primary search results and the relevant results stems from our decision to start with more open searches to avoid overlooking important references. This increased the effort of the article selection process but also increased confidence in the final results.

3.4. Data extraction and synthesis

The following data elements were extracted from the selected articles: model/method identification/name; underlying techniques/approaches; benefits (as presented by the authors); limitations/further developments (as presented by the authors); project types where the model/method was applied (e.g., information systems) and sample; and references.

It is noteworthy that we tried to be as objective as possible in the extraction and presentation of data. Good examples are the presented benefits and limitations, since they are those stated by the authors (it is outside the scope of this article to carry out independent experimentation with each model/method, but this constitutes an interesting path for further research).

A summary of the identified models/methods is presented in Section 4 .

3.5. Validity of the literature review

For the validity assessment, we adopted the same procedure as Savolainen et al. (2012) , which is described next.

3.5.1. Construct validity

The validity of the review is based on the assumption that the authors of this study and the authors of the reviewed articles have a common understanding of the concepts presented in Section 2.

3.5.2. Internal validity

The review's internal validity is assured by the procedure used for the search, selection, and subsequent analysis of the articles. One primary threat to the validity of the reasoning used in an analysis may arise from the subjective evaluation of the articles, since the evaluation results depend solely on the evaluator. In this study, the evaluation procedure was predefined and approved by two researchers to make the reasoning valid and repeatable, and it was documented together with the complete verification of the analysis. Therefore, the threats to internal validity are not significant in this case.

3.5.3. Repeatability

The search for articles was performed systematically and can easily be repeated using the queries presented in the appendix. However, a major threat to this literature review's repeatability is that it is based on search engine results. The results of new searches may not be exactly identical due to the constantly growing nature of the databases: publishers frequently add new articles and, in some cases, also add older ones. Hence, new articles may appear in future search results, although this is not expected to happen at a significant level. Searches might also have missed relevant articles; since we included synonyms in the search strings and two of the researchers double-checked the results, this risk was reduced to a minimum.

3.5.4. Article selection and analysis

The first selection of articles was partially cross-checked: one researcher randomly checked the searches and the article selection, which avoided unnecessary errors. Two researchers performed the final selection and analysis of the articles.

4. Results

Answering the primary and secondary research questions, Tables 1 and 2 present, respectively for IS projects and for non-IS projects, the models/methods for evaluating the success of projects identified in the literature. For each model/method, the following is presented: identification/name; underlying techniques/approaches; benefits (as described by the authors); limitations/further developments (as described by the authors); project types where the model/method was applied, and sample; and references.

Table 1

Benefits/limitations of models/methods for the evaluation of IS project success.

Table 2

Benefits/limitations of models/methods for the evaluation of the non-IS project success.

It is worth stressing that the identified models/methods are presented by the original authors as “models”, “frameworks”, “approaches”, “methods”, “methodologies”, “tools”, and “techniques”, generally using these terms interchangeably.

Note that the application area (IS or non-IS) does not mean that a model/method is restricted to the evaluation of IS or non-IS projects; rather, it means that the application area used to describe the model/method is (or is not) an IS project. For instance, the models based on DEA (Wray and Mathieu, 2008; Xu and Yeh, 2014) or MACBETH (Lacerda et al., 2011; Dachyar and Pratama, 2014; Sanchez-Lopez et al., 2012) can be used in virtually all project areas/types. However, concluding whether a specific model/method that was applied to IS projects is suitable for evaluating non-IS projects (or vice versa) requires further research. For more details on each model/method, it is recommended to read the original sources (some references present processes or examples for using the model/method in practice).

5. Discussion

Based on the analysis of the identified models/methods, this section discusses the underlying techniques, the empirical basis, and the benefits/limitations of the models/methods as described by the authors. In the end, significant insights and future directions for research are discussed.

5.1. Models/methods' underlying techniques

Table 3 lists a summary of the models/methods' underlying techniques. For better reference, the techniques were grouped by purpose/main characteristics. Some techniques are repeated since they fall into more than one group.

Table 3

Complete list of techniques underlying the models/methods for evaluating the success of projects.

In Table 3, the prevalence of techniques from the multi-criteria decision analysis group is easily noticeable (e.g., Rigo et al. (2020)), as is the application of fuzzy logic (e.g., Basar (2020)). This is understandable, since the evaluation of the success of projects is multi-dimensional (Silvius and Schipper, 2015), involving quantitative and qualitative criteria and subjective human judgments (Cuellar, 2010).
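As a rough illustration of the simplest multi-criteria idea underlying many of these models, the hypothetical sketch below aggregates normalized criterion scores with weights; the criteria, weights, and 0–10 scale are assumptions made for illustration only and are not taken from any of the reviewed models/methods.

# Illustrative weighted-sum multi-criteria success score (hypothetical values).
def weighted_success_score(scores, weights):
    """Aggregate criterion scores (0-10 scale) with weights that sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * scores[c] for c in weights)

scores = {"schedule": 7, "cost": 6, "scope": 8,
          "stakeholder_satisfaction": 9, "business_benefits": 5}
weights = {"schedule": 0.2, "cost": 0.2, "scope": 0.2,
           "stakeholder_satisfaction": 0.2, "business_benefits": 0.2}

print(f"Aggregate success score: {weighted_success_score(scores, weights):.1f} / 10")

Most of the reviewed models go well beyond this weighted sum, for instance by handling fuzzy judgments or group decisions, but the weighting-and-aggregation step illustrated here is the common core.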

This synthesis of techniques makes three main contributions: first, it identifies alternative ways for the practical evaluation of the success of projects using techniques that have already been explored; second, it presents opportunities for research replication (in the same or in new project types where the techniques have been applied); third, it helps to identify gaps in research and opportunities to focus on techniques not yet explored or applied.

5.2. Empirical grounding

Even though it is out of the scope of this study to carry out independent experimentation of each model/method, it is important to reflect on the empirical grounding of the models/methods identified.

As previously noted, the literature suggests that project evaluation in practice is an underrated process, since most projects are evaluated with a limited set of criteria (typically related to time, scope, and budget) – and often not formally. Furthermore, the models/methods used for the evaluation, or the procedures for measuring success, are not fully documented. Even so, projects' success or failure is reported (Standish, 2018; Marnewick, 2012; Varajão and Carvalho, 2018; Pereira et al., 2022). On the other hand, our study shows a rich set of models/methods that aim to support the evaluation of success following more or less sophisticated approaches.

However, the results also show that many of the techniques extant in the literature are, in essence, mostly theoretical exercises without evidence of extensive use in practice. This is evident when analyzing the column “project types/sample” in Tables 1 and 2. Only a few cases (namely, Basar (2020); Bokovec et al. (2011); Lacerda et al. (2011); Barclay and Osei-Bryson (2009); Gonçalves and Belderrain (2012); Ismail (2020); Dachyar and Pratama (2014); Sanchez-Lopez et al. (2012)) describe the use of the proposed model/method in real projects and discuss the practical implications of its use. In other words, only about 20% of the published models/methods for evaluating project success present empirical evidence of their feasibility and real usefulness. It should also be noted that, even in these cases, only one project or a small set of projects from the same entity is reported, making it impossible to ascertain whether the model/method has been used beyond the reported project(s) (one exception is Ismail (2020)).

The lack of evidence of the practical use of some models/methods can be explained, in some cases, by their novelty. In other cases, the models/methods may not be used in practice due to the difficulty of applying them or simply because they are not known. Some evaluation models/methods are, as expected, more complex than others, and it is fair to mention that some of them are well structured and clearly explained in the original source. Unfortunately, in most cases the articles report only the very early stages of the development of the models/methods, and the descriptions are quite incomplete, omitting important details such as the steps that should be followed to perform the evaluation. These references are not identified here out of academic courtesy, but more than half of them present deficient academic quality, which can jeopardize their adoption and application in practice (some were included in this review only because the underlying idea is relevant).

The literature review reveals an evident lack of ongoing project case studies and replication studies. Consequently, it is recommended that more research be carried out, presenting case studies well grounded in empirical data. There is also a need for new surveys (e.g., questionnaire-based) to create a realistic picture of organizations' practices concerning the models/methods used for project success evaluation.

5.3. Benefits and limitations of the surveyed models

Table 4 summarizes the benefits of the models/methods highlighted in the reviewed articles, ordered by frequency. Among the benefits most mentioned by the authors are: decision-making support; the ability to compare projects; the ability to specify metrics; multi-dimensional evaluation support; reliability and accuracy; clarity of the objectives; risk management support; incorporation of stakeholders' perspectives; support for simulation and forecasting; project monitoring support; the ability to assess criteria weights; the inclusion of subjective measures; the inclusion of objective measurements; facilitation of communication; and contribution to reflection and learning; among others. These are strong reasons for practitioners to take such models/methods into account in their projects.

Table 4

Reported benefits of the models/methods for evaluating the success of projects.

Table 5 summarizes the limitations of the models/methods reported in the reviewed articles. The most mentioned limitations are: the need for further research; the risk of imprecision or lack of accuracy; and limited experimentation, samples, and data. This calls for further research, since most of the proposed models/methods do not have an empirical evaluation supporting what is claimed by their authors.

Table 5

Reported limitations of the models/methods for evaluating the success of projects.

Understanding these benefits is important for practitioners so that they can appreciate the importance of adopting well-defined techniques to assess success in their projects. The limitations are also important, since they may entail risks for the project evaluation. Researchers can look at both benefits and limitations to guide research: when developing and proposing new models/methods, they should try to achieve the reported benefits; conversely, they should explore the limitations to identify further needs for research and carry out new research to solve the identified issues.

We highlight that nearly 45% of the considered references – oddly – do not report any limitations. Nevertheless, although not mentioned by the authors, a clear limitation of several of the proposed models/methods is that, as already noted, they have not been tested in real projects. Thus, their practical effects have not been studied in organizations, making it difficult to assess their real value.

These results are also useful for improving existing methods, since some methods have limitations that have already been solved in others; by analyzing the methods that have addressed them, it is possible to look for solutions to evolve a specific method. For example, Barclay and Osei-Bryson (2009) state that managing conflicts among stakeholders needs to be further explored and integrated into their approach, whereas one advantage of the model of Lacerda et al. (2011) is that it helps negotiations between stakeholders. Thus, by looking at the advantages and limitations across models/methods, synergies become possible that enable their mutual evolution.

5.4. Main insights and future directions for research

Although some models/methods are based on a single evaluation criterion, others use multiple criteria and more complex assessments. Some methods, such as those focused on ex-post evaluation, examine the organizational performance of the project and the informational and transformational effects that result from it (Gollner and Baumane-Vitolina, 2016), but this does not happen in most cases. In practice, models/methods for success evaluation should be defined considering not only the performance during the project but also the post-project impacts (Varajão et al., 2022a). This evaluation is fundamental for confirming the achievement of the expected benefits (Slevin and Pinto, 1987).

Some models/methods are exclusively designed to be applied in IS projects because the required inputs only exist in this kind of project. In most cases, the model has been tested in a specific area but may be extendable to other areas. However, there is little evidence of models/methods applied in more than one area or project type. This also opens avenues for replication studies.

It is noteworthy that nearly 40% of the identified models/methods were developed or applied in IS projects, possibly indicating greater concern with measuring success in this kind of project (perhaps due to the perceived low levels of success) or the need for different models/methods due to the higher diversity of projects. This also requires further research.

A major limitation of many of the reported models/methods is the fact that they have not been tested in real-world projects. This demands more empirically based research, including replication and case studies.

Due to the complexity of projects and their respective environments, it can be challenging to establish the models/methods most appropriate for each case. Furthermore, a model/method defined for evaluating the success of a specific project can be a customized combination of features of extant models/methods.

In conclusion, as depicted in Figure 3, the selection/definition of a model/method for evaluating the success of a specific project should take into account: the characteristics of the specific project and of the project's environment (e.g., deliverables' attributes and stakeholders' reporting requirements); the success evaluation requirements (e.g., multi-criteria evaluation) and purpose (e.g., reporting success to top management), considering the expected benefits and acceptable limitations; and the characteristics of the models/methods for project evaluation (e.g., allowing simulations), including their features, benefits, and limitations. It is also recommended that organizations adopt well-defined Success Management processes (e.g., Varajão et al. (2022a)). We strongly believe that Tables 1 and 2, listing the extant models/methods, together with Tables 4 and 5, identifying the main benefits and limitations of each model/method, can be valuable in this process.

Figure 3

Selecting/defining a model/method for evaluating the success of a specific project.

6. Conclusion

Evaluating an IS project’s success is a complex task because of the many perceptions about success, depending, for example, on the stakeholders, project characteristics, project management characteristics, and many other aspects. Therefore, adopting a model/method for the project evaluation is not a simple or elementary task.

We carried out a literature review aiming to identify and raise awareness of the existing models/methods for evaluating IS projects' success. It also included a review of models/methods from non-IS projects, since some may be suitable for IS projects.

One limitation of this study is that other models/methods may exist that were not considered because they were not the focus of the reviewed research or were not published in scientific outlets. Another limitation regards the benefits and limitations of the models/methods, since those presented in this article are the ones reported by the original authors. A description of each technique would also be useful; however, it was not possible to include this in the paper due to length limitations (moreover, the descriptions are available in the original sources). All of these limitations create paths for further research.

The listing of models/methods, and the discussion of their characteristics, current applications, benefits, and limitations, are the main contributions of this article. For researchers, it provides several insights into the state of the art and helps to identify new avenues for research. For practitioners, it improves the understanding of the role of evaluation models/methods and provides a basis for selecting the appropriate models/methods for particular projects according to their characteristics.

Declarations

Author contribution statement.

All authors listed have significantly contributed to the development and the writing of this article.

Funding statement

This work has been supported by FCT - Fundação para a Ciência e Tecnologia within the R&D Units Project Scope (UIDB/00319/2020).

Data availability statement

Declaration of interest’s statement.

The authors declare no conflict of interest.

Additional information

No additional information is available for this paper.

The search strings are listed next.

References

  • APHA. Glossary of administrative terms in public health, American Public Health Association. Am. J. Publ. Health. 1960; 50:225–226.
  • Arviansyah, Spil T., Hillegersberg J. Development and assessment of an instrument to measure equivocal situation and its causes in IS/IT project evaluation. Int. J. Inf. Syst. Proj. Manag. 2015; 3:25–45.
  • Atkinson R. Project management: cost, time and quality, two best guesses and a phenomenon, its time to accept other success criteria. Int. J. Proj. Manag. 1999; 17:337–342.
  • Baccarini D. The logical framework method for defining project success. Proj. Manag. J. 1999; 30:25–32.
  • Barclay C., Osei-Bryson K.-M. Determining the contribution of IS projects: an approach to measure performance. HICSS 2009 - 42nd Hawaii International Conference on System Sciences; 2009. pp. 1–10.
  • Basar A. A novel methodology for performance evaluation of IT projects in a fuzzy environment: a case study. Soft Comput. 2020; 24:10755–10770.
  • Baumeister R.F., Leary M.R. Writing narrative literature reviews. Rev. Gen. Psychol. 1997; 3:311–320.
  • Bokovec K., Damij T., Rajkovič U. Multi-attribute decision model for evaluating ERP project implementations success by using the global efficiency factors approach. In: Kern T., Rajkovic V., editors. People and Sustainable Organization. Peter Lang; 2011. pp. 402–420.
  • Cameron K.S. Effectiveness as paradox: consensus and conflict in conceptions of organizational effectiveness. Manag. Sci. 1986; 32:539–553.
  • Cecez-Kecmanovic D., Nagm F. Understanding IS projects evaluation in practice through an ANT inquiry. ACIS 2008 - Australasian Conference on Information Systems; 2008. pp. 59–70.
  • Cochrane. Glossary in the Cochrane community. 2005. Available at: www.cochrane.org.
  • Cuellar M. Assessing project success: moving beyond the triple constraint. 2010 International Research Workshop on IT Project Management; 2010.
  • DAC. Glossary of Key Terms in Evaluation and Results-Based Management. Development Assistance Committee, Organisation for Economic Co-operation and Development; Paris: 2002.
  • Dachyar M., Pratama N.R. Performance evaluation of a drilling project in oil and gas service company in Indonesia by MACBETH method. J. Phys. Conf. 2014; 495:1–9.
  • Dahooie J., Zavadskas E., Abolhasani M., et al. A novel approach for evaluation of projects using an interval-valued fuzzy additive ratio assessment (ARAS) method: a case study of oil and gas well drilling projects. Symmetry. 2018; 10.
  • David O.S., Lawrence D. Collaborative innovation for the management of information technology resources. Int. J. Hum. Cap. Inf. Technol. Prof. (IJHCITP). 2010; 1:16–30.
  • Davis K. Different stakeholder groups and their perceptions of project success. Int. J. Proj. Manag. 2014; 32:189–201.
  • De Wit A. Measurement of project success. Int. J. Proj. Manag. 1988; 6:164–170.
  • Doskočil R., Škapa S., Olšová P. Success evaluation model for project management. E&M Econ. Manag. 2016; 19:167–185.
  • Du W. A new project management performance evaluation method based on BP neural network. 2015 International Conference on Automation, Mechanical Control and Computational Engineering; 2015.
  • Duan B., Tang Y., Tian L., et al. The evaluation of success degree in electric power engineering project based on principal component analysis and fuzzy neural network. 2008 Workshop on Power Electronics and Intelligent Transportation System; IEEE; 2008. pp. 339–344.
  • Gollner J.A., Baumane-Vitolina I. Measurement of ERP-project success: findings from Germany and Austria. Eng. Econ. 2016; 27:498–508.
  • Gonçalves T., Belderrain M. Performance evaluation with PROMETHEE GDSS and GAIA: a study on the ITA-SAT satellite project. J. Aero. Technol. Manag. 2012; 4:381–392.
  • Hermawan H., Fauzi A., Anshari M. Performance measurement of project management by using FANP Balanced ScoreCard. J. Theor. Appl. Inf. Technol. 2016; 83:262–269.
  • Hoang N., Deegan G., Rochford M. Managing IT project success: a case study in the public sector. AMCIS 2013 - Americas Conference on Information Systems; AIS; 2013.
  • Hong-xiong Y., Yi-liu L., Chun-ling S. Research on supervising performance assessment method of public projects under the ideas of activity-based cost. 2010 International Conference on E-Product E-Service and E-Entertainment; 2010.
  • Huang Y., Liu Q., Tian L. Evaluating degree of success in power plant construction projects based on variable weight grey cluster. 2008 ISECS International Colloquium on Computing, Communication, Control, and Management; IEEE; 2008. pp. 251–255.
  • Iivari J. A framework for paradoxical tensions of project management. Int. J. Inf. Syst. Proj. Manag. 2021; 9:5–35.
  • Iriarte C., Bayona S. IT projects success factors: a literature review. Int. J. Inf. Syst. Proj. Manag. 2020; 8:49–78.
  • Ismail H. Measuring success of water reservoir project by using delphi and priority evaluation method. IOP Conf. Ser. Earth Environ. Sci. 2020; 588.
  • Kääriäinen J., Pussinen P., Saari L., et al. Applying the positioning phase of the digital transformation model in practice for SMEs: toward systematic development of digitalization. Int. J. Inf. Syst. Proj. Manag. 2020; 8:24–43.
  • Keeys L.A., Huemann M. Project benefits co-creation: shaping sustainable development benefits. Int. J. Proj. Manag. 2017; 35:1196–1212.
  • Kendra K., Taplin L.J. Project success: a cultural framework. Proj. Manag. J. 2004; 35:30–45.
  • Kerzner H. Project Management – A Systems Approach to Planning, Scheduling, and Controlling. John Wiley and Sons; 2017.
  • Kutsch E., Maylor H. From failure to success: an investigation into managers' criteria for assessing the outcome of IT projects. Int. J. Manuf. Technol. Manag. 2009; 16:265–282.
  • Lacerda R., Ensslin L., Ensslin S. A performance measurement view of IT project management. Int. J. Prod. Perform. Manag. 2011; 60:132–151.
  • Lech P. Time, budget, and functionality? — IT project success criteria revised. Inf. Syst. Manag. 2013; 30:263–275.
  • Ma B., Tan C., Jiang Z., et al. Intuitionistic fuzzy multicriteria group decision for evaluating and selecting information systems projects. Inf. Technol. J. 2013; 12:2505–2511.
  • Marnewick C. A longitudinal analysis of ICT project success. SAICSIT '12 - South African Institute for Computer Scientists and Information Technologists Conference; 2012. pp. 326–334.
  • McCoy F.A. Measuring success: establishing and maintaining a baseline. PMI Annual Seminar & Symposium, Montreal; 1986.
  • Milis K., Vanhoof K. Analysing success criteria for ICT projects. In: Applied Artificial Intelligence. World Scientific; 2006. pp. 343–350.
  • Ngereja B.J., Hussein B.A. An examination of the preconditions of learning to facilitate innovation in digitalization projects: a project team members' perspective. Int. J. Inf. Syst. Proj. Manag. 2021; 9:23–41.
  • Nguvulu A., Yamato S., Honma T. Project evaluation using a backpropagation deep belief network. The 4th International Multi-Conference on Engineering and Technological Innovation; 2011.
  • Nguvulu A., Yamato S., Honma T. Project performance evaluation using deep belief networks. IEEJ Trans. Electron., Inf. Syst. 2012; 132:306–312.
  • Oisen R.P. Can project management be defined? Proj. Manag. Q. 1971; 2:12–14.
  • Osei-Kyei R., Chan A. Evaluating the project success index of public-private partnership projects in Hong Kong: the case of the Cross Harbour Tunnel. Construct. Innovat. 2018; 18:371–391.
  • Paiva A., Domínguez C., Varajão J., et al. Principais aspectos na avaliação do sucesso de projectos de desenvolvimento de software. Há alguma relação com o que é considerado noutras indústrias? Interciencia. 2011; 36:200–204.
  • Papanikolaou M., Xenidis Y. Risk-informed performance assessment of construction projects. Sustainability. 2020; 12:5321.
  • Patnayakuni R., Ruppel C. A socio-technical approach to improving the systems development process. Inf. Syst. Front. 2010; 12:219–234.
  • Patton M.Q. Utilization-Focused Evaluation: The New Century Text. SAGE; 1996.
  • Pereira J., Varajão J., Takagi N. Evaluation of information systems project success – insights from practitioners. Inf. Syst. Manag. 2022; 39:138–155.
  • Pinter U., Pšunder I. Evaluating construction project success with use of the M-TOPSIS method. J. Civ. Eng. Manag. 2013; 19:16–23.
  • Pinto J.K., Slevin D.P. Critical factors in successful project implementation. IEEE Trans. Eng. Manag. 1987; 34:22–27.
  • Pujari C., Seetharam K. An evaluation of effectiveness of the software projects developed through six sigma methodology. Am. J. Math. Manag. Sci. 2015; 24:67–88.
  • Rani R., Arifin A., Norazman N., et al. Evaluating the performance of multi-project by using data envelopment analysis (DEA). ASM Sci. J. 2020; 13.
  • Rigo P., Siluk J., Lacerda D., et al. A model for measuring the success of distributed small-scale photovoltaic systems projects. Sol. Energy. 2020; 205:241–253.
  • Sanchez-Lopez R., Costa C.A.B., De Baets B. The MACBETH approach for multi-criteria evaluation of development projects on cross-cutting issues. Ann. Oper. Res. 2012; 199:393–408.
  • Sauer C., Gemino A., Reich B.H. The impact of size and volatility on IT project performance. Commun. ACM. 2007; 50:79–84.
  • Savolainen P., Ahonen J.J., Richardson I. Software development project success and failure from the supplier's perspective: a systematic literature review. Int. J. Proj. Manag. 2012; 30:458–469.
  • Scriven M. Evaluation Thesaurus. Sage; Newbury Park, CA: 1991.
  • Sharifi A.S., Nik A.R. Project balance evaluation method (PBE) - integrated method for project performance evaluation. In: Kotzab H., Pannek J., Thoben K.-D., editors. Dynamics in Logistics. Springer; 2016. pp. 693–702.
  • Silvius G., Schipper R. Developing a maturity model for assessing sustainable project management. J. Mod. Proj. Manag. 2015; 3:334–342.
  • Slevin D.P., Pinto J.K. Balancing strategy and tactics in project implementation. Sloan Manag. Rev. 1987; 29:33–41.
  • Standish. The Chaos Report 2017-2018. The Standish Group; 2018.
  • StandishGroup. CHAOS 2020: Beyond Infinity Overview. Standish Group; 2020.
  • Sternberg R.J. Editorial. Psychol. Bull. 1991; 109:3–4.
  • Sulistiyani E., Tyas S. Success measurement framework for information technology project: a conceptual model. International Conference on Computer Science, Information Technology, and Electrical Engineering; 2019.
  • Tadić D., Arsovski S., Aleksić A., et al. A fuzzy evaluation of projects for business process' quality improvement. In: Kahraman C., Onar S., editors. Intelligent Techniques in Engineering Management: Theory and Applications. Springer International Publishing; Switzerland: 2015.
  • Tam C., Moura E., Oliveira T., et al. The factors influencing the success of on-going agile software development projects. Int. J. Proj. Manag. 2020; 38:165–176.
  • Thomas G., Fernández W. Success in IT projects: a matter of definition? Int. J. Proj. Manag. 2008; 26:733–742.
  • Turner R., Zolin R. Forecasting success on large projects: developing reliable scales to predict multiple perspectives by multiple stakeholders over multiple time frames. Proj. Manag. J. 2012; 43:87–99.
  • Varajão J. Success management as a PM knowledge area – work-in-progress. Procedia Comput. Sci. 2016; 100:1095–1102.
  • Varajão J. A new process for success management - bringing order to a typically ad-hoc area. J. Mod. Proj. Manag. 2018; 5:92–99.
  • Varajão J., Carvalho J.A. Evaluating the success of IS/IT projects: how are companies doing it? 2018 International Research Workshop on IT Project Management; Association for Information Systems; 2018.
  • Varajão J., Trigo A. Evaluation of IS project success in InfSysMakers: an exploratory case study. ICIS 2016 - International Conference on Information Systems; Association for Information Systems; 2016.
  • Varajão J., Trigo A., Pereira J., et al. Information systems project management success. Int. J. Inf. Syst. Proj. Manag. 2021; 9:62–74.
  • Varajão J., Magalhães L., Freitas L., et al. Success Management – from theory to practice. Int. J. Proj. Manag. 2022; 40:481–498.
  • Varajão J., Marques R., Trigo A. Project management processes – impact on the success of information systems projects. Informatica. 2022; 33:421–436.
  • Wohlin C., Andrews A.A. Assessing project success using subjective evaluation factors. Software Qual. J. 2001; 9:43–70.
  • Wohlin C., Von Mayrhauser A., Höst M., et al. Subjective evaluation as a tool for learning from software project success. Inf. Software Technol. 2000; 42:983–992.
  • Wray B., Mathieu R. Evaluating the performance of open source software projects using data envelopment analysis. Inf. Manag. Comput. Secur. 2008; 16:449–462.
  • Xu Y., Yeh C.H. A performance-based approach to project assignment and performance evaluation. Int. J. Proj. Manag. 2014; 32:218–228.
  • Yang C.L., Huang R.H., Ho M.T. Multi-criteria evaluation model for a software development project. IEEM 2009 - IEEE International Conference on Industrial Engineering and Engineering Management; IEEE; 2009. pp. 1840–1844.
  • Yeh D.-Y., Cheng C.-H., Yio H.-W. OWA and PCA integrated assessment model in software project. 2006 World Automation Congress; IEEE; 2006. pp. 1–6.
  • Zidane Y.J.T., Johansen A., Hussein B.A., et al. PESTOL - framework for “project evaluation on strategic, tactical and operational levels”. Int. J. Inf. Syst. Proj. Manag. 2016; 4:25–41.
  • Zavadskas E.K., Vilutienė T., Turskis Z., et al. Multi-criteria analysis of projects' performance in construction. Arch. Civ. Mech. Eng. 2014; 14:114–121.
  • Zhu J., Mostafavi A. Performance assessment in complex engineering projects using a system-of-systems framework. IEEE Syst. J. 2018; 12:262–273.

Research project evaluation and selection: an evidential reasoning rule-based method for aggregating peer review information with reliabilities

  • Published: 13 October 2015
  • Scientometrics, Volume 105, pages 1469–1490 (2015)


  • Wei-dong Zhu 1, Fang Liu 2,3, Yu-wang Chen 3, Jian-bo Yang 2,3, Dong-ling Xu 2,3 & Dong-peng Wang 2

Research project evaluation and selection is mainly concerned with evaluating a number of research projects and then choosing some of them for implementation. It involves a complex multiple-experts multiple-criteria decision making process. Thus this paper presents an effective method for evaluating and selecting research projects by using the recently-developed evidential reasoning (ER) rule. The proposed ER rule based evaluation and selection method mainly includes (1) using belief structures to represent peer review information provided by multiple experts, (2) employing a confusion matrix for generating experts’ reliabilities, (3) implementing utility based information transformation to handle qualitative evaluation criteria with different evaluation grades, and (4) aggregating multiple experts’ evaluation information on multiple criteria using the ER rule. An experimental study on the evaluation and selection of research proposals submitted to the National Science Foundation of China demonstrates the applicability and effectiveness of the proposed method. The results show that (1) the ER rule based method can provide consistent and informative support to make informed decisions, and (2) the reliabilities of the review information provided by different experts should be taken into account in a rational research project evaluation and selection process, as they have a significant influence to the selection of eligible projects for panel review.
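The abstract outlines belief structures, expert reliabilities derived from a confusion matrix, utility-based information transformation, and ER-rule aggregation. As a rough, self-contained sketch of the aggregation idea only (not the authors' actual implementation), the Python fragment below discounts each expert's graded belief distribution by an assumed reliability and then combines the discounted masses with Dempster's rule; the grades, belief degrees, and reliability values are invented, and the full ER rule additionally distinguishes evidence weights from reliabilities.

from itertools import product

GRADES = ["poor", "fair", "good", "excellent"]

def discount(beliefs, reliability):
    """Shafer discounting: scale singleton masses by reliability and assign
    the remainder to total ignorance (the whole frame, labelled 'ANY')."""
    masses = {g: reliability * b for g, b in beliefs.items()}
    masses["ANY"] = 1.0 - reliability * sum(beliefs.values())
    return masses

def combine(m1, m2):
    """Dempster's rule for singleton grades plus the ignorance element 'ANY'."""
    combined = {g: 0.0 for g in GRADES}
    combined["ANY"] = 0.0
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        if a == "ANY" and b == "ANY":
            combined["ANY"] += x * y
        elif a == "ANY":
            combined[b] += x * y
        elif b == "ANY":
            combined[a] += x * y
        elif a == b:
            combined[a] += x * y
        else:
            conflict += x * y  # the two experts support different grades
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical assessments of one proposal by two experts of unequal reliability.
expert1 = discount({"good": 0.6, "excellent": 0.4}, reliability=0.95)
expert2 = discount({"fair": 0.3, "good": 0.7}, reliability=0.27)
print(combine(expert1, expert2))

In this simplified sketch, the less reliable expert's opinion contributes far less to the combined belief distribution, which mirrors the paper's finding that expert reliabilities significantly influence which proposals reach panel review.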


Acknowledgments

This research is partially supported by the National Natural Science Foundation of China under Grant No. 71071048 and the Scholarship from China Scholarship Council under Grant No. 201306230047.

Author information

Authors and Affiliations

School of Economics, Hefei University of Technology, 193 Tunxi Road, Hefei, 230009, Anhui, China

Wei-dong Zhu

School of Management, Hefei University of Technology, 193 Tunxi Road, Hefei, 230009, Anhui, China

Fang Liu, Jian-bo Yang, Dong-ling Xu & Dong-peng Wang

Manchester Business School, The University of Manchester, Manchester, M15 6PB, UK

Fang Liu, Yu-wang Chen, Jian-bo Yang & Dong-ling Xu

Corresponding author

Correspondence to Fang Liu.

Appendix 1: Reliabilities of experts for the "Results and comparative analysis" section

As the reliabilities of some experts are not available in the data set, the average true positive rate of 0.2726 and the average true negative rate of 0.9592 are used as their reliabilities.
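For illustration only, the snippet below sketches how an individual expert's reliability might be derived from a confusion matrix of past recommendations, with the averages quoted above used as a fallback when no history is available. The function name, the input counts, and the fallback rule are assumptions, not the authors' exact procedure.

```python
# Minimal sketch (not the paper's exact procedure): estimating an expert's
# reliability from a confusion matrix of past review decisions, with the
# data set averages quoted above used as a fallback when history is missing.

AVG_TPR = 0.2726  # average true positive rate from the appendix
AVG_TNR = 0.9592  # average true negative rate from the appendix

def reliability_from_history(tp, fn, tn, fp):
    """Return (true positive rate, true negative rate) for one expert."""
    tpr = tp / (tp + fn) if (tp + fn) else AVG_TPR
    tnr = tn / (tn + fp) if (tn + fp) else AVG_TNR
    return tpr, tnr

# Hypothetical expert: recommended 3 of the 10 proposals that were eventually
# funded, and correctly screened out 45 of the 50 that were not.
print(reliability_from_history(tp=3, fn=7, tn=45, fp=5))  # (0.3, 0.9)
```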

  • The original data set is available for research use upon request

About this article

Zhu, Wd., Liu, F., Chen, Yw. et al. Research project evaluation and selection: an evidential reasoning rule-based method for aggregating peer review information with reliabilities. Scientometrics 105 , 1469–1490 (2015). https://doi.org/10.1007/s11192-015-1770-8

Received: 19 November 2014

Published: 13 October 2015

Issue Date: December 2015

DOI: https://doi.org/10.1007/s11192-015-1770-8

Keywords

  • Research project evaluation and selection
  • Evidential reasoning
  • Reliability
  • Confusion matrix

From Good to Great: Everything You Need to Know About Effective Project Evaluation

December 30, 2023

For project managers, each project is like nurturing a baby—it needs constant attention to grow strong and reach its full potential. That’s why monitoring your project’s real-time progress and performance is the secret to consistent success. 

Project evaluation is your best ally in assessing how effectively your project aligns with its objectives and delivers value to stakeholders. Uncovering these evaluation insights will empower you to make smart decisions that significantly improve your business outcomes. 

Eager to discover the secrets of successful project evaluation? You’re in for a treat! 🍬

In this article, we’ll guide you through the six crucial steps to master your project evaluation process. Plus, we’ll delve into the perks and pitfalls of project evaluation and explore its primary types. Buckle up, and let’s begin!

What is Project Evaluation?

Project evaluation is the meticulous process of assessing a project’s success: gathering detailed project data and applying project evaluation methods to uncover areas for performance improvement.

Project evaluation isn’t just a routine check—it keeps stakeholders informed about project status, opportunities for enhancement, and potential budget or schedule adjustments. ✅

Every part of the project, from expenses and scope to risks and ROI, undergoes analysis to ensure alignment with the initial plan. Any hurdles or deviations encountered along the way become valuable insights that guide future improvements.

Tools like project dashboards and trackers are crucial in facilitating the evaluation process. They streamline access to crucial project data, making it readily available for informed decision-making and strategic adjustments.

What Are the Main Types of Project Evaluation?

In any project’s lifecycle, there are three pivotal moments demanding evaluation. While project evaluation can happen at any time, these particular points deserve official scheduling for a more structured approach.

Pre-project evaluation

Before starting a project, assessing its feasibility for successful completion is essential. This evaluation typically aligns with the development stage the project is currently in, and it’s a cornerstone for its effective execution. In this type of evaluation, you must establish a shared understanding of objectives and goals among all stakeholders before giving the project the thumbs up.

Ongoing project evaluation

Using metrics throughout the project’s lifecycle is important for confirming that completed tasks align with benchmarks. This includes staying within budget, meeting task completion rates, and ensuring overall work quality. Keeping the team focused on the initial objectives helps them stay on course as the project evolves.

Post-project evaluation

After project completion, analyzing impacts and outcomes is your number one priority. Outcomes provide a yardstick for measuring the project’s effectiveness in meeting predefined objectives and goals so you can see what worked and what didn’t. Evaluating impacts helps you effectively address and resolve issues in future projects.

What Are the Benefits of Performing a Project Evaluation?

The advantages of conducting a project evaluation span from internal team growth to external triumphs. Here’s a rundown of the main benefits:

  • Tracking the project’s progress: It helps track team performance across projects, providing a record of improvements or setbacks over time
  • Identifying improvement areas: By recognizing trends and patterns, evaluations pinpoint areas for improvement within the project or team processes
  • Measuring impact: Project evaluation quantifies the impact of your project, providing concrete metrics and feedback to measure the success of your endeavors
  • Engaging stakeholders: If you involve stakeholders in the evaluation process, you’ll reassure them of project quality, fostering trust and collaboration
  • Encouraging accountability: Project evaluation promotes accountability and reflection among team members, motivating them to work hard for continuous improvement
  • Informing future planning: Insights you gather from evaluations influence future project plans, allowing for adjustments based on past project performance and lessons learned 👨‍🏫

How to Conduct a Project Evaluation in 6 Steps

Unlocking the path to a successful project evaluation isn’t just about following a checklist—it’s about leveraging the right project management tools to streamline the journey!

We’re here to provide you with the six essential steps to take during a project evaluation process and equip you with top-notch tools that’ll help you elevate your evaluation game. Let’s explore! 🧐

Step 1: Identify Project Goals and Objectives

Crafting solid goals and objectives during your project’s development is like drawing a map for your team—it sets the course and direction.

Goals also play a crucial role in shaping the evaluation process tailored to your objectives. For instance, if your goal is to enhance customer satisfaction, your evaluation might focus on customer feedback, experience metrics, and service quality.

Luckily, the super important step of setting project goals is a piece of cake with an all-in-one project management solution like ClickUp. This powerful tool streamlines your project endeavors and kickstarts your project journey by helping you define clear goals and objectives—all in one place! 🌟

ClickUp Goals

With ClickUp Goals, nailing your targets becomes effortless. Set precise timelines and measurable goals, and let automatic progress tracking do the heavy lifting. Dive in by adding key details—name your goal, set the due date, assign a team member—and you’re ready to roll!

ClickUp equips you to:

  • Establish numerical targets for precise tracking
  • Mark Milestones as done or pending to track progress
  • Keep an eye on financial goals for better budget management
  • List individual tasks as targets to tackle complex objectives

Highlight pivotal moments by tagging them as Milestones and transform large goals into manageable chunks for your team to conquer effortlessly.

The cherry on top? You can group related goals into Folders to track progress across multiple objectives at a glance, leading to simpler decision-making. 🍒

Step 2: Define the Scope of the Evaluation

Ready to dive into the evaluation process? First, let’s clarify why you’re doing it, what you’re aiming for, and what exactly you’re measuring. Remember to define the evaluation’s scope, including objectives, timeframe, key stakeholders, evaluation metrics, and methods or tools you plan to use for data collection and analysis.

This clarity in purpose and scope is your secret weapon—it sets the stage for a well-organized and effective evaluation, making your project planning and execution as easy as pie. 🥧

ClickUp has the perfect solution for documenting your scope of work without breaking a sweat. With the ClickUp Scope of Work Template, you get a ready-made framework to plug in all the essentials—covering everything from project background and goals to timelines and budget details.

ClickUp Scope of Work Template

Customize its handy tables to document the ins and outs of your evaluation process. Imagine your evaluation goal is to boost customer satisfaction. Here’s a sneak peek at how you’d document the scope:

  • Objectives: To enhance customer satisfaction by 20% within the next six months
  • Timeframe: Evaluation will be conducted quarterly over the next year
  • Stakeholders: Customer service team, marketing department, and selected customers for feedback
  • Criteria: Metrics include Net Promoter Score (NPS), customer feedback surveys, and resolution time for customer inquiries (see the short NPS calculation sketch after this list)
  • Methods: Use surveys, feedback forms, focus groups, and analysis of complaint resolutions to gather data and insights on customer satisfaction
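Since the example scope above uses Net Promoter Score as its headline metric, here is a minimal sketch of the standard NPS arithmetic (percentage of promoters minus percentage of detractors); the survey responses are invented purely for illustration.

```python
# Minimal sketch (invented responses): Net Promoter Score from 0-10 ratings.
# Promoters score 9-10, detractors score 0-6; NPS = %promoters - %detractors.

def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

survey = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]  # hypothetical survey responses
print(net_promoter_score(survey))          # 30, usually read as "NPS of +30"
```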

In ClickUp Docs, flexibility is the name of the game. You can add or remove sections and dive into real-time collaboration by inviting your team to modify the document through edits and comments. 💬

Each section comes preloaded with sample content, so personalizing your template will be a breeze whether you’re a seasoned pro or a newcomer to using Docs.

Step 3: Develop a Data Collection Plan

Now, it’s time to roll up your sleeves and gather the data that answers your evaluation queries. Get creative—there are plenty of ways to collect information:

  • Create and distribute surveys 
  • Schedule interviews  
  • Organize focus group observations
  • Dig into documents and reports

Variety is key here, so use quantitative and qualitative data to capture every angle of your project. 

For invaluable insights on areas for improvement, we recommend heading straight to the source—your loyal customers! 🛒

With the ClickUp Feedback Form Template, you get a customizable form that centralizes all your feedback. It’s ready to capture feedback on everything from product features to customer support and pricing.

The template has a tailor-made feedback Form you can easily distribute to your customers. Once the forms are filled in, turn to the Service Rating List view—your personal feedback command center showcasing scores, reasons behind the ratings, and invaluable improvement suggestions.

Plus, you can delve into provider ratings in a dedicated list and explore the Overall Recommendations board to identify areas that need enhancement at a glance.

ClickUp Feedback Form Template

Step 4: Analyze Data

Once the data’s in your hands, it’s analysis time! Pick the right tools from your kit—descriptive statistics, thematic analysis, or a SWOT analysis—to unlock insights and make sense of what you’ve gathered.
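As a quick illustration of the “descriptive statistics” option, the snippet below summarizes a hypothetical batch of satisfaction ratings; the numbers and the particular statistics chosen are assumptions, not prescriptions.

```python
# Minimal sketch (invented data): descriptive statistics for a batch of
# 1-5 customer satisfaction ratings collected during the evaluation.
import statistics

ratings = [4.5, 3.8, 4.1, 2.9, 4.7, 3.5, 4.0, 4.2]

print("mean:  ", round(statistics.mean(ratings), 2))
print("median:", statistics.median(ratings))
print("stdev: ", round(statistics.stdev(ratings), 2))
print("range: ", (min(ratings), max(ratings)))
```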

Tap into ClickUp Whiteboards to orchestrate a dynamic SWOT analysis, perfect for companies with remote or hybrid teams.

ClickUp Whiteboards

Simply create color-coded squares (or any shape you fancy) representing Strengths, Weaknesses, Opportunities, and Threats. Then, organize your data effortlessly by creating sticky notes and dragging them to the right square, and behold! Your shareable SWOT analysis Whiteboard is ready to roll! 🎲

ClickUp’s digital Whiteboards are like physical whiteboards but better! You can use them to:

  • Conduct collaborative brainstorming sessions
  • Leverage Mind Maps to break down big ideas into bite-sized portions
  • Create dedicated sections for OKRs , KPIs, and internal data as quick references
  • Share ideas with your team through sticky notes, comments, documents, and media files
  • Solve problems creatively with color-coded shapes, charts, and graphs 📊

ClickUp Dashboards are ideal for visualizing data and making data-driven decisions. Dive into a treasure trove of over 50 Cards, crafting your ideal Dashboard that mirrors your vision. Want to see your progress in a pie chart, line graph, or bar graph? Take your pick and make it yours!

This panoramic view is excellent for monitoring goals, extracting crucial insights, and effortlessly tweaking your strategies. Rely on Burnup and Burndown charts to track performance against set goals and forecast the road ahead. 🛣️

Whether sharing the Dashboard within your workspace or projecting it full screen in the office, it’s the perfect catalyst for team discussions on key project evaluation points.

ClickUp Dashboards

Step 5: Report Your Findings

Once you’ve delved into the data, it’s time to bring those insights to light! Crafting a report is your next move—a clear, concise summary showcasing your evaluation’s key findings, conclusions, and recommendations. 📝

Reporting is all about delivering the right information to the right people, so customize your project evaluation report to suit your audience’s needs. Whether it’s your project team, sponsors, clients, or beneficiaries, tailor your report to meet their expectations and address their interests directly. 

Eliminate the need to start your report from square one using the ClickUp Data Analysis Report Template. This powerful tool provides separate subpages for:

  • Overview: Dive into the analysis backstory, covering objectives, scope, methodology, and data collection methods
  • Findings: Present your study’s results and use graphs and charts to illustrate the findings
  • Recommendations and conclusions: Outline your conclusions and provide actionable steps post-evaluation

The template is fully customizable, so you can tailor it to suit your business needs and audience preferences. Tweak tables or create new ones, adding rows and columns for flawless data presentation. ✨

ClickUp Data Analysis Report Template

Step 6: Discuss the Next Project Evaluation Steps

Sharing evaluation findings isn’t just a formality—it’s a catalyst for stronger connections and brighter ideas. It sparks discussions, invites innovative suggestions for team enhancements, and nurtures stronger bonds with your stakeholders. Plus, it’s a roadmap for future projects, guiding the way to improvements based on the project’s outcomes and impact.

With ClickUp, you can say goodbye to toggling between project management dashboards and messaging platforms. Dive into the Chat view—your gateway to real-time conversations and task-specific discussions, all in one convenient thread. It’s the ultimate connection hub, keeping everyone in the loop and engaged. 🕹️

ClickUp Chat view

ClickUp Docs ramps up collaboration with team edits, comment tagging, and action item assignments—all in one place. Plus, you can effortlessly turn text into actionable tasks, ensuring organization and efficiency at every turn.

ClickUp Docs

On top of this, ClickUp’s integrations include numerous messaging tools like Slack and Microsoft Teams, so you can communicate easily, whether directly in ClickUp or through your favorite messaging platforms! 💌

Common Project Evaluation Mistakes to Avoid

Identifying potential hurdles in your project evaluation journey is your first stride toward navigating this path more successfully. Relying on ClickUp’s project management tools and pre-built templates for project evaluation can act as your compass, steering you clear of these missteps. 🧭

Here’s a glimpse into some prevalent project evaluation blunders you should avoid:

  • Undefined goals and objectives: If you fail to establish clear, specific, and measurable goals, you can hinder the evaluation process because you won’t know where to place your focus
  • Misaligned focus: Evaluating irrelevant aspects or neglecting elements crucial for project success can lead to incomplete assessments
  • Neglecting data collection and analysis: Inadequate data gathering that lacks crucial information, coupled with superficial analysis, can result in incomplete insights and failure to evaluate the most critical project points
  • Misuse of data: If you use incorrect or irrelevant data or misinterpret the collected information, you’ll likely come to false conclusions, defeating the whole purpose of a project evaluation
  • Reactivity over responsiveness: Reacting emotionally instead of responding methodically to project challenges can cloud judgment and lead to ineffective evaluation
  • Lack of documentation: Failing to document the evaluation process thoroughly can cause inconsistency and lead to missed learning opportunities
  • Limited stakeholder involvement: Not engaging stakeholders for diverse perspectives and insights can limit the evaluation’s depth and relevance

Simplify Project Evaluation with ClickUp

To ensure your evaluation hits the bullseye, rely on our six-step project evaluation guide, which ensures a thorough dive into data collection, effective analysis, and collaborative problem-solving. Once you share all the findings with your stakeholders, we guarantee you’ll be cooking up the best solutions in no time.

Sign up for ClickUp for free today to keep your project evaluation centralized. This powerful tool isn’t just your ally in project evaluation—it’s your ultimate sidekick throughout the whole project lifecycle! 💖

Tap into its collaboration tools, save time with over 1,000 templates, and buckle up for turbocharged productivity with ClickUp AI, achieving success faster than ever! ⚡


Planning and managing evaluations of development research

This guidance has been prepared for those commissioning evaluations of development research projects and programming. “Commissioners” are those who ask for an evaluation, who are responsible for making sure it happens and is well-honed to the needs of those who will use the evaluation.

It is a comprehensive guide that can be used for each step of large-scale, multi-stakeholder evaluation processes. It prompts you to consider documenting your decisions in a formal Terms of Reference (ToR) that all stakeholders can refer to. For simpler evaluations, you might skim or skip some sections of this guide. In those cases, the ToR may be less formal and it may be sufficient for you to document your decisions for yourself, your colleagues, and evaluators you decide to contract.

What is specific about evaluating research?

The International Development Research Centre (IDRC) in Canada primarily funds and facilitates global South-based research for development (R4D). Its mandate is: “To initiate, encourage, support, and conduct research into the problems of the developing regions of the world and into the means for applying and adapting scientific, technical, and other knowledge to the economic and social advancement of those regions.”

Evaluating research for development (R4D) includes several unique features when compared to evaluating other international development interventions.  It is also different from evaluating other areas of research.  These differences are described below along with tools to evaluate R4D programming that may be relevant to the evaluation you are commissioning.

Long, non-linear results chains: Consider three different projects that all aim to improve health: building a hospital has a clear link with improved health; training health care workers also has a plausible connection to improved health outcomes, though there are more links in the causal chain between that training and improved health; and, finally, a research project that studies food consumption and nutrition in children has an ultimate aim of improving health, but there are many intermediary links in the causal chain before that research can make a difference to the health of the children. Evaluating research for development starts with figuring out what results in that long causal chain you want to evaluate, and then assessing the contribution of research to the outcomes sought. To evaluate the results of research for development programming, one must accept that result pathways are typically non-linear, context is crucial, and complexity is the norm.

The  types of outcomes  of R4D are also different from those of other development interventions. They can include, for example, increased capacity of the individuals, organizations and networks doing the research and using the research. Outcome evaluations might focus on the influence of research on technological development, innovation, or policy and practice changes. They also include efforts to scale up the influence of research.  The following resources can help evaluate R4D outcomes:

  • The Knowledge Translation Toolkit. Bridging the Know–Do Gap: A Resource for Researchers  –gives an overview of what knowledge translation entails and how to use it effectively to bridge the “know–do” gap between research, policy, practice, and people. It describes underlying theories and provides strategies and tools to encourage and enable evidence-informed decision making.
  • Evaluating policy influence is the subject of the freely available book by Fred Carden:  Knowledge to policy. Making the most of development research  – The book starts from a sophisticated understanding about how research influences public policy and decision-making. It shows how research can contribute to better governance in at least three ways: by encouraging open inquiry and debate, by empowering people with the knowledge to hold governments accountable, and by enlarging the array of policy options and solutions available to the policy process.
  • The Overseas Development Institute has several useful guides for evaluating policy influence. For example, the  RAPID Outcome Mapping Approach (ROMA)  and brief guides such as  Monitoring and evaluation of policy influence and advocacy . 
  • Tools to evaluate capacity development include the framework used in IDRC’s capacity development evaluation for  individuals, organizations and networks  and the European Centre for Development Policy Management (ECDPM)  5C framework .
  • In the Monitoring and Evaluation Strategy Brief, Douthwaite and colleagues give an overview of the monitoring and evaluation (M&E) system of the CGIAR Research Program on Aquatic Agricultural Systems (AAS) and describe how the M&E system is designed to support the program to achieve its goals. The brief covers: (1) the objectives of the AAS M&E system in keeping with the key program elements; (2) the theory drawn upon to design the M&E system; and (3) the M&E system components.
  • CIRAD’s Impress project  describes a research project that explores the impacts of international agricultural research including the methodology used and several case studies.

Finally,  evaluating research for development is also different from evaluating academic research . Typically, academic research evaluation is done through deliberative means (such as peer review) and analytics (such as bibliometrics). IDRC uses a holistic approach that acknowledges scientific merit as a necessary but insufficient condition for judging research quality and the role of multiple stakeholders and potential users in determining the effectiveness of research (in terms of its relevance, use and impact). IDRC developed the  Research Quality Plus (or RQ+) Assessment Framework  which consists of three components:

  • Key influences (enabling or constraining factors) either within the research endeavor or in the external environment including: (a) maturity of the research field; (b) intention to strengthen research capacity; (c) risk in the research environment; (d) risk in the political environment; and, (e) risk in the data environment.
  • Research quality dimensions and sub-dimensions which are closely inter-related including: (a) scientific integrity; (b) research legitimacy; (c) importance; and, (d) positioning for use.
  • Customizable assessment rubrics (or 'evaluative rubrics') that make use of both qualitative and quantitative measures to characterize each key influence and to judge the performance of the research study on the various quality dimensions and sub-dimensions.

The work of IDRC contributes to the wider ongoing debate about how to evaluate research quality and acknowledges valuable approaches of other organizations working in this area (see, for example, resources below).  IDRC invites funders of research and researchers to treat the RQ+ framework as a dynamic tool for adaptation to their specific purposes.

Further information & Resources

  • Ofir Z, Schwandt T, Colleen D, McLean R (2016).  RQ+ Research Quality Plus. A Holistic Approach to Evaluating Research.  Ottawa: International Development Research Centre (IDRC) – This report describes IDRC’s approach and inaugural version of the RQ+ assessment framework for evaluating research quality. The report includes valuable lessons learned from the implementation of the framework and discusses a range of potential uses of the framework.
  • For information about bibliometrics, see:  https://www.nature.com/articles/520429a  and  http://www.leidenmanifesto.org/ .  These describe some typical ways in which research quality is assessed and the problems those hold.

See also:  http://www.researchtoaction.org/2013/08/altmetrics-and-the-global-south-increasing-research-visibility/  which includes  Altmetrics and ImpactStory  (sites that track impact of written work by analysing the uptake of research within social media ) and  The Scholarly Communication in Africa Programme (SCAP)  (an initiative seeking to increase the visibility and developmental impact of research outputs from universities in Southern Africa).

Steps in the commissioning process

Typically, commissioners define what is to be evaluated; decide who will be involved in the evaluation; what results are considered important; and what evidence is relevant. While commissioners generally rely on the expertise of evaluators to make decisions about the specific methods to be used, they often set the overall parameters (i.e., overall approach, budget, timeline) within which the evaluation is to take place.

To evaluate or not?

With the exception of external program reviews, which are managed centrally by the Policy and Evaluation Division, evaluation at IDRC is strategic as opposed to routine. This means that evaluations managed by IDRC program staff and grantees are undertaken selectively and in cases where the rationale for undertaking the evaluation is clear.

Within IDRC’s decentralized evaluation system, program staff and grantees generally have jurisdiction over evaluation decisions at the project level and, to a certain extent, at the program level.

At the project level:  project evaluations  are normally conducted under the direction of program officers or project partners. Not all projects are routinely evaluated.

The decision to evaluate a project is usually motivated by one of the following “evaluation triggers”:

  • Significant materiality (investment);
  • Novel / innovative approach;
  • High political priority and/or scrutiny;
  • Advanced phase or maturity of the partnership.

At the program level: program-led evaluations are defined and carried out by a program in accordance with its needs. Program-led evaluations often focus on burning questions or learning needs at the program level. They strategically assess any important defined aspect within the program’s portfolio (e.g., project(s), organization(s), issues, modality, etc.). Program-led evaluations can be conducted either internally, externally or via a hybrid approach. The primary intended users are usually the program team or its partners (e.g., collaborating donors, project partners, like-minded organizations, etc.).

When NOT to evaluate

In general, there are some circumstances in which undertaking a project or program evaluation is not advisable:

  • Constant change. When a project or program has experienced constant upheaval and change, evaluation runs the risk of being premature and inconclusive;
  • Some projects and programs are simply too young and too new to evaluate, unless the evaluation is designed as an accompaniment and/or developmental evaluation;
  • Lack of clarity and consensus on objectives. This makes it difficult for the evaluator to establish what s/he is evaluating;
  • Primarily for promotional purposes. Although “success stories” and “best practices” are often a welcome byproduct of an evaluation, it is troublesome to embark on an evaluation if unearthing “success stories” is the primary goal. Evaluations should systematically seek to unearth what did and also what didn’t work.

A key question is: Realistically, do you have enough time to undertake an evaluation?

When the timelines for planning and/or conducting an evaluation are such that they compromise the credibility of the evaluation findings, it is better not to proceed but to focus efforts on how to make the conditions more conducive. 

Roles and responsibilities for evaluation

Within IDRC’s decentralized evaluation system, responsibility for conducting and using evaluation is shared:

  • Senior management actively promotes a culture of learning, creating incentives for evaluation and learning from failures and disappointing results. It allots resources for evaluation and incorporates evaluation findings into its decision-making.
  • Program staff and project partners engage in and support high-quality, use-oriented evaluations. They seek opportunities to build their evaluation capacities, think evaluatively, and develop evaluation approaches and methods relevant to development research.

IDRC resources

  • For IDRC's overall approach to evaluation, guiding principles, components, and roles within our decentralized system, read Evaluation at IDRC
  • USAID Checklist for deciding to evaluate

Adherence to ethical principles and standards

Both research and evaluations supported by IDRC endeavor to comply with accepted ethical principles:

  • Respect for persons, animals, and the environment. When research/evaluation involves human participants, it should respect the autonomy of the individual.
  • Concern for the welfare of participants: researchers/evaluators should act to benefit or promote the wellbeing of participants (beneficence) and should do no harm (non-maleficence).
  • Justice: the obligation to treat people fairly, equitably, and with dignity.

IDRC-supported research and evaluation should also adhere to universal concepts of justice and equity while remaining sensitive to the cultural norms and practices of the localities where the work is carried out.

Participants must be informed that they are taking part in a research or evaluation study and that they have the right to refuse to participate or cease participation at any time without negative consequences.

The guiding principle of “first, do no harm” applies equally to staff, consultants, and beneficiaries during the evaluation process.

The American Evaluation Association (AEA) developed and adopted important guiding principles for evaluators:

  • Systematic inquiry: Evaluators conduct systematic, data-based inquiry.
  • Competence: Evaluators provide competent performance to stakeholders.
  • Integrity/Honesty: Evaluators display honesty and integrity throughout the entire evaluation process.
  • Respect for people: Evaluators respect the security, dignity and self-worth of respondents, program participants, clients, and other evaluation stakeholders.
  • Responsibilities for general and public welfare: Evaluators articulate and take into account the diversity of general and public interests and values.

[Source: American Evaluation Association, 2004.]

Importance of cultural competence in evaluation

Cultural competence is the ability to understand and be sensitive to the cultural values of individuals and groups. Culture can be described as the socially transmitted pattern of beliefs, values, and actions shared by groups of people.

Cultural competence in evaluation is an essential competency that allows an evaluator to demonstrate an understanding of and sensitivity to cultural values. This ensures that an evaluation is respectful and responsive to those involved. Cultural competence helps you work effectively in cross-cultural settings.

A culturally competent perspective can promote effective collaboration. It can also ensure that cultural competency is integrated into the entire evaluation process from choosing the methodology, selecting the right surveys or data collection tools, to reporting the data and findings.

To be culturally competent, a person should:

  • Value the differences between groups and individuals
  • Be knowledgeable about different cultures
  • Be aware of the interaction between cultures
  • Be knowledgeable of negative perceptions or stereotypes a group may face
  • Be able to adapt, as needed, to adequately reach diverse groups

[Source: The University of Minnesota. Find more details on Cultural Competence in Evaluation ]

IDRC resources

  • IDRC Corporate Principles on Research Ethics

Acknowledgements

The guide was developed with funding from the International Development Research Centre (IDRC) in Canada by Dr Greet Peersman and Professor Patricia Rogers [content] and Nick Herft [design] of the BetterEvaluation (BE) Project, Australian and New Zealand School of Government (ANZSOG), Melbourne, Australia, with input from IDRC staff.

We would like to thank the content reviewers: Farid Ahmad, Head Strategic Planning, Monitoring and Evaluation, International Centre for Integrated Mountain Development (ICIMOD), Kathmandu, Nepal & Vanessa Hood, Evaluation Lead, Strategy & Planning, Sustainability Victoria, Melbourne, Australia.


How To Evaluate and Measure the Success of a Project

Master key project evaluation metrics for effective decision-making in project management.

Liz Lockhart, PMP and Agile Leader

Attention all business leaders, project managers, and PMO enthusiasts! If you're passionate about making your projects successful, implementing the right strategies and leveraging technology can make all the difference. Project evaluation is the process you need to comprehend and measure that success. 

Keep in mind, though, evaluating a project's success is more complex than it may appear. There are numerous factors to consider, which can differ from one project to another.

In this article, we'll walk you through the fundamentals of an effective project evaluation process and share insights on measuring success for any project. With this information, you'll be well-prepared to assess if a project has met its intended goals, allowing you to make informed decisions and set benchmarks for future endeavors. 

Let's get started on the path to successful project evaluation!

What is project evaluation? 

Project evaluation is all about objectively examining the success or effectiveness of a project once it's completed. 

Remember that each project has unique goals and objectives, so each evaluation will differ. The assessment typically measures how well the project has met its objectives and goals. Throughout the evaluation process, you'll need to consider various factors, such as:

  • Quality of deliverables
  • Customer satisfaction

These factors help determine whether a project can be considered successful or not. It's crucial to remember that evaluation should happen continuously during the project, not just at the end. This approach allows teams to make informed decisions and adjust their course if necessary.

A practical evaluation process not only pinpoints areas for improvement but also celebrates the project's successes. By analyzing project performance and harnessing the insights gained through project evaluation, organizations and project leaders can fine-tune their strategies to boost project outcomes and make the most of their investment in time, money, and resources for the project or initiative.

What are the steps for measuring the success of a project?

Measuring the success of a project largely depends on its desired outcomes. Since different projects have varying goals, their criteria for success will also differ.

For instance, a team launching a new product might measure success based on customer engagement, sales figures, and reviews, while a team organizing an event may assess success through ticket sales and attendee feedback. Even projects with similar objectives can have different measurements of success. So, there's no one-size-fits-all approach to evaluating project results; each assessment should be customized to the specific goals in mind.

In general, the process of measuring the success of any project includes the following steps:

1. Define the purpose and goals of the project

Before measuring its success, you need a clear understanding of its objectives, scope, and timeline. Collaborate with your team and stakeholders to establish these elements, ensuring everyone is aligned. 

A well-defined project scope helps you set realistic expectations, allocate resources efficiently, and monitor progress effectively.

2. Assess the current status of the project

Regularly examine the project's progress in relation to its goals, timeline, and budget. This step enables you to identify potential issues early and make necessary adjustments. Maintaining open communication with your team and stakeholders during this phase is crucial for staying on track and addressing any concerns.

3. Analyze the results achieved by the project so far

Continuously evaluate your project's performance by looking at the results you've achieved against your goals. Organize retrospectives with your team to discuss what has worked well, what could be improved, and any lessons learned. 

Use this feedback to inform your decision-making process and fine-tune your approach moving forward.

4. Identify any risks associated with the project

Proactively identify and document any potential issues affecting your project's success. 

Develop a risk management plan that includes strategies for mitigating or transferring these risks. Regularly review and update this plan as the project progresses, and communicate any changes to your team and stakeholders. 

Effective risk management helps minimize surprises and allows you to adapt to unforeseen challenges.

5. Establish KPIs (key performance indicators) to measure success

KPIs are quantifiable metrics that help you assess whether your project is on track to achieve its goals. Work with your project team, stakeholders, and sponsor to identify KPIs that accurately reflect the project's success. Ensure these metrics align with the project's purpose and goals and are meaningful to your organization. 

Examples of KPIs include the number of leads generated, customer satisfaction scores, or cost savings.

6. Monitor these KPIs over time to gauge performance

Once you've established your project-specific KPIs, track them throughout the project's duration. Regular monitoring helps you stay informed about your project's performance, identify trends, and make data-driven decisions. 

If your KPIs show that your project is deviating from its goals, revisit the previous steps to assess the current status, analyze the results, and manage risks. Repeat this process as needed until the project is complete.
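To make the monitoring step concrete, here is a minimal sketch that compares each KPI's actual value to its target and flags anything drifting beyond a set tolerance; the KPI names, targets, and 10% threshold are assumptions for illustration only, not recommended values.

```python
# Minimal sketch (hypothetical figures): flag KPIs that drift more than a
# tolerance away from target, prompting a return to the earlier steps.

TOLERANCE = 0.10  # assumed threshold: more than 10% off target is "off track"

kpis = {
    # name: (target, actual to date) -- invented numbers
    "leads_generated": (500, 430),
    "customer_satisfaction": (4.5, 4.6),
    "cost_savings": (20_000, 15_000),
}

for name, (target, actual) in kpis.items():
    deviation = (actual - target) / target
    status = "on track" if abs(deviation) <= TOLERANCE else "off track"
    print(f"{name}: {deviation:+.0%} vs target -> {status}")
```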

Project planning software like Float gives you a bird’s eye view of team tasks, capacity, and hours worked, and you can generate valuable reports to help with future planning.

In addition to these steps, strive for transparency in your project reporting and results by making them easily accessible to your team and stakeholders. Use project dashboards, automated reporting, and self-serve project update information to keep everyone informed and engaged. 

This approach saves time and fosters a culture of openness and collaboration, which is essential for achieving project success.

15 project management metrics that matter 

To effectively measure success and progress, it's essential to focus on the metrics that matter. These metrics vary depending on the organization, team, or project, but some common ones include project completion rate, budget utilization, and stakeholder satisfaction.

We have divided these metrics into waterfall projects (predictive) and agile projects (adaptive). While some metrics may apply to both types of projects, this categorization ensures a more tailored approach to evaluation. Remember that these metrics assume a project has a solid plan or a known backlog to work against, as measuring progress relies on comparing actual outcomes to planned outcomes.

Waterfall project management metrics (predictive)

Waterfall projects typically have a defined scope, schedule, and cost at the outset. If changes are required during project execution, the project manager returns to the planning phase to determine a new plan and expectations across scope, schedule, and cost (commonly called the iron triangle). 

Here are eight waterfall metrics:

1. Schedule variance (SV): Schedule variance is the difference between the work planned and completed at a given time. It helps project managers understand whether the project is on track, ahead, or behind schedule. A positive SV indicates that the project is ahead of schedule, while a negative SV suggests that the project is behind schedule. Monitoring this metric throughout the project allows teams to identify potential bottlenecks and make necessary adjustments to meet deadlines.

2. Actual cost (AC): Actual cost represents the total amount of money spent on a project up to a specific point in time. It includes all expenses related to the project, such as personnel costs, material costs, and equipment costs. Keeping track of the actual cost is crucial for managing the project budget and ensuring it stays within the allocated funds. Comparing actual cost to the planned budget can provide insights into the project's financial performance and areas where cost-saving measures may be needed.

3. Cost variance (CV): Cost variance is the difference between the expected cost of the work completed so far (its earned value) and its actual cost. A positive CV indicates that the project is under budget, while a negative CV suggests that the project is over budget. Monitoring cost variance helps project managers identify areas where the project may be overspending and implement corrective actions to prevent further cost overruns.

4. Planned value (PV): Planned value is the estimated value of the work that should have been completed by a specific point in time. It is a valuable metric for comparing the project's progress against the original plan, and it is used in calculating other vital metrics, such as schedule variance (SV) and the schedule performance index (SPI).

5. Earned value (EV): Earned value is a measure of the progress made on a project, represented by the portion of the total budget earned by completing work on the project up to this point. EV can be calculated by multiplying the percentage complete by the total budget. Monitoring earned value helps project managers assess whether the project is progressing as planned and whether any corrective actions are needed to get the project back on track.

6. Schedule performance index (SPI): The schedule performance index measures how efficiently a project team completes work relative to the amount of work planned. SPI is calculated by dividing the earned value (EV) by the planned value (PV). An SPI of 1.0 indicates that the project is on schedule, while an SPI of less than 1.0 means that the project is behind schedule. This metric helps identify scheduling issues and make adjustments to improve efficiency and meet deadlines.

7. Cost performance index (CPI): The cost performance index measures how efficient a project team is in completing work relative to the amount of money budgeted. CPI is calculated by dividing the earned value (EV) by the actual cost (AC). A CPI of 1.0 indicates that the project is on budget, while a CPI of less than 1.0 shows that the project is over budget. Monitoring CPI can help project managers identify areas where costs can be reduced and improve overall project financial performance.

8. Estimate at completion (EAC): Estimate at completion is an updated projection of what the project will cost in total by the time it is finished, revised as work progresses. EAC can be calculated using several methods, including bottom-up estimating, top-down estimating, analogous estimating, and parametric estimating. Regularly updating the EAC helps project managers stay informed about the project's financial performance and make informed decisions about resource allocation and cost control. (A short worked calculation covering these earned value metrics follows this list.)
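To show how these figures fit together, here is a short worked calculation using the standard earned value formulas; the budget and progress numbers are invented, and EAC is shown using the common BAC/CPI method, which is only one of the approaches mentioned above.

```python
# Minimal sketch (invented figures): standard earned value calculations.
BAC = 100_000            # budget at completion (total approved budget)
percent_complete = 0.40  # 40% of the planned work is actually done
PV = 50_000              # planned value: work scheduled to be done by now
AC = 45_000              # actual cost: money spent so far

EV = percent_complete * BAC  # earned value = 40,000
SV = EV - PV                 # schedule variance = -10,000 (behind schedule)
CV = EV - AC                 # cost variance = -5,000 (over budget)
SPI = EV / PV                # 0.80 -> working at 80% of the planned pace
CPI = EV / AC                # ~0.89 -> $0.89 of value earned per $1 spent
EAC = BAC / CPI              # ~112,500 -> projected total cost at completion

print(f"EV={EV:,.0f}  SV={SV:,.0f}  CV={CV:,.0f}  "
      f"SPI={SPI:.2f}  CPI={CPI:.2f}  EAC={EAC:,.0f}")
```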

Agile project management metrics (adaptive)

Agile projects differ from waterfall projects as they often start without a clear final destination, allowing for changes along the way.

It's generally not appropriate to use waterfall metrics to evaluate agile projects. Each project is unique and should be assessed based on its purpose, objectives, and methodology.

Here are seven standard agile metrics:

  • Story points: Story points are used to estimate the workload required to complete a task, taking into account the time, effort, and risk involved. Different teams may use various scales for measuring story points, so comparing story points between teams is not advisable, as it may lead to misleading conclusions.
  • Velocity: This metric represents the work a team can complete within a specific period, measured in story points. Velocity helps gauge a team's progress, predicting the amount of work that can be completed in future sprints and estimating the number of sprints needed to finish the known product backlog. Since story points are not standardized, comparing teams or projects based on story points or velocity is not appropriate.
  • Burndown charts: Burndown charts are graphical representations used to track the progress of an agile development cycle. These charts show the amount of known and estimated work remaining over time, counting down toward completion. They can help identify trends and predict when a project will likely be finished based on the team's velocity.
  • Cumulative flow diagrams: These graphs, related to burndown charts, track the progress of an agile development cycle by showing the amount of work remaining to be done over time, counting up. Cumulative flow diagrams (CFDs) can help identify trends and predict when a project will likely be completed based on the team's velocity.
  • Lead time: Lead time is the duration between the identification of a task and its completion. It is commonly used in agile project management to assess a team's progress and predict how much work can be completed in future sprints. Lead time is a standard Kanban metric, as Kanban focuses on promptly completing tasks and finishing ongoing work before starting new tasks.
  • Cycle time: Cycle time is the time it takes to complete a task once it has been identified and work begins, not including any waiting time before the job is initiated. Cycle time is frequently used in agile project management to evaluate a team's progress and predict how much work can be completed in future iterations.
  • Defect density: As a crucial measure of quality and long-term success, defect density is the number of defects per unit of code or delivered output. It is often employed in software development to assess code quality and pinpoint areas needing improvement. If a team delivers output with a high defect density, the quality of the project's deliverables and outcomes may be significantly compromised. (A small arithmetic sketch of velocity, burndown forecasting, and defect density follows this list.)
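The arithmetic behind velocity, burndown-style forecasting, and defect density is simple enough to sketch; the sprint history, backlog size, and defect counts below are invented, and the forecast assumes velocity stays roughly stable.

```python
# Minimal sketch (hypothetical data): velocity, a burndown-style forecast,
# and defect density as described in the list above.
import math

completed_points = [21, 18, 24, 19]   # story points finished in past sprints
backlog_points = 120                  # story points left in the known backlog

velocity = sum(completed_points) / len(completed_points)  # 20.5 points/sprint
sprints_remaining = math.ceil(backlog_points / velocity)  # 6 sprints to go

defects_found = 18
kloc_delivered = 12                   # thousand lines of code delivered
defect_density = defects_found / kloc_delivered           # 1.5 defects/KLOC

print(f"velocity={velocity}, sprints_remaining={sprints_remaining}, "
      f"defect_density={defect_density:.1f}")
```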


Not all metrics are created equal

It's essential to recognize that not every metric suits every project. Project metrics shouldn't be seen as a one-size-fits-all approach.

With so many metrics, it can be easy to feel overwhelmed, but the key is to focus on the specific metrics that significantly impact your project's outcome. Project managers can make informed, strategic decisions to drive success by measuring the right aspects.

Your choice of project metrics will depend on various factors, such as the type of project, its purpose, and the desired outcomes. Be cautious about using the wrong metrics to measure your project's progress, which can lead to unintended consequences. After all, you get what you measure, and if you measure incorrectly, you might not achieve the results you're aiming for!

Tips on communicating metrics and learnings

Clear communication is crucial to ensure that insightful metrics and learnings have a meaningful impact on your team. To keep your team members engaged and your communications effective, consider the following tips:

  • Use straightforward, informative language : Opt for concise, easily understood language to ensure everyone has a clear grasp of the data and its implications.
  • Avoid abbreviations : Use full terms to avoid confusion, particularly for new team members.
  • Tell a story : Present metrics and learnings within a narrative context, helping team members better understand the project's journey.
  • Use humor and wit : Lighten the mood with humor to make your points more memorable and relatable while ensuring your message is taken seriously.
  • Be transparent : Foster trust by being open and honest about project progress, encouraging collaboration, and being the first to inform stakeholders if something goes wrong.

By incorporating these friendly and informative communication techniques, you can effectively engage your team members and maintain a united front throughout your project.

Cracking the code on project evaluation success

Project evaluation is a vital component of the project management process. To make informed, confident decisions, project managers need a thorough understanding of the various metrics aligned with the project's purpose and desired outcomes.

Effective teams utilize multiple metrics to assess the success or failure of a project. Establishing key metrics and delving into their implications allows teams to base their decisions on accurate, relevant information. Remember, one size doesn't fit all. Tailor success metrics to the specific goals of your project. 

By implementing a robust evaluation process and leveraging insights, project leaders can adapt strategies, enhance project outcomes, maximize the value of investments, and make data-driven decisions for upcoming projects.


Research Project Evaluation-Learnings from the PATHWAYS Project Experience

Affiliations.

  • 1 Epidemiology and Preventive Medicine, Jagiellonian University Medical College, 31-034 Krakow, Poland. [email protected].
  • 2 Epidemiology and Preventive Medicine, Jagiellonian University Medical College, 31-034 Krakow, Poland. [email protected].
  • 3 Fondazione IRCCS, Neurological Institute Carlo Besta, 20-133 Milano, Italy. [email protected].
  • 4 Epidemiology and Preventive Medicine, Jagiellonian University Medical College, 31-034 Krakow, Poland. [email protected].
  • PMID: 29799452
  • PMCID: PMC6025380
  • DOI: 10.3390/ijerph15061071

Background: Every research project faces challenges regarding how to achieve its goals in a timely and effective manner. The purpose of this paper is to present the project evaluation methodology developed during the implementation of the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector (EU PATHWAYS) Project. The PATHWAYS project involved multiple countries and addressed the multi-cultural aspects of re/integrating chronically ill patients into the labor markets of different countries. This paper describes key project evaluation issues, including: (1) purposes, (2) advisability, (3) tools, (4) implementation, and (5) possible benefits, and presents the advantages of continuous monitoring.

Methods: A project evaluation tool was used to assess structure and resources, process, management and communication, achievements, and outcomes. The project used a mixed evaluation approach that included a Strengths (S), Weaknesses (W), Opportunities (O), and Threats (T) (SWOT) analysis.

Results: A methodology for the evaluation of longitudinal EU projects is described. The evaluation process made it possible to highlight strengths and weaknesses; it pointed to good coordination and communication between project partners as well as to some key issues, such as the need for a shared glossary covering the areas investigated by the project, problematic issues related to the involvement of stakeholders from outside the project, and issues with timing. Numerical SWOT analysis showed improvement in project performance over time. The proportion of project partners participating in the evaluation varied from 100% to 83.3%.

Conclusions: There is a need for the implementation of a structured evaluation process in multidisciplinary projects involving different stakeholders in diverse socio-environmental and political conditions. Based on the PATHWAYS experience, a clear monitoring methodology is suggested as essential in every multidisciplinary research project.

Keywords: SWOT analysis; internal evaluation; project achievements; project management and monitoring; project process evaluation; public health.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Global Health
  • Occupational Health*
  • Program Evaluation / methods*
  • Research / economics
  • Research / organization & administration*
  • Research / standards

RELATED SOURCES

  1. Increasing the Impact of Program Evaluation: The Importance of

    These remarks, delivered by the 2022 recipient of the Peter H. Rossi Award for Contributions to the Theory or Practice of Program Evaluation, emphasize ways to increase the impact of program evaluation. First is the importance of asking good questions, including ones that challenge the assumptions and models that dominate the field.

  2. (PDF) Four Approaches to Project Evaluation

    evaluation types: (1) constructive process evaluation, (2) conclusive process evaluation, (3) constructive outcome evaluation, (4) conclusive outcome evaluation, and (5) hybrid evaluations derived ...

  3. Research Project Evaluation—Learnings from the PATHWAYS Project

    1.1. Theoretical Framework. The first step has been the clear definition of what an evaluation strategy or methodology is. The term evaluation is defined by the Cambridge Dictionary as the process of judging something's quality, importance, or value, or a report that includes this information, or in a similar way by the Oxford Dictionary as the making of a judgment about the amount, number ...

  4. Project management evaluation techniques for research and development

    In doing so, it explains the purpose of evaluating projects and the challenge in evaluating R&D projects. It then outlines the evaluation formula, a mathematical equation that helps project managers create a communication bridge between research and development. It details the equation's three critical elements: procedures, data, and users.

  5. Understanding project evaluation

    Project evaluation is a multi-layered affair. Because projects vary in size, industrial sector, availability of resources, and specific goals, evaluation must be adapted to the uniqueness of each project's context ...

  6. How to Evaluate a Scientific Project: A Guide

    1. Define the purpose. 2. Identify the criteria. 3. Select the methods. 4. Conduct the evaluation.

  7. PDF Effective project planning and evaluation in biomedical research

    Defining the purpose and scope of the project is the first phase in the process of project planning and evaluation. In module 3, participants are guided through this phase as they establish the project statement, define the project goal, define the project objectives, and define key indicators for each objective.

  8. Assessing research excellence: Evaluating the Research Excellence

    Research must be evaluated, not the quality of teaching and degree programmes; The evaluation must be ex post, and must not be an ex ante evaluation of a research or project proposal; The output(s) of research must be evaluated; The distribution of funding from Government must depend upon the evaluation results; The system must be national.

  9. PDF Four Approaches to Project Evaluation

    The lack of advice on how to perform project evaluations makes it difficult to design an evaluation framework, which was the task the authors of this paper were faced with when engaging in a large ...

  10. Program and Project Evaluation

    Logic Models and Program/Project Evaluation. By way of summary, when conducting applied or development research, one may have a new educational technology or system that one believes will be beneficial in some way. This situation is a prime target for research and inquiry. One kind of inquiry often associated with development research is ...

  11. Project Evaluation Process: Definition, Methods & Steps

    Project evaluation is the process of measuring the success of a project, program or portfolio. This is done by gathering data about the project and using an evaluation method that allows evaluators to find performance improvement opportunities. Project evaluation is also critical to keep stakeholders updated on the project status and any ...

  12. What is Project Evaluation? The Complete Guide with Templates

    Project evaluation is a key part of assessing the success, progress and areas for improvement of a project. It involves determining how well a project is meeting its goals and objectives. Evaluation helps determine if a project is worth continuing, needs adjustments, or should be discontinued. A good evaluation plan is developed at the start of ...

  13. Effective Strategies for Conducting Evaluations, Monitoring, and Research

    Evaluations, monitoring, and research are indispensable tools for understanding the performance, impact, and progress of projects, policies, and programs. Effectively conducting these activities requires careful planning, thorough data collection and analysis, clear communication of findings, and careful consideration of ethical issues.

  14. Models and methods for information systems project success evaluation

    Limitations are also important since they may entail risks for the project evaluation. Researchers can look at both benefits and limitations to guide research: by developing and proposing new models/methods, they should try to achieve the reported benefits; on the other hand, they should explore the limitations, aiming to identify further ...

  15. Research project evaluation and selection: an evidential ...

    The evaluation of a research project can be divided into three different stages: ex-ante evaluation, monitoring, and ex-post evaluation. The evaluation criteria, as well as the evaluation approaches, usually differ across these three stages (Bulathsinhala 2014). The ex-ante evaluation is conducted before project start-up, while monitoring is for an ongoing project, and the ...

  16. Project Evaluation: Steps, Benefits, and Common Mistakes

    Step 1: Identify project goals and objectives. Crafting solid goals and objectives during your project's development is like drawing a map for your team: it sets the course and direction. Goals also play a crucial role in shaping an evaluation process tailored to your objectives.

  17. Planning and managing evaluations of development research

    To evaluate the results of research for development programming, one must accept that result pathways are typically non-linear, context is crucial, and complexity is the norm. The types of outcomes of R4D are also different from those of other development interventions. They can include, for example, increased capacity of the individuals ...

  18. Developing a Multidimensional Conception of Project Evaluation to

    In the quest to improve projects, project actors rely on sound project evaluation. However, project evaluation can be complex and challenging. This study aims to explore and define project evaluation and reveal how it can promote continuous improvements within and across projects and organizations.

  19. How To Evaluate and Measure the Success of a Project

    SPI is calculated by dividing the earned value (EV) by the planned value (PV). An SPI of 1.0 indicates that the project is on schedule, while an SPI of less than 1.0 means that the project is behind schedule. This metric helps identify scheduling issues so that adjustments can be made to improve efficiency and meet deadlines (a worked example appears after this list).

  20. Research Project Evaluation-Learnings from the PATHWAYS Project

    This paper describes the key project evaluation issues, including: (1) purposes, (2) advisability, (3) tools, (4) implementation, and (5) possible benefits, and presents the advantages of continuous monitoring. Methods: A project evaluation tool was used to assess structure and resources, process, management and communication, achievements, and outcomes.

  21. Evaluating the Comprehensive Benefit of Urban Renewal Projects on the

    In academia, existing research mainly focuses on single-project evaluation. An integrated framework that provides a holistic assessment of area-scale project benefits is missing, and few studies fully consider the coupling coordination benefits between several urban renewal projects from an area-scale perspective.

  22. How can Project Evaluation help in boosting research area?

    Having firm definitions of 'evaluation' can link the purpose of research, general questions associated with methodological issues, expected results, and the implementation of results to specific strategies or practices. Attention paid to project evaluation reveals two concurrent lines of thought in this area.
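Entry 19 above quotes the standard earned value formula for the Schedule Performance Index, SPI = EV / PV. The short sketch below works through that calculation, together with the closely related Cost Performance Index (CPI = EV / AC, where AC is actual cost), which is not mentioned in the snippet but is a common companion metric. The monetary figures are invented for illustration.

```python
# Earned value sketch for entry 19: SPI = EV / PV, plus the companion CPI = EV / AC.
# All monetary figures below are hypothetical.

def spi(earned_value, planned_value):
    """Schedule Performance Index: below 1.0 means the project is behind schedule."""
    return earned_value / planned_value

def cpi(earned_value, actual_cost):
    """Cost Performance Index: below 1.0 means the project is over budget."""
    return earned_value / actual_cost

ev, pv, ac = 42_000, 50_000, 45_000  # hypothetical project-to-date figures

print(f"SPI = {spi(ev, pv):.2f}")  # 0.84 -> behind schedule
print(f"CPI = {cpi(ev, ac):.2f}")  # 0.93 -> over budget
```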