
Evaluating Research – Process, Examples and Methods

Evaluating Research

Definition:

Evaluating Research refers to the process of assessing the quality, credibility, and relevance of a research study or project. This involves examining the methods, data, and results of the research in order to determine its validity, reliability, and usefulness. Evaluating research can be done by both experts and non-experts in the field, and involves critical thinking, analysis, and interpretation of the research findings.

Research Evaluation Process

The process of evaluating research typically involves the following steps:

Identify the Research Question

The first step in evaluating research is to identify the research question or problem that the study is addressing. This will help you to determine whether the study is relevant to your needs.

Assess the Study Design

The study design refers to the methodology used to conduct the research. You should assess whether the study design is appropriate for the research question and whether it is likely to produce reliable and valid results.

Evaluate the Sample

The sample refers to the group of participants or subjects who are included in the study. You should evaluate whether the sample size is adequate and whether the participants are representative of the population under study.
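
One quick way to sanity-check sample size adequacy is to compare it against the sample needed to estimate a proportion at a chosen margin of error. The following Python sketch is illustrative only; the confidence level, margin of error, and assumed proportion are assumptions, not values taken from any particular study.

```python
# A minimal sketch for judging whether a study's sample size is adequate for
# estimating a population proportion. The confidence level, margin of error,
# and assumed proportion are illustrative assumptions.
import math
from scipy.stats import norm

def required_sample_size(margin_of_error: float, confidence: float = 0.95,
                         assumed_proportion: float = 0.5) -> int:
    """n = z^2 * p * (1 - p) / e^2, rounded up."""
    z = norm.ppf(1 - (1 - confidence) / 2)   # e.g. ~1.96 for 95% confidence
    p = assumed_proportion                   # 0.5 is the most conservative choice
    n = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n)

# Example: a +/-5% margin of error at 95% confidence needs about 385 respondents,
# so a sample of 500 clears that bar; representativeness is a separate question
# that depends on how the sample was drawn.
print(required_sample_size(0.05))   # 385
```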

Review the Data Collection Methods

You should review the data collection methods used in the study to ensure that they are valid and reliable. This includes assessing both the measures and the procedures used to collect the data.
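
Where a study uses a multi-item scale, internal-consistency reliability is one concrete thing a reviewer can check or recompute. Below is a minimal Python sketch of Cronbach's alpha; the respondent-by-item scores are invented for illustration.

```python
# A minimal sketch of one common reliability check for a multi-item measure:
# Cronbach's alpha. Rows are respondents, columns are scale items (made-up data).
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: 2-D array of shape (n_respondents, n_items)."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                        # number of items
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 3, 3, 2],
    [4, 4, 5, 4],
])
# Values of roughly 0.7 or higher are often taken as acceptable internal consistency.
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```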

Examine the Statistical Analysis

Statistical analysis refers to the methods used to analyze the data. You should examine whether the statistical analysis is appropriate for the research question and whether it is likely to produce valid and reliable results.
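
As a concrete illustration of matching the analysis to the data, the hedged Python sketch below compares two groups with Welch's t-test when both samples look approximately normal and falls back to the Mann-Whitney U test otherwise. The group scores and the 0.05 threshold are assumptions for demonstration.

```python
# A hypothetical sketch of checking whether a chosen statistical test fits the data:
# use a t-test when both groups look roughly normal, otherwise a non-parametric test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=50, scale=10, size=40)   # e.g. scores under condition A
group_b = rng.normal(loc=55, scale=10, size=40)   # e.g. scores under condition B

# Shapiro-Wilk tests the null hypothesis that a sample comes from a normal distribution.
normal_a = stats.shapiro(group_a).pvalue > 0.05
normal_b = stats.shapiro(group_b).pvalue > 0.05

if normal_a and normal_b:
    result = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test
    print(f"Welch t-test: t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
else:
    result = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
    print(f"Mann-Whitney U: U = {result.statistic:.1f}, p = {result.pvalue:.4f}")
```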

Assess the Conclusions

You should evaluate whether the data support the conclusions drawn from the study and whether they are relevant to the research question.

Consider the Limitations

Finally, you should consider the limitations of the study, including any potential biases or confounding factors that may have influenced the results.

Evaluating Research Methods

Common methods for evaluating research include the following:

  • Peer review: Peer review is a process where experts in the field review a study before it is published. This helps ensure that the study is accurate, valid, and relevant to the field.
  • Critical appraisal: Critical appraisal involves systematically evaluating a study based on specific criteria. This helps assess the quality of the study and the reliability of the findings.
  • Replication: Replication involves repeating a study to test the validity and reliability of the findings. This can help identify any errors or biases in the original study.
  • Meta-analysis: Meta-analysis is a statistical method that combines the results of multiple studies to provide a more comprehensive understanding of a particular topic. This can help identify patterns or inconsistencies across studies (a minimal pooling sketch follows this list).
  • Consultation with experts: Consulting with experts in the field can provide valuable insights into the quality and relevance of a study. Experts can also help identify potential limitations or biases in the study.
  • Review of funding sources: Examining the funding sources of a study can help identify any potential conflicts of interest or biases that may have influenced the study design or interpretation of results.
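
The pooling step at the heart of a fixed-effect meta-analysis can be sketched in a few lines. In the Python example below, the per-study effect sizes and standard errors are hypothetical stand-ins for values extracted from published studies.

```python
# A minimal sketch of fixed-effect meta-analysis using inverse-variance weighting.
# The effect sizes and standard errors are hypothetical values.
import numpy as np
from scipy.stats import norm

effects = np.array([0.30, 0.45, 0.18, 0.52])      # e.g. standardized mean differences
std_errors = np.array([0.12, 0.15, 0.10, 0.20])   # their standard errors

weights = 1.0 / std_errors ** 2                   # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
z = pooled / pooled_se
p_value = 2 * (1 - norm.cdf(abs(z)))

print(f"Pooled effect = {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f}), p = {p_value:.4f}")
```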

Example of Evaluating Research

Below is a sample evaluation exercise for students:

Title of the Study: The Effects of Social Media Use on Mental Health among College Students

Sample Size: 500 college students

Sampling Technique: Convenience sampling

  • Sample Size: The sample size of 500 college students is a moderate sample size, which could be considered representative of the college student population. However, it would be more representative if the sample size was larger, or if a random sampling technique was used.
  • Sampling Technique: Convenience sampling is a non-probability sampling technique, which means that the sample may not be representative of the population. This technique may introduce bias into the study since the participants are self-selected and may not be representative of the entire college student population. Therefore, the results of this study may not be generalizable to other populations.
  • Participant Characteristics: The study does not provide any information about the demographic characteristics of the participants, such as age, gender, race, or socioeconomic status. This information is important because social media use and mental health may vary among different demographic groups.
  • Data Collection Method: The study used a self-administered survey to collect data. Self-administered surveys may be subject to response bias and may not accurately reflect participants’ actual behaviors and experiences.
  • Data Analysis: The study used descriptive statistics and regression analysis to analyze the data. Descriptive statistics provide a summary of the data, while regression analysis is used to examine the relationship between two or more variables. However, the study did not provide information about the statistical significance of the results or the effect sizes.

Overall, while the study provides some insights into the relationship between social media use and mental health among college students, the use of a convenience sampling technique and the lack of information about participant characteristics limit the generalizability of the findings. In addition, the use of self-administered surveys may introduce bias into the study, and the lack of information about the statistical significance of the results limits the interpretation of the findings.
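
To make the last point concrete, the sketch below shows the kind of output a reviewer would want reported: regression coefficients with p-values and an effect-size measure such as R-squared. The data are simulated and the variable names are assumptions; nothing here comes from the study being evaluated.

```python
# A hedged sketch of reporting a regression with statistical significance and
# an effect-size measure. All data are simulated for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
hours_on_social_media = rng.uniform(0, 8, size=500)
mental_health_score = 70 - 2.0 * hours_on_social_media + rng.normal(0, 8, size=500)

X = sm.add_constant(hours_on_social_media)   # adds the intercept term
model = sm.OLS(mental_health_score, X).fit()

print(model.params)        # intercept and slope estimates
print(model.pvalues)       # statistical significance of each coefficient
print(f"R-squared (effect size): {model.rsquared:.3f}")
```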

Note: The example above is only a sample for students. Do not copy and paste it directly into your assignment; kindly do your own research for academic purposes.

Applications of Evaluating Research

Here are some of the applications of evaluating research:

  • Identifying reliable sources: By evaluating research, researchers, students, and other professionals can identify the most reliable sources of information to use in their work. They can determine the quality of research studies, including the methodology, sample size, data analysis, and conclusions.
  • Validating findings: Evaluating research can help to validate findings from previous studies. By examining the methodology and results of a study, researchers can determine if the findings are reliable and if they can be used to inform future research.
  • Identifying knowledge gaps: Evaluating research can also help to identify gaps in current knowledge. By examining the existing literature on a topic, researchers can determine areas where more research is needed, and they can design studies to address these gaps.
  • Improving research quality: Evaluating research can help to improve the quality of future research. By examining the strengths and weaknesses of previous studies, researchers can design better studies and avoid common pitfalls.
  • Informing policy and decision-making: Evaluating research is crucial in informing policy and decision-making in many fields. By examining the evidence base for a particular issue, policymakers can make informed decisions that are supported by the best available evidence.
  • Enhancing education: Evaluating research is essential in enhancing education. Educators can use research findings to improve teaching methods, curriculum development, and student outcomes.

Purpose of Evaluating Research

Here are some of the key purposes of evaluating research:

  • Determine the reliability and validity of research findings: By evaluating research, researchers can determine the quality of the study design, data collection, and analysis. They can determine whether the findings are reliable, valid, and generalizable to other populations.
  • Identify the strengths and weaknesses of research studies: Evaluating research helps to identify the strengths and weaknesses of research studies, including potential biases, confounding factors, and limitations. This information can help researchers to design better studies in the future.
  • Inform evidence-based decision-making: Evaluating research is crucial in informing evidence-based decision-making in many fields, including healthcare, education, and public policy. Policymakers, educators, and clinicians rely on research evidence to make informed decisions.
  • Identify research gaps: By evaluating research, researchers can identify gaps in the existing literature and design studies to address these gaps. This process can help to advance knowledge and improve the quality of research in a particular field.
  • Ensure research ethics and integrity: Evaluating research helps to ensure that research studies are conducted ethically and with integrity. Researchers must adhere to ethical guidelines to protect the welfare and rights of study participants and to maintain the trust of the public.

Characteristics of Evaluating Research

Key characteristics to consider when evaluating research are as follows:

  • Research question/hypothesis: A good research question or hypothesis should be clear, concise, and well-defined. It should address a significant problem or issue in the field and be grounded in relevant theory or prior research.
  • Study design: The research design should be appropriate for answering the research question and be clearly described in the study. The study design should also minimize bias and confounding variables.
  • Sampling: The sample should be representative of the population of interest and the sampling method should be appropriate for the research question and study design.
  • Data collection: The data collection methods should be reliable and valid, and the data should be accurately recorded and analyzed.
  • Results: The results should be presented clearly and accurately, and the statistical analysis should be appropriate for the research question and study design.
  • Interpretation of results: The interpretation of the results should be based on the data and not influenced by personal biases or preconceptions.
  • Generalizability: The study findings should be generalizable to the population of interest and relevant to other settings or contexts.
  • Contribution to the field: The study should make a significant contribution to the field and advance our understanding of the research question or issue.

Advantages of Evaluating Research

Evaluating research has several advantages, including:

  • Ensuring accuracy and validity: By evaluating research, we can ensure that the research is accurate, valid, and reliable. This ensures that the findings are trustworthy and can be used to inform decision-making.
  • Identifying gaps in knowledge: Evaluating research can help identify gaps in knowledge and areas where further research is needed. This can guide future research and help build a stronger evidence base.
  • Promoting critical thinking: Evaluating research requires critical thinking skills, which can be applied in other areas of life. By evaluating research, individuals can develop their critical thinking skills and become more discerning consumers of information.
  • Improving the quality of research: Evaluating research can help improve the quality of research by identifying areas where improvements can be made. This can lead to more rigorous research methods and better-quality research.
  • Informing decision-making: By evaluating research, we can make informed decisions based on the evidence. This is particularly important in fields such as medicine and public health, where decisions can have significant consequences.
  • Advancing the field: Evaluating research can help advance the field by identifying new research questions and areas of inquiry. This can lead to the development of new theories and the refinement of existing ones.

Limitations of Evaluating Research

Limitations of Evaluating Research are as follows:

  • Time-consuming: Evaluating research can be time-consuming, particularly if the study is complex or requires specialized knowledge. This can be a barrier for individuals who are not experts in the field or who have limited time.
  • Subjectivity: Evaluating research can be subjective, as different individuals may have different interpretations of the same study. This can lead to inconsistencies in the evaluation process and make it difficult to compare studies.
  • Limited generalizability: The findings of a study may not be generalizable to other populations or contexts. This limits the usefulness of the study and may make it difficult to apply the findings to other settings.
  • Publication bias: Research that does not find significant results may be less likely to be published, which can create a bias in the published literature. This can limit the amount of information available for evaluation.
  • Lack of transparency: Some studies may not provide enough detail about their methods or results, making it difficult to evaluate their quality or validity.
  • Funding bias: Research funded by particular organizations or industries may be biased towards the interests of the funder. This can influence the study design, methods, and interpretation of results.



Evaluation Research Design: Examples, Methods & Types


As you engage in tasks, you will need to take intermittent breaks to determine how much progress has been made and if any changes need to be effected along the way. This is very similar to what organizations do when they carry out  evaluation research.  

The evaluation research methodology has become one of the most important approaches for organizations as they strive to create products, services, and processes that speak to the needs of target users. In this article, we will show you how your organization can conduct successful evaluation research using Formplus .

What is Evaluation Research?

Also known as program evaluation, evaluation research is a common research design that entails carrying out a structured assessment of the value of resources committed to a project or specific goal. It often adopts social research methods to gather and analyze useful information about organizational processes and products.  

As a type of applied research, evaluation research is typically associated with real-life scenarios within organizational contexts. This means that the researcher will need to leverage common workplace skills, including interpersonal skills and teamwork, to arrive at objective research findings that will be useful to stakeholders.

Characteristics of Evaluation Research

  • Research Environment: Evaluation research is conducted in the real world; that is, within the context of an organization. 
  • Research Focus: Evaluation research is primarily concerned with measuring the outcomes of a process rather than the process itself. 
  • Research Outcome: Evaluation research is employed for strategic decision making in organizations. 
  • Research Goal: The goal of program evaluation is to determine whether a process has yielded the desired result(s). 
  • This type of research protects the interests of stakeholders in the organization. 
  • It often represents a middle-ground between pure and applied research. 
  • Evaluation research is both detailed and continuous. It pays attention to performative processes rather than descriptions. 
  • Research Process: This research design utilizes qualitative and quantitative research methods to gather relevant data about a product or action-based strategy. These methods include observation, tests, and surveys.

Types of Evaluation Research

The Encyclopedia of Evaluation (Mathison, 2004) treats forty-two different evaluation approaches and models ranging from “appreciative inquiry” to “connoisseurship” to “transformative evaluation”. Common types of evaluation research include the following: 

  • Formative Evaluation

Formative evaluation or baseline survey is a type of evaluation research that involves assessing the needs of the users or target market before embarking on a project.  Formative evaluation is the starting point of evaluation research because it sets the tone of the organization’s project and provides useful insights for other types of evaluation.  

  • Mid-term Evaluation

Mid-term evaluation entails assessing how far a project has come and determining whether it is in line with the set goals and objectives. Mid-term reviews allow the organization to determine if a change or modification of the implementation strategy is necessary, and they also serve to track the project's progress.

  • Summative Evaluation

This type of evaluation is also known as end-term evaluation or project-completion evaluation, and it is conducted immediately after the completion of a project. Here, the researcher examines the value and outputs of the program within the context of the projected results.

Summative evaluation allows the organization to measure the degree of success of a project. Such results can be shared with stakeholders, target markets, and prospective investors. 

  • Outcome Evaluation

Outcome evaluation is primarily target-audience oriented because it measures the effects of the project, program, or product on the users. This type of evaluation views the outcomes of the project through the lens of the target audience and it often measures changes such as knowledge-improvement, skill acquisition, and increased job efficiency. 

  • Appreciative Inquiry

Appreciative inquiry is a type of evaluation research that pays attention to result-producing approaches. It is predicated on the belief that an organization will grow in whatever direction its stakeholders pay primary attention to such that if all the attention is focused on problems, identifying them would be easy. 

In carrying out appreciative inquiry, the researcher identifies the factors directly responsible for the positive results realized in the course of a project, analyzes the reasons for these results, and intensifies the utilization of these factors.

Evaluation Research Methodology 

There are four major evaluation research methods, namely: output/performance measurement, input measurement, impact/outcomes assessment, and service quality assessment.

  • Output/Performance Measurement

Output measurement is a method employed in evaluation research that shows the results of an activity undertaken by an organization. In other words, performance measurement pays attention to the results achieved by the resources invested in a specific activity or organizational process.

More than investing resources in a project, organizations must be able to track the extent to which these resources have yielded results, and this is where performance measurement comes in. Output measurement allows organizations to pay attention to the effectiveness and impact of a process rather than just the process itself. 

Other key indicators of performance measurement include user-satisfaction, organizational capacity, market penetration, and facility utilization. In carrying out performance measurement, organizations must identify the parameters that are relevant to the process in question, their industry, and the target markets. 
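
A simple sketch of such indicators is shown below. The figures and field names are hypothetical, and the calculations are generic ratios rather than any standard framework.

```python
# An illustrative sketch of turning raw project figures into simple
# output/performance indicators: reach, cost per person, and facility
# utilization. All numbers are hypothetical.
project = {
    "budget_spent": 120_000.0,        # currency units invested
    "people_reached": 4_800,          # users actually served
    "target_population": 6_000,       # users the project intended to serve
    "facility_hours_used": 1_300,
    "facility_hours_available": 1_600,
}

cost_per_person = project["budget_spent"] / project["people_reached"]
reach_rate = project["people_reached"] / project["target_population"]
utilization = project["facility_hours_used"] / project["facility_hours_available"]

print(f"Cost per person reached: {cost_per_person:.2f}")
print(f"Target reach:            {reach_rate:.1%}")
print(f"Facility utilization:    {utilization:.1%}")
```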

5 Performance Evaluation Research Questions Examples

  • What is the cost-effectiveness of this project?
  • What is the overall reach of this project?
  • How would you rate the market penetration of this project?
  • How accessible is the project? 
  • Is this project time-efficient? 


  • Input Measurement

In evaluation research, input measurement entails assessing the resources committed to a project or goal in an organization. This is one of the most common indicators in evaluation research because it allows organizations to track their investments.

The most common indicator of input measurement is the budget, which allows organizations to evaluate and limit expenditure for a project. It is also important to measure non-monetary investments such as human capital (the number of people needed for successful project execution) and production capital.

5 Input Evaluation Research Questions Examples

  • What is the budget for this project?
  • What is the timeline of this process?
  • How many employees have been assigned to this project? 
  • Do we need to purchase new machinery for this project? 
  • How many third-parties are collaborators in this project? 


  • Impact/Outcomes Assessment

In impact assessment, the evaluation researcher focuses on how the product or project affects target markets, both directly and indirectly. Outcomes assessment is somewhat challenging because many times, it is difficult to measure the real-time value and benefits of a project for the users. 

In assessing the impact of a process, the evaluation researcher must pay attention to the improvement recorded by the users as a result of the process or project in question. Hence, it makes sense to focus on cognitive and affective changes, expectation-satisfaction, and similar accomplishments of the users. 
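
One common way to quantify such before/after improvement is a paired comparison with an effect size. The Python sketch below uses simulated scores; a real impact assessment would also need a credible comparison point to attribute the change to the project.

```python
# A minimal sketch of quantifying impact with before/after scores: a paired
# t-test plus Cohen's d for the size of the change. Scores are simulated
# stand-ins for, say, knowledge or job-efficiency measures.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
before = rng.normal(60, 10, size=50)
after = before + rng.normal(5, 6, size=50)   # simulated improvement after the program

t_stat, p_value = stats.ttest_rel(after, before)
diff = after - before
cohens_d = diff.mean() / diff.std(ddof=1)    # effect size of the paired change

print(f"Mean change = {diff.mean():.2f}, t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```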

5 Impact Evaluation Research Questions Examples

  • How has this project affected you? 
  • Has this process affected you positively or negatively?
  • What role did this project play in improving your earning power? 
  • On a scale of 1-10, how excited are you about this project?
  • How has this project improved your mental health? 


  • Service Quality

Service quality is the evaluation research method that accounts for any differences between the expectations of the target markets and their impression of the undertaken project. Hence, it pays attention to the overall service quality assessment carried out by the users. 

It is not uncommon for organizations to build the expectations of target markets as they embark on specific projects. Service quality evaluation allows these organizations to track the extent to which the actual product or service delivery fulfils the expectations. 
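
A rough sketch of gap scoring, loosely inspired by SERVQUAL-style comparisons of expectations and perceptions, is shown below. The items and ratings are invented for illustration.

```python
# A rough sketch of service-quality gap scoring: compare what users expected
# with what they perceived after delivery. Items and 1-10 ratings are hypothetical.
expectations = {"speed": 8.5, "support": 9.0, "ease_of_use": 8.0, "reliability": 9.2}
perceptions  = {"speed": 7.8, "support": 8.6, "ease_of_use": 8.4, "reliability": 8.1}

gaps = {item: perceptions[item] - expectations[item] for item in expectations}
overall_gap = sum(gaps.values()) / len(gaps)

for item, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"{item:12s} gap = {gap:+.1f}")   # negative = service fell short of expectations
print(f"Overall service-quality gap: {overall_gap:+.2f}")
```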

5 Service Quality Evaluation Questions

  • On a scale of 1-10, how satisfied are you with the product?
  • How helpful was our customer service representative?
  • How satisfied are you with the quality of service?
  • How long did it take to resolve the issue at hand?
  • How likely are you to recommend us to your network?


Uses of Evaluation Research 

  • Evaluation research is used by organizations to measure the effectiveness of activities and identify areas needing improvement. Findings from evaluation research are key to project and product advancements and are very influential in helping organizations realize their goals efficiently.     
  • The findings arrived at from evaluation research serve as evidence of the impact of the project embarked on by an organization. This information can be presented to stakeholders, customers, and can also help your organization secure investments for future projects. 
  • Evaluation research helps organizations to justify their use of limited resources and choose the best alternatives. 
  •  It is also useful in pragmatic goal setting and realization. 
  • Evaluation research provides detailed insights into projects embarked on by an organization. Essentially, it allows all stakeholders to understand multiple dimensions of a process, and to determine strengths and weaknesses. 
  • Evaluation research also plays a major role in helping organizations to improve their overall practice and service delivery. This research design allows organizations to weigh existing processes through feedback provided by stakeholders, and this informs better decision making. 
  • Evaluation research is also instrumental to sustainable capacity building. It helps you to analyze demand patterns and determine whether your organization requires more funds, upskilling or improved operations.

Data Collection Techniques Used in Evaluation Research

In gathering useful data for evaluation research, the researcher often combines quantitative and qualitative research methods. Qualitative research methods allow the researcher to gather information relating to intangible values such as market satisfaction and perception.

On the other hand, quantitative methods are used by the evaluation researcher to assess numerical patterns, that is, quantifiable data. These methods help you measure impact and results; although they may not serve for understanding the context of the process. 

Quantitative Methods for Evaluation Research

  • Surveys

A survey is a quantitative method that allows you to gather information about a project from a specific group of people. Surveys are largely context-based and limited to target groups who are asked a set of structured questions in line with the predetermined context.

Surveys usually consist of close-ended questions that allow the evaluative researcher to gain insight into several  variables including market coverage and customer preferences. Surveys can be carried out physically using paper forms or online through data-gathering platforms like Formplus . 

  • Questionnaires

A questionnaire is a common quantitative research instrument deployed in evaluation research. Typically, it is an aggregation of different types of questions or prompts which help the researcher to obtain valuable information from respondents. 

  • Polls

A poll is a common method of opinion-sampling that allows you to weigh the perception of the public about issues that affect them. The best way to achieve accuracy in polling is by conducting them online using platforms like Formplus.

Polls are often structured as Likert questions and the options provided always account for neutrality or indecision. Conducting a poll allows the evaluation researcher to understand the extent to which the product or service satisfies the needs of the users. 
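
Summarizing Likert-style poll responses is straightforward in code. The sketch below tallies invented responses and reports the agreement and neutral shares; it is not tied to any specific polling platform.

```python
# An illustrative sketch of summarizing poll responses on a 5-point Likert scale,
# including the share of neutral or undecided answers. Responses are made up.
from collections import Counter

responses = ["Agree", "Strongly agree", "Neutral", "Agree", "Disagree",
             "Neutral", "Agree", "Strongly agree", "Agree", "Strongly disagree"]

counts = Counter(responses)
total = len(responses)
agree_share = (counts["Agree"] + counts["Strongly agree"]) / total
neutral_share = counts["Neutral"] / total

for option, count in counts.most_common():
    print(f"{option:18s} {count:3d}  ({count / total:.0%})")
print(f"Agreement (top-2 box): {agree_share:.0%}, Neutral/undecided: {neutral_share:.0%}")
```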

Qualitative Methods for Evaluation Research

  • One-on-One Interview

An interview is a structured conversation involving two participants; usually the researcher and the user or a member of the target market. One-on-One interviews can be conducted physically, via the telephone and through video conferencing apps like Zoom and Google Meet. 

  • Focus Groups

A focus group is a research method that involves interacting with a limited number of persons within your target market, who can provide insights on market perceptions and new products. 

  • Qualitative Observation

Qualitative observation is a research method that allows the evaluation researcher to gather useful information from the target audience through a variety of subjective approaches. This method is more extensive than quantitative observation because it deals with a smaller sample size, and it also utilizes inductive analysis. 

  • Case Studies

A case study is a research method that helps the researcher to gain a better understanding of a subject or process. Case studies involve in-depth research into a given subject, to understand its functionalities and successes. 

How to Use the Formplus Online Form Builder for an Evaluation Survey

  • Sign into Formplus

In the Formplus builder, you can easily create your evaluation survey by dragging and dropping preferred fields into your form. To access the Formplus builder, you will need to create an account on Formplus. 

Once you do this, sign in to your account and click on "Create Form" to begin.


  • Edit Form Title

Click on the field provided to input your form title, for example, “Evaluation Research Survey”.


Click on the edit button to edit the form, then:

  • Add fields: Drag and drop preferred form fields into your form from the Formplus builder inputs column. There are several field input options for surveys in the Formplus builder.
  • Edit the fields.
  • Click on "Save".
  • Preview the form.

  • Form Customization

With the form customization options in the form builder, you can easily change the outlook of your form and make it more unique and personalized. Formplus allows you to change your form theme, add background images, and even change the font according to your needs. 


  • Multiple Sharing Options

Formplus offers multiple form-sharing options that enable you to easily share your evaluation survey with survey respondents. You can use the direct social media sharing buttons to share your form link to your organization's social media pages.

You can send out your survey form as email invitations to your research subjects too. If you wish, you can share your form’s QR code or embed it on your organization’s website for easy access. 

Conclusion  

Conducting evaluation research allows organizations to determine the effectiveness of their activities at different phases. This type of research can be carried out using qualitative and quantitative data collection methods including focus groups, observation, telephone and one-on-one interviews, and surveys. 

Online surveys created and administered via data collection platforms like Formplus make it easier for you to gather and process information during evaluation research. With Formplus multiple form sharing options, it is even easier for you to gather useful data from target markets.




Evaluation Research: Definition, Methods and Examples


What is evaluation research?

Evaluation research, also known as program evaluation, refers to a research purpose rather than a specific method. Evaluation research is the systematic assessment of the worth or merit of the time, money, effort, and resources spent in order to achieve a goal.

Evaluation research is closely related to, but slightly different from, more conventional social research. It uses many of the same methods used in traditional social research, but because it takes place within an organizational context, it requires team skills, interpersonal skills, management skills, political awareness, and other skills that social research does not need as much. Evaluation research also requires one to keep in mind the interests of the stakeholders.

Evaluation research is a type of applied research, and so it is intended to have some real-world effect.  Many methods like surveys and experiments can be used to do evaluation research. The process of evaluation research consisting of data analysis and reporting is a rigorous, systematic process that involves collecting data about organizations, processes, projects, services, and/or resources. Evaluation research enhances knowledge and decision-making, and leads to practical applications.


Why do evaluation research?

The common goal of most evaluations is to extract meaningful information from the audience and provide valuable insights to evaluators such as sponsors, donors, client groups, administrators, staff, and other relevant constituencies. Most often, feedback is perceived as valuable if it helps in decision-making. However, evaluation research does not always create an impact that can be applied elsewhere; sometimes it fails to influence short-term decisions. It is equally true that an evaluation might initially seem to have no influence but can have a delayed impact when the situation is more favorable. In spite of this, there is general agreement that the major goal of evaluation research should be to improve decision-making through the systematic utilization of measurable feedback.

Below are some of the benefits of evaluation research:

  • Gain insights about a project or program and its operations

Evaluation research lets you understand what works and what doesn't, where we were, where we are, and where we are headed. You can find out the areas of improvement and identify strengths. This will help you figure out what you need to focus on more and whether there are any threats to your business. You can also find out if there are hidden sectors in the market that are as yet untapped.

  • Improve practice

It is essential to gauge your past performance and understand what went wrong in order to deliver better services to your customers. Unless there is two-way communication, there is no way to improve on what you have to offer. Evaluation research gives your employees and customers an opportunity to express how they feel and whether there is anything they would like to change. It also lets you modify or adopt a practice in a way that increases the chances of success.

  • Assess the effects

After evaluating the efforts, you can see how well you are meeting objectives and targets. Evaluations let you measure if the intended benefits are really reaching the targeted audience and if yes, then how effectively.

  • Build capacity

Evaluations help you to analyze the demand pattern and predict if you will need more funds, upgrade skills and improve the efficiency of operations. It lets you find the gaps in the production to delivery chain and possible ways to fill them.

Methods of evaluation research

All market research methods involve collecting and analyzing data, making decisions about the validity of the information, and deriving relevant inferences from it. Evaluation research comprises planning, conducting, and analyzing the results, which includes the use of data collection techniques and the application of statistical methods.

Some quite popular evaluation methods are input measurement, output or performance measurement, impact or outcomes assessment, quality assessment, process evaluation, benchmarking, standards, cost analysis, organizational effectiveness, program evaluation methods, and LIS-centered methods. There are also a few types of evaluations that do not always result in a meaningful assessment, such as descriptive studies, formative evaluations, and implementation analysis. Evaluation research is more concerned with the information-processing and feedback functions of evaluation.

These methods can be broadly classified as quantitative and qualitative methods.

Quantitative research methods are used to measure anything tangible; their outcome is an answer to questions such as those below.

  • Who was involved?
  • What were the outcomes?
  • What was the price?

The best way to collect quantitative data is through surveys, questionnaires, and polls. You can also create pre-tests and post-tests, review existing documents and databases, or gather clinical data.

Surveys are used to gather the opinions, feedback, or ideas of your employees or customers and consist of various question types. They can be conducted face-to-face, by telephone, by mail, or online. Online surveys do not require human intervention and are far more efficient and practical. You can see the survey results on the dashboard of research tools and dig deeper using filter criteria based on factors such as age, gender, and location. You can also add survey logic such as branching, quotas, chained surveys, and looping to the survey questions and reduce the time needed to both create and respond to the survey. You can also generate a number of reports that involve statistical formulae and present data that can be readily absorbed in meetings.
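
The same kind of segmentation can be done outside a dashboard with a small script. The pandas sketch below uses made-up responses and assumed column names purely to illustrate filtering and grouping survey results.

```python
# A hypothetical sketch of filtering and drilling into survey results with pandas.
# Column names and values are assumptions for illustration only.
import pandas as pd

survey = pd.DataFrame({
    "age_group": ["18-24", "25-34", "18-24", "35-44", "25-34", "35-44"],
    "gender":    ["F", "M", "M", "F", "F", "M"],
    "location":  ["Lagos", "Abuja", "Lagos", "Lagos", "Abuja", "Abuja"],
    "satisfaction": [4, 5, 3, 4, 2, 5],   # 1-5 rating from a survey question
})

# Overall and segmented views of the same metric.
print("Overall mean satisfaction:", survey["satisfaction"].mean())
print(survey.groupby("age_group")["satisfaction"].mean())
print(survey[survey["location"] == "Lagos"]["satisfaction"].describe())
```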


Quantitative data measure the depth and breadth of an initiative, for instance, the number of people who participated in the non-profit event, the number of people who enrolled for a new course at the university. Quantitative data collected before and after a program can show its results and impact.

The accuracy of quantitative data used for evaluation research depends on how well the sample represents the population, the ease of analysis, and the data's consistency. Quantitative methods can fail if the questions are not framed correctly or not distributed to the right audience. Also, quantitative data do not provide an understanding of the context and may not be apt for complex issues.


Qualitative research methods are used where quantitative methods cannot solve the research problem, i.e., they are used to measure intangible values. They answer questions such as:

  • What is the value added?
  • How satisfied are you with our service?
  • How likely are you to recommend us to your friends?
  • What will improve your experience?


Qualitative data are collected through observation, interviews, case studies, and focus groups. The steps for creating a qualitative study involve examining, comparing and contrasting, and understanding patterns. Analysts draw conclusions after identifying themes, clustering similar data, and finally reducing them to points that make sense.
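
A deliberately simplified sketch of the theme-identification step is shown below: it counts pre-defined theme keywords across interview excerpts and groups each excerpt under its dominant theme. Real qualitative coding is far more interpretive; the excerpts, themes, and keywords here are invented.

```python
# A very simplified sketch of theme counting and clustering for qualitative data.
# Excerpts, themes, and keywords are invented for illustration.
from collections import Counter, defaultdict

excerpts = [
    "The training was useful but the schedule was hard to follow",
    "Support from staff made the whole program feel manageable",
    "I struggled with the schedule and almost dropped out",
    "Staff support and follow-up calls kept me motivated",
]
themes = {
    "scheduling": ["schedule", "time", "follow"],
    "staff support": ["support", "staff", "calls"],
}

theme_counts = Counter()
clusters = defaultdict(list)
for excerpt in excerpts:
    text = excerpt.lower()
    scores = {t: sum(text.count(k) for k in kws) for t, kws in themes.items()}
    dominant = max(scores, key=scores.get)          # theme with the most keyword hits
    theme_counts.update({t: s for t, s in scores.items() if s})
    clusters[dominant].append(excerpt)

print(theme_counts)                                 # how often each theme's keywords occur
for theme, items in clusters.items():
    print(theme, "->", len(items), "excerpts")
```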

Observations may help explain behaviors as well as the social context that is generally not discovered by quantitative methods. Observations of behavior and body language can be done by watching a participant, recording audio or video. Structured interviews can be conducted with people alone or in a group under controlled conditions, or they may be asked open-ended qualitative research questions. Qualitative research methods are also used to understand a person's perceptions and motivations.


The strength of this method is that group discussion can provide ideas and stimulate memories, with topics cascading as the discussion occurs. The accuracy of qualitative data depends on how well contextual data explain complex issues and complement quantitative data. It helps answer the "why" and "how" after an answer to the "what" has been obtained. The limitations of qualitative data for evaluation research are that they are subjective, time-consuming, costly, and difficult to analyze and interpret.


Survey software can be used for both types of evaluation research methods. You can use the sample questions above for evaluation research and send a survey in minutes using research software. Using a research tool simplifies the process, from creating a survey and importing contacts to distributing the survey and generating reports that aid in research.

Examples of evaluation research

Evaluation research questions lay the foundation of a successful evaluation. They define the topics that will be evaluated. Keeping evaluation questions ready not only saves time and money, but also makes it easier to decide what data to collect, how to analyze it, and how to report it.

Evaluation research questions must be developed and agreed on in the planning stage; however, ready-made research templates can also be used.

Process evaluation research question examples:

  • How often do you use our product in a day?
  • Were approvals taken from all stakeholders?
  • Can you report the issue from the system?
  • Can you submit the feedback from the system?
  • Was each task done as per the standard operating procedure?
  • What were the barriers to the implementation of each task?
  • Were any improvement areas discovered?

Outcome evaluation research question examples:

  • How satisfied are you with our product?
  • Did the program produce intended outcomes?
  • What were the unintended outcomes?
  • Has the program increased the knowledge of participants?
  • Were the participants of the program employable before the course started?
  • Do participants of the program have the skills to find a job after the course ended?
  • Is the knowledge of participants better compared to those who did not participate in the program?



  • Library Trends

Evaluation Research: An Overview

  • Ronald R. Powell
  • Johns Hopkins University Press
  • Volume 55, Number 1, Summer 2006
  • pp. 102-120
  • DOI: 10.1353/lib.2006.0050


Evaluation research can be defined as a type of study that uses standard social research methods for evaluative purposes, as a specific research methodology, and as an assessment process that employs special techniques unique to the evaluation of social programs. After the reasons for conducting evaluation research are discussed, the general principles and types are reviewed. Several evaluation methods are then presented, including input measurement, output/performance measurement, impact/outcomes assessment, service quality assessment, process evaluation, benchmarking, standards, quantitative methods, qualitative methods, cost analysis, organizational effectiveness, program evaluation methods, and LIS-centered methods. Other aspects of evaluation research considered are the steps of planning and conducting an evaluation study and the measurement process, including the gathering of statistics and the use of data collection techniques. The process of data analysis and the evaluation report are also given attention. It is concluded that evaluation research should be a rigorous, systematic process that involves collecting data about organizations, processes, programs, services, and/or resources. Evaluation research should enhance knowledge and decision making and lead to practical applications.


Learn how to develop a ToC-based evaluation


  • Evaluation Goals and Planning
  • Identify ToC-Based Questions
  • Choose an Evaluation Design
  • Select Measures

Choose an Appropriate Evaluation Design

Once you’ve identified your questions, you can select an appropriate evaluation design. Evaluation design refers to the overall approach to gathering information or data to answer specific research questions.

There is a spectrum of research design options—ranging from small-scale feasibility studies (sometimes called road tests) to larger-scale studies that use advanced scientific methodology. Each design option is suited to answer particular research questions.

The appropriate design for a specific project depends on what the project team hopes to learn from a particular implementation and evaluation cycle. Generally, as projects and programs move from small feasibility tests to later stage studies, methodological rigor increases.

[Figure: Evaluation design studies]

In other words, you’ll use more advanced tools and processes that allow you to be more confident in your results. Sample sizes get larger, the number of measurement tools increases, and assessments are often standardized and norm-referenced (designed to compare an individual’s score to a particular population).
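
As a small illustration of norm-referencing, the sketch below converts a raw score to a z-score and percentile against an assumed reference population (mean 100, standard deviation 15). These norms are illustrative, not from any specific instrument.

```python
# A brief sketch of norm-referenced scoring: convert a raw assessment score to
# a z-score and percentile relative to an assumed reference population.
from scipy.stats import norm

population_mean = 100.0     # e.g. an assessment normed at 100 (assumed)
population_sd = 15.0
raw_score = 112.0           # one participant's score

z = (raw_score - population_mean) / population_sd
percentile = norm.cdf(z) * 100

print(f"z = {z:.2f}, percentile = {percentile:.0f}")  # z = 0.80, about the 79th percentile
```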

In the IDEAS Framework, evaluation is an ongoing, iterative process. The idea is to investigate your ToC one domain at a time, beginning with program strategies and gradually expanding your focus until you’re ready to test the whole theory. Returning to the domino metaphor, we want to see if each domino in the chain is falling the way we expect it to.

Feasibility Study

Begin by asking:

“Are the program strategies feasible and acceptable?”

If you’re designing a program from scratch and implementing it for the first time, you’ll almost always need to begin by establishing feasibility and acceptability. However, suppose you’ve been implementing a program for some time, even without a formal evaluation. In that case, you may have already established feasibility and acceptability simply by demonstrating that the program is possible to implement and that participants feel it’s a good fit. If that’s the case, you might be able to skip over this step, so to speak, and turn your attention to the impact on targets, which we’ll go over in more detail below. On the other hand, for a long-standing program being adapted for a new context or population, you may need to revisit its feasibility and acceptability.

The appropriate evaluation design for answering questions about feasibility and acceptability is typically a feasibility study with a relatively small sample and a simple data collection process.

In this phase, you would collect data on program strategies, including:

  • Fidelity data (is the program being implemented as intended?)
  • Feedback from participants and program staff (through surveys, focus groups, and interviews)
  • Information about recruitment and retention
  • Participant demographics (to learn about who you’re serving and whether you’re serving who you intended to serve)

Through fast-cycle iteration, you can use what you learn from a feasibility study to improve the program strategies.
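
The sketch below illustrates the kind of simple feasibility summary this phase produces: fidelity, retention, and an acceptability rating. All counts and ratings are hypothetical.

```python
# An illustrative sketch of summarizing feasibility and acceptability data
# from a small feasibility study. All counts and ratings are hypothetical.
sessions_planned = 12
sessions_delivered_as_intended = 10                 # from fidelity checklists
enrolled = 30
completed = 24                                      # participants active at the end
acceptability_ratings = [4, 5, 3, 4, 4, 5, 2, 4]    # e.g. 1-5 "good fit for me" ratings

fidelity = sessions_delivered_as_intended / sessions_planned
retention = completed / enrolled
mean_acceptability = sum(acceptability_ratings) / len(acceptability_ratings)

print(f"Fidelity:      {fidelity:.0%}")
print(f"Retention:     {retention:.0%}")
print(f"Acceptability: {mean_acceptability:.1f} / 5")
```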

Pilot Study

Once you have evidence to suggest that your strategies are feasible and acceptable, you can take the next step and turn your attention to the impact on targets by asking:

“Is there evidence to suggest that the targets are changing in the anticipated direction?”

The appropriate evaluation design to begin to investigate the impact on targets is usually a pilot study. With a somewhat larger sample and more complex design, pilot studies often gather information from participants before and after they participate in the program. In this phase, you would collect data on program strategies and targets. Note that in each phase, the focus of your evaluation expands to include more domains of your ToC. In a pilot study, in addition to data on targets (your primary focus), you’ll want to gather information on strategies to continue looking at feasibility and acceptability.

In this phase, you would collect data on:

  • Program strategies (to continue monitoring feasibility and acceptability)
  • Targets (your primary focus)

Later Stage Study

Once you’ve established feasibility and acceptability and have evidence to suggest your targets are changing in the expected direction, you’re ready to ask:

“Is there evidence to support our full theory of change?”

In other words, you’ll simultaneously ask:

  • Do our strategies continue to be feasible and acceptable?
  • Are the targets changing in the anticipated direction?
  • Are the outcomes changing in the anticipated direction?
  • Do the moderators help explain variability in impact?

The appropriate evaluation design for investigating your entire theory of change is a later-stage study, with a larger sample and more sophisticated study design, often including some kind of control or comparison group. In this phase, you would collect data on all domains of your ToC: strategies, targets, outcomes, and moderators.
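
A minimal sketch of the comparison at the core of such a design is shown below: gains on a target measure for program participants versus a comparison group. The data are simulated, and a real later-stage study would use a more sophisticated model covering outcomes and moderators as well.

```python
# A minimal sketch of a later-stage comparison: do participants' gains on a
# target measure differ from a comparison group's gains? Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
program_pre = rng.normal(50, 10, 60)
program_post = program_pre + rng.normal(6, 5, 60)   # simulated program effect
control_pre = rng.normal(50, 10, 60)
control_post = control_pre + rng.normal(1, 5, 60)   # smaller change without the program

program_gain = program_post - program_pre
control_gain = control_post - control_pre
t_stat, p_value = stats.ttest_ind(program_gain, control_gain, equal_var=False)

print(f"Program gain = {program_gain.mean():.1f}, control gain = {control_gain.mean():.1f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```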

Common Questions

There may be cases where it does make sense to skip the earlier steps and move right to a later-stage study. But in most cases, investigating your ToC one domain at a time has several benefits. First, later-stage studies are typically costly in terms of time and money. By starting with a relatively small and low-cost feasibility study and working toward more rigorous evaluation, you can ensure that time and money will be well spent on a program that’s more likely to be effective. If you were to skip ahead to a later-stage study, you might be disappointed to find that your outcomes aren’t changing because of problems with feasibility and acceptability, or because your targets aren’t changing (or aren’t changing enough).

Many programs do gather data on outcomes without looking at strategies and targets. One challenge with that approach is that if you don’t see evidence of impact on program outcomes, you won’t be able to know why that’s the case. Was there a problem with feasibility, and the people implementing the program weren’t able to deliver the program as it was intended? Was there an issue with acceptability, and participants tended to skip sessions or drop out of the program early? Maybe the implementation went smoothly, but the strategies just weren’t effective at changing your targets, and that’s where the causal chain broke down. Unless you gather data on strategies and targets, it’s hard to know what went wrong and what you can do to improve the program’s effectiveness.  

Evaluation Research

  • First Online: 13 April 2022

Yanmei Li and Sumei Zhang

This chapter focuses on evaluation research, which is frequently used in planning and public policy. Evaluation research is often divided into two phases. The first phase is called ex-ante evaluation research and the second phase is ex-post evaluation research. Ex-ante evaluation research often refers to feasibility studies conducted prior to implementing a planning project or activity, which investigate the regulatory, financial, market, and political feasibility of implementing the proposed projects and activities. Data sources and tools for feasibility studies are identified in this chapter. The second phase of evaluation research is ex-post research, which evaluates the outcomes after implementing a planning project or activity. Before-after comparisons, experimental or quasi-experimental methods, goal achievement matrices, and other measurements are often used in ex-post policy and planning evaluation research. The chapter also explains the differences among cost-benefit analysis, cost-effectiveness analysis, and cost-revenue analysis.
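
To illustrate the financial side of an ex-ante feasibility study, the sketch below computes a net present value and benefit-cost ratio for a hypothetical project; the cash flows and discount rate are invented, and cost-effectiveness and cost-revenue analyses would use analogous but distinct ratios.

```python
# A compact sketch of the ex-ante financial side of feasibility: net present
# value and a benefit-cost ratio for a proposed project. Cash flows and the
# discount rate are invented for illustration.
initial_cost = 1_000_000.0
annual_benefits = [250_000.0, 300_000.0, 320_000.0, 340_000.0, 350_000.0]
discount_rate = 0.05

pv_benefits = sum(b / (1 + discount_rate) ** (t + 1) for t, b in enumerate(annual_benefits))
npv = pv_benefits - initial_cost
bcr = pv_benefits / initial_cost

print(f"Present value of benefits: {pv_benefits:,.0f}")
print(f"Net present value:         {npv:,.0f}")   # positive NPV suggests financial feasibility
print(f"Benefit-cost ratio:        {bcr:.2f}")    # ratio > 1 suggests benefits exceed costs
```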

This is a preview of subscription content, log in via an institution to check access.

Access this chapter

  • Available as EPUB and PDF
  • Read on any device
  • Instant download
  • Own it forever
  • Compact, lightweight edition
  • Dispatched in 3 to 5 business days
  • Free shipping worldwide - see info
  • Durable hardcover edition

Tax calculation will be finalised at checkout

Purchases are for personal use only

Institutional subscriptions




About this chapter

Li, Y., Zhang, S. (2022). Evaluation Research. In: Applied Research Methods in Urban and Regional Planning. Springer, Cham. https://doi.org/10.1007/978-3-030-93574-0_11



Research Project Evaluation—Learnings from the PATHWAYS Project Experience

Aleksander Galas 1, Aleksandra Pilat 1, Matilde Leonardi 2 and Beata Tobiasz-Adamczyk

1 Epidemiology and Preventive Medicine, Jagiellonian University Medical College, 31-034 Krakow, Poland; [email protected] (A.G.); [email protected] (A.P.)

2 Fondazione IRCCS, Neurological Institute Carlo Besta, 20-133 Milano, Italy; [email protected]

Background: Every research project faces challenges regarding how to achieve its goals in a timely and effective manner. The purpose of this paper is to present a project evaluation methodology gathered during the implementation of the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector (the EU PATHWAYS) project. The PATHWAYS project involved multiple countries and multi-cultural aspects of re/integrating chronically ill patients into labor markets in different countries. This paper describes key project evaluation issues, including: (1) purposes, (2) advisability, (3) tools, (4) implementation, and (5) possible benefits, and presents the advantages of continuous monitoring. Methods: A project evaluation tool was used to assess structure and resources; process, management, and communication; and achievements and outcomes. The project used a mixed evaluation approach that included a Strengths (S), Weaknesses (W), Opportunities (O), and Threats (T) (SWOT) analysis. Results: A methodology for the evaluation of longitudinal EU projects is described. The evaluation process highlighted strengths, such as good coordination and communication between project partners, as well as weaknesses and key issues, such as the need for a shared glossary covering the areas investigated by the project, problems related to the involvement of stakeholders from outside the project, and issues with timing. Numerical SWOT analysis showed improvement in project performance over time. The proportion of project partners participating in the evaluation varied from 100% to 83.3%. Conclusions: There is a need for the implementation of a structured evaluation process in multidisciplinary projects involving different stakeholders in diverse socio-environmental and political conditions. Based on the PATHWAYS experience, a clear monitoring methodology is suggested as essential in every multidisciplinary research project.

1. Introduction

Over the last few decades, a strong discussion on the role of the evaluation process in research has developed, especially in interdisciplinary or multidimensional research [1, 2, 3, 4, 5]. Despite existing concepts and definitions, the importance of evaluation is often underestimated. These dismissive attitudes towards the evaluation process, along with a lack of practical knowledge in this area, underline the need to explain why research evaluation is necessary and how it can improve the quality of research. A firm definition of 'evaluation' can link the purpose of research, general questions associated with methodological issues, expected results, and the implementation of results to specific strategies or practices.

Attention paid to projects’ evaluation shows two concurrent lines of thought in this area. The first is strongly associated with total quality management practices and operational performance; the second focuses on the evaluation processes needed for public health research and interventions [ 6 , 7 ].

The design and implementation of process evaluations in fields other than public health have been described as multidimensional. According to Baranowski and Stables, process evaluation consists of eleven components: recruitment (recruiting potential participants for the corresponding parts of the program); maintenance (keeping participants involved in the program and data collection); context (aspects of the environment of the intervention); resources (the materials necessary to attain project goals); implementation (the extent to which the program is implemented as designed); reach (the extent to which contacts are received by the targeted group); barriers (problems encountered in reaching participants); exposure (the extent to which participants view or read the material); initial use (the extent to which a participant conducts the activities specified in the materials); continued use (the extent to which a participant continues to do any of the activities); and contamination (the extent to which participants receive interventions from outside the program and the extent to which the control group receives the treatment) [8].
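As a rough illustration of how a project team might operationalize these components, the sketch below encodes them as a coverage checklist. The dictionary keys follow the component names above, but the guiding questions are our own paraphrases, not Baranowski and Stables' wording:

```python
# Hypothetical checklist built from the eleven process-evaluation components
# summarized above; the guiding questions are illustrative paraphrases.
PROCESS_COMPONENTS = {
    "recruitment":    "How were participants recruited for each part of the program?",
    "maintenance":    "How were participants kept in the program and in data collection?",
    "context":        "Which aspects of the intervention environment were recorded?",
    "resources":      "Were the materials needed to attain project goals available?",
    "implementation": "To what extent was the program delivered as designed?",
    "reach":          "What share of the target group received the planned contacts?",
    "barriers":       "What problems were encountered in reaching participants?",
    "exposure":       "To what extent did participants view or read the materials?",
    "initial_use":    "Did participants carry out the activities specified in the materials?",
    "continued_use":  "Did participants keep doing any of the activities?",
    "contamination":  "Did participants receive outside interventions, or controls the treatment?",
}

def coverage_report(planned):
    """Print which components an evaluation plan covers and which it misses."""
    for name, question in PROCESS_COMPONENTS.items():
        status = "covered" if name in planned else "MISSING"
        print(f"{status:8} {name:15} {question}")

# Example: a plan that so far addresses only four of the eleven components.
coverage_report({"recruitment", "implementation", "reach", "barriers"})
```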

There are two main factors shaping the evaluation process: (1) what is evaluated (whether the evaluation revolves around the project itself or around outcomes that are external to the project), and (2) who the evaluator is (whether the evaluator is internal or external to the project team and program). Although there are several gaps in current knowledge about the evaluation of external outcomes, the use of a formal evaluation process for a research project itself is very rare.

To define a clear evaluation and monitoring methodology, we performed several steps. The purpose of this article is to present experiences from the project evaluation process implemented in the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector (the EU PATHWAYS) project. The manuscript describes key project evaluation issues, namely: (1) purposes, (2) advisability, (3) tools, (4) implementation, and (5) possible benefits. The PATHWAYS project can be understood as a specific case study, presented through a multidimensional approach; based on the experience associated with its general evaluation, we can develop patterns of good practice that can be used in other projects.

1.1. Theoretical Framework

The first step was to clearly define what an evaluation strategy or methodology is. The term evaluation is defined by the Cambridge Dictionary as the process of judging something's quality, importance, or value, or a report that includes this information [9], and in a similar way by the Oxford Dictionary as the making of a judgment about the amount, number, or value of something [10]. In everyday use, evaluation (or assessment) is frequently understood as being associated with the end of an activity rather than with the process. Stufflebeam, in his monograph, defines evaluation as a study designed and conducted to assist some audience to assess an object's merit and worth. Considering this definition, there are four categories of evaluation approaches: (1) pseudo-evaluation; (2) questions- and/or methods-oriented evaluation; (3) improvement/accountability evaluation; and (4) social agenda/advocacy evaluation [11].

In brief, considering Stufflebeam's classification, pseudo-evaluations promote invalid or incomplete findings. This happens when findings are selectively released or falsified. There are two pseudo-evaluation types proposed by Stufflebeam: (1) public relations-inspired studies (studies which do not seek the truth but gather information to solicit positive impressions of a program), and (2) politically controlled studies (studies which seek the truth but inappropriately control the release of findings to right-to-know audiences).

The questions- and/or methods-oriented approach uses rather narrow questions, oriented toward the operational objectives of the project. Question-oriented evaluations use specific questions derived from accountability requirements or from experts' opinions of what is important, while method-oriented evaluations favor the technical qualities of the program or process. The general concept behind both is that it is better to ask a few pointed questions well in order to get information on program merit and worth [11]. This group includes the following evaluation types:

  • objectives-based studies: typically focus on whether the program objectives have been achieved, from an internal perspective (by project executors);
  • accountability studies, particularly payment-by-results studies: stress the importance of obtaining an external, impartial perspective;
  • objective testing programs: use standardized, multiple-choice, norm-referenced tests;
  • outcome evaluation as value-added assessment: a recurrent evaluation linked with hierarchical gain score analysis;
  • performance testing: incorporates the assessment of performance (through written or spoken answers, or psychomotor presentations) and skills;
  • experimental studies: program evaluators perform a controlled experiment and contrast the observed outcomes;
  • management information systems: provide the information managers need to conduct their programs;
  • benefit-cost analysis: mainly sets of quantitative procedures used to assess the full cost of a program and its returns;
  • clarification hearing: a trial-like evaluation in which role-playing evaluators competitively present both a damning prosecution of a program (arguing that it failed) and a defense of the program (arguing that it succeeded); a judge then hears the arguments within the framework of a jury trial and controls the proceedings according to advance agreements on rules of evidence and trial procedure;
  • case study evaluation: a focused, in-depth description, analysis, and synthesis of a particular program;
  • criticism and connoisseurship: experts in a given area carry out an in-depth analysis and evaluation that could not be done in any other way;
  • program theory-based evaluation: builds on a validated theory of how programs of a certain type operate within similar settings to produce outcomes (e.g., the Health Belief Model, the PRECEDE-PROCEED model (Predisposing, Reinforcing and Enabling Constructs in Educational Diagnosis and Evaluation; Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development) proposed by L. W. Green, or the Stages of Change theory by Prochaska);
  • mixed-method studies: include different qualitative and quantitative methods.

The third group of methods considered in evaluation theory comprises improvement/accountability-oriented evaluation approaches. These include: (a) decision/accountability-oriented studies, which emphasize that evaluation should be used proactively to help improve a program and retroactively to assess its merit and worth; (b) consumer-oriented studies, wherein the evaluator is a surrogate consumer who draws direct conclusions about the evaluated program; and (c) the accreditation/certification approach, an accreditation study to verify whether certification requirements have been or are being fulfilled.

Finally, the social agenda/advocacy evaluation approach focuses on assessing the difference that the program was intended to make. The evaluation process in this type of approach works in a loop, starting with an independent evaluator who provides counsel and advice towards understanding, judging, and improving programs, so that evaluations serve the client's needs. This group includes: (a) client-centered studies (or responsive evaluation), in which evaluators work with, and for, the support of diverse client groups; (b) constructivist evaluation, in which evaluators are authorized and expected to maneuver the evaluation to emancipate and empower involved and affected disenfranchised people; (c) deliberative democratic evaluation, in which evaluators work within an explicit democratic framework and uphold democratic principles in reaching defensible conclusions; and (d) utilization-focused evaluation, which is explicitly geared to ensure that program evaluations make an impact.

1.2. Implementation of the Evaluation Process in the EU PATHWAYS Project

The idea of including the evaluation process as an integrated goal of the PATHWAYS project was determined by several factors relating to the main goal of the project, defined as an intervention addressing existing attitudes towards occupational mobility and the reintegration into the labor market of working-age people suffering from specific chronic conditions in 12 European countries. Participating countries had different cultural and social backgrounds and different pervasive attitudes towards people suffering from chronic conditions.

The components of evaluation processes discussed previously proved helpful when planning the PATHWAYS evaluation, especially in relation to different aspects of environmental contexts. The PATHWAYS project focused on chronic conditions including mental health issues, neurological diseases, metabolic disorders, musculoskeletal disorders, respiratory diseases, cardiovascular diseases, and cancer. Within this group, the project found a hierarchy of patients and of social and medical statuses defined by the nature of their health conditions.

According to the project's monitoring and evaluation plan, the evaluation process followed the specific challenges defined by the project's broad and specific goals and monitored the progress of implementing key components by assessing the effectiveness of consecutive steps and identifying conditions supporting contextual effectiveness. Another significant aim of the evaluation component of the PATHWAYS project was to recognize the value and effectiveness of using a purposely developed methodology consisting of a wide set of quantitative and qualitative methods. The triangulation of methods was very useful and provided the opportunity to develop a multidimensional approach to the project [12].

From the theoretical framework, special attention was paid to the explanation of medical, cultural, social and institutional barriers influencing the chance of employment of chronically ill persons in relation to the characteristics of the participating countries.

Levels of satisfaction with project participation, as well as with expected or achieved results and coping with challenges on local–community levels and macro-social levels, were another source of evaluation.

In the PATHWAYS project, the evaluation was implemented for an unusual purpose. A quasi-experimental design was developed to assess different aspects of the multidimensional project, which used a variety of methods (systematic literature review, content analysis of existing documents, acts, data and reports, surveys at different country levels, and in-depth interviews) in the different phases of its 3 years. The evaluation monitored each stage of the project and focused on process implementation, with the goal of improving every step of the project. The evaluation process made it possible to perform critical assessments and in-depth analyses of the benefits and shortcomings of each specific phase of the project.

The purpose of the evaluation was to monitor the main steps of the project, including the expectations associated with the multidimensional methodological approach used by the PATHWAYS partners, and to improve communication between partners from different professional and methodological backgrounds involved in all phases of the project, so as to avoid errors in understanding the specific steps as well as the main goals.

2. Materials and Methods

This paper describes the methodology and results gathered during the implementation of Work Package 3, Evaluation, of the Participation to Healthy Workplaces and Inclusive Strategies in the Work Sector (PATHWAYS) project. The work package was intended to maintain internal control over the course of the project so that tasks, milestones, and purposes were fulfilled on time by all project partners.

2.1. Participants

The project consortium involved 12 partners from 10 different European countries. These included academics (representing cross-disciplinary research on socio-environmental determinants of health, as well as clinicians), institutions actively working for the integration of people with chronic and mental health problems and disability, educational bodies (working in the area of disability and focusing on inclusive education), national health institutes (for the rehabilitation of patients with functional and workplace impairments), an institution for inter-professional rehabilitation at a country level (coordinating medical, social, educational, pre-vocational and vocational rehabilitation), and a company providing patient-centered services (in neurorehabilitation). All partners contributed extensive knowledge and high-level expertise in the area of interest, and all endorsed the World Health Organization's (WHO) International Classification of Functioning, Disability and Health (ICF) and the biopsychosocial model of health and functioning. The consortium was created based on the following criteria:

  • vision, mission, and activities in the area of project purposes,
  • high level of experience in the area (supported by publications) and in doing research (being involved in international projects, collaboration with the coordinator and/or other partners in the past),
  • being able to get broad geographical, cultural and socio-political representation from EU countries,
  • representing different stakeholder types in the area.

2.2. Project Evaluation Tool

The tool development process involved the following steps:

  • (1) Review definitions of 'evaluation' and adopt the one that best fits the reality of the public health research area;
  • (2) Review evaluation approaches and decide on the content which should be applicable in the public health research;
  • (3) Create items to be used in the evaluation tool;
  • (4) Decide on implementation timing.

According to the PATHWAYS project protocol, an evaluation tool for the internal project evaluation was required to collect information about: (1) structure and resources; (2) process, management and communication; (3) achievements and/or outcomes; and (4) SWOT analysis. A mixed methods approach was chosen. The specific purposes and approaches of the evaluation process are presented in Table 1.

Table 1. Evaluation purposes and approaches adopted in the PATHWAYS project. (* Open-ended questions are not counted.)

The tool was prepared in several steps. The section assessing structure and resources contained questions about the number of partners, professional competences, assigned roles, human, financial and time resources, defined activities and tasks, and the communication plan. The second section, on process, management and communication, collected information about the coordination process, the level of consensus, the quality of communication among coordinators, work package leaders, and partners, whether the project was carried out according to plan, the involvement of target groups, the usefulness of developed materials, and any difficulties in project realization. Finally, the section on achievements and outcomes gathered information about project-specific activities such as public-awareness raising, stakeholder participation and involvement, whether planned outcomes (e.g., milestones) were achieved, dissemination activities, and opinions on whether project outcomes met the needs of the target groups. Additionally, it was decided to implement SWOT analysis as part of the evaluation process. SWOT analysis derives its name from the evaluation of the Strengths (S), Weaknesses (W), Opportunities (O), and Threats (T) faced by a company, industry or, in this case, project consortium. SWOT analysis comes from the business world and was developed in the 1960s at Harvard Business School as a tool for improving management strategies among companies, institutions, or organizations [13, 14]. However, in recent years, SWOT analysis has been adapted in the context of research to improve programs or projects.

For a better understanding of SWOT analysis, it is important to highlight the internal nature of strengths and weaknesses, which are considered controllable. Strengths refer to factors inside the project, such as the capabilities and competences of partners, whereas weaknesses refer to aspects that need improvement, such as resources. Conversely, opportunities and threats are considered outside factors and are uncontrollable [15]. Opportunities are maximized to fit the organization's values and resources, and threats are the factors that the organization is not well equipped to deal with [9].

The PATHWAYS project members participated in SWOT analyses every three months. They answered four open questions about the strengths, weaknesses, opportunities, and threats identified in the evaluated period (the preceding three months). They were then asked to rate those items on a 10-point scale. The sample included results from nine evaluation periods from partners in ten different countries.
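To illustrate how such 0-10 ratings can be aggregated across waves, here is a minimal sketch; the partner scores below are invented (the actual PATHWAYS results are summarized in Figure 5), and the data layout (dimension by wave by partner) is an assumption:

```python
from statistics import mean

# Invented 0-10 ratings from three partners over three evaluation waves;
# only the aggregation logic is of interest here, not the numbers.
swot_scores = {
    "strengths":     {"W1": [6, 7, 5], "W2": [7, 7, 6], "W3": [8, 8, 7]},
    "weaknesses":    {"W1": [5, 6, 5], "W2": [4, 5, 4], "W3": [3, 4, 3]},
    "opportunities": {"W1": [5, 6, 6], "W2": [6, 7, 6], "W3": [7, 8, 7]},
    "threats":       {"W1": [4, 5, 4], "W2": [4, 4, 3], "W3": [3, 3, 2]},
}

# Average each SWOT dimension per wave to trace how perceptions change over time.
for dimension, waves in swot_scores.items():
    trend = {wave: round(mean(ratings), 1) for wave, ratings in waves.items()}
    print(f"{dimension:13} {trend}")
```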

The tool for the internal evaluation of the PATHWAYS project is presented in Appendix A.

2.3. Tool Implementation and Data Collection

The PATHWAYS on-going evaluation took place at three-month intervals. It consisted of on-line surveys, and every partner assigned a representative who was expected to have good knowledge of the project's progress. The structure and resources were assessed only twice, at the beginning (3rd month) and at the end (36th month) of the project. The process, management, and communication questions, as well as the SWOT analysis questions, were asked every three months. The achievements and outcomes questions started after the first year of implementation (i.e., from the 15th month), and some items in this section (results achieved, whether project outcomes met the needs of the target groups, and regular publications) were administered only at the end of the project (36th month).
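The collection schedule described above can be written down as a small configuration table. The sketch below encodes the month numbers stated in this section; the names and the representation itself are our own and purely illustrative:

```python
# Evaluation schedule as described in Section 2.3 (representation is illustrative).
EVALUATION_SCHEDULE = {
    "structure_and_resources":          [3, 36],                 # beginning and end only
    "process_management_communication": list(range(3, 37, 3)),   # every three months
    "swot_analysis":                    list(range(3, 37, 3)),   # every three months
    "achievements_and_outcomes":        list(range(15, 37, 3)),  # from the 15th month onward
    "end_of_project_items":             [36],                    # e.g., outcomes vs. target-group needs
}

def sections_due(month):
    """Return the questionnaire sections scheduled for a given project month."""
    return [name for name, months in EVALUATION_SCHEDULE.items() if month in months]

print(sections_due(3))    # first wave
print(sections_due(15))   # achievements and outcomes questions begin
print(sections_due(36))   # final wave, including end-of-project items
```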

2.4. Evaluation Team

The evaluation team was created from professionals with different backgrounds and extensive experience in research methodology, sociology, social research methods and public health.

The project started in 2015 and was carried out over 36 months. There were 12 partners in the PATHWAYS project, representing Austria, Belgium, the Czech Republic, Germany, Greece, Italy, Norway, Poland, Slovenia, and Spain, plus a European organization. The on-line questionnaire was sent to all partners one week after the specified period ended, and project partners had at least 2 weeks to complete the survey. Eleven rounds of the survey were performed.

The participation rates in the consecutive evaluation surveys were 11 (91.7%), 12 (100%), 12 (100%), 11 (91.7%), 10 (83.3%), 11 (91.7%), 11 (91.7%), 10 (83.3%), and 11 (91.7%) up to the project end. Overall, the surveys rarely covered the whole group, which may have resulted from the lack of any mechanism at the project level to compel answers to the evaluation questions.

3.1. Evaluation Results Considering Structure and Resources (3rd Month Only)

A total of 11 out of 12 project partners participated in the first evaluation survey. The structure and resources of the project were not assessed by the project coordinator and, as such, the results represent the opinions of the 10 other participating partners. The majority of respondents rated the project consortium as having at least adequate professional competencies. In total, eight to nine project partners found human, financial, and time resources 'just right' and the communication plan 'clear'. More concerns were observed regarding the clarity of tasks, what was expected from each partner, and how specific project activities should be, or were, assigned.

3.2. Evaluation Results Considering Process, Management and Communication

Project coordination and the communication processes (with the coordinator, between WP leaders, and between individual partners/researchers) were assessed as 'good' or 'very good' throughout the whole period. There were some issues, however, when it came to the realization of specific goals, deliverables, or milestones of the project.

Given the broad scope of the project and the range of participating partner countries, we created a glossary to unify the common terms used in the project. This was a challenge, as several discussions and inconsistencies about the concepts arose during project implementation (Figure 1).

Figure 1. Partners' opinions about the consensus around terms (shared glossary) in the project consortium across evaluation waves (W1 = after the 3-month realization period, with subsequent waves at 3-month intervals).

Other issues that appeared during project implementation were the recruitment of, involvement with, and cooperation with stakeholders. There was a range of groups to be contacted and investigated during the project, including individual patients suffering from chronic conditions, patients' advocacy groups, national governmental organizations, policy makers, employers, and international organizations. During the project, the interest and involvement of these groups was quite low and difficult to secure, which led to some delays in project implementation (Figure 2). This was the main reason for the lower percentages of "what was expected to be done in designated periods of project realization time". The issue was monitored and eliminated through the intensification of activities in this area (Figure 3).

Figure 2. Partners' reports on whether the project had been carried out according to the plan (a) and on any problems experienced in the process of project realization (b) (W1 = after the 3-month realization period, with subsequent waves at 3-month intervals).

Figure 3. Partners' approximate estimates (in percent) of project plan implementation (what had been done according to the plan) (a) and of the involvement of target groups (b) (W1 = after the 3-month realization period, with subsequent waves at 3-month intervals).

3.3. Evaluation Results Considering Achievements and Outcomes

The evaluation process was prepared to monitor project milestones and deliverables. One of the PATHWAYS project goals was to raise public awareness surrounding the reintegration of chronically ill people into the labor market. This was assessed subjectively by the cooperating partners, and only half (six) felt they had achieved complete success on that measure. The evaluation process monitored planned outcomes relating to: (1) the determination of strategies for awareness-raising activities, (2) the assessment of employment-related needs, and (3) the development of guidelines (as planned by the project). The majority of partners completely fulfilled these tasks. Furthermore, the dissemination process was also carried out according to the plan.

3.4. Evaluation Results from SWOT

3.4.1. Strengths

Amongst the key issues identified across all nine evaluated periods ( Figure 4 ), the “strong consortium” was highlighted as the most important strength of the PATHWAYS project. The most common arguments for this assessment were the coordinator’s experience in international projects, involvement of interdisciplinary experts who could guarantee a holistic approach to the subject, and a highly motivated team. This was followed by the uniqueness of the topic. Project implementers pointed to the relevance of the analyzed issues, which are consistent with social needs. They also highlighted that this topic concerned an unexplored area in employment policy. The interdisciplinary and international approach was also emphasized. According to the project implementers, the international approach allowed mapping of vocational and prevocational processes among patients with chronic conditions and disability throughout Europe. The interdisciplinary approach, on the other hand, enabled researchers to create a holistic framework that stimulates innovation by thinking across boundaries of particular disciplines—especially as the PATHWAYS project brings together health scientists from diverse fields (physicians, psychologists, medical sociologists, etc.) from ten European countries. This interdisciplinary approach is also supported by the methodology, which is based on a mixed-method approach (qualitative and quantitative data). The involvement of an advocacy group was another strength identified by the project implementers. It was stressed that the involvement of different types of stakeholders increased validity and social triangulation. It was also assumed that it would allow for the integration of relevant stakeholders. The last strength, the usefulness of results, was identified only in the last two evaluation waves, when the first results had been measured.

Figure 4. SWOT analysis: a summary of the main issues reported by PATHWAYS project partners.

3.4.2. Weaknesses

The survey respondents agreed that the main weaknesses of the project were time and human resources. The subject of the PATHWAYS project turned out to be very broad, and the implementers therefore pointed to insufficient human resources and inadequate time for the implementation of individual tasks, as well as of the project overall. This was related to the broad categories of chronic diseases chosen for analysis in the project. On the one hand, the implementers complained about the insufficient number of chronic diseases taken into account in the project; on the other hand, they admitted that it was not possible to cover all chronic diseases in detail. The scope of the project was reported as another weakness. In the successive waves of evaluation, the implementers increasingly pointed out that it was hard to cover all relevant topics.

Nevertheless, some of the major weaknesses reported during the project evaluation were methodological. Respondents pointed to problems with implementing tasks on a regular basis: for example, survey respondents highlighted the need for more open questions in the survey, that the questionnaire was too long or too complicated, and that the tools were not adjusted for relevance in the national context. Another issue was that the working language was English, but all tools and survey questionnaires needed to be translated into different languages, and this was not always considered by the Commission in terms of timing and resources. This lesson could prove useful for further projects, as well as for future collaborations.

Difficulties in involving stakeholders were reported, especially for tasks that required their active commitment, such as participation in in-depth interviews or online questionnaires. Interestingly, the international approach was considered both a strength and a weakness of the project. The implementers highlighted the complexity of making comparisons between health care and/or social care in different countries. The budget was also identified as a weakness by the project implementers. More funds obtained by the partners could have helped PATHWAYS enhance dissemination and stakeholder participation.

3.4.3. Opportunities

A list of seven issues within the opportunities category reflects the positive outlook of survey respondents from the beginning of the project to its final stage. Social utility was ranked as the top opportunity. The implementers emphasized that the project could fill a gap between the existing solutions and the real needs of people with chronic diseases and mental disorders. The implementers also highlighted the role of future recommendations, which would consist of proposed solutions for professionals, employees, employers, and politicians. These advantages are strongly associated with increasing awareness of employment situations of people with chronic diseases in Europe and the relevance of the problem. Alignment with policies, strategies, and stakeholders’ interests were also identified as opportunities. The topic is actively discussed on the European and national level, and labor market and employment issues are increasingly emphasized in the public discourse. What is more relevant is that the European Commission considers the issue crucial, and the results of the project are in line with its requests for the future. The implementers also observed increasing interest from the stakeholders, which is very important for the future of the project. Without doubt, the social network of project implementers provides a huge opportunity for the sustainability of results and the implementation of recommendations.

3.4.4. Threats

Insufficient response from stakeholders was the top perceived threat selected by survey respondents. The implementers indicated that insufficient involvement of stakeholders resulted in low response rates in the research phase, which posed a huge threat for the project. The interdisciplinary nature of the PATHWAYS project was highlighted as a potential threat due to differences in technical terminology and different systems of regulating the employment of persons with reduced work capacity in each country, as well as many differences in the legislation process. Insufficient funding and lack of existing data were identified as the last two threats.

One novel aspect of the evaluation process in the PATHWAYS project was the numerical SWOT analysis. Participants were asked to score strengths, weaknesses, opportunities, and threats from 0 (no strengths, weaknesses, etc.) to 10 (a great many strengths, weaknesses, etc.). This approach enabled us to obtain a subjective score of how partners perceived the PATHWAYS project itself and its performance, as well as how that perception changed over time. The data showed an increase in both strengths and opportunities and a decrease in weaknesses and threats over the course of project implementation (Figure 5).

Figure 5. Numerical SWOT scores, combined, over the 36-month project realization period (W1 = after the 3-month realization period, with subsequent waves at 3-month intervals).

4. Discussion

The need for project evaluation was born in industry, which faced challenges in achieving market goals in a more efficient way. Nowadays, every process, including research project implementation, faces questions about its effectiveness and efficiency.

The challenge of a research project evaluation is that the majority of research projects are described as unique, although we believe several projects face similar issues and challenges as those observed in the PATHWAYS project.

The main objectives of the PATHWAYS Project were (a) to identify integration and re-integration strategies that are available in Europe and beyond for individuals with chronic diseases and mental disorders experiencing work-related problems (such as unemployment, absenteeism, reduced productivity, stigmatization), (b) to determine their effectiveness, (c) to assess the specific employment-related needs of those people, and (d) to develop guidelines supporting the implementation of effective strategies of professional integration and reintegration. The broad area of investigation, partial knowledge in the field, diversity of determinants across European Union countries, and involvement with stakeholders representing different groups caused several challenges in the project, including:

  • problem : uncovered, challenging, demanding (how to encourage stakeholders to participate, share experiences),
  • diversity : different European regions; different determinants: political, social, cultural; different public health and welfare systems; differences in law regulations; different employment policies and issues in the system,
  • multidimensionality of research: some quantitative, qualitative studies including focus groups, opinions from professionals, small surveys in target groups (workers with chronic conditions).

The challenges to the project consequently led to several key issues, which should be taken into account during project realization:

  • partners: each with their own expertise and interests, different expectations, and different views on what is most important to focus on and highlight;
  • issues associated with unification: between different countries with different systems (law, work-related and welfare definitions, disability classification, others);
  • coordination: the multidimensionality of the project may have caused some research activities by partners to move in a wrong direction (collecting data or knowledge not needed for the project purposes), and a lack of project vision among (some) partners might postpone activities through misunderstanding;
  • exchange of information: the multidimensionality of the project, the fact that different tasks were accomplished by different centers, and obstacles to data collection required good communication methods and a smooth exchange of information.

Identified Issues and Implemented Solutions

Several issues were identified through the semi-internal evaluation process performed during the project. Those that might be most relevant for project realization are listed in Table 2.

Table 2. Issues identified by the evaluation process and solutions implemented.

The PATHWAYS project included diverse partners representing different areas of expertise and activity (considering the broad aspects of chronic diseases, decline in functioning, and disability, and their role in the labor market) in different countries and social security systems. This created a challenge in developing a common language to achieve effective communication and a better understanding of facts and circumstances in different countries. The implementation of continuous monitoring of the project process, with proper adjustments, enabled the team to overcome these challenges.

The evaluation tool has several benefits. First, it covers all key areas of the research project, including structure and available resources; the run of the process; the quality and timing of management and communication; and project achievements and outcomes. Continuous evaluation of all of these areas provides in-depth knowledge about project performance. Second, the implementation of the SWOT tool provided opportunities for all project partners to share good and bad experiences, and the use of a numerical version of SWOT gave a good picture of the interrelations between strengths and weaknesses and between opportunities and threats in the project and showed the changes in their intensity over time. Additionally, numerical SWOT can verify whether the perception of a project improves over time (as was observed in the PATHWAYS project), showing an increase in strengths and opportunities and a decrease in weaknesses and threats. Third, the intervals at which partners were 'screened' by the evaluation questionnaire seem appropriate: the process was not very demanding but was frequent enough to diagnose issues in the project process in good time.

The experiences with the evaluation also revealed some limitations. There were no coercive mechanisms for participation in the evaluation questionnaires, which may explain the less than 100% response rate in some screening surveys. In practice, that was not a problem in the PATHWAYS project; in theory, however, it might leave problems unrevealed, as partners experiencing trouble might not report it. Another point is that asking the project coordinator about the quality of the consortium has little value (the consortium is created by the coordinator in the best achievable way, and it is hard to expect other comments, especially at the beginning of the project). Regarding the tool itself, the question "Could you give us an approximate estimation (in percent) of the project plan realization (what has been done according to the plan)?" was intended to collect information on what had been done out of what should have been done during each evaluation period, meaning that 100% corresponded to what should be done within a 3-month period of our project. This question, however, was slightly confusing at the beginning, as it was interpreted as the percentage of all tasks and activities planned for the whole duration of the project. Additionally, this question only works provided that precise, clear plans on the type and timing of tasks have been allocated to the project partners. Lastly, there were some questions with very low variability in answers across evaluation surveys (mainly about coordination and communication). In our opinion, if a project runs smoothly, such questions may seem useless, but in more complicated projects they may reveal potential causes of trouble.
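A toy calculation shows why the wording of the plan-realization question mattered; all numbers here are invented solely to contrast the two readings:

```python
# Two readings of "percent of the project plan realized" (invented figures).
tasks_due_this_period = 4         # tasks scheduled for the current 3-month window
tasks_done_this_period = 3
tasks_planned_whole_project = 40  # all tasks planned over the full 36 months
tasks_done_so_far = 10

per_period = 100 * tasks_done_this_period / tasks_due_this_period      # intended reading
cumulative = 100 * tasks_done_so_far / tasks_planned_whole_project     # common misreading

print(f"Per-period completion (intended reading): {per_period:.0f}%")  # 75%
print(f"Whole-project completion (misreading):    {cumulative:.0f}%")  # 25%
```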

5. Conclusions

The PATHWAYS project experience shows a need for the implementation of structured evaluation processes in multidisciplinary projects involving different stakeholders in diverse socio-environmental and political conditions. Based on the PATHWAYS experience, a clear monitoring methodology is suggested as essential in every project, and we suggest the following steps when doing multidisciplinary research:

  • Define area/s of interest (decision maker level/s; providers; beneficiaries: direct, indirect),
  • Identify 2–3 possible partners for each area (chain sampling makes this easier and provides more background knowledge; check for publications),
  • Prepare a research plan (propose, ask for supportive information, clarify, negotiate),
  • Create cross-partner groups of experts,
  • Prepare a communication strategy (communication channels, responsible individuals, timing),
  • Prepare a glossary covering all the important issues covered by the research project,
  • Monitor the project process and timing, identify concerns, troubles, causes of delays,
  • Prepare for the next steps in advance, inform project partners about the upcoming activities,
  • Summarize, show good practices, successful strategies (during project realization, to achieve better project performance).

Acknowledgments

The current study was part of the PATHWAYS project, which received funding from the European Union's Health Program (2014–2020), Grant agreement no. 663474.

Appendix A. The evaluation questionnaire developed for the PATHWAYS Project.

SWOT analysis:

What are strengths and weaknesses of the project? (list, please)

What are threats and opportunities? (list, please)

Visual SWOT:

Please, rate the project on the following continua:

How would you rate:

(no strengths) 0 1 2 3 4 5 6 7 8 9 10 (a lot of strengths, very strong)

(no weaknesses) 0 1 2 3 4 5 6 7 8 9 10 (a lot of weaknesses, very weak)

(no risks) 0 1 2 3 4 5 6 7 8 9 10 (several risks, inability to accomplish the task(s))

(no opportunities) 0 1 2 3 4 5 6 7 8 9 10 (project has a lot of opportunities)

Author Contributions

A.G., A.P., B.T.-A. and M.L. conceived and designed the concept; A.G., A.P., B.T.-A. finalized evaluation questionnaire and participated in data collection; A.G. analyzed the data; all authors contributed to writing the manuscript. All authors agreed on the content of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.


A Guide to Evaluation in Health Research


Prepared by: Sarah Bowen, PhD, Associate Professor, Department of Public Health Sciences, School of Public Health, University of Alberta. [email protected]

Table of contents

  • Purpose and objectives of module
  • Scope and limitations of the module
  • How this module is organized
  • Case studies used in the module
  • Similarities and differences between research and evaluation
  • Why is it important that researchers understand how to conduct an evaluation
  • Defining evaluation
  • Common misconceptions about evaluation
  • Evaluation approaches
  • To make a judgement
  • To improve a program
  • To help support design and development
  • To create new knowledge
  • Step 2: Identify intended users of the evaluation
  • Step 3: Create a structure and process for collaboration
  • The issue of internal versus external evaluators
  • Step 5: Gather relevant evidence
  • Special issues related to collaborative evaluation
  • The issue of logic models
  • An evaluation planning matrix
  • Confirming the purpose of the evaluation
  • Focusing an evaluation
  • The importance of focus and sequence
  • Get the questions on the table
  • Create overarching questions
  • Guide discussion of sequence
  • Prioritize the questions
  • Select methods for the specific evaluation questions
  • Identify data sources
  • Identify appropriate indicators
  • Step 11: Clarify resources and responsibility areas
  • Step 12: Implement evaluation activities
  • Step 13: Communicate evaluation findings
  • The ethics of evaluation
  • Issues related to REB review
  • Evaluating in complex environments
  • The complexity of 'causation'
  • The special case of economic evaluation
  • The concept of the 'personal' factor
  • Other useful resources
  • Evaluation Checklist
  • Appendix A: Evaluation Planning Matrix
  • Appendix B: Sample Evaluation Matrix

Introduction

The need for a learning module on evaluation has been identified as a priority for both researchers and peer reviewers. In response to the myriad of challenges facing the health system, both researchers and health system managers are proposing significant changes to current clinical, management and public health practice. This requires timely and rigorous assessment of current programs and innovations. Evaluation is a useful strategy for generating knowledge that can be immediately applied in a specific context, and, if certain evaluation approaches are used, can also generate transferable knowledge useful to the broader health system.

In addition to the common need for some research proposals to include an evaluation component, funders are also initiating funding opportunities that focus on trialing innovations and moving knowledge into action. These proposals, by their nature, require well-designed robust evaluation plans if useful knowledge is to be gained. However, many researchers (like health system managers) have limited evaluation knowledge and skills.

The purpose of this learning module, therefore, is to build knowledge and skill in the area of evaluation of health and health research initiatives (including knowledge translation initiatives).

Objectives of the module are to:

  • Build knowledge among both researchers and reviewers of the potential for evaluation to support evidence-informed action;
  • Support development of appropriate evaluation plans required and/or appropriate for research funding proposals, and;
  • Facilitate assessment of evaluation plans by peer and merit reviewers.

This learning module will:

  • provide an overview of the scope and potential of evaluation activities;
  • define key concepts and address common misconceptions about evaluation;
  • explore the relationship between evaluation and knowledge translation;
  • provide guidance for selecting the purpose, approach and focus of an evaluation; and,
  • identify, and provide guidance for, the key steps in the evaluation planning and implementation process.

While it will provide a brief overview of key concepts in evaluation, and is informed by various evaluation approaches (theories), the primary purpose of this module is to serve as a practical guide for those with research skill but limited experience conducting evaluations. While this module focuses on evaluation in the context of health research and knowledge translation, it is important to keep in mind that there are multiple evaluation approaches, theories and methods appropriate for different contexts.

Because of increasing awareness of the benefits of including knowledge users 1 as partners in many evaluation activities, the module also includes additional guidance for those conducting evaluation in collaboration with health system or other partners.

The module does not attempt to address all the important topics in evaluation design (e.g. how to minimize and control bias, develop a budget, or implement the evaluation plan). This is because the resource is designed for researchers – and it is assumed that the readers will be equipped to address these issues. In addition, while the steps in designing an evaluation plan are outlined and elaborated, the module does not provide a 'template' that can be applied uniformly to any evaluation. Rather, it provides guidance on the process of developing an evaluation plan, as evaluation design requires the creative application of evaluation principles to address specific evaluation questions in a particular context.

This module is divided into five sections. Section 1, Evaluation: A Brief Overview, provides a short overview of evaluation, addresses common misconceptions, and defines key terminology that will be used in the module. This is followed by Section 2, Getting Started, which provides guidance for the preliminary work that is required in planning an evaluation, and Section 3, Designing an Evaluation, which will lead you through the steps of developing an evaluation plan.

Section 4, Special Issues in Evaluation, discusses some of the ethical, conceptual and logistical issues specific to evaluation. This is followed by Section 5, Resources, which includes a glossary, a checklist for evaluation planning and sample evaluation templates.

Throughout the module, concepts will be illustrated with concrete examples drawn from case studies of actual evaluations. While based on real-life evaluations, they have been adapted for this module in order to maintain confidentiality. A summary of these cases follows.

Case Study 1: Unassigned Patients

A provincial health department contracts for an evaluation of three different models of care provided to hospitalized patients who were without a family physician to follow their care in hospital (unassigned patients). In addition to wanting to know which models 'work best' for patients, the province is also interested in an economic evaluation, as each of the models has a different payment structure and overall costs are not clear.

Case Study 2: Computerized decision support

There is a decision to pilot a commercially developed software program that will combine computerized order entry and physician decision support for test ordering. The decision-support module is based on evidence-based guidelines adopted by a national medical association. Funders require an evaluation of the pilot, as there is an intention to extend use across the region and into other jurisdictions if the results of the evaluation demonstrate that this is warranted.

Section 1: Evaluation: A Brief Overview

The current field of evaluation has emerged from different roots, overall purposes and disciplines, and includes many different "approaches" (theories and models). Various authors define evaluation differently, categorize types of evaluation differently, emphasize diverse aspects of evaluation theory and practice, and (because of different conceptual frameworks) use terms in very different ways.

This section will touch on some of the various approaches to evaluation and highlight some differences between them. However, the focus of the module is to provide a practical guide for those with research experience, but perhaps limited exposure to evaluation. There are a myriad of evaluation handbooks and resources available on the internet sponsored by evaluation organizations, specific associations and individuals. Both the evaluation approaches used and the quality and usefulness of these resources vary significantly (Robert Wood Johnson Foundation, 2004). While there are excellent resources (many of which offer the benefit of framing evaluation for a specific sector or issue), far too many evaluation guides offer formulaic approaches to evaluation – a template to guide the uninitiated.

This is not the approach taken here. Rather than provide a template that can be applied to any initiative, this module aims to provide the necessary background that will equip those with a research background to understand the concepts and alternatives related to evaluation, and to creatively apply these to a specific evaluation activity.

The similarities and differences between research and evaluation have long been the subject of intense debate: there are diverse and often conflicting perspectives (Levin-Rozalis, 2003). It is argued by some that evaluation and research are distinctly different. Proponents of this position cite such factors as the centrality of 'valuing' to evaluation; the inherently political nature of evaluation activities; the limited domain of application of evaluation findings (local and specific rather than transferable or generalizable); and the important role of theory in research compared to evaluation activities. It is also argued that the political and contextual framing of evaluation means that evaluators require a unique set of skills. Some describe evaluation as a profession, in contrast to research, where researchers belong to specific disciplines.

This sense of 'differentness' is reinforced by the fact that evaluators and researchers often inhabit very different worlds. Researchers are largely based in academic institutions and are most often engaged in research that is described as curiosity-driven. Many have no exposure to evaluation during their academic preparation. Evaluators (many of whom are not PhD prepared) are more likely to be working within the health system or established as consultants. In most cases, the two belong to different professional organizations, attend different conferences, and follow different (though overlapping) ethical guidelines.

Taking an alternative view are those who view evaluation as a form of research – using research methodology (and standards) to answer practical questions in a timely fashion. They argue that the commonly cited differences between evaluation and research do not apply to all forms of research, only to some of them. It is noted that the more engaged forms of research have many of the same characteristics and requirements as evaluation and similar skills are needed – skills in communication and collaboration, political astuteness, responsiveness to context, and ability to produce timely and useful findings. It is noted that the pragmatic use of diverse perspectives, disciplines and methods is not limited to evaluation, but applied by many researchers as well. Many evaluators stress the knowledge-generating aspects of evaluation (Preskill, 2008), and there is increasing interest in theory-driven evaluation (Coryn et al, 2010). This interest reflects increasing criticism of what is often called "black box" evaluation (the simple measurement of effects of interventions with little attention to how the effects are achieved). Findings from theory-driven evaluations can potentially be applied to other contexts – i.e. they are transferable. It is also argued that not all forms of evaluation are focused on determining value or worth; that there may be other purposes of evaluation. Many writers highlight the benefits of 'evaluative thinking' in planning and conducting research activities.

There are a number of reasons why researchers should be knowledgeable about evaluation:

System needs

The first reason is that the urgency of the problems facing the health system means that many new 'solutions' are being tried, and established processes and programs questioned.

Discussions with health care managers and executives highlight the reality that many of the 'research' questions they want addressed are, in reality, evaluation questions. They want to know whether a particular strategy is working, or will work, to address a known problem. They want accurate, credible, and timely information to inform decisions within the context in which they are working. Consequently, there is growing recognition of the need for evaluation expertise to guide decisions. Evaluation can address these needs and evaluation research, conducted by qualified evaluation researchers, can ensure the rigour of evaluation activities and optimize the potential that findings will be useful in other settings.

Research skills are required to ensure that such evaluations (which inform not only decisions about continuing or spreading an innovation, but also whether to discontinue current services, or change established processes) are well designed, implemented and interpreted. Poorly designed and overly simplistic evaluations can lead to flawed decision making – a situation that can be costly to all Canadians.

Expectations of research funders

Many research proposals include some form of evaluation. For example, there are an increasing number of funding opportunities that result in researchers proposing 'pilot' programs to test new strategies. For these proposals, a rigorous evaluation plan is an essential component – one that will be discussed by the review panel. Results of this discussion are likely to influence the ranking of the proposal.

Evaluation as a knowledge translation strategy

Researchers interested in knowledge translation theory and practice will also benefit from developing evaluation skills. Evaluation (particularly collaborative evaluation) brings the potential of promoting appropriate evidence use. Two of the often-stated frustrations of decision-makers are that a) there is often insufficient published research available to inform the challenges they are facing, and b) there is a need to incorporate contextual knowledge with research in order to inform local decisions. In turn, researchers often express concern that decision-makers are not familiar with research concepts and methods.

Early stages of well-designed and well-resourced evaluation research begin with a critical review and synthesis of the literature together with local and contextual data. This can inform both evaluators and the program team about what is known about the issue and about current leading practices. The process of designing an evaluation plan, guiding implementation of the evaluation, interpreting data, and making decisions on the data as the evaluation evolves can promote the use of evidence throughout the planning/implementation/evaluation cycle. Even more importantly, a collaborative evaluation approach that incorporates key stakeholders in meaningful ways will help build evaluative thinking capacity and a culture that values evaluation and research literacy at the program/organizational level. These skills can then be transferred to other organizational activities. An evaluation can also be designed to provide early results that inform ongoing decision-making. And finally, because a collaboratively-designed evaluation reflects the questions of concern to decision-makers, evaluation can increase the likelihood that they will trust the evidence identified through the evaluation and act in response to it.

It has been observed that "evaluation — more than any science — is what people say it is, and people currently are saying it is many different things" (Glass, 1980). This module will adopt the following definition, adapted from a commonly used definition of evaluation (Patton, 1997, p. 23):

The systematic collection of information about the activities, characteristics, and outcomes of programs, services, policies, or processes, in order to make judgments about the program/process, improve effectiveness, and/or inform decisions about future development.

The definition, like many others, highlights the systematic nature of quality evaluation activities. For example, Rossi et al. define evaluation as "... the use of social research methods to systematically investigate the effectiveness of social intervention programs" (2004, p. 28). The definition also highlights a number of other points that are often a source of misunderstandings and misconceptions about evaluation.

Common Misconceptions about Evaluation

There are a number of common misconceptions about evaluation, misconceptions that have contributed to the limited use of evaluation by health researchers.

Evaluation = program evaluation

As the above definition illustrates, in addition to programs, evaluation can also focus on policy, products, processes or the functioning of whole organizations. Nor are evaluation findings limited to being useful to the particular program evaluated. While program evaluation activities are designed to inform program management decisions, evaluation research can generate knowledge potentially applicable to other settings.

Evaluation is about determining the value or worth of a program

The concept of 'valuing' is central to evaluation. In fact, some authors define evaluation in exactly these terms. Scriven, for example, defines evaluation as "the process of determining the merit, worth, or value of something, or the product of that process" (1991, p. 139). Data that is simply descriptive is not evaluation. (For example, "How many people participated in program X?" is not an evaluation question, although this data may be needed to answer an evaluation question.) However, there are other purposes for undertaking an evaluation in addition to that of making a judgment about the value or worth of a program or activity (summative evaluation). Evaluation may also be used to refine or improve a program (often called formative evaluation) or to help support the design and development of a program or organization (developmental evaluation). Where there is limited knowledge on a specific topic, evaluation may also be used specifically to generate new knowledge. The appropriate selection of evaluation purpose is discussed in more detail in Section 2, Step 1.

Evaluation occurs at the end of an initiative

This misconception is related to the previous one: if evaluation is only about judging the merit of an initiative, then it seems to make sense that this judgment should occur when the program is well established. The often-heard comment that it is 'too soon' to evaluate a program reflects this misconception, with the result that evaluation – if it occurs at all – happens at the end of a program. Unfortunately, this often means that many opportunities have been missed to use evaluation to guide the development of a program (anticipating and preventing problems and making ongoing improvements), and to ensure that there is appropriate data collection to support end-of-project evaluation. It also contributes to the misconception that evaluation is all about outcomes.

Evaluation is all about outcomes

In recent years, there has been an increasing emphasis on outcome, rather than process, evaluation. This is appropriate, as too often what is measured is what is easily measurable (e.g. services provided to patients) rather than what is important (e.g. did these services result in improvements to health?). The emphasis on outcome evaluation can, however, result in neglect of other forms of evaluation and even lead to premature attempts to measure outcomes. It is important to determine whether and when it is appropriate to measure outcomes in the activity you are evaluating. By measuring outcomes too early, one risks wasting resources and providing misleading information.

As this module will illustrate, much useful knowledge can be generated from an evaluation even if it is not appropriate or possible to measure outcomes at a particular point in time. In addition, even when an initiative is mature enough to measure outcomes, focusing only on outcomes may result in neglect of key program elements that need policy maker/program manager attention (Bonar Blalock, 1999). Sometimes what is just as (or more) important is understanding what factors contributed to the outcomes observed.

There are two types of evaluation: summative and formative.

Some writers (and evaluation guides) identify only two purposes or 'types' of evaluation: summative and formative. Summative evaluation refers to judging the merit or worth of a program at the end of the program activities, and usually focuses on outcomes. In contrast, formative evaluation is intended as the basis for improvement and is typically conducted in the development or implementation stages of an initiative. Robert Stake is famously quoted on this topic as follows: "when the cook tastes the soup, that's formative; when the guests taste the soup, that's summative." However, as will be covered in later sections of this resource, the evaluation landscape is more nuanced and offers more potential than this simple dichotomy suggests. Section 2, Step 1, and Section 3, Step 8 provide more detail on evaluation alternatives.

Evaluation = performance measurement

A common misconception among many health care decision-makers is that evaluation is simply performance measurement. Performance measurement is primarily a planning and managerial tool, whereas evaluation research is a research tool (Bonar Blalock, 1999). Performance measurement focuses on results, most often measured by a limited set of quantitative indicators. This reliance on outcome measures and pre/post measurement designs poses a number of risks, including attributing any observed change to the intervention under study without considering other influences, and failing to investigate important questions that cannot be addressed by quantitative measures. It also contributes to a common misperception that evaluation must rely only on quantitative measures.

Because they tend to rely on a narrow set of gross quantitative outcome measures accessible through management information systems, performance management systems have been slow to recognize and address the data validity, reliability, comparability, diversity, and analysis issues that can affect judgments of programs. Performance management systems usually do not seek to isolate the net impact of a program – that is, to distinguish between outcomes that can be attributed to the program and those attributable to other influences. Therefore, one cannot make trustworthy inferences about the nature of the relationship between program interventions and outcomes, or about the relative effects of variations in elements of a program's design, on the basis of performance monitoring alone (Bonar Blalock, 1999).
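
To make the attribution point concrete, the following minimal sketch (all numbers and site names are invented for illustration; this is not drawn from the module's case studies) shows how a naive pre/post change differs from a simple difference-in-differences estimate that uses a comparison site:

```python
# Hypothetical average wait times (in days) before and after an intervention.
program_site = {"pre": 30.0, "post": 22.0}      # site running the new program
comparison_site = {"pre": 31.0, "post": 27.0}   # similar site without the program

# A naive pre/post comparison attributes the whole change to the program.
naive_change = program_site["post"] - program_site["pre"]             # -8.0 days

# Subtracting the change observed at the comparison site (a simple
# difference-in-differences) separates the program's net impact from
# background trends affecting both sites.
background_trend = comparison_site["post"] - comparison_site["pre"]   # -4.0 days
net_impact = naive_change - background_trend                          # -4.0 days

print(f"Naive pre/post change: {naive_change:.1f} days")
print(f"Estimated net impact:  {net_impact:.1f} days")
```

Even this sketch leaves unanswered the questions an evaluation would go on to ask – why the change occurred, and for whom – which monitoring data alone cannot address.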

It is important to be aware of these common misconceptions as you proceed in developing an evaluation plan – not only to avoid falling into some of these traps yourself, but also to prepare for conversations with colleagues and evaluation stakeholders, many of whom may come to the evaluation activity with such assumptions.

Evaluation can be described as being built on the dual foundations of a) accountability and control and b) systematic social inquiry (Alkin & Christie, 2004). For good reasons, governments (and other funders) have often emphasized the accountability functions of evaluation, which is one of the reasons for confusion between performance measurement and evaluation. Because the accountability focus usually leads to reliance on performance measurement approaches, a common result is a failure to investigate or collect data on the question of why the identified results occurred (Bonar Blalock, 1999).

There are dozens, even hundreds, of different approaches to evaluation (what some would call "philosophies", and others "theories"). Alkin and Christie (2004) describe an evaluation theory tree with three main branches: a) methods, b) valuing, and c) utilization. Authors exemplifying these three 'branches' include Rossi (methods) (Rossi et al., 2004), Scriven (valuing) (Scriven, 1991), and Patton (utilization) (Patton, 1997). While some authors (and practitioners) may align themselves more closely with one of these traditions, these are not hard and fast categories – over time many evaluation theorists have incorporated approaches and concepts first proposed by others, and evaluation practitioners often take a pragmatic approach to evaluation design.

Each of these "branches" includes many specific evaluation approaches. It is beyond the scope of this module to review all of them here, but some examples are outlined below.

Methods

The methods tradition was originally dominated by quantitative methodologists. Over time this has shifted, and greater value is now given to the incorporation of qualitative methods in evaluation within the methods theory branch.

The methods branch, with its emphasis on rigour, research design, and theory, has historically been closest to research. Indeed, some of the recognized founders of the methods branch are also recognized for their work as researchers. The seminal paper "Experimental and Quasi-Experimental Designs for Research" (Campbell and Stanley, 1966) has informed both the research and evaluation worlds. Theorists in this branch emphasize the importance of controlling bias and ensuring validity.

Of particular interest to researcher-evaluators is theory-driven evaluation (Chen & Rossi, 1984). Theory-driven evaluation promotes and supports exploration of program theory – and the mechanisms behind any observed change. This helps promote theory generation and testing, and transferability of new knowledge to other contexts.

Valuing

Theorists in this branch believe that what distinguishes evaluators from other researchers is that evaluators must place value on their findings – they make value judgments (Shadish et al., 1991). Michael Scriven is considered by many to be the primary mainstay of this branch: his view was that evaluation is about the 'science of valuing' (Alkin & Christie, 2004). Scriven felt that the greatest failure of an evaluator is to simply provide information to decision-makers without making a judgement (Scriven, 1983). Other theorists (e.g. Lincoln and Guba) also stress valuing, but rather than placing this responsibility on the evaluator, see the role of the evaluator as helping facilitate negotiation among key stakeholders as they assign value (Guba & Lincoln, 1989).

In contrast to those promoting theory-driven evaluation, those in the valuing branch may downplay the importance of understanding why a program works, as this is not always seen as necessary to determining its value.

The centrality of valuing to evaluation may present challenges to researchers from many disciplines, who often deliberately avoid making recommendations; cautiously remind users of additional research needed; and believe that 'the facts should speak for themselves'. However, with increasing demands for more policy and practice relevant research, many researchers are grappling with their role in providing direction as to the relevance and use of their findings.

Utilization

A number of approaches to evaluation (see, for example, the work of Patton, Stufflebeam, Cousins, Pawley, and others) have a "utilization"-focused orientation. This branch began with what are often referred to as decision-oriented theories (Alkin & Christie, 2004), developed specifically to assist stakeholders in program decision-making. It is exemplified by, but not limited to, the work of Michael Q. Patton (author of Utilization-Focused Evaluation (1997)). Many collaborative approaches to evaluation incorporate principles of utilization-focused evaluation.

The starting point for utilization approaches is the realization that, like the results of research, many evaluation reports end up sitting on the shelf rather than being acted on – even when the evaluation has been commissioned by one or more stakeholders. With this in mind, approaches that emphasize utilization incorporate strategies to promote appropriate action on findings. They emphasize the importance of early and meaningful collaboration with key stakeholders and build in strategies to promote 'buy in' and use of evaluation findings.

Authors closer to the utilization branch of evaluation find much in common with knowledge translation theorists and practitioners: in fact, the similarities in principle and approach between integrated knowledge translation (iKT) and utilization-focused evaluation (UFE) are striking.

Both iKT and UFE:

  • Keep utilization (of evaluation results or research findings) prominent through all phases of the process.
  • Promote research and evaluation conducted in response to stakeholder identified needs.
  • Promote early and meaningful involvement of the intended users of the research or evaluation activity. Utilization-focused evaluation urges identification of the specific individuals who will be expected to act on results, and a focus on the evaluation questions of concern to them.
  • View the evaluator/researcher as a member of a collaborative team, with respect for the experience, insights and priorities of users.

While it is helpful to have knowledge of the different roots of, and various approaches to, evaluation, it is also important to be aware that there are many common threads in these diverse evaluation approaches (Shadish, 2006) and that evaluation societies have established agreement on key evaluation principles.

Section 2: Getting started

The previous section provided a brief overview of evaluation concepts. This section is the first of two that will provide a step-by-step guide to preparing for and developing an evaluation plan. Both sections will provide additional information particularly helpful to those who are conducting collaborative evaluations or have partners outside of academia.

While the activities outlined in this section are presented sequentially, you will likely find that the activities of a) considering the evaluation purpose, b) identifying stakeholders, c) assessing evaluation expertise, d) gathering relevant evidence, and e) building consensus are iterative. Depending on the evaluation, you may work through these tasks in a different order.

Step 1: Consider the purpose(s) of the evaluation

One of the first steps in planning evaluation activities is to determine the purpose of the evaluation. It is possible, or even likely, that the purpose of the evaluation may change – sometimes significantly – as you undertake other preparatory activities (e.g. engaging stakeholders, gathering additional information). For this reason, the purpose is best finalized in collaboration with key stakeholders. However, as an evaluator, you need to be aware of the potential purposes of the evaluation and be prepared to explore various alternatives.

As indicated earlier, there are four broad purposes for conducting an evaluation:

To make a judgment about the worth or merit of a program or activity

This is the form of evaluation (summative evaluation) most people are familiar with. It is appropriate when a program is well established, and decisions need to be made about its impacts, continuation or spread.

  • Example 1: Program X has been piloted in Hospital Y. A decision must be made about whether to continue the program.
  • Example 2: A new therapy is being trialed on physiotherapy patients. The evaluation is to determine what impacts the therapy has on patient outcomes.

Many researchers are involved in pilot studies (small studies to determine the feasibility, safety, usefulness or impacts of an intervention before it is implemented more broadly). The purpose of these studies is to determine whether there is enough merit in an initiative to develop it further, adopt it as is, or to expand it to other locations. Pilot studies, therefore, require some level of summative evaluation – there is a need to make a judgment about one or more of these factors. What is often overlooked, however, is that an evaluation of a pilot study can do more than assess merit – in other words, it can have more than this one purpose. A well-designed evaluation of a pilot can also identify areas for program improvement, or explore issues related to implementation, cost effectiveness, or scaling up the intervention. It may even identify different strategies to achieve the objective of the pilot.

To refine or improve a program

If a program is still 'getting up and running', it is too soon for summative evaluation. In such cases, evaluation can be used to help guide the development of the initiative (formative evaluation). However, an improvement-oriented approach can also be used to assess an established intervention. A well-designed evaluation conducted for the purpose of program improvement can provide much of the same information as a summative evaluation (e.g. information on the extent to which the program is achieving its goals). The main difference is that the purpose is to help improve, rather than to make a summative judgment. For example, program staff often express the wish to evaluate their programs in order to ensure that they are doing the 'best possible job' they can. Their intent (purpose) is to make program improvements. One advantage of improvement-oriented evaluation is that, compared to summative evaluation, it tends to be less threatening to participants and more likely to promote joint problem-solving.

  • Example 1: Program X has been operating for several years. Staff are confident it is a useful and needed program but want to ensure that it is 'doing the best it can' for its clients.
  • Example 2: Program Y has undergone major redesign in order to promote greater self-management by patients of their chronic disease. The sponsors want to ensure that this redesigned initiative is implemented appropriately, and that any necessary changes to processes are made before patient outcomes are measured.

To help support the design and development of a program or organization

Developmental evaluation uses evaluation processes, including asking evaluative questions and applying evaluation logic, to support program, product, staff and/or organizational development. Reflecting the principles of complexity theory, it is used to support an ongoing process of innovation. A developmental approach also assumes that the measures and monitoring mechanisms used in the evaluation will continue to evolve with the program. A strong emphasis is placed on the ability to interpret data emerging from the evaluation process (Patton, 2006).

In developmental evaluation, the primary role of the evaluator, who participates as a team member rather than an outside agent, is to develop evaluative thinking. There is collaboration among those involved in program design and delivery to conceptualize, design and test new approaches in a long-term, on-going process of continual improvement, adaptation and intentional change. Development, implementation and evaluation are seen as integrated activities that continually inform each other.

In many ways developmental evaluation appears similar to improvement-oriented evaluation. However, in improvement-oriented evaluation, a particular intervention (or model) has been selected: the purpose of evaluation is to make this model better. In developmental evaluation, in contrast, there is openness to other alternatives – even to changing the intervention in response to identified conditions. In other words, the emphasis is not on the model (whether this is a program, a product or a process), but on the intended objectives of the intervention. A team may consider an intervention evaluated as 'ineffective' a success if thoughtful analysis of the intervention provides greater insights and direction toward a more informed solution.

  • Example 1: Agency X has designed intervention Y in an attempt to prevent youth-at-risk from becoming involved in crime. While they believe that the intervention is a good one, there is little evidence in the literature on what would be effective in this context. Agency staff are open to discovering, through their evaluation, other strategies (besides the intervention) that would achieve the goal of reduced youth crime.

Developmental evaluation is appropriate when there is a need to support innovation and development in evolving, complex, and uncertain environments (Patton, 2011; Gamble 2006). While considered by many a new (and potentially trendy) evaluation strategy, it is not appropriate in all situations. First, evaluation of straightforward interventions usually will not require this approach (e.g. evaluation of the implementation of an intervention found effective in other settings). Second, there must be an openness to innovation and flexibility of approach by both the evaluation sponsor and the evaluators. Third, it requires an ongoing relationship between the evaluator and the initiative to be evaluated.

To generate new knowledge

A final purpose of evaluation is to create new knowledge – evaluation research. Often, when there is a request to evaluate a program, a critical review of the literature will reveal that very little is known about the issue or intervention to be evaluated. In such cases, evaluators may design the evaluation with the specific intent of generating knowledge that will potentially be applicable in other settings – or that provides more knowledge about a specific aspect of the intervention.

While it is unusual that evaluation would be designed solely for this purpose (in most cases such an endeavor would be defined as a research project), it is important for researchers to be aware that appropriately-designed evaluation activities can contribute to the research literature.

As can be seen from the potential evaluation questions listed below, an evaluation may develop in very different ways depending on its purpose.

Examples of evaluation purpose: Case Study #1: Unassigned Patients

  • Judgment-oriented questions: Are the models of care meeting their objective? Should we continue to fund all of them? Is one model better than another?
  • Improvement-oriented questions: How can current models be improved?
  • Development-oriented questions: What is the best strategy to achieve our objectives of continuity of patient care and decreased patient length of stay?
  • Knowledge-oriented questions: Are the theorized benefits of each of these models of care found in practice?

Examples of evaluation purpose: Case Study #2: Computerized Decision Support

  • Judgment-oriented questions: Should the software program be adopted across the region? Should it be promoted/marketed more widely?
  • Improvement-oriented questions: How can the software program be improved? How should implementation of these process and practice changes be facilitated?
  • Development-oriented questions: What is the best strategy to achieve appropriate test ordering?
  • Knowledge-oriented questions: What can be learned about physician test-ordering practices, and about barriers and facilitators to changing such practice?

Case Study #1: Unassigned Patients

In our case study example, a provincial department of health was originally looking for a summative evaluation (i.e. they wanted to know which model was 'best'). There was an implication (whether or not explicitly stated) that results would inform funding and policy decisions (e.g. implementation of the 'best' model in all funded hospitals).

However, in this case preliminary activities determined that:

  • there was no clear direction from a systematic review of the literature on which model would be advised in this particular context; and
  • there was a great deal of anxiety among participants about the proposed evaluation activity, which was viewed by the hospitals as an attempt to force a standard model on all institutions.

There was also concern that a focus on 'models of care' might preclude consideration of larger system issues believed to be contributing to the problems the models were intended to address, and little confidence that the evaluation would consider all the information the different programs felt was important.

As a result, the evaluators suggested that the purpose of the evaluation be improvement-oriented (looking for ways each of the models could be improved) rather than summative. This recommendation was accepted, with the result that the evaluators were more easily able to gain the support and participation of program staff in a politically charged environment. By also recognizing the need to generate knowledge in an area where little was known at the time, the evaluators were able to design the evaluation to maximize the generation of knowledge. In the end, the evaluation also included an explicit research component, supported by research funding.

While an evaluation may achieve more than one purpose, it is important to be clear about the main intent(s) of the activity. As the above discussion indicates, the purpose of an evaluation may evolve during the preparatory phases.

Step 2: Identify key stakeholders and intended users

The concept of intended users (often called 'key' or 'primary' stakeholders) is an important one in evaluation. It is consistent with that of 'knowledge user' in knowledge translation. In planning an evaluation it is important to distinguish intended users (those you are hoping will take action on the results of the evaluation) from stakeholders (interested and affected parties) in general. Interested and affected parties are those who care about, or will be affected by, the issue and your evaluation results. In health care, these are often patients and families, or sometimes staff. Entire communities may also be affected. However, depending on the questions addressed in the evaluation, not all interested or affected parties will be in a position to act on findings. For example, the users of an evaluation of a new service are less likely to be patients (as much as we may believe they should be) than they are to be senior managers and funders.

The experiences and preferences of interested and affected parties need to be incorporated into the evaluation if it is to be credible. However, these parties are often not the primary audience for evaluation findings. They may be appropriately involved by ensuring, for example, that the evaluation incorporates a systematic assessment of patient/family or provider experience. However, depending on the initiative, these patients or staff may – or may not – be the individuals who must act on evaluation findings. The intended audience (those who need to act on the findings) may not be staff, but rather a senior executive or a provincial funder.

It is important to keep in mind the benefits of including, in meaningful ways, the intended users of an evaluation from the early stages. As we know from the literature, a key strategy for bridging the gap between research and practice is to build commitment to (and 'ownership of') the evaluation findings by those in a position to act on them (Cargo & Mercer, 2008). This does not mean that these individuals need to be involved in all aspects of the research (e.g. data collection) but that, at a minimum, they are involved in determining the evaluation questions and interpreting the data. Many evaluators find that the best strategy for ensuring this is to create a steering/planning group to guide the evaluation – and to design it in such a way as to ensure that all key stakeholders can, and will, participate.

Funders (or future funders) can be among the most important audiences for an evaluation. This is because they will be making the decision as to whether to fund continuation of the initiative. Take, for example, a pilot or demonstration project that is research-funded. It may not be that difficult to obtain support from a health manager (or senior management of a health region) to provide the site for a pilot program of an innovation if the required funding comes from a research grant. If, however, it is hoped that a positive evaluation will result in adoption of the initiative on an ongoing basis, it is wise to ensure that those in a position to make such a decision are integrally involved in design of the evaluation of the pilot, and that the questions addressed in the evaluation are of interest and importance to them.

Step 3: Create a structure and process for collaboration on the evaluation

Once key stakeholders have been identified, evaluators are faced with the practical task of creating a structure and process to support the collaboration. If at all possible, try to find an existing group (or groups) that can take on this role. Because people are always busy, it may be easier to add a steering committee function to existing activities.

In other cases, there may be a need to create a new body, particularly if there are diverse groups and perspectives. Creating a neutral steering body (and officially recognizing the role and importance of each stakeholder by inviting them to participate on it) may be the best strategy in such cases. Whatever structure is selected, be respectful of the time you ask from the stakeholders – use their time wisely.

Another important strategy, as you develop your evaluation plan, is to budget for the costs of stakeholder participation. These costs vary depending on whether stakeholders are from grassroots communities or larger health/social systems. What must be kept in mind is that key stakeholders (intended evaluation users), like knowledge users in research, do not want to simply be used as 'data sources'. If they are going to put their time into the evaluation they need to know that they will be respected partners, that their expertise will be recognized, and that there will be benefit to their organization. A general principle is that the 'costs' of all parties contributing to the evaluation should be recognized and, as much as possible, compensated. This compensation may not always need to be financial. Respect and valuing can also be demonstrated through:

  • Shared decision-making, including determining the purpose, focus and questions of the evaluation.
  • A steering committee structure that formally recognizes the value given to partners in the evaluation process.
  • Multi-directional communication between the evaluation team and the initiative being evaluated.

If key stakeholders are direct care staff in the health system, it will be difficult to ensure their participation unless the costs of 'back-filling' their functions are provided to the organization. Similarly, physicians in non-administrative positions may expect compensation for lost income.

You may be wondering about the time it will take to set up such a committee, maintain communication, and attend meetings. This does take time, but it is time well spent as it will:

  • Reflect respect for the intended users of the evaluation;
  • Contribute to a more credible (and higher quality) evaluation; and,
  • Significantly increase the likelihood that the results of the evaluation will be acted upon.

It is particularly important to have a steering committee structure if you are working in a culture new to you (whether this is an organizational culture, an ethno-cultural community, or in a field or on an issue with which you are unfamiliar). Ensuring that those with needed cultural insights are part of the steering/ planning committee is one way of facilitating an evaluation that is culturally 'competent' and recognizes the sensitivities of working in a specific context.

It may not be feasible to have all those on a collaborative committee attend at the same time, particularly if you are including individuals in senior positions. It may not even be appropriate to include all intended users of the evaluation (e.g. funders) with other stakeholders. However, all those that you hope will take an interest in evaluation findings and act on the results need to be included in a minimum of three ways:

  • Providing input into the evaluation questions.
  • Receiving regular updates on progress. A well-thought-out communication plan (that ensures ongoing two-way communication as the initiative develops and as findings emerge) is needed.
  • Participating in interpretation of evaluation results and implications for action.

It is also essential that the lead evaluators are part of this steering committee structure, although – depending on your team make-up (Step 4) – it may not be necessary to have all those supporting the evaluation attend every meeting.

Step 4: Assess evaluation capacity, build an evaluation team

A key challenge is to ensure that you have the evaluation expertise needed on your team:

  • To design the evaluation component of your proposal;
  • To negotiate the environment and stakeholder communication; and,
  • To guide and conduct the evaluation activities.

A common mistake of researchers is to assume that their existing research team has the required evaluation skills. Sometimes assessment of the team by the review panel will reveal either limited evaluation expertise or a lack of the specific evaluation skills needed for the proposed evaluation plan. For example, an evaluation plan that relies on assessment of staff/patient perspectives of an innovation will require a qualitative researcher on the research team. Remember that there is a need for knowledge and experience of the 'culture' in which the evaluation takes place, as well as the generic skills of communication, political astuteness and negotiation. Ensuring that the team includes all the expertise needed for the particular evaluation you are proposing will strengthen your proposal. Do not rely simply on contracting with an evaluation consultant who has not been involved to date.

Reviewing your evaluation team composition is an iterative activity – as the evaluation plan develops, you may find a need to add evaluation questions (and consequently expand the methods employed). This may require review of your existing expertise.

The issue of internal vs. external evaluators

The evaluation literature frequently distinguishes between "internal" and "external" evaluators. Internal evaluators are those who are already working with the initiative – whether health system staff or researchers. External evaluators do not have a relationship with the initiative to be evaluated. It is commonly suggested that use of internal evaluators is appropriate for formative evaluation, and external evaluation is required for summative evaluation.

The following table summarizes commonly identified differences between internal and external evaluation.

This dichotomy, however, is too simplistic for the realities of many evaluations, and does not – in itself – ensure that the evaluation principles of competence and integrity are met. Nor does it recognize that there may be other, more creative solutions than this internal/external dichotomy suggests. Three potential strategies, aimed at gaining the advantages of both internal and external evaluation, are elaborated in more detail below:

A collaborative approach, where evaluators participate with stakeholders as team members. This is standard practice in many collaborative evaluation approaches, and a required element of a utilization-focused or developmental evaluation. Some evaluators differentiate between objectivity in evaluation (which implies some level of indifference to the results) and neutrality (meaning that the evaluator does not 'take sides') (Patton, 1997). Collaborative approaches are becoming more common, reflecting awareness of the benefits of collaborative research and evaluation.

Identifying specific evaluation components requiring external expertise (whether for credibility or for skill), and incorporation of both internal and external evaluators into the evaluation plan. Elements of an evaluation that require external evaluation (whether formative or summative) are those:

  • That will explore the perspectives of participants (such as staff and/or patients). For example, it is not recommended that staff or managers associated with a program interview patients about their satisfaction with a service. This is both to ensure evaluation rigour (avoiding the risk of patients or staff giving the responses expected of them – social desirability bias), and to ensure that ethical standards are met. The principle of 'voluntariness' may not be met if patients feel obligated to participate because they are asked by providers of a service on which they depend.
  • Where there is a potential (or perceived) conflict of interest on the part of the evaluator. While the reasons for not having a manager assess his/her own program are usually self-evident, the potential for bias by researchers must also be considered. If they are the ones who have designed and proposed the intervention, there may be a bias, whether intended or not, toward finding the intervention successful.
  • Where there is not the internal expertise for a particular component.
  • Where use of a certain person/ group would affect credibility of results with any of the key stakeholders.

Activities that may be well suited to evaluation by those internal to the initiative are those for which there are the resources, in skill and time, to conduct them, and for which participation of internal staff will not affect evaluation credibility. One example might be the collection and collation of descriptive program data. In some situations it may be appropriate to contract with a statistical consultant for specialized expertise, while using staff data analysts to actually produce the data reports.

Use of internal expertise that is at 'arm's length' from the specific initiative. A classic example would be contracting with an organization's internal research and evaluation unit to conduct the evaluation (or components of it). While this is not often considered an external evaluation (and may not be the optimal solution in situations where the evaluation is highly politicized), it often brings together a useful combination of:

  • Contextual knowledge, which brings the benefits of both time saved, and greater appreciation of potential impacts and confounding factors.
  • Commitment to the best outcomes for the organization, rather than loyalty to a specific program (i.e. may be able to be objective).
  • Potential to promote appropriate use of results, both directly and – where appropriate – by making links to other initiatives throughout the organization.

Step 5: Understand the context and gather relevant evidence

The program, product, service, policy or process you will be evaluating exists in a particular context. Understanding context is critical for most evaluation activities: it is necessary to undertake some pre-evaluation work to determine the history of the initiative, who is affected by it, the perspectives and concerns of key stakeholders, and the larger context in which the initiative is situated (e.g. the organizational and policy context). How did the initiative come to be? Has it undergone previous evaluation? Who is promoting evaluation at this point in time, and why?

In addition, a literature review of the issue(s) under study is usually required before beginning an evaluation. Identifying, accessing and using evidence to apply to an evaluation is an important contribution of research. Such a review may focus on:

  • Current research on issues related to the initiative to be evaluated. Many interventions developed organically, and were not informed by research. Even if the initiative was informed by research at one point in time, program staff may not be up to date on current work in the area.
  • Evaluations of similar initiatives.

If an evaluation is potentially contentious, it is also often a good idea to meet individually with each of the stakeholders, in order to promote frank sharing of their perspectives.

Case Study #1: Unassigned Patients

The first step in this evaluation was to undertake a literature review. While it was hoped that a systematic review would provide some guidance on a recommended model, this was not the case – almost no literature on the topic addressed issues related to the specific context. Presentation of this finding at a meeting of stakeholders also revealed a number of tensions and diverse perspectives among those involved.

One of the next steps proposed by the evaluators was a site visit to each of the sites. This included a walk-through of the programs and meetings with nursing and physician leadership. These tours accomplished two things: a) additional information on 'how things worked' that would have been difficult to gauge through other means, and b) development of rapport with staff, who appreciated having input into the evaluation and describing the larger context in which the services were offered.

Case Study #2: Computerized Decision Support

A review of the research literature identified a) key principles predicting effective adoption, b) the importance of implementation activities, and c) limited information on the impacts of computerized decision support in this specific medical area. This knowledge provided additional support for the decision to a) focus on implementation evaluation, and b) expand the original plan of pre/post intervention measurement of tests ordered to include a qualitative component exploring user perspectives.

Step 6: Build a shared level of consensus about the evaluation

As indicated in earlier sections, evaluation is subject to a number of misconceptions, and may have diverse purposes and approaches. It is usually safe to assume that not all stakeholders will have the same understanding of what evaluation is, or the best way to conduct an evaluation on the issue under consideration. Some are likely to have anxieties or concerns about the evaluation.

For this reason, it is important to build shared understanding and agreement before beginning the evaluation. Many evaluators find that it is useful to build into the planning an introductory session that covers the following:

  • Definition of evaluation, similarities and differences between evaluation, performance management, quality improvement, and research.
  • The range of potential purposes of evaluation, including information on when each is helpful. If there is anxiety about the evaluation, it is particularly important to present the full range of options, and help participants to recognize that evaluation – rather than being a judgment on their work – can actually be a support and resource to them.
  • Principles and benefits of collaborative evaluation.
  • How confidentiality will be maintained.
  • The processes you are proposing to develop the evaluation plan.

This overview can take as little as 20 minutes if necessary. It allows the evaluator to proactively address many potential misconceptions – misconceptions that could present obstacles both to a) support of and participation in the evaluation and to b) interest in acting on evaluation findings. Additional benefits of this approach include the opportunity to build capacity among evaluation stakeholders, and to begin to establish an environment conducive to collaborative problem-solving.

It is also important to ensure that the parameters of the evaluation are well defined. Often, the various stakeholders involved in the evaluation process will have different ideas of where the evaluable entity begins and ends. Reaching consensus on this at the outset helps to set clear evaluation objectives and to manage stakeholder expectations. This initial consensus will also help you and your partners keep the evaluation realistic in scope as you develop an evaluation plan. Strategies for focusing an evaluation and prioritizing evaluation questions are discussed in Steps 8 and 9.

In Case Study #1 (Unassigned Patients), the introductory overview on evaluation was integrated with the site visits. Key themes were reiterated in the initial evaluation proposal, which was shared with all sites for input. As a result, even though staff from the three institutions had not met together, they developed a shared understanding of the evaluation and agreement on how it would be conducted.

You may find that consensus-building activities fit well into an initial meeting of your evaluation partners. In other cases, such discussions may be more appropriate once all stakeholders have been identified.

In collaborative evaluation with external organizations, it is also important to clarify roles and expectations of researchers and program staff/ managers, and make explicit any in-kind time commitments or requirements for data access. It is particularly important to have a clear agreement on data access, management and sharing (including specifics of when and where each partner will have access) before the evaluation begins.

Before embarking on the evaluation it is also important to clarify what information will be made public by the evaluator. Stakeholders need to know that results of research-funded evaluations will be publicly reported. Similarly, staff need to know that senior executives will have the right to see the results of program evaluations funded by the sponsoring organization.

It is also important to proactively address issues related to 'speaking to' evaluation findings. It is not unknown for a sponsoring organization – fearing that an evaluation report will not be what it hoped for – to choose to present an early (and more positive) version of findings before the final report is released. In some cases, it may not want results shared at all. For this reason it is important to be clear about roles, and to clarify that the evaluator is the person authorized to speak to the findings: to report them accurately, accept speaking invitations, or publish on the results of the evaluation. (Developing and presenting results in collaboration with stakeholders is even better.) Similarly, it is for the program/organization leads to speak to the specific issues related to program design.

Research proposals that include evaluative components are strengthened by clear letters of commitment from research and evaluation partners. These letters should specifically outline the nature and extent of partner involvement in developing the proposal; the structure and processes for supporting collaborative activities; and the commitments and contributions of partners to the proposed evaluation activities (e.g. data access; provision of in-kind services).

Step 7: Describe the program/intervention

Special Note: While this activity is placed in the preparation section of this guide, many evaluators find that, in practice, getting a clear description of the program, and the mechanism of action through which it is expected to work, may not be a simple activity. It is often necessary to delay this activity until later in the planning process, as you may need the active engagement of key stakeholders in order to facilitate what is often a challenging task.

Having stakeholders describe the program is useful for a number of reasons:

  • It may be the first opportunity that stakeholders have had for some time to reflect on the program, its rationale, and evidence for its design.
  • Differences in understanding of how things actually work in practice will quickly surface.
  • It provides a base for 'teasing out' the program theory.

However, you may find that those involved in program management find the process of describing their program or initiative on paper a daunting task. An important role of the evaluator may be to help facilitate this activity.

In Case Study #1 (Unassigned Patients), one deliverable requested by the provincial health department was a description of how each of the different models worked. This early activity took over six months: each time a draft was circulated for review, stakeholders identified additional information and differences of opinion about how things actually worked in practice.

The issue of "logic models"

Many evaluators place a strong emphasis on logic models. Logic models visually illustrate the logical chain of connections showing what the intervention is intended to accomplish. In this way, a logic model is consistent with theory-driven evaluation, as the intent is to get inside the "black box" and articulate program theory. Researchers will be more familiar with 'conceptual' models or frameworks, and there are many similarities between the two. However, a conceptual framework is generally more theoretical and abstract than a logic model, which tends to be program-specific and to include more detail on program activities.

When done well, logic models illustrate the 'if-then' and causal connections between program components and outcomes, and can link program planning, implementation and evaluation. They can be of great benefit in promoting clear thinking and articulating program theory. There are many different formats for logic models, ranging from simple linear constructions to complex, multidimensional representations. The simplest show a logical chain of connections under the headings of inputs (what is invested in the initiative), outputs (the activities and participants), and outcomes (short-, medium- and long-term).

(Figure: a simple linear logic model. One-way arrows flow from inputs to outputs and then outcomes. Under the inputs heading is program investments; under the outputs heading, activities and participation; under the outcomes heading, short-, medium- and long-term outcomes.)
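
As a concrete illustration, the following sketch lays out the same simple linear structure for a hypothetical chronic disease self-management program (all content invented for this example); the point is only that each element of the chain, and the 'if-then' links between them, is made explicit:

```python
# A hypothetical, simplified linear logic model: inputs -> outputs -> outcomes.
logic_model = {
    "inputs": ["nurse educator time", "clinic space", "program funding"],
    "outputs": {
        "activities": ["self-management workshops", "follow-up phone calls"],
        "participation": ["patients newly diagnosed with a chronic condition"],
    },
    "outcomes": {
        "short_term": ["improved self-management knowledge"],
        "medium_term": ["improved self-management behaviours"],
        "long_term": ["improved health status", "fewer emergency visits"],
    },
}

# Reading the chain left to right makes the program theory explicit and open
# to challenge: IF these inputs support these activities, THEN these outcomes
# are expected to follow.
for level, content in logic_model.items():
    print(f"{level}: {content}")
```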

Other logic models are more complex, illustrating multi-directional relationships. See, for example, the various templates developed by the University of Wisconsin.

In spite of the popularity of logic models, they do have potential limitations, and they are not the only strategy for promoting clarity on the theory behind the intervention to be evaluated.

Too often, logic models are viewed as a bureaucratic necessity (e.g. a funder requirement) and the focus becomes one of "filling in the boxes" rather than articulating the program theory and the evidence for the assumptions in the program model. In other words, rather than promoting evaluative thinking, the activity of completing a logic model can inhibit it. Another potential downside is that logic models tend to be based on assumptions of linear, logical relationships between program components and outcomes that do not reflect the complexity in which many interventions take place. Sometimes logic models can even promote simplistic (in-the-box) thinking. Some authors advise that logic models are not appropriate for evaluations within complex environments (Patton, 2011).

Whether or not a graphic logic model is employed as a tool to aid evaluation planning, it is important to be able to articulate the program theory: the mechanisms through which change is anticipated to occur. A program description, as advised above, is a first step to achieving this. Theory can sometimes be effectively communicated through a textual approach outlining the relationships between each component of the program/process ("Because there is strong evidence on X, we have designed intervention Y"). This approach also brings the benefit of a structure that facilitates inclusion of the available evidence for the proposed theory of action.

While presented in a step-wise fashion, the activities described in this section are likely to be undertaken concurrently. Information gathered through each of the activities will inform (and often suggest a need to revisit) other steps. It is important to ensure that these preliminary activities have been addressed before moving into development of the actual evaluation plan.

Section 3: Designing an evaluation

The steps outlined in this section can come together very quickly if the preparatory work advised in Section 2 has been completed. These planning activities are ideally conducted in collaboration with your steering/ planning group.

For these next steps, the module will be based on an evaluation planning matrix (Appendix A). This matrix is not meant to be an evaluation template, but rather a tool to help organize your planning. Caution is needed in using templates in evaluation, as evaluation research is much more than a technical activity: it requires critical thinking, assessment of evidence, careful analysis and clear conceptualization.

The first page of the matrix provides a simple outline for documenting a) the background of the initiative, b) the purpose of the planned evaluation, c) the intended use of the evaluation, d) the key stakeholders (intended evaluation users), and e) the evaluation focus. Completion of the preparatory activities should allow you to complete sections a-d.
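
For illustration only (this is not the module's Appendix A or B; the content below is hypothetical and loosely echoes Case Study #1), the first page of such a matrix might be captured as simply as:

```python
# A hypothetical sketch of the first page of an evaluation planning matrix.
planning_matrix_page_one = {
    "background": "Several hospitals are piloting different models of care "
                  "for unassigned patients.",
    "purpose": "Improvement-oriented evaluation, with judgments on "
               "effectiveness and sustainability.",
    "intended_use": "Inform funding decisions and strengthen each site's service.",
    "intended_users": [
        "provincial department of health",
        "site senior management",
        "regional senior management",
    ],
    "evaluation_focus": None,  # to be confirmed with the steering group (Step 8)
}
```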

This section will start with a discussion of focus (Step 8, below), and then lead through the steps of completing page 2 of the matrix (Steps 9-11). Appendix B provides a simple example of the completed matrix for Case Study 1: Unassigned Patients.
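To make the structure of the matrix concrete, the following sketch (in Python, purely illustrative) shows one way a single row of such a planning matrix might be represented; the field names simply mirror the columns described in Steps 8-11 and are not part of Appendix A.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MatrixRow:
    """One illustrative row of an evaluation planning matrix (hypothetical field names)."""
    overarching_question: str                              # Column 1: rolled-up question
    stakeholder_questions: List[str]                       # Column 2: questions as raised
    methods: List[str] = field(default_factory=list)       # Column 3: e.g. focus groups
    data_sources: List[str] = field(default_factory=list)  # Column 4: e.g. administrative data
    indicators: List[str] = field(default_factory=list)    # Column 5: may be empty
    resources_responsibility: str = ""                     # Column 6: who does what, with what

# Example row loosely inspired by Case Study 1 (Unassigned Patients)
row = MatrixRow(
    overarching_question=("What are the perspectives of, and experiences with, "
                          "the care model among providers, patients and families?"),
    stakeholder_questions=["What do nurses think about this model?",
                           "How open are physicians to changes to the model?"],
    methods=["focus groups", "key informant interviews"],
    data_sources=["nursing staff", "physicians", "patients and families"],
    resources_responsibility="evaluation team, supported by site coordinators",
)
print(row.overarching_question)
```

Representing the matrix this way is only a convenience; the value of the tool lies in the thinking it forces about how each question will actually be answered, not in the format itself.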

Step 8: Confirm purpose, focus the evaluation

Through the preparatory activities you have been clarifying the overall purpose of the evaluation. At this point, it is useful to operationalize the purpose of evaluation by developing a clear, succinct description of the purpose for conducting this particular evaluation. This purpose statement, one to two paragraphs in length, should guide your planning.

It is also important to include a clear statement of how you see the evaluation being used (this should be based on the preparatory meetings with stakeholders), and who the intended users of the evaluation are.

Renegotiation of the purpose of the evaluation of hospital models of care resulted in the following purpose statement:

  • To respond to the request of the provincial government to determine whether the models developed by the sites were a) effective in providing care to unassigned patients, and b) sustainable. (You will note that although the evaluation was framed as improvement-oriented, there was a commitment to make a judgment (value) related to these two specific criteria – which were acceptable to all parties.)
  • To explore both program-specific and system issues that may be affecting timely and quality care for medical patients. This objective reflected the concern identified in pre-evaluation activities that there were system issues – not limited to specific programs – that needed to be addressed.

In keeping with an improvement-oriented evaluation approach, there is no intent to select one 'best model', but rather to identify strengths and limitations of each strategy with the objective of assisting in improving quality of all service models.

Preliminary consultation has also identified three key issues requiring additional research: a) understanding and improving continuity of patient care; b) incorporating provider and patient/family insights into addressing organizational barriers to effective provision of quality inpatient care and timely discharge; and c) the impact of different perspectives of various stakeholders on the effectiveness of strategies for providing this care.

This evaluation will be used by staff of the department of health to inform decisions about continued funding of the programs; by site senior management to strengthen their specific services; and by regional senior management to guide ongoing planning.

This evaluation summarized its purpose as follows:

The purpose of this evaluation research is to identify facilitators and barriers to implementation of decision-support systems in the Canadian health context; to determine the impacts of introduction of the decision-support system; and to develop recommendations to inform any expansion or replication of such a project. It is also anticipated that findings from this evaluation will guide further research.

As these examples illustrate, it is often feasible to address more than one purpose in an evaluation. The critical point is, however, to be clear about the purpose, the intended users of the evaluation, and the approach proposed for working with stakeholders.

So far, we have discussed the purpose of the evaluation, and, in broad terms, some of the possible approaches to evaluation. Another concept that is critical to evaluation planning is that of focus. Whatever the purpose, an evaluation can have any one of dozens of foci (for example, Patton (1997) lists over 50 potential foci). These include:

  • Implementation focus: a focus on the implementation of an initiative. Implementation evaluation is a necessary first step to many evaluations, as without assessment of implementation, it will not be possible to differentiate between failure of program theory (the initiative was poorly thought out, and failure to be expected), and failure to implement appropriately a theoretically sound and potentially successful intervention. While many authors incorporate implementation evaluation into general formative evaluation, it is often useful, in early phases of an evaluation plan, to focus specifically on implementation questions.
  • Goals-based evaluation is familiar to most readers. The focus is on evaluating the extent to which an initiative has met its stated objectives.
  • Goals-free evaluation, in contrast, takes a broader view and asks "what actually happened?" as a result of the intervention. While it is generally wise to consider the original objectives, goals-free evaluation allows assessment of unintended consequences (whether positive or negative). This is of particular importance when evaluating initiatives situated in complex systems, as small changes in one area may result in big impacts elsewhere. These unanticipated impacts may be of much greater importance than whether the stated objectives were met.

Case study #2: Computerized decision support

Through exploring the experience and perspectives of all stakeholders, the evaluation found that those receiving the computerized orders, while finding them easier to read (legibility was no longer a problem), also found that they contained less useful information due to the closed-ended drop-down boxes that replaced open-ended physician descriptions of the presenting problem. This issue was not one that had been identified as an objective, but it had important implications for future planning.

  • Impact evaluation assesses the changes (positive or negative, intended or unintended) of a particular intervention. These changes may not be limited to direct effects on participants.
  • Outcome evaluation investigates long-term effects of an intervention on participants. Few evaluations are in a position to measure long-term outcomes: most measure short-term outcomes (e.g. processes) or intermediate outcomes (e.g. behavior, policy change).
  • Cost-benefit analysis and cost-effectiveness analysis explore the relationship between program costs and outcomes (with outcomes expressed in dollars, or in non-monetary terms, respectively).

It is also important to sequence evaluation activities – the focus you select will depend at least in part on the stage of development of the initiative you are evaluating. A new program, which is in the process of being implemented, is not appropriate for outcome evaluation. Rather, with few exceptions, it is likely that the focus should be on implementation evaluation. Implementation evaluation addresses such questions as:

  • To what extent was the initiative implemented as designed?
  • Were the resources, skills and timelines allocated adequate?
  • Are data collection systems adequate to collect data to inform outcome evaluation?
  • What obstacles to implementation and uptake can be identified, and how can they be addressed?

A program that has been implemented and running for some time may select a number of different foci for an improvement-oriented evaluation.

Many summative (judgment-oriented) evaluations are likely to take a focus that is impact or outcome focused.

It is important to keep in mind that a focus helps set parameters around your activities. The potential scope of any evaluation is usually much broader than the resources available. This, in addition to the need to sequence evaluation activities, makes it useful to define your focus.

Step 9: Identify and prioritize evaluation questions

Only when preparatory activities have been completed is it time to move on to identifying the evaluation questions. This is not to say that a draft of evaluation questions may not have already been developed. If only the research team is involved, questions may already be clearly defined; if you have been commissioned to undertake an evaluation, at least some of the evaluation questions may be predetermined. However, if you have been meeting with different stakeholders, they are likely to have identified questions of concern to them. The process of developing the evaluation questions is a crucial one, as they form the framework for the evaluation plan.

At this point we move on to page 2 of the evaluation planning matrix. It is critical to 'start with the question'; i.e. with what we want to learn from the evaluation. Too often, stakeholders can become distracted by first focusing on the evaluation activities they would like to conduct (e.g. We should conduct interviews with physicians), the data they think is available (e.g. We can analyze data on X), or even the indicators that may be available. But without knowing what questions the evaluation is intended to answer, it is premature to discuss methods or data sources.

Get the questions on the table (Column 2)

In working with evaluation stakeholders, it is often more useful to solicit evaluation questions with wording such as "what do you hope to know at the end of this evaluation that you don't know now?" rather than "what are the evaluation questions?" The latter question is more likely to elicit specific questions for an interview, focus group, or data query than to identify questions at the level you will find helpful.

If you are conducting a collaborative evaluation, a useful strategy is to incorporate a discussion (such as a brainstorming session) with your stakeholder group. You will often find that, if there is good participation, dozens of evaluation questions may be generated – often broad in scope, and at many different levels. The scope of questions can often be constrained if there is a clear consensus on the purpose and focus of the evaluation – the reason that leading the group through such a discussion (Section 2, Step 6) is useful.

The next step for the evaluator is to help the group rework these questions into a format that is manageable. This usually involves a) 'rolling up' the questions into overarching questions, and b) being prepared to give guidance as to sequence of questions. These two activities will facilitate the necessary task of prioritizing the questions: reaching consensus on which are of most importance.

Create overarching questions (Column 1)

Questions generated by knowledge users are often subquestions of a larger question. The task of the evaluator is to facilitate the roll-up of questions into these overarching ones. Because it is important to demonstrate to participants that the questions of concern to them are not lost, it is often useful to keep note (in Column 2 of the matrix) of all the questions of concern.

The stakeholders at the three sites generated a number of questions, many of which were similar. For example: "I want to know what nurses think about this model", "I want to know about the opinions of patients on this change", "How open are physicians to changes to the model?"

These questions could be summarized in an overarching question: "What are the perspectives of, and experiences with, the care model among physicians, nurses, patients, families, and other hospital staff?"

It is common for knowledge users (and researchers) to focus on outcome-related questions. Sometimes it is possible to include these questions in the evaluation you are conducting, but in many cases – particularly if you are in the process of implementing an initiative – it is not. As discussed earlier, for example, it is not appropriate to evaluate outcomes until you are sure that an initiative has been fully implemented. In other cases, the outcomes of interest to knowledge users will not be evident until several years into the future – although it may be feasible to measure intermediate outcomes.

However, even if it is not possible to address an outcome evaluation question in your evaluation, it is important to take note of these desired outcomes. First, this will aid in the development of program theory; second, noting the desired outcome measures is an essential first step in ensuring that there are adequate and appropriate data collection systems in place that will facilitate outcome evaluation in the future.

If it is not possible to address outcome questions, be sure to clearly communicate that these are important questions that will be addressed at a more appropriate point in the evaluation process.

Even when the evaluation questions have been combined and sequenced, there are often many more questions of interest to knowledge users than there is time (or resources) to answer them. The role of the evaluator at this point is to lead discussion to agreement on the priority questions. Some strategies for facilitating this include:

  • Focusing on how the information generated through the evaluation will be used. Simply asking the question "When we find out the answer to X, how will that information be used?" will help differentiate between questions that are critical for action and those that are driven by curiosity. A useful strategy is to suggest that, given the time/resource constraints we all face, the priority should be to focus on questions that we know will result in action.
  • Referring back to the purpose of the evaluation, and to any funder requirements.
  • Addressing feasibility. There are rarely the resources (in funds or time) to conduct all the evaluations of interest. Focusing on funder timelines and available resources will often help eliminate questions that, while important, are beyond the scope of your evaluation.
  • Revisiting the issue of sequence. Even though 'early' evaluation questions may not be of as much interest to stakeholders as outcome questions, it is often useful to develop a phased plan, illustrating at what phase it is best to address a particular question. The evaluation matrix can be adapted to include sections that highlight questions at different phases (e.g. implementation evaluation, improvement-oriented evaluation, outcome evaluation questions).
  • Exploring the potential of additional resources to investigate some questions. Some questions generated may be broad research questions. They are important, but there may not be an urgency to answer them. In such cases, there may be interest in investigating the potential of additional research funding to explore these questions at a later date.

If there is time, the steering/planning group can participate in this 'rolling up' and prioritization activity. Another alternative is for the evaluator to develop a draft based on the ideas generated and to circulate it for further input.

It is only when the evaluation questions have been determined that it is appropriate to move on to the next steps: evaluation design, selection of methods and data sources, and identification of indicators.

Step 10: Select methods and data sources (Columns 3 and 4)

Only when you are clear on the questions, and have prioritized them, is it time to select methods. In collaborative undertakings you may find that strong facilitation is needed to reach consensus on the questions, as stakeholders are often eager to move ahead to discussion of methods. The approach of 'starting with the question' may also be a challenge for researchers, who are often highly trained in specific methodologies and methods. It is important in evaluation, however, that methods be driven by the overall evaluation questions, rather than by researcher expertise.

Evaluators often find that many evaluations require a multi-method approach. Some well-designed research and evaluation projects can generate important new knowledge using only quantitative methods. However, in many evaluations it is important to understand not only if an intervention worked (and to measure accurately any difference it made) but to understand why the intervention worked – the principles or characteristics associated with success or failure, and the pathways through which effects are generated. The purpose of evaluating many pilot programs is to determine whether the program should be implemented in other contexts, not simply whether it worked in the environment in which it was evaluated. These questions generally require the addition of qualitative methods.

Your steering committee will also be helpful at this stage, as they will be able to advise you on the feasibility – and credibility – of certain methods.

Case study #1: Unassigned Patients

When the request for the evaluation was made, it was assumed that analysis of administrative data would be the major data source for answering evaluation questions. In fact, the data available was only able to provide partial insights into some of the questions of concern.

While the overall plan for the evaluation suggested focus groups would be appropriate for some data collection, the steering group highlighted the challenges in bringing physicians and hospital staff together as a group. They were, however, able to suggest strategies to facilitate group discussions (integrating discussions with staff meetings, planning a catered lunch, and individualized invitations from respected physician leaders).

The process of identifying data sources is often interwoven with that of selecting methods. For example, if quantitative program data are not available to inform a specific evaluation question, there may be a need to select qualitative methods. In planning a research project, if the needed data were not available, a researcher may decide to remove a particular question from the study. In evaluation, this is rarely acceptable – if the question is important, there should be an effort to begin to answer it. As Patton (1997) has observed, it is often better to get a vague or fuzzy answer to an important question than a precise answer to a question no one cares much about. The best data sources in many cases are specific individuals!

Remember that many organizations have formal approval processes that must be followed before you can have access to program data, staff or internal reports.

Identify appropriate indicators (Column 5)

Once evaluation questions have been identified, and methods and data sources selected, it is time to explore what indicators may be useful.

An indicator can be defined as a summary statistic used to give an indication of a construct that cannot be measured directly. For example, we cannot directly measure the quality of care, but we can measure particular processes (e.g., adherence to best-practice guidelines) or outcomes (e.g., number of falls) thought to be related to quality of care. Good indicators:

...should actually measure what they are intended to (validity); they should provide the same answer if measured by different people in similar circumstances (reliability); they should be able to measure change (sensitivity); and, they should reflect changes only in the situation concerned (specificity). In reality, these criteria are difficult to achieve, and indicators, at best, are indirect or partial measures of a complex situation (Alberta Heritage Foundation for Medical Research, 1998: 5).

However, it is easy to overlook the limitations both of particular indicators and of indicators in general. Some authors have observed that the statement "we need a program evaluation" is often immediately followed by "we have these indicators," without consideration of exactly which question the indicators will answer (Bowen & Kreindler, 2008).

An exclusive focus on indicators can lead to decisions being data-driven rather than evidence-informed (Bowen et al., 2009). It is easy to respond to issues for which indicators are readily available, while ignoring potentially more important issues for which such data is not available. Developing activities around "what existing data can tell us," while a reasonable course for researchers, can be a dangerous road for both decision-makers and evaluators, who may lose sight of the most important questions facing the healthcare system. It has been observed that "the indicator-driven approach 'puts the cart before the horse' and often fails" (Chesson, 2002: 2).

Not all indicators are created equal, and an indicator's limitations may not be obvious. Many indicators are 'gameable' (i.e. metrics can be improved without substantive change). For example, breastfeeding initiation is often used as an indicator of child health, as it is more easily measured than breastfeeding duration. However, lack of clear coding guidelines, combined with pressure on facilities to increase breastfeeding rates, appears to have produced a definition of initiation as, "the mother opened her gown and tried" (Bowen & Kreindler, 2008). It is not surprising, then, that hospitals are able to dramatically increase 'breastfeeding rates' if a directive is given to patient care staff, who are then evaluated on the results. This attempt, however, does not necessarily increase breastfeeding rates following hospital discharge. This example also demonstrates that reliance on a poor indicator can result in decreased attention and resources for an issue that may continue to be of concern.

The following advice is offered to avoid these pitfalls in indicator use in evaluation:

  • First determine what you want to know. Don't start with the data (and indicators) that are readily available.
  • In selecting indicators, evaluate them for validity, robustness and transferability before proposing them. Don't just use an indicator because it's available.
  • Understand what the indicator is really telling you – and what it isn't.
  • Limit the number of indicators, focusing resources on the strongest ones.
  • Choose indicators that cannot be easily gamed.
  • Ensure that those who gather and analyze the data (and are aware of what an indicator is actually measuring, data quality, etc.) are included on your team.
  • Remember that there may not be an appropriate indicator for many of the evaluation questions you hope to address (Bowen & Kreindler, 2008).

At the beginning of this evaluation it was assumed that assessment of impact would be fairly straightforward: the proposed indicator for analysis was hospital length of stay (LOS). However, discussions with staff at one centre uncovered that:

  • Health information staff were being asked to 'run' the data in two different ways by heads of different departments. This resulted in different calculations of LOS (a simple illustration follows this list).
  • Although the LOS on the ward (the selected indicator) showed a decline, the LOS in the emergency department was actually rising. Attention to the selected indicator risked obscuring problems created elsewhere in the system.
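To illustrate the kind of discrepancy described above, the short Python sketch below (with invented timestamps) shows how two equally plausible operational definitions of LOS, one counting from arrival in the emergency department and one from admission to the ward, yield different values for the same patient.

```python
from datetime import datetime

# Invented timestamps for a single hypothetical admission
ed_arrival     = datetime(2024, 3, 1, 22, 30)   # arrival in the emergency department
ward_admission = datetime(2024, 3, 3, 10, 0)    # transfer to the medical ward
discharge      = datetime(2024, 3, 8, 14, 0)    # discharge home

# Definition A: LOS counted from ED arrival
los_from_ed = (discharge - ed_arrival).days        # 6 days

# Definition B: LOS counted from ward admission
los_from_ward = (discharge - ward_admission).days  # 5 days

print(f"LOS from ED arrival:     {los_from_ed} days")
print(f"LOS from ward admission: {los_from_ward} days")
```

Unless the evaluation team agrees on, and documents, a single operational definition, the 'same' indicator can move in different directions across reports – exactly the situation encountered here.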

Step 11: Clarify resources and responsibility areas (Column 6)

At this point, we are ready to move the evaluation plan into operation. It is necessary to ensure that you have the resources to conduct the proposed evaluation activities, and know who is responsible for conducting them. This final column in the matrix provides the base from which an evaluation workplan can develop.

This module does not attempt to provide detailed information on project implementation and management, although some resources to support this work (e.g. checklists) are included in the bibliography. However, as you conduct the evaluation it is important to:

  • Monitor (and evaluate!) implementation of activities. Be prepared to revise the plan when obstacles are identified. Unlike some research designs, in evaluation it is not always possible – or advisable – to hold the environment constant while the evaluation is occurring.
  • Ensure regular feedback and review of progress by stakeholders. Don't leave reporting until the evaluation is completed. As issues are identified, they should be shared and discussed with stakeholders. In some evaluations (e.g. improvement-oriented or developmental evaluation) there will be an intent to act on findings as soon as they are identified. Even if there is no intent to change anything as the result of emerging findings, remember that people don't like surprises – it is important that stakeholders are not only informed of progress (and any difficulties with the evaluation), but also alerted to potentially contentious or distressing findings.

Even though it is recommended that there be regular reports (and opportunities for discussion) as the evaluation progresses, it is often important to provide a detailed final evaluation report. This report should be tailored to the intended users of the evaluation, and should form the basis of any presentations or academic publications, helping promote consistency if there are multiple authors or presenters.

Evaluation frequently faces the challenge of communicating contentious or negative findings. Issues related to communication are covered in more detail in Section 4, Ethics and Evaluation .

There are many guidelines for developing reports for knowledge users. The specifics will depend on your audience, the scope of the evaluation and many other factors. A good starting point is the CFHI resources providing guidance in communicating with decision-makers.

Section 4: Special issues in evaluation

Ethics and evaluation

Evaluation societies have clearly identified ethical standards of practice. The Canadian Evaluation Society (n.d.) provides Guidelines for Ethical Conduct (competence, integrity, accountability), while the American Evaluation Association (2004) publishes Guiding Principles for Evaluators (systematic inquiry, competence, integrity/honesty, respect for people, and responsibilities for general and public welfare).

The ethics of evaluation are an important topic in evaluation journals and evaluation conferences. Ethical behavior is a 'live issue' among professional evaluators. This may not be apparent to researchers as, in many jurisdictions, evaluation is exempt from the ethical review processes required by universities.

In addition to the standards and principles adopted by evaluation societies, it is important to consider the ethical issues specific to the type of evaluation you are conducting. For example, there are a number of ethical issues related to collaborative and action research or undertaking organizational research (Flicker et al., 2007; Aldred, 2008; Bell & Bryman, 2007).

Evaluators also routinely grapple with ethical issues which, while also experienced by those conducting some forms of research (e.g. participatory action research), are not found in much academic research. Some of these issues include:

Managing expectations. Many program staff welcome an evaluation as an opportunity to 'prove' that their initiative is having a positive impact. No ethical evaluator can ensure this, and it is important that the possibility of unwanted findings – and the evaluator's role in articulating these – is clearly understood by evaluation sponsors and affected staff.

Sharing contentious or negative findings. Fear that stakeholders may attempt to manipulate or censor negative results has led to evaluators either keeping findings secret until the final report is released, or adjusting findings to make them politically acceptable. While the latter is clearly ethically unacceptable, the former also has ethical implications. It is recommended that there be regular reports to stakeholders in order to prepare them for any negative or potentially damaging findings. One of the most important competencies of a skilled evaluator is the ability to speak the truth in a way that is respectful and avoids unnecessary damage to organizations and participants. Some strategies that you may find helpful are to:

  • Involve stakeholders in interpretation of emerging results and planning for release of findings.
  • Frame findings neutrally – being careful not to assign blame.
  • Consider a private meeting if there are sensitive findings affecting one stakeholder to prepare them for public release of information.

Research ethics boards (REBs) vary in how they perceive their role in evaluation. Some, reflecting the view that evaluation is different from research, may decline to review evaluation proposals unless they are externally funded. Other REBs, including institutional boards, require ethical review. This situation can create confusion for researchers. It can also present challenges if researchers feel that their initiative requires REB review (as they are working with humans to generate new knowledge) but there is reluctance on the part of the REB to review their proposal. Unfortunately, there may also be less attention paid to ethical conduct of activities if the initiative is framed as evaluation rather than as research. Some REBs may also have limited understanding of evaluation methodologies, which may affect their ability to appropriately review proposals.

Health organizations can, and will, proceed with internal evaluation related to program management, whether or not there is REB approval. There is an often vociferous debate in the literature about the difference between Quality Improvement and Research – and the role of REBs in QI (Baily et al., 2007). Evaluation activities are generally considered to be Quality Improvement, and there is resistance on the part of the health system in many jurisdictions to the involvement of Ethics Boards in what organizations see as their daily business (Haggerty, 2004).

Unfortunately, this grey zone in REB review often results in less attention being given to the ethical aspects of evaluation activities. In other words, the attention is directed to the ethics review process (and approval), rather than the ethical issues inherent to the project. In some cases, there may even be a deliberate decision to define an activity as evaluation rather than research simply to avoid the requirement of ethical review. This lack of attention to the very real ethical issues posed by evaluation activities can often pose significant risks to staff, patients/clients, organizations and communities – risks that are sometimes as great as those posed by research activities. These risks include:

  • Negative impacts on staff. There is a risk that difficulties identified with a program may be attributed to specific staff. Some organizations are even known to conduct a program evaluation as a way to avoid dealing with staff performance – presenting a potential trap (and ethical dilemma) for the evaluator. This practice is also one reason why evaluation is often threatening to staff. Even if this is not the intent, negative findings of an evaluation can have disastrous effects on staff.
  • Opportunity costs. Inadequate (or limited) evaluation can result in continuing to provide resources to support a lackluster service, meaning that other initiatives cannot be funded.
  • Negative impacts on patients/clients. Not only clinical interventions, but also system redesign interventions, can have negative impacts on patients and families – it is essential that the evaluation is designed to assess potential impacts.
  • Impact on organizational reputation. Concerns about results of evaluations becoming public may lead to organizational avoidance of external evaluation. As indicated earlier, it is essential to negotiate the terms of any evaluation reporting involving an external partner before the evaluation is conducted.

There is growing interest in evaluating complex initiatives – and many initiatives (particularly in the Population Health and Health Services environments) are, by their very nature, complex.

It has been claimed that one of the reasons so little progress has been made in resolving the many problems facing us in the healthcare system, is that we continue to treat complex problems as though they were simple or complicated ones.

Evaluation design must match the complexity of the situation (Patton, 2011). Simple problems reflect linear cause-effect relationships: issues are fairly clear, and it is usually not difficult to get agreement on the 'best' answer to a given problem. In such cases there is a high level of both a) certainty about whether a certain action will result in a given outcome, and b) agreement on the benefits of addressing the issue. Simple problems are relatively easy to evaluate – evaluation usually focuses on outcomes. An example would be the evaluation of a patient education program to increase knowledge of chronic disease self-management. This knowledge could be measured with a pre/post design. Findings from evaluations of simple interventions may be replicable.
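As a concrete illustration of such a pre/post design, the sketch below compares invented knowledge scores for the same ten participants before and after a hypothetical education session, using a paired t-test (one reasonable analysis among several).

```python
from scipy import stats

# Invented knowledge scores (0-20 scale) for the same ten participants
pre_scores  = [8, 11, 9, 12, 7, 10, 13, 9, 8, 11]
post_scores = [12, 14, 11, 15, 10, 13, 16, 12, 11, 14]

# Mean change from pre to post
mean_change = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)

# Paired t-test: is the mean change different from zero?
result = stats.ttest_rel(post_scores, pre_scores)

print(f"Mean change in knowledge score: {mean_change:.1f} points")
print(f"Paired t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```

Even a statistically significant change in such a design does not, by itself, establish that the program caused it; that caution is taken up under attribution and contribution below.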

In complicated problems, cause and effect are still linked in some way, but there are many possible good answers – not just one best way of doing things. There is lack of either certainty about the outcome (technically complicated), or agreement on the benefits of the intervention (socially complicated) (Patton, 2011). An example of the latter would be provision of pregnancy termination services or safe injection sites. Evaluation in complicated contexts is more difficult, as there is need to explore multiple impacts from multiple perspectives. There are many, often diverse, stakeholders.

In complex systems it is not possible to predict what will happen (Snowden & Boone, 2007). The environment is continually evolving and small things can have significant and unexpected impacts. Evaluation in complex environments requires a great deal of flexibility – it needs to take place in 'real time': the feedback from the evaluation itself serves as an intervention. There are no replicable solutions, as solutions are often context-specific. Evaluation in a complex environment requires identification of principles that are transferable to other contexts – where the actual intervention may look quite different. This lack of clear cause-effect relationships (which may be apparent only in retrospect) explains the limitations of logic models in such environments (although their use may help to test assumptions in underlying program theory).

The complexity of "causation"

In research, much attention is directed in study design to identifying and minimizing sources of bias and confounding, and to distinguishing between causation and correlation. These design considerations are equally important in evaluation, and can often be more challenging than in some research projects, as there is usually not the opportunity to create a true experiment in which all conditions are controlled. In fact, this inability to control the environment in which the initiative to be evaluated is taking place is one of the greatest challenges identified by many researchers.

A major task in evaluation is that of differentiating between attribution and contribution. A common evaluation error is to select a simple pre-post (or before and after) evaluation design, and to use any differences in data measured to draw conclusions about the impact of the initiative. In 'real life' situations, of course, there are many other potential causes for the observed change. Commonly, the intervention can be expected to contribute to some of the change, but rarely all of it. The challenge, then, is to determine the extent of the contribution of the intervention under study.

Some authors provide detailed formulae for helping evaluators determine the proportion of effect that can be assumed to be contribution of the specific intervention (see for example Davidson, 2005, Chapter 5). Strategies often used by evaluators to help determine the 'weight' that should be given to the contribution of the intervention include:

  • Triangulation. Triangulation refers to use of multiple methods, data sources, and analysts to increase depth of understanding of an issue.
  • A focus on reflexivity and evaluative thinking throughout the phase of data interpretation.
  • Incorporation of qualitative methods where those directly involved can be asked to comment on other factors potentially contributing to results.

For example, one evaluation of a Knowledge Translation initiative conducted in collaboration with health regions included interviews with CEOs and other key individuals in addition to assessing measurable changes. Participants were asked directly about factors that had, over the preceding years, contributed to increased use of evidence in organizational planning. Responses included a range of other potential contributors (e.g. changes by the provincial department of health to the planning process; increased access to library resources; new organizational leadership) in addition to the intervention under evaluation. Identification of these factors, and of the relative impact attributed by stakeholders to each in promoting change, assisted in determining the extent of the contribution of the project compared to other events occurring at the time.

Decision-makers are often interested in an economic evaluation of an intervention (or interventions). The purpose of economic evaluation (defined as the comparison of two or more alternative courses of action in terms of both their costs and consequences) is to help determine whether a program or service is worth doing compared with other things that could be done with the same resources (Drummond et al., 1997). If, however, only the costs of two or more alternatives are compared (without consideration of the effects or consequences of these alternatives), this is not a full economic evaluation; rather, it is a cost analysis.

Unfortunately, those requesting economic evaluation (and some of those attempting to conduct it) often equate costing analyses (assessment of the costs of a program or elements of a program) with economic evaluation, which can lead into dangerous waters. Before drawing conclusions, it is also necessary to know the costs of other alternatives, and the consequences of these alternatives – not only to the program under study but from the perspective of the larger health system or society. One risk in simple costing studies is that a new service often has a separate budget line, whereas the costs of continuing with the status quo may be hidden and not available for analysis.
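To make the distinction concrete, the sketch below (all figures invented) contrasts a bare cost comparison with a highly simplified incremental cost-effectiveness calculation for two hypothetical models of care; a real economic evaluation would involve far more, as the case study that follows illustrates.

```python
# Invented annual figures for two hypothetical models of care
model_a = {"cost": 1_200_000, "readmissions_avoided": 150}
model_b = {"cost": 1_450_000, "readmissions_avoided": 230}

# A cost analysis compares costs only: model A simply looks "cheaper".
extra_cost = model_b["cost"] - model_a["cost"]
print(f"Additional cost of model B: ${extra_cost:,}")

# An economic evaluation also weighs consequences. A simplified
# incremental cost-effectiveness ratio (ICER): extra cost per extra
# readmission avoided when choosing model B over model A.
extra_effect = model_b["readmissions_avoided"] - model_a["readmissions_avoided"]
icer = extra_cost / extra_effect
print(f"ICER: ${icer:,.0f} per additional readmission avoided")
```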

Case study #1: Unassigned Patients

The department of health originally requested an economic evaluation of the various hospital models. Preliminary investigation revealed that the plan was, essentially, to conduct a cost analysis of only one component of the model – the actual physician costs. This is not an economic evaluation: to demonstrate which of the models was most cost-effective would require not only calculation of the other costs of each model (e.g. nursing costs, test ordering), but also of the consequences of each (e.g. readmissions, LOS, costs to other parts of the system such as home care or primary care).

While an economic evaluation would have been extremely useful, the evaluators had neither the funds nor the data readily available to conduct one. Consequently, simply reporting on the physician costs could have led to flawed decision-making. In this situation, the evaluators explained the requirements for conducting a full economic evaluation, and the limited feasibility of conducting one with the data available.

In responding to a request to undertake an economic evaluation it is important to:

  • Ensure that you have health economist expertise on your team
  • Undertake a preliminary assessment of the data available (and its quality) to undertake economic analyses
  • Be realistic about the resources needed to undertake an economic evaluation. Most small evaluations do not have the resources to undertake economic evaluation, and the data to do so may not be available. Remember that a poorly designed 'economic evaluation' can lead to poorer decisions than no evaluation at all.
  • Ensure that those who understand the program, and the context within which it is operating, are integrally involved in planning the evaluation.
  • Be prepared to educate those requesting the evaluation on what economic evaluation entails, and the limitations of a costing study.

The concept of the "personal" factor

A useful concept in evaluation is that of the "personal factor" (Patton, 1997). This concept recognizes that the individual(s) in key roles are often important factors in the success or failure of any initiative: their knowledge and skill; their commitment to the initiative; the credibility they have with peers; their ability to motivate others. The best example might be that of a set curriculum: different instructors can result in vastly different student assessments of exactly the same course. Many evaluations require consideration of the personal factor before drawing conclusions about the value of the initiative.

At the same time, it is important to ensure that recognition and assessment of the 'personal factor' does not degenerate into a personnel assessment. Nor is it useful for those looking to implement a similar initiative to learn only that the initiative was successful because of the wonderful director/staff. What is needed is clear articulation of what personnel factors contributed to the findings, and for these to be communicated in a positive way.

The intent of this module has been to provide researchers with sufficient background on the topic of evaluation that they will be able to design a range of evaluations to respond to a variety of evaluation needs. A secondary objective was to assist reviewers in assessing the quality and appropriateness of evaluation plans submitted as part of a research proposal.

Two of the key challenges encountered by researchers in conducting evaluations are a) the requirement for evaluators to be able to negotiate with a variety of stakeholders, and b) the expectation that evaluations will provide results that will be both useful and used. With this in mind, the module has provided additional guidance on designing and implementing collaborative, utilization-focused evaluations.

References

Alberta Heritage Foundation for Medical Research. SEARCH. A Snapshot of the Level of Indicator Development in Alberta Health Authorities. Toward a Common Set of Health Indicators for Alberta (Phase One). Edmonton: AHFMR; 1998.

Aldred R. Ethical Issues in contemporary research relationships. Sociology. 2008 42: 887-903.

Alkin MC, Christie CA. An evaluation theory tree. In: Alkin MC. Evaluation Roots: Tracing Theorists' Views and Influences. Sage; 2004. Chapter 2; p. 12-65.

American Evaluation Association. Guiding Principles for Evaluators. 2004. Available from: Guiding Principles for Evaluators .

Baily L, Bottrell M, Jennings B, Levine R, et al. The ethics of using quality improvement methods in health care. Annals of Internal Medicine. 2007;146:666-673.

Bell E, Bryman A. The ethics of management research: an exploratory content analysis. British Journal of Management. 2007;18:63-77.

Bonar Blalock A. Evaluation Research and the performance management movement: From estrangement to useful integration? Evaluation 1999; 5 (2): 117-149.

Bowen S, Erickson T, Martens P, The Need to Know Team. More than "using research": the real challenges in promoting evidence-informed decision-making. Healthcare Policy. 2009; 4 (3): 87-102.

Bowen S, Kreindler, S. Indicator madness: A cautionary reflection on the use of indicators in healthcare. Healthcare Policy. 2008; 3 (4): 41-48.

Campbell DT, Stanley JC. Experimental and Quasi-Experimental Designs for Research. Boston: Houghton Mifflin Company; 1966.

Canadian Evaluation Society. Vision, Mission, Goals, Guidelines for Ethical Conduct. (n.d.). Available from: Canadian Evaluation Society.

Cargo M, Mercer SL. The value and challenges of participatory research: strengthening its practice . Annu Rev Public Health. 2008; 29: 325-50.

Chen H-T, & Rossi PH. Evaluating with sense: The theory-driven approach. Evaluation Review. 1983; 7(3):283-302.

Chesson JC. Sustainability indicators: Measuring our progress. Science for Decision Makers. 2002;2:1-7.

Coryn CLS, Noakes LA, Westine CD, Schoter DC. A systematic review of theory-driven evaluation practice from 1990 to 2009. American J Evaluation. 2011, June 32 (2): 199-226.

Davidson J. Evaluation methodology basics: the nuts and bolts of sound evaluation. Thousand Oaks: Sage Publications; 2005.

Drummond M, O'Brien B, Stoddart G, Torrance G. Methods for the Evaluation of Health Care Programmes. 2nd ed. Oxford: Oxford University Press; 1997.

Flicker S, Travers R, Guta A, McDonald S, Meagher A. Ethical dilemmas in community-based participatory research: Recommendations for institutional review boards. Journal of Urban Health. 2007;84(4):478-493.

Gamble JA. A developmental evaluation primer . The JW McConnell Family Foundation; 2006. Available from: The JW McConnell Family Foundation.

Glass GV, Ellett FS. Evaluation Research. Annu Rev Psychol 1980;31:211-28.

Guba EG & Lincoln YS. Fourth Generation Evaluation. Newbury Park, CA: Sage Publications; 1989.

Haggerty KD. Ethics creep: Governing social science research in the name of ethics. Qualitative Sociology. 2004;27(4):391-413.

Levin-Rozalis M. Evaluation and research: Differences and similarities. Canadian Journal of Program Evaluation. 2003; 18 (2) 1–31.

Patton MQ. Evaluation for the way we work. Nonprofit Quarterly. 2006.

Patton MQ. Utilization-Focused Evaluation. 3rd Edition. Thousand Oaks: Sage Publications; 1997.

Patton MQ. Developmental evaluation: applying complexity concepts to enhance innovation and use. Guilford Press; 2011.

Preskill H. Evaluation's second act: A spotlight on learning. American Journal of Evaluation. 2008 29(2): 127-138.

Robert Wood Johnson Foundation. Guide to Evaluation Primers. Association for the Study and Development of Community; 2004.

Rossi P, Lipsey MW & Freeman H. Evaluation: A Systematic Approach. 7th Edition. Thousand Oaks, CA: Sage Publications; 2004.

Scriven M. Evaluation ideologies. In: Madaus GF, Scriven M, Stufflebeam DS (eds). Evaluation Models: Viewpoints on Educational and Human Services Evaluation (p. 229-260). Boston: Kluwer-Nijhoff; 1983.

Scriven M. Evaluation thesaurus (4th Edition). Thousand Oaks: Sage Publications; 1991.

Shadish WR. The common threads in program evaluation. Prev Chronic Dis [serial online]. 2006 Jan [date cited].

Shadish WR, Cook TD, Leviton LC. Scriven M: The science of valuing. In: Shadish WR, Cook TD, Leviton LC (eds). Foundations of Program Evaluation: Theories of Practice. Newbury Park: Sage Publications; 1991. p. 73-118.

Snowden DJ, Boone ME. A leader's framework for decision making. Harvard Business Review. November 2007.

Section 5: Resources

American Evaluation Association .

ARECCI (A Project Ethics Community Consensus Initiative). (REB/Ethical considerations) .

Bhattacharyya OK, Estey EA, Zwarenstein M. Methodologies to evaluate the effectiveness of knowledge translation interventions: A primer for researchers and health care managers. Journal of Clinical Epidemiology. 64(1):32-40.

Canadian Evaluation Society .

Holden DJ, Zimmerman M. A Practical Guide to Program Evaluation Planning. Thousand Oaks: Sage Publications; 2009.

Judge K, Bauld L. Strong theory, flexible methods: evaluating complex community-based initiatives. Critical Public Health. 2001;11(1):19-38.

King JA, Morris LL, Fitz-Gibbon CT. How to Assess Program Implementation. Newbury Park, CA: Sage Publications; 1987.

Pawson R, Tilley N. Realistic Evaluation. Sage Publications; 1997.

Public Health Agency of Canada. Program evaluation toolkit .

Shadish WR, Cook TD, Leviton LC. Foundations of Program Evaluation: Theories of Practice. Newbury Park: Sage Publications; 1991.

University of Wisconsin – Extension. Logic Model . Program Development and Evaluation.

Western Michigan University. The Evaluation Centre .

W.K. Kellogg Foundation . Evaluation Handbook.

"Black box" evaluation: Evaluation of program outcomes without investigating the mechanisms (or being informed by the program theory) presumed to lead to these outcomes.

Collaborative evaluation : An evaluation conducted in collaboration with knowledge users or those affected by a program. There are many approaches to collaborative evaluation, and the partners in a particular collaborative evaluation may vary depending on the purpose of the evaluation. Sharing of decision-making around evaluation questions, and interpretation of findings is implied; however, the degree of involvement may vary.

Developmental evaluation : An evaluation whose purpose is to help support the design and development of a program or organization. This form of evaluation is particularly helpful in rapidly evolving situations.

Economic evaluation : The comparison of two or more alternative courses of action in terms of both their costs and consequences. The purpose of economic evaluation is to help determine whether a program or service is worth doing compared with other things that could be done with the same resources.

Evaluation research: A research project that has as its focus the evaluation of some program process, policy or product. Unlike program evaluation, evaluation research is intended to generate knowledge that can inform both decision-making in other settings and future research.

External evaluation : Evaluation conducted by an individual or group external to and independent from the initiative being evaluated.

Formative evaluation : An evaluation conducted for the purpose of finding areas for improving an existing initiative.

Goals-based evaluation : An evaluation that is designed around the stated goals and objectives of an initiative. The purpose of the evaluation is to determine whether these goals and objectives have been achieved.

Goals-free evaluation : An evaluation that focuses on what is actually happening as a result of the initiative or intervention. It is not limited by stated objectives.

Implementation evaluation: An evaluation that focuses on the process of implementation of an initiative.

Improvement-oriented evaluation : See formative evaluation.

Intermediate outcomes: A measurable result that occurs between the supposed causal event and the ultimate outcome of interest. While not the final outcome, an intermediate outcome is used as an indicator that progress is being made towards it.

Internal evaluation : Evaluation conducted by staff of the program that designed and/or implemented the intervention. Also applies to researchers evaluating the results of their own research project.

Multi-method evaluation : An evaluation that uses more than one method. See also 'triangulation'.

Outcome evaluation : An evaluation that studies the immediate or direct effects of the program on participants.

Process evaluation : An evaluation that focuses on the content, implementation and outputs of an initiative. The term is sometimes used to refer to an evaluation that only focuses on program processes.

Program evaluation : Evaluation of a specific program primarily for program management and organizational decision purposes.

Summative evaluation : An evaluation that focuses on making a judgment about the merit or worth of an initiative. This form of evaluation is conducted primarily for reporting or decision-making purposes.

Theory-driven evaluation : An evaluation that explicitly integrates and uses theory in conceptualizing, designing, conducting, interpreting, and applying an evaluation.

Program theory : A statement of the assumptions about why an intervention should affect the intended outcomes.

Triangulation: Use of two or more methods, data sources, or investigators to investigate an evaluation question. Ideally, methods or data sources with different strengths and weaknesses are selected in order to strengthen confidence in findings.

Utilization-focused evaluation : An evaluation which focuses on intended use by intended users. Utilization-focused evaluations are designed with actual use in mind.

Sample evaluation planning matrix

  • Summary of initiative
  • Why is evaluation being undertaken?
  • How will results be used?
  • Who are intended users of the evaluation?
  • What other stakeholders should be involved in some way?
  • Evaluation Focus

Sample evaluation planning matrix: Unassigned Patients

A. Background

The provincial health department has requested an evaluation of the different models of medical care currently provided for unassigned patients (patients without a family physician to follow their care in hospital) in three regional hospitals. The health region has established a Steering Committee to guide the evaluation and ensure that the concerns of all stakeholders are represented. This committee has commissioned a review of the relevant literature; however this review did not provide clear guidance for service design in the context of this health region. Through the process of site visits, hospital staff have identified a number of concerns that they hope will be explored through evaluation activities.

B. Evaluation Purpose

  • To respond to the request of the provincial government to determine whether the models developed by the sites were a) effective in providing care to unassigned patients, and b) sustainable.
  • To identify strengths and needed improvements of each program.
  • To explore both program specific and system issues that may be affecting timely and quality care for medical patients.

In this improvement-oriented evaluation, there is no intent to select one 'best model', but rather to identify strengths and limitations of each strategy with the objective of assisting in improving quality of all service models.

C. Intended Use of Evaluation

D. Intended Users

Knowledge users are identified as Department of Health staff, hospital site management, regional senior management and regional and site physician and nurse leadership.

E. Evaluation Focus

A goals-free evaluation focus will be adopted. It will focus on what is actually happening within each of the identified models.

CIHR defines a knowledge user as "an individual who is likely to be able to use the knowledge generated through research to make informed decisions about health policies, programs and/or practices".

In integrated knowledge translation "researchers and research users work together to shape the research process by collaborating to determine the research questions, deciding on the methodology, being involved in data collection and tools development, interpreting the findings, and helping disseminate the research results".


What is evaluation research: Methods & examples

Defne Çobanoğlu

You have created a program or a product that has been running for some time, and you want to check how efficient it is. You can conduct evaluation research to get the insight you want about the project. And there is more than one method you can use to obtain this information.

Afterward, when you collect the appropriate data about the program's effectiveness, budget-friendliness, and customer opinions, you can go one step further. The valuable information you collect from the research gives you a clear idea of what to do next. You can discard the project, upgrade it, make changes, or replace it. Now, let us go into detail about evaluation research and its methods.

First things first: Definition of evaluation research

Basically, evaluation research is a research process where you measure the effectiveness and success of a particular program, policy, intervention, or project. This type of research lets you know whether the goal of that product was met successfully and shows you any areas that need improvement. The data gathered from evaluation research gives good insight into whether the time, money, and energy put into the project are worth it.

The findings from evaluation research can be used to decide whether to continue, modify, or discontinue a program or intervention, and how to improve future ones. In other words, it means doing research to evaluate the quality and effectiveness of the overall project.


Why conduct evaluation research & when?

Conducting evaluation research is an effective way of testing the usability and cost-effectiveness of the current project or product. Findings gathered from evaluative research play a key role in assessing what works and what doesn't and in identifying areas of improvement for sponsors and administrators. This type of evaluation is a good means of data collection, and it provides concrete results for decision-making processes.

There are different methods to collect feedback ranging from online surveys to focus groups. Evaluation research is best used when:

  • You are planning a different approach
  • You want to make sure everything is going as you want it to
  • You want to prove the effectiveness of an activity to the stakeholders and administrators
  • You want to set realistic goals for the future.
  • Methods to conduct evaluation research

When you want to conduct evaluation research, there are different types of evaluation research methods. You can go through the possible methods and choose the most suitable one(s) according to your target audience, manpower, and budget. Let us look at the qualitative and quantitative research methodologies.

Quantitative methods

These methods ask questions that yield tangible answers, relying on numerical data and statistical analysis to draw conclusions. Such questions can be “How many people?”, “What is the price?”, “What is the profit rate?”, etc. They therefore provide researchers with quantitative data from which to draw concrete conclusions. Now, let us look at the quantitative research methods.
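As a rough illustration of the kind of numbers these questions produce, here is a minimal Python sketch that summarizes some made-up usage counts and computes a profit rate; all figures are invented purely for illustration:

```python
# Minimal sketch: the kind of numerical summaries a quantitative evaluation relies on.
# All figures below are made up for illustration.
from statistics import mean, stdev

# "How many people?" -- e.g., monthly active users of the program being evaluated.
monthly_users = [1200, 1350, 1280, 1500, 1620]
print(f"Average monthly users: {mean(monthly_users):.0f} (std dev {stdev(monthly_users):.0f})")

# "What is the profit rate?" -- revenue vs. cost for the same period.
revenue, cost = 48_000.0, 36_500.0
profit_rate = (revenue - cost) / revenue
print(f"Profit rate: {profit_rate:.1%}")
```

A spreadsheet or a survey tool's built-in reports can produce the same summaries; the point is simply that quantitative answers can be aggregated directly into figures you can compare over time.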

1 - Online surveys

Surveys involve collecting data from a large number of people using appropriate evaluation questions to gather accurate feedback. This method allows you to reach a wider audience in a short time and in a cost-effective manner. You can ask about various topics, from user satisfaction to market research. It can also be helpful to use a free survey maker such as forms.app for your next research project!

2 - Phone surveys

Phone surveys are a type of survey that involves conducting interviews with participants over the phone . They are a form of quantitative research and are commonly used by organizations and researchers to collect data from people in a short time. During a phone survey, a trained interviewer will call the participant and ask them a series of questions. 

Qualitative methods

This type of research method aims to explore audience feedback. These methods are used to study phenomena that cannot be easily measured using statistical techniques, such as opinions, attitudes, and behaviors. Techniques such as observation, interviews, and case studies are used to gather the evaluation data.

1 - Case studies

Case studies involve the in-depth analysis of a single case or a small number of cases. In a case study, the researcher collects data from a variety of sources, such as interviews, observations, and documents. The data collected from case studies are often analyzed to identify patterns and themes.
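As a rough, hypothetical sketch of what identifying patterns can look like in practice, the snippet below tallies how often a few predefined themes appear in invented interview excerpts. Real thematic analysis relies on iterative human coding rather than simple keyword matching, so treat this only as a starting point:

```python
# Minimal sketch: tallying how often predefined themes appear in interview notes.
# The excerpts and theme keywords are invented for illustration.
from collections import Counter

excerpts = [
    "The onboarding was confusing, but support answered quickly.",
    "Pricing feels fair; onboarding documentation could be clearer.",
    "Support was slow last month, though the price is reasonable.",
]

themes = {
    "onboarding": ["onboarding"],
    "support": ["support"],
    "pricing": ["pricing", "price"],
}

counts = Counter()
for text in excerpts:
    lowered = text.lower()
    for theme, keywords in themes.items():
        if any(keyword in lowered for keyword in keywords):
            counts[theme] += 1

for theme, n in counts.most_common():
    print(f"{theme}: mentioned in {n} of {len(excerpts)} excerpts")
```

In a real evaluation, the theme list would come from reading the transcripts first, not from guessing keywords up front.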

2 - Focus groups

Using focus groups means bringing together a small group of people, usually 6 to 10, and presenting them with a certain topic, product, or concept so that they can share their views. Focus groups are a good way to obtain data because the responses are immediate. This method is commonly used by businesses to gain insight into their customers.

  • Evaluation research examples

Conducting evaluation research has helped many businesses advance in the market, because a big part of success comes from listening to your audience. For example, Lego found out in 2011 that only around 10% of its customers were girls. The company wanted to expand its audience, so it conducted evaluation research to find and launch products that would appeal to girls.

  • Survey questions to use in your own evaluation research

No matter which method you decide to go with, there are some essential questions you should include in your research process. If you prepare your questions beforehand and ask the same questions to all participants/customers, you will end up with a uniform set of answers, which allows you to form a better judgment. Here are some good questions to include (a small scoring sketch follows the list):

1 - How often do you use the product?

2 - How satisfied are you with the features of the product?

3 - How would you rate the product on a scale of 1-5?

4 - How easy is it to use our product/service?

5 - How was your experience completing tasks using the product?

6 - Will you recommend this product to others?

7 - Are you excited about using the product in the future?

8 - What would you like to change in the product/project?

9 - Did the program produce the intended outcomes?

10 - What were the unintended outcomes?
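Once the answers are in, a small script can turn them into headline metrics. The sketch below scores hypothetical responses to questions 2 and 6; the answers and the 1-5 coding scheme are assumptions for illustration:

```python
# Minimal sketch: coding answers to two of the questions above into simple metrics.
# The responses and the coding scheme are hypothetical.

# Question 2: "How satisfied are you with the features of the product?"
satisfaction = ["Very satisfied", "Satisfied", "Neutral", "Satisfied", "Dissatisfied", "Very satisfied"]
coding = {"Very dissatisfied": 1, "Dissatisfied": 2, "Neutral": 3, "Satisfied": 4, "Very satisfied": 5}

scores = [coding[answer] for answer in satisfaction]
print(f"Average satisfaction (Q2): {sum(scores) / len(scores):.2f} / 5")

# Question 6: "Will you recommend this product to others?"
recommend = ["yes", "yes", "no", "yes", "no", "yes"]
print(f"Would recommend (Q6): {recommend.count('yes') / len(recommend):.0%}")
```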

  • What’s the difference between generative vs. evaluative research?

Generative research is conducted to generate new ideas or hypotheses by understanding your users' motivations, pain points, and behaviors. The goal of generative research is to define the possible research questions, develop new theories, and plan the best possible solution for those problems. Generative research is often used at the beginning of a research project or product development process.

Evaluative research, on the other hand, is conducted to measure the effectiveness of a project or program. The goal of evaluative research is to measure whether the existing project, program, or product has achieved its intended objectives. This method is used to assess the project at hand to ensure it is usable, works as intended, and meets users' demands and expectations. This type of research plays a role in deciding whether to continue, modify, or put an end to the project.

You can determine whether to use generative or evaluative research by figuring out what you need to find out. Of course, both methods can be useful throughout the research process for obtaining different types of evidence. Therefore, first determine your goal in conducting evaluation research, and then decide on the method to go with.

Conducting evaluation research means making sure everything in your project is going as you want it to, or finding areas of improvement for your next steps. There is more than one method to choose from: you can run focus groups or case studies to collect opinions, or you can use online surveys to get tangible answers.

If you choose to do online surveys, you can try forms.app, as it is one of the best survey makers out there. It has more than 1000 ready-to-go templates. If you wish to know more about forms.app, you can check out our article on user experience questions!

Defne is a content writer at forms.app. She is also a translator specializing in literary translation. Defne loves reading, writing, and translating professionally and as a hobby. Her expertise lies in survey research, research methodologies, content writing, and translation.


NIH Extramural Nexus


Simplified Review Framework for Research Project Grants (RPGs) Webinar Resources Available 

Did you miss the webinar on the implementation of the simplified review framework for RPGs? Not to worry, the presentation resources are now available! Reference the slides and transcript, or dive right into the video, which includes sections on:

  • Background   
  • Overview of the simplified review framework   
  • Preparing for and tracking changes to funding opportunities   
  • Tips for applicants   
  • Communication and training   

For more resources, see the event page and the Simplifying Review of Research Project Grant Applications page.


Prevention Research Centers Special Interest Projects NOFO

Deadline for Applications: March 4, 2024 by 11:59 PM Eastern time

This Notice of Funding Opportunity, or NOFO (RFA-DP-24-062), invites CDC Health Promotion and Disease Prevention Research Centers (PRCs) selected for funding under RFA-DP-24-004 to apply for supplemental funding to conduct Special Interest Research Projects (SIPs) to inform public health practice.

Approximately $36,075,000 is available for the period of performance (9/30/2024–9/29/2029). CDC anticipates making up to 28 awards under this NOFO. Awards issued under this NOFO are contingent upon the availability of funds and receipt of a sufficient number of meritorious applications.

The purpose of this NOFO is for awarded PRCs to conduct high-quality applied health promotion and disease prevention research projects in real-world settings to identify, design, test, evaluate, disseminate, and translate interventions (i.e., programs, practices, policies, or strategies) to prevent and reduce risk for the leading causes of illness, disability, and death in the United States.

SIPs are supplemental funding awards that focus on topics of interest or gaps in knowledge or research and can also support the development of state and local public health interventions and policies. SIP topics are aligned with public health priorities, such as the Healthy People 2030 Objectives—the Health and Human Services’ national objectives for improving Americans’ health. SIPs are sponsored and primarily funded by CDC Centers, Institutes, and Offices (CIOs).

SIPs can have different structures including funding one or multiple PRCs to conduct community-based applied prevention research projects:

  • Single PRC: The SIP supports one PRC to conduct a specific research project.
  • Multiple PRCs: The SIP supports two or more PRCs to conduct different dimensions of a research project or to test strategies in different populations.
  • Thematic Research Networks: The SIP supports multiple PRCs that collaborate on research of a specific health issue.

PRCs selected for funding under RFA-DP-24-004 are encouraged to apply for SIPs that expand and strengthen their PRC’s mission and increase their applied public health research activities—to contribute to preventing and reducing risk for the leading causes of illness, disability, and death in the United States.

Eligible PRCs funded under RFA-DP-24-004:

  • Emory University
  • Georgia State University
  • Harvard School of Public Health
  • Morehouse School of Medicine
  • New York University School of Medicine
  • San Diego State University
  • University of Arizona
  • University of Arkansas for Medical Sciences
  • University of California, San Francisco
  • University of Iowa
  • University of Massachusetts Medical School Worcester
  • University of Michigan at Ann Arbor
  • University of Minnesota
  • University of North Carolina at Chapel Hill
  • University of Pennsylvania
  • University of Pittsburgh
  • University of Rochester
  • University of South Carolina at Columbia
  • University of Utah
  • University of Wisconsin-Madison

CDC will be hosting a pre-application informational call Thursday, February 1, 2024, 1:30–3:30pm Eastern time. Register for the call.

Letters of Intent (LOIs) are due February 2, 2024. Although LOIs are optional, they are highly encouraged and should be emailed with “NOFO RFA-DP-24-062” in the subject line to [email protected] . Please refer to NOFO “Section IV. Application and Submission Information, 3. Letter of Intent” for information to include.

Please email any questions about this NOFO to [email protected] with “NOFO RFA-DP-24-062” in the subject line. Questions must be received by February 16, 2024, 5 PM Eastern time to ensure a response by the application deadline of March 4, 2024.

  • Opening date: December 15, 2023
  • Pre-application information call: February 1, 2024, 1:30-3:30pm Eastern time. View the call script and presentation
  • Closing date: March 4, 2024
  • The last day to submit questions regarding RFA-DP-24-062 is February 16, 2024 at 5:00 PM Eastern Time.


Consultancy services pool for programs/projects baseline/evaluation/research/need assessment/survey, etc.

Concern Worldwide

1. Background:

Concern Worldwide began its work as an INGO in Bangladesh in 1972 and has responded to all major emergencies and implemented numerous programmes for the socio-economic empowerment of extremely poor people across the country. Under our Country Strategic Plan 2022-2026, we aim to contribute to bringing sustainable, positive changes in the lives of people living in extreme poverty in Bangladesh. We will achieve this through working on five particular pillars – ensuring sustained change from our programmes (predominantly in the health, nutrition and livelihoods sectors); climate change; humanitarian action; working through partnership; and equality, diversity and inclusion.

2. Introduction:

Concern Worldwide is pleased to invite expressions of interest from qualified consultancy firms and individual consultants to establish a Consultancy Services Pool dedicated to undertaking various research studies, evaluations, and evidence documentation for development and humanitarian projects/programs within Bangladesh. The purpose of this pool is to engage reputable and proficient consultants who will contribute their expertise to inform, evaluate, and enhance the implementation of our development and humanitarian initiatives.

3. Scope of Services:

The Consultancy Services Pool functions as a dynamic and comprehensive resource that offers expert advice and analytical services for a wide range of development and humanitarian projects and programs. The pool is established to support M&E, learning, and evaluation, as well as targeted evidence and research, for Concern Worldwide projects and programmes across a wide range of sectors, including Health and Nutrition; Food Security and Resilient Livelihoods; Climate Change Adaptation; Disaster Risk Reduction and Anticipatory Action; Humanitarian and Emergency; and WASH.

4. Modalities of this Consultancy Service pool:

  • Primarily, a talent consultancy pool (individuals/firms) will be developed through this EOI process.
  • Individual consultants/firms will be selected through a competitive process; the enlisted pool will be valid for two (2) years.
  • Specific assignments to conduct research studies, evaluations, and/or assessments will be given to members of the enlisted pool based on their sector-specific expertise and relevant experience.
  • The individual consultant/firm will be solely responsible for data collection, cleaning, compilation, analysis, and reporting for evaluations, research, and studies as required by Concern, following the ToR developed for each specific assignment.
  • The individual consultant/firm will also be responsible for evidence generation and for sharing project/programme learning with stakeholders, based on the final assignment ToR.
  • The individual consultant/firm will work directly with the Programme and MEAL teams at Concern Worldwide to execute the given tasks, while also maintaining a liaison relationship with partners.

The following specific assignments will be included in the Consultancy Service Pool:

Enumerator supply for baseline, mid-line, and annual outcome monitoring:

  • The consultant/firm will supply enumerators/data collectors as required by the MEAL team to support data collection for baseline, mid-line, annual outcome monitoring, and post-distribution monitoring surveys, under the overall guidance of the MEAL Advisor/M&E Specialist along with the programme manager and partner.

Project evaluation / mid-term, end-line, and final evaluation:

  • Undertake evaluations, research, and studies of ongoing programs and projects to assess their impact, effectiveness, efficiency, and sustainability following the DAC criteria
  • Provide recommendations for improvements and/or adjustments based on evaluation findings including a comprehensive report

Research and Analysis:

  • Conduct in-depth research on specific thematic areas relevant to the organization's goals and objectives
  • Analyse findings and present actionable insights and recommendations
  • Develop a study methodology and protocol to ensure high-quality research.
  • Conduct surveys to gather quantitative and qualitative data on the specific topics relevant to the target groups specified in the respective project requirements, and produce a comprehensive report.

Context Analysis and Need Assessment:

  • Undertake need assessments to identify gaps, challenges, and opportunities in targeted areas
  • Provide a detailed report outlining the identified needs and potential solutions/interventions

Data Analysis and Reporting:

  • Analyse collected data using appropriate statistical methods and tools for evaluations, research & studies.
  • Prepare comprehensive reports presenting key findings, trends, insights and learning from evaluations, research & studies

Experience & Qualifications:

Essential Experience & Qualifications (The consulting firm/lead consultant(s) should have the following):

The lead consultant should have a Master of Public Health or a degree in a related health or nutrition field, Climate Change and Adaptation, Development Studies, Economics, or another relevant social sciences discipline.

  • At least 10 years of proven experience in community-based health and nutrition, Climate Change and Adaptation, and Food Security & Resilient Livelihoods, conducting surveys including baseline, mid-term, and final assessments of complex multisector nutrition programs, including KPCs.
  • Proven experience of working with high-level stakeholders, including government ministries and donors
  • Experience working in both urban and rural contexts in Bangladesh, having conducted similar studies and research with international development partners.
  • Proven experience of policy advocacy with high level policy stakeholders, including writing Policy Briefs
  • Proven ability to apply a gender sensitive lens to all work
  • Excellent analytical and communication skills (Bangla and English)
  • Excellent English Report writing skills
  • Consultant(s) should have the capacity to engage enumerators with at least three years of relevant hands-on experience.
  • The Consultant should have excellent facilitation skills (Bangla & English)
  • Experience with digital data gathering and management systems on tablets/digital devices
  • Willingness to travel and work in tough field environments.
  • The firm/consultant should have an understanding of setting up standard data quality control mechanisms and of GDPR.

Criteria of Relevant Work experiences:

  • Health & Nutrition
  • Food Security and resilient Livelihoods
  • Climate Change and Adaptation
  • Disaster Risk Reduction and Anticipatory Action
  • Humanitarian and Emergency

Concern Worldwide’s Policies and Guidelines: Concern's Code of Conduct (CCoC) and its associated safeguarding policies; the Programme Participant Protection Policy, the Child Safeguarding Policy and the Anti-Trafficking in Persons Policy have been developed to ensure the maximum protection of programme participants from exploitation and to clarify the responsibilities of Concern staff, consultants, contractors, visitors to the programme and partner organisations, and the standards of behaviour expected of them. In this context staff have a responsibility to the organisation to strive for and maintain the highest standards in the day-to-day conduct in their workplace in accordance with Concern's core values and mission. Concern's Code of Conduct and its associated safeguarding policies have been appended to this Contract for your signature. By signing the Concern Code of Conduct you demonstrate that you have understood their content and agree to conduct yourself in accordance with the provisions of these two documents.

Breach of Code of Conduct and Sharing of Information : We are required to share details of certain breaches of Concern’s Code of Conduct, specifically those related to fraud, sexual exploitation, abuse and harassment and trafficking in persons, with external organizations such as institutional donors, regulatory bodies and future employers. In the event where you have been found to be in breach of these aspects of Concern’s Code of Conduct, your personal details (e.g. name, date of birth, address and nationality) and details of these breaches will be shared with these external bodies. Organizations may retain this data and use it to inform future decisions about you.

In addition, where we are working in partnership with another organization and where there are allegations of breaches in the above areas against you, we will cooperate with any investigation being undertaken and will share your personal details with investigation teams.

A breach of this policy will result in disciplinary action up to, and including, dismissal.

Technical Guideline: Both Concern Worldwide's Code of Conduct and Communications Branding Policy must be followed throughout this assignment. The communications policy includes the requirement for informed consent from any persons being pictured, videoed, interviewed, etc.

The Consultant/Organization must follow Concern guidelines on communications and branding in all media engagement activities, articles, news, programmes, etc. All media activities related to Concern's work and any media content published externally must be aligned with Concern's Communications policy.

Safety and Security: It is a requirement that the Consultant will comply with Bangladesh security policy and in-country security procedures. Failing to comply will result in immediate termination of the contract. The work plan will need to be flexible and have built-in contingency plans that can be activated through mutual discussion with Concern and the partners on the ground. This will be especially important for activities that require fieldwork and face-to-face interactions.

How to apply

Requirements for Inclusion in the Pool:

Concern Worldwide Bangladesh invites expressions of interest from qualified and experienced consultancy firms or individuals to be included in the Consultancy Service Pool.

For Individual Consultant

  • Individual profile or individual resume
  • Overview of the relevant work highlighting core competencies and development sectors expertise
  • A short CV maximum three pages
  • Details of previous development projects/programs conducted, including project titles, objectives, methodologies, and outcomes
  • Indication of availability and capacity to undertake project
  • TIN certificate

For Consultancy Firm

  • Short profile of the firm highlighting experiences on related assignment with client detail as mentioned for Individual Consultant.
  • Lead Consultant's (team leader) CV of maximum 3 pages highlighting related work experience and completed assignments.
  • Short descriptions of other team members highlighting the related expertise they will contribute to the assignment.
  • Consultancy Firm's legal documents (certificate, TIN, and VAT registration); a technical proposal (research proposal following section E); and a financial proposal. Technical and financial proposals should be submitted as separate files or in separate envelopes (in case of hard copy).
  • Details of previous development projects/programs conducted, including project titles, objectives, methodologies, and outcomes.
  • Indication of availability and capacity to undertake project.

Submission requirements for Expression of Interest:

Interested national (Bangladesh-registered) individual consultants/consultancy firms should submit their Expression of Interest (EOI) with the following information, in this sequence, as a soft copy only:

  • Interested sector to apply with
  • Description of the individual consultant/Firm – 500 words (Maximum)
  • Lead/key consultant CV- Maximum 3 pages
  • Overview of HR including sector specific expertise (H&N, WASH, etc.)
  • Evidence of related experiences (One sample report or link)
  • Evidence of similar work experiences within the last 5 years with UN agencies or INGOs (attach contract/certificate)
  • Any foreign/international affiliation please submit with evidence
  • Legal Documents (TIN, BIN, Trade Licence, incorporation certificate, VAT Certificate etc.)

Pre-bid meeting: A pre-bid meeting with interested firms/individual consultants will be held on 20 May 2024 at 10:00 am.

Meeting link: Join the meeting now

  • Meeting ID: 348 772 473 607
  • Passcode: qjTfVL

Submission Deadline:

Expressions of Interest should be submitted on or before 11:59 pm on 27 May 2024 to [email protected]. Late submissions will not be considered.

  • Selection process: The selection process will involve evaluating the submitted EOIs based on criteria such as experience, expertise, proposed methodologies, and capacity. Shortlisted firms or consultants may be contacted for further discussions or interview.

Name of Applicant:

EOI evaluation criteria:

  • Experience (15 marks): At least 10 years of proven experience in community-based health and nutrition, Climate Change and Adaptation, and Food Security & Resilient Livelihoods, conducting surveys including baseline, mid-term, and final assessments of complex multisector nutrition programs, including KPCs. Applicants must share evidence of similar work completed; international experience will be an added advantage.
  • Methodology: Clear identification of FGDs, KIIs, and IDIs; logical distribution of samples; and triangulation of primary, secondary, and observation data to give a clear understanding of the achievements of the project as per the logframe. Understanding and presentation of how the DAC criteria will be followed for the final evaluation. Indication of any summary findings and presentations.
  • Required documentation: Individual Consultant – a short CV (maximum three pages) highlighting experience on related completed assignments with client names, addresses, contact persons, and communication details; short CVs for other team members highlighting relevant tasks or assignments; TIN certificate; and a brief technical and financial proposal based on the information provided above. Consultancy Firm – a short profile of the firm highlighting experience on related assignments with client details as mentioned for the individual consultant; the Lead Consultant's (team leader) CV of maximum 2 pages highlighting related work experience and completed assignments; very short CVs of other team members engaged in this assignment highlighting related tasks and assignments completed; the firm's legal documents (certificate, TIN, and VAT registration); a technical proposal (research proposal following section E.2); and a financial proposal. Technical and financial proposals should be submitted as separate files or in separate envelopes (in case of hard copy).
  • Availability: 10 marks
  • Evidence of work provided is of good quality: 15 marks
  • Total: 100 marks


  • UNC Chapel Hill

UNC Project-Malawi

MONITORING AND EVALUATION OFFICER

Job Summary

The M&E Officer will be responsible for monitoring and ensuring high-quality and timely inputs, for ensuring that the project maintains its strategic vision, and for ensuring that its activities result in the achievement of its intended outputs in a cost-effective and timely manner. He/she will be responsible for designing and implementing M&E activities of the program and for assisting the Team Lead in preparing quarterly/annual progress reports.

Specific Duties / Responsibilities

  • Develop and strengthen monitoring, inspection, and evaluation procedures
  • Monitor all project activities and progress towards achieving the project outputs and results
  • Monitor the sustainability of the project’s results
  • Suggest strategies to the Team Leader for improving the efficiency and effectiveness of the project by identifying bottlenecks in completing project activities and developing plans to minimize or eliminate such bottlenecks
  • Guide staff and implementing partners in preparing progress reports.
  • Collaborate with staff and implementing partners on qualitative monitoring to provide relevant information for ongoing evaluation of project activities, and foster participatory planning and monitoring by training and involving primary stakeholder groups in the M&E of activities.
  • Ensure that, in general, program monitoring arrangements comply with Fleming Fund requirements.
  • Organize and provide refresher training in M&E for programs and implementing partner staff and primary stakeholders
  • Plan for regular opportunities to identify lessons learned and implications for the program’s next steps.
  • Prepare scheduled reports on M&E findings and progress of activities as required, working closely with data manager, technical staff and implementing partners.
  • Undertake planned visits to the field to support implementation of M&E and to identify where adaptations might be needed.
  • Guide the regular sharing of the outputs of M&E findings with project staff, implementing partners and primary stakeholders.
  • In collaboration with AMRNCC Coordinator, provide the PHIM Director with relevant management information that may be required.

Qualifications and Experience

  • Master's Degree in Public Health or Biostatistics
  • Experience in designing tools and strategies for data collection, analysis, and production of reports
  • Experience of working in the health sector
  • Expertise in analyzing data using statistical software
  • Strong training & facilitation skills.
  • Understanding, knowledge and experience of Antimicrobial Resistance (AMR) is an added advantage

Please send your applications via email to:

The Country Director

UNC Project

[email protected]

Deadline for receiving applications:

Thursday, February 29, 2024

Only shortlisted candidates will be acknowledged.

Republic of Türkiye Ministry of Industry and Technology - National Technology Initiative

2204-A High School Students Research Projects Competition Preliminary Evaluation Results Announced


In the 2204-A High School Students Research Projects Competition organized by TUBITAK, 1,219 of the 23,066 applications from 12 fields qualified to participate in the regional competition as a result of the jury evaluation. The project exhibition of the competition will be held between March 4-6, 2024, and the award ceremony will be held simultaneously in 12 regions on March 7, 2024.

Click here to find out your application result.

