• Evaluation Research Design: Examples, Methods & Types

busayo.longe

As you engage in tasks, you will need to take intermittent breaks to assess how much progress has been made and whether any changes need to be made along the way. This is very similar to what organizations do when they carry out evaluation research.

The evaluation research methodology has become one of the most important approaches for organizations as they strive to create products, services, and processes that speak to the needs of target users. In this article, we will show you how your organization can conduct successful evaluation research using Formplus.

What is Evaluation Research?

Also known as program evaluation, evaluation research is a common research design that entails carrying out a structured assessment of the value of resources committed to a project or specific goal. It often adopts social research methods to gather and analyze useful information about organizational processes and products.  

As a type of applied research, evaluation research is typically associated with real-life scenarios within organizational contexts. This means that the researcher will need to leverage common workplace skills, including interpersonal skills and teamwork, to arrive at objective research findings that are useful to stakeholders.

Characteristics of Evaluation Research

  • Research Environment: Evaluation research is conducted in the real world; that is, within the context of an organization. 
  • Research Focus: Evaluation research is primarily concerned with measuring the outcomes of a process rather than the process itself. 
  • Research Outcome: Evaluation research is employed for strategic decision making in organizations. 
  • Research Goal: The goal of program evaluation is to determine whether a process has yielded the desired result(s). 
  • This type of research protects the interests of stakeholders in the organization. 
  • It often represents a middle-ground between pure and applied research. 
  • Evaluation research is both detailed and continuous. It pays attention to how processes perform rather than to descriptions of them. 
  • Research Process: This research design utilizes qualitative and quantitative research methods to gather relevant data about a product or action-based strategy. These methods include observation, tests, and surveys.

Types of Evaluation Research

The Encyclopedia of Evaluation (Mathison, 2004) treats forty-two different evaluation approaches and models ranging from “appreciative inquiry” to “connoisseurship” to “transformative evaluation”. Common types of evaluation research include the following: 

  • Formative Evaluation

Formative evaluation or baseline survey is a type of evaluation research that involves assessing the needs of the users or target market before embarking on a project.  Formative evaluation is the starting point of evaluation research because it sets the tone of the organization’s project and provides useful insights for other types of evaluation.  

  • Mid-term Evaluation

Mid-term evaluation entails assessing how far a project has come and determining whether it is in line with the set goals and objectives. Mid-term reviews allow the organization to determine whether the implementation strategy needs to change, and they also serve as checkpoints for tracking the project. 

  • Summative Evaluation

This type of evaluation is also known as end-term or project-completion evaluation, and it is conducted immediately after the completion of a project. Here, the researcher examines the value and outputs of the program within the context of the projected results. 

Summative evaluation allows the organization to measure the degree of success of a project. Such results can be shared with stakeholders, target markets, and prospective investors. 

  • Outcome Evaluation

Outcome evaluation is primarily target-audience oriented because it measures the effects of the project, program, or product on the users. This type of evaluation views the outcomes of the project through the lens of the target audience and it often measures changes such as knowledge-improvement, skill acquisition, and increased job efficiency. 

  • Appreciative Inquiry

Appreciative inquiry is a type of evaluation research that pays attention to result-producing approaches. It is predicated on the belief that an organization will grow in whatever direction its stakeholders pay primary attention to: if all the attention is focused on problems, problems are what the organization will keep finding, rather than strengths. 

In carrying out appreciative inquiry, the researcher identifies the factors directly responsible for the positive results realized in the course of a project, analyzes the reasons for these results, and intensifies the utilization of these factors. 

Evaluation Research Methodology 

There are four major evaluation research methods, namely: output measurement, input measurement, impact assessment, and service quality.

  • Output/Performance Measurement

Output measurement is a method employed in evaluation research that shows the results of an activity undertaken by an organization. In other words, performance measurement pays attention to the results achieved by the resources invested in a specific activity or organizational process. 

More than investing resources in a project, organizations must be able to track the extent to which these resources have yielded results, and this is where performance measurement comes in. Output measurement allows organizations to pay attention to the effectiveness and impact of a process rather than just the process itself. 

Other key indicators of performance measurement include user-satisfaction, organizational capacity, market penetration, and facility utilization. In carrying out performance measurement, organizations must identify the parameters that are relevant to the process in question, their industry, and the target markets. 

5 Performance Evaluation Research Questions Examples

  • What is the cost-effectiveness of this project?
  • What is the overall reach of this project?
  • How would you rate the market penetration of this project?
  • How accessible is the project? 
  • Is this project time-efficient? 


  • Input Measurement

In evaluation research, input measurement entails assessing the number of resources committed to a project or goal in any organization. This is one of the most common indicators in evaluation research because it allows organizations to track their investments. 

The most common indicator of input measurement is the budget, which allows organizations to evaluate and limit expenditure for a project. It is also important to measure non-monetary investments such as human capital, that is, the number of persons needed for successful project execution, and production capital. 

5 Input Evaluation Research Questions Examples

  • What is the budget for this project?
  • What is the timeline of this process?
  • How many employees have been assigned to this project? 
  • Do we need to purchase new machinery for this project? 
  • How many third-parties are collaborators in this project? 


  • Impact/Outcomes Assessment

In impact assessment, the evaluation researcher focuses on how the product or project affects target markets, both directly and indirectly. Outcomes assessment is somewhat challenging because many times, it is difficult to measure the real-time value and benefits of a project for the users. 

In assessing the impact of a process, the evaluation researcher must pay attention to the improvement recorded by the users as a result of the process or project in question. Hence, it makes sense to focus on cognitive and affective changes, expectation-satisfaction, and similar accomplishments of the users. 

5 Impact Evaluation Research Questions Examples

  • How has this project affected you? 
  • Has this process affected you positively or negatively?
  • What role did this project play in improving your earning power? 
  • On a scale of 1-10, how excited are you about this project?
  • How has this project improved your mental health? 


  • Service Quality

Service quality is an evaluation research method that accounts for any differences between the expectations of the target markets and their impression of the undertaken project. Hence, it pays attention to the overall service quality assessment carried out by the users. 

It is not uncommon for organizations to build the expectations of target markets as they embark on specific projects. Service quality evaluation allows these organizations to track the extent to which the actual product or service delivery fulfils the expectations. 
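The gap between expectation and perception that service quality evaluation tracks can be computed directly, in the style of a SERVQUAL gap analysis: score each service dimension twice (expected vs. perceived) and subtract. The following is a minimal sketch; the dimension names and scores are invented for illustration, not taken from any real survey.

```python
# SERVQUAL-style gap analysis: perception minus expectation per dimension.
# Dimension names and scores below are illustrative only.

expectations = {"reliability": 4.6, "responsiveness": 4.3, "empathy": 4.0}
perceptions  = {"reliability": 4.1, "responsiveness": 4.4, "empathy": 3.5}

def gap_scores(expected, perceived):
    """Return perception-minus-expectation gaps; negative means unmet expectations."""
    return {dim: round(perceived[dim] - expected[dim], 2) for dim in expected}

gaps = gap_scores(expectations, perceptions)
for dim, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"{dim}: {gap:+.2f}")
```

Sorting by gap surfaces the most underperforming dimensions first, which is usually where remediation effort should go.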

5 Service Quality Evaluation Questions

  • On a scale of 1-10, how satisfied are you with the product?
  • How helpful was our customer service representative?
  • How satisfied are you with the quality of service?
  • How long did it take to resolve the issue at hand?
  • How likely are you to recommend us to your network?


Uses of Evaluation Research 

  • Evaluation research is used by organizations to measure the effectiveness of activities and identify areas needing improvement. Findings from evaluation research are key to project and product advancements and are very influential in helping organizations realize their goals efficiently.     
  • The findings arrived at from evaluation research serve as evidence of the impact of the project embarked on by an organization. This information can be presented to stakeholders, customers, and can also help your organization secure investments for future projects. 
  • Evaluation research helps organizations to justify their use of limited resources and choose the best alternatives. 
  •  It is also useful in pragmatic goal setting and realization. 
  • Evaluation research provides detailed insights into projects embarked on by an organization. Essentially, it allows all stakeholders to understand multiple dimensions of a process, and to determine strengths and weaknesses. 
  • Evaluation research also plays a major role in helping organizations to improve their overall practice and service delivery. This research design allows organizations to weigh existing processes through feedback provided by stakeholders, and this informs better decision making. 
  • Evaluation research is also instrumental to sustainable capacity building. It helps you to analyze demand patterns and determine whether your organization requires more funds, upskilling or improved operations.

Data Collection Techniques Used in Evaluation Research

In gathering useful data for evaluation research, the researcher often combines quantitative and qualitative research methods . Qualitative research methods allow the researcher to gather information relating to intangible values such as market satisfaction and perception. 

On the other hand, quantitative methods are used by the evaluation researcher to assess numerical patterns, that is, quantifiable data. These methods help you measure impact and results, although they may not capture the context of the process. 

Quantitative Methods for Evaluation Research

  • Surveys

A survey is a quantitative method that allows you to gather information about a project from a specific group of people. Surveys are largely context-based and limited to target groups who are asked a set of structured questions in line with the predetermined context.

Surveys usually consist of close-ended questions that allow the evaluative researcher to gain insight into several variables, including market coverage and customer preferences. Surveys can be carried out physically using paper forms or online through data-gathering platforms like Formplus. 

  • Questionnaires

A questionnaire is a common quantitative research instrument deployed in evaluation research. Typically, it is an aggregation of different types of questions or prompts which help the researcher to obtain valuable information from respondents. 

  • Polls

A poll is a common method of opinion sampling that allows you to weigh the perception of the public about issues that affect them. The best way to achieve accuracy in polling is by conducting polls online using platforms like Formplus. 

Polls are often structured as Likert questions and the options provided always account for neutrality or indecision. Conducting a poll allows the evaluation researcher to understand the extent to which the product or service satisfies the needs of the users. 
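Summarizing Likert-style poll responses usually comes down to a frequency count, a mean, and a "top-two-box" agreement rate. Here is a small sketch of that summary; the response data is fabricated for illustration, and the 1-5 coding (1 = strongly disagree, 5 = strongly agree) is an assumption.

```python
from collections import Counter

# Summarize Likert poll responses (1 = strongly disagree ... 5 = strongly agree).
# The responses below are made up for illustration.
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 1, 4, 5, 3, 4]

def summarize(scores):
    """Return (frequency counts, mean score, percent who agree, i.e. scored 4 or 5)."""
    counts = Counter(scores)
    mean = sum(scores) / len(scores)
    pct_agree = 100 * sum(1 for s in scores if s >= 4) / len(scores)
    return counts, round(mean, 2), round(pct_agree, 1)

counts, mean, pct_agree = summarize(responses)
print(f"mean = {mean}, % agree = {pct_agree}")
for option in range(1, 6):  # simple text histogram of the distribution
    print(f"{option}: {'#' * counts.get(option, 0)}")
```

Reporting the full distribution alongside the mean matters because two polls with the same mean can reflect very different levels of polarization.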

Qualitative Methods for Evaluation Research

  • One-on-One Interview

An interview is a structured conversation involving two participants, usually the researcher and the user or a member of the target market. One-on-one interviews can be conducted physically, over the telephone, or through video conferencing apps like Zoom and Google Meet. 

  • Focus Groups

A focus group is a research method that involves interacting with a limited number of persons within your target market, who can provide insights on market perceptions and new products. 

  • Qualitative Observation

Qualitative observation is a research method that allows the evaluation researcher to gather useful information from the target audience through a variety of subjective approaches. This method is more extensive than quantitative observation because it deals with a smaller sample size, and it also utilizes inductive analysis. 

  • Case Studies

A case study is a research method that helps the researcher to gain a better understanding of a subject or process. Case studies involve in-depth research into a given subject, to understand its functionalities and successes. 

How to Use the Formplus Online Form Builder for an Evaluation Survey 

  • Sign into Formplus

In the Formplus builder, you can easily create your evaluation survey by dragging and dropping preferred fields into your form. To access the Formplus builder, you will need to create an account on Formplus. 

Once you do this, sign in to your account and click on “Create Form” to begin. 


  • Edit Form Title

Click on the field provided to input your form title, for example, “Evaluation Research Survey”.


Click on the edit button to edit the form.

Add Fields: Drag and drop preferred form fields into your form in the Formplus builder inputs column. There are several field input options for surveys in the Formplus builder. 


Edit the fields, click on “Save”, and preview the form.

  • Form Customization

With the form customization options in the form builder, you can easily change the outlook of your form and make it more unique and personalized. Formplus allows you to change your form theme, add background images, and even change the font according to your needs. 


  • Multiple Sharing Options

Formplus offers multiple form-sharing options, which enable you to easily share your evaluation survey with respondents. You can use the direct social media sharing buttons to share your form link on your organization’s social media pages. 

You can send out your survey form as email invitations to your research subjects too. If you wish, you can share your form’s QR code or embed it on your organization’s website for easy access. 

Conclusion  

Conducting evaluation research allows organizations to determine the effectiveness of their activities at different phases. This type of research can be carried out using qualitative and quantitative data collection methods including focus groups, observation, telephone and one-on-one interviews, and surveys. 

Online surveys created and administered via data collection platforms like Formplus make it easier for you to gather and process information during evaluation research. With Formplus’ multiple form-sharing options, it is even easier to gather useful data from target markets.



Evaluating Research – Process, Examples and Methods

Definition:

Evaluating Research refers to the process of assessing the quality, credibility, and relevance of a research study or project. This involves examining the methods, data, and results of the research in order to determine its validity, reliability, and usefulness. Evaluating research can be done by both experts and non-experts in the field, and involves critical thinking, analysis, and interpretation of the research findings.

The Research Evaluation Process

The process of evaluating research typically involves the following steps:

Identify the Research Question

The first step in evaluating research is to identify the research question or problem that the study is addressing. This will help you to determine whether the study is relevant to your needs.

Assess the Study Design

The study design refers to the methodology used to conduct the research. You should assess whether the study design is appropriate for the research question and whether it is likely to produce reliable and valid results.

Evaluate the Sample

The sample refers to the group of participants or subjects who are included in the study. You should evaluate whether the sample size is adequate and whether the participants are representative of the population under study.

Review the Data Collection Methods

You should review the data collection methods used in the study to ensure that they are valid and reliable. This includes assessing both the measures and the procedures used to collect the data.

Examine the Statistical Analysis

Statistical analysis refers to the methods used to analyze the data. You should examine whether the statistical analysis is appropriate for the research question and whether it is likely to produce valid and reliable results.

Assess the Conclusions

You should evaluate whether the data support the conclusions drawn from the study and whether they are relevant to the research question.

Consider the Limitations

Finally, you should consider the limitations of the study, including any potential biases or confounding factors that may have influenced the results.

Evaluating Research Methods

Evaluating Research Methods are as follows:

  • Peer review: Peer review is a process where experts in the field review a study before it is published. This helps ensure that the study is accurate, valid, and relevant to the field.
  • Critical appraisal : Critical appraisal involves systematically evaluating a study based on specific criteria. This helps assess the quality of the study and the reliability of the findings.
  • Replication : Replication involves repeating a study to test the validity and reliability of the findings. This can help identify any errors or biases in the original study.
  • Meta-analysis : Meta-analysis is a statistical method that combines the results of multiple studies to provide a more comprehensive understanding of a particular topic. This can help identify patterns or inconsistencies across studies.
  • Consultation with experts : Consulting with experts in the field can provide valuable insights into the quality and relevance of a study. Experts can also help identify potential limitations or biases in the study.
  • Review of funding sources: Examining the funding sources of a study can help identify any potential conflicts of interest or biases that may have influenced the study design or interpretation of results.
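Of these methods, meta-analysis is the most mechanical, and its core computation is small enough to sketch. The following is a minimal fixed-effect (inverse-variance) pooling example: each study's effect estimate is weighted by the inverse of its squared standard error. The study names, effect estimates, and standard errors are hypothetical.

```python
import math

# Fixed-effect (inverse-variance) meta-analysis: pool effect estimates from
# several studies, weighting each by 1 / (standard error)^2.
# The studies below are hypothetical.
studies = [
    ("Study A", 0.30, 0.10),  # (name, effect estimate, standard error)
    ("Study B", 0.10, 0.05),
    ("Study C", 0.25, 0.08),
]

def pool_fixed_effect(studies):
    """Return (pooled effect estimate, pooled standard error)."""
    weights = [1 / se ** 2 for _, _, se in studies]
    pooled = sum(w * est for w, (_, est, _) in zip(weights, studies)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

est, se = pool_fixed_effect(studies)
print(f"pooled effect = {est:.3f} "
      f"(95% CI {est - 1.96 * se:.3f} to {est + 1.96 * se:.3f})")
```

Note that more precise studies (smaller standard errors) dominate the pooled estimate; a random-effects model would be needed when the studies are believed to estimate different underlying effects.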

Example of Evaluating Research

Here is a sample research evaluation for students:

Title of the Study: The Effects of Social Media Use on Mental Health among College Students

Sample Size: 500 college students

Sampling Technique: Convenience sampling

  • Sample Size: The sample size of 500 college students is a moderate sample size, which could be considered representative of the college student population. However, it would be more representative if the sample size was larger, or if a random sampling technique was used.
  • Sampling Technique : Convenience sampling is a non-probability sampling technique, which means that the sample may not be representative of the population. This technique may introduce bias into the study since the participants are self-selected and may not be representative of the entire college student population. Therefore, the results of this study may not be generalizable to other populations.
  • Participant Characteristics: The study does not provide any information about the demographic characteristics of the participants, such as age, gender, race, or socioeconomic status. This information is important because social media use and mental health may vary among different demographic groups.
  • Data Collection Method: The study used a self-administered survey to collect data. Self-administered surveys may be subject to response bias and may not accurately reflect participants’ actual behaviors and experiences.
  • Data Analysis: The study used descriptive statistics and regression analysis to analyze the data. Descriptive statistics provide a summary of the data, while regression analysis is used to examine the relationship between two or more variables. However, the study did not provide information about the statistical significance of the results or the effect sizes.

Overall, while the study provides some insights into the relationship between social media use and mental health among college students, the use of a convenience sampling technique and the lack of information about participant characteristics limit the generalizability of the findings. In addition, the use of self-administered surveys may introduce bias into the study, and the lack of information about the statistical significance of the results limits the interpretation of the findings.
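The descriptive-statistics-plus-regression analysis the sample study describes can be sketched in a few lines. The data points below are fabricated purely for illustration (hypothetical daily social media hours against a hypothetical mental-health score); a real evaluation would also report statistical significance and effect sizes, as the critique above notes.

```python
import statistics

# Descriptive statistics plus simple (one-predictor) linear regression,
# the analysis style described in the sample evaluation. Data is fabricated.
hours = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # hypothetical daily social media hours
score = [8.2, 7.9, 7.1, 6.5, 6.0, 5.1]   # hypothetical mental-health score

def ols_slope_intercept(x, y):
    """Ordinary least squares fit for y = intercept + slope * x."""
    mx, my = statistics.mean(x), statistics.mean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

intercept, slope = ols_slope_intercept(hours, score)
print(f"mean score = {statistics.mean(score):.2f}, sd = {statistics.stdev(score):.2f}")
print(f"score = {intercept:.2f} {slope:+.2f} * hours")
```

A negative slope here would only describe an association in this fabricated sample; as the evaluation points out, convenience sampling and self-report bias mean no causal claim could be made from such data.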

Note: The example above is only a sample for students. Do not copy and paste it directly into your assignment; do your own research for academic purposes.

Applications of Evaluating Research

Here are some of the applications of evaluating research:

  • Identifying reliable sources : By evaluating research, researchers, students, and other professionals can identify the most reliable sources of information to use in their work. They can determine the quality of research studies, including the methodology, sample size, data analysis, and conclusions.
  • Validating findings: Evaluating research can help to validate findings from previous studies. By examining the methodology and results of a study, researchers can determine if the findings are reliable and if they can be used to inform future research.
  • Identifying knowledge gaps: Evaluating research can also help to identify gaps in current knowledge. By examining the existing literature on a topic, researchers can determine areas where more research is needed, and they can design studies to address these gaps.
  • Improving research quality : Evaluating research can help to improve the quality of future research. By examining the strengths and weaknesses of previous studies, researchers can design better studies and avoid common pitfalls.
  • Informing policy and decision-making : Evaluating research is crucial in informing policy and decision-making in many fields. By examining the evidence base for a particular issue, policymakers can make informed decisions that are supported by the best available evidence.
  • Enhancing education : Evaluating research is essential in enhancing education. Educators can use research findings to improve teaching methods, curriculum development, and student outcomes.

Purpose of Evaluating Research

Here are some of the key purposes of evaluating research:

  • Determine the reliability and validity of research findings : By evaluating research, researchers can determine the quality of the study design, data collection, and analysis. They can determine whether the findings are reliable, valid, and generalizable to other populations.
  • Identify the strengths and weaknesses of research studies: Evaluating research helps to identify the strengths and weaknesses of research studies, including potential biases, confounding factors, and limitations. This information can help researchers to design better studies in the future.
  • Inform evidence-based decision-making: Evaluating research is crucial in informing evidence-based decision-making in many fields, including healthcare, education, and public policy. Policymakers, educators, and clinicians rely on research evidence to make informed decisions.
  • Identify research gaps : By evaluating research, researchers can identify gaps in the existing literature and design studies to address these gaps. This process can help to advance knowledge and improve the quality of research in a particular field.
  • Ensure research ethics and integrity : Evaluating research helps to ensure that research studies are conducted ethically and with integrity. Researchers must adhere to ethical guidelines to protect the welfare and rights of study participants and to maintain the trust of the public.

Characteristics to Evaluate in Research

The characteristics to evaluate in research are as follows:

  • Research question/hypothesis: A good research question or hypothesis should be clear, concise, and well-defined. It should address a significant problem or issue in the field and be grounded in relevant theory or prior research.
  • Study design: The research design should be appropriate for answering the research question and be clearly described in the study. The study design should also minimize bias and confounding variables.
  • Sampling : The sample should be representative of the population of interest and the sampling method should be appropriate for the research question and study design.
  • Data collection : The data collection methods should be reliable and valid, and the data should be accurately recorded and analyzed.
  • Results : The results should be presented clearly and accurately, and the statistical analysis should be appropriate for the research question and study design.
  • Interpretation of results : The interpretation of the results should be based on the data and not influenced by personal biases or preconceptions.
  • Generalizability: The study findings should be generalizable to the population of interest and relevant to other settings or contexts.
  • Contribution to the field : The study should make a significant contribution to the field and advance our understanding of the research question or issue.

Advantages of Evaluating Research

Evaluating research has several advantages, including:

  • Ensuring accuracy and validity : By evaluating research, we can ensure that the research is accurate, valid, and reliable. This ensures that the findings are trustworthy and can be used to inform decision-making.
  • Identifying gaps in knowledge : Evaluating research can help identify gaps in knowledge and areas where further research is needed. This can guide future research and help build a stronger evidence base.
  • Promoting critical thinking: Evaluating research requires critical thinking skills, which can be applied in other areas of life. By evaluating research, individuals can develop their critical thinking skills and become more discerning consumers of information.
  • Improving the quality of research : Evaluating research can help improve the quality of research by identifying areas where improvements can be made. This can lead to more rigorous research methods and better-quality research.
  • Informing decision-making: By evaluating research, we can make informed decisions based on the evidence. This is particularly important in fields such as medicine and public health, where decisions can have significant consequences.
  • Advancing the field : Evaluating research can help advance the field by identifying new research questions and areas of inquiry. This can lead to the development of new theories and the refinement of existing ones.

Limitations of Evaluating Research

Limitations of Evaluating Research are as follows:

  • Time-consuming: Evaluating research can be time-consuming, particularly if the study is complex or requires specialized knowledge. This can be a barrier for individuals who are not experts in the field or who have limited time.
  • Subjectivity : Evaluating research can be subjective, as different individuals may have different interpretations of the same study. This can lead to inconsistencies in the evaluation process and make it difficult to compare studies.
  • Limited generalizability: The findings of a study may not be generalizable to other populations or contexts. This limits the usefulness of the study and may make it difficult to apply the findings to other settings.
  • Publication bias: Research that does not find significant results may be less likely to be published, which can create a bias in the published literature. This can limit the amount of information available for evaluation.
  • Lack of transparency: Some studies may not provide enough detail about their methods or results, making it difficult to evaluate their quality or validity.
  • Funding bias : Research funded by particular organizations or industries may be biased towards the interests of the funder. This can influence the study design, methods, and interpretation of results.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Site logo

How to Write Evaluation Reports: Purpose, Structure, Content, Challenges, Tips, and Examples

This article explores how to write effective evaluation reports, covering their purpose, structure, content, and common challenges. It provides tips for presenting evaluation findings effectively and using evaluation reports to improve programs and policies. Examples of well-written evaluation reports and templates are also included.

Table of Contents

  • What is an Evaluation Report?
  • What is the Purpose of an Evaluation Report?
  • Importance of Evaluation Reports in Program Management
  • Structure of an Evaluation Report
  • Best Practices for Writing an Evaluation Report
  • Common Challenges in Writing an Evaluation Report
  • Tips for Presenting Evaluation Findings Effectively
  • Using Evaluation Reports to Improve Programs and Policies
  • Example Evaluation Report Templates
  • Conclusion: Making Evaluation Reports Work for You

An evaluation report is a document that presents the findings, conclusions, and recommendations of an evaluation, which is a systematic and objective assessment of the performance, impact, and effectiveness of a program, project, policy, or intervention. The report typically includes a description of the evaluation’s purpose, scope, methodology, and data sources, as well as an analysis of the evaluation findings and conclusions, and specific recommendations for program or project improvement.

Evaluation reports can help to build capacity for monitoring and evaluation within organizations and communities, by promoting a culture of learning and continuous improvement. By providing a structured approach to evaluation and reporting, evaluation reports can help to ensure that evaluations are conducted consistently and rigorously, and that the results are communicated effectively to stakeholders.

Evaluation reports may be read by a wide variety of audiences, including people working in government agencies, staff members working for donors and partners, students and community organisations, and development professionals working on projects or programmes comparable to the ones evaluated.

Related: Difference Between Evaluation Report and M&E Reports .

The purpose of an evaluation report is to provide stakeholders with a comprehensive and objective assessment of a program or project’s performance, achievements, and challenges. The report serves as a tool for decision-making, as it provides evidence-based information on the program or project’s strengths and weaknesses, and recommendations for improvement.

The main objectives of an evaluation report are:

  • Accountability: To assess whether the program or project has met its objectives and delivered the intended results, and to hold stakeholders accountable for their actions and decisions.
  • Learning : To identify the key lessons learned from the program or project, including best practices, challenges, and opportunities for improvement, and to apply these lessons to future programs or projects.
  • Improvement : To provide recommendations for program or project improvement based on the evaluation findings and conclusions, and to support evidence-based decision-making.
  • Communication : To communicate the evaluation findings and conclusions to stakeholders , including program staff, funders, policymakers, and the general public, and to promote transparency and stakeholder engagement.

An evaluation report should be clear, concise, and well-organized, and should provide stakeholders with a balanced and objective assessment of the program or project’s performance. The report should also be timely, with recommendations that are actionable and relevant to the current context. Overall, the purpose of an evaluation report is to promote accountability, learning, and improvement in program and project design and implementation.

Evaluation reports play a critical role in program management by providing valuable information about program effectiveness and efficiency. They offer insights into the extent to which programs have achieved their objectives, as well as identifying areas for improvement.

Evaluation reports help program managers and stakeholders to make informed decisions about program design, implementation, and funding. They provide evidence-based information that can be used to improve program outcomes and address challenges.

Moreover, evaluation reports are essential in demonstrating program accountability and transparency to funders, policymakers, and other stakeholders. They serve as a record of program activities and outcomes, allowing stakeholders to assess the program’s impact and sustainability.

In short, evaluation reports are a vital tool for program managers and evaluators. They provide a comprehensive picture of program performance, including strengths, weaknesses, and areas for improvement. By utilizing evaluation reports, program managers can make informed decisions to improve program outcomes and ensure that their programs are effective, efficient, and sustainable over time.


The structure of an evaluation report can vary depending on the requirements and preferences of the stakeholders, but typically it includes the following sections:

  • Executive Summary : A brief summary of the evaluation findings, conclusions, and recommendations.
  • Introduction: An overview of the evaluation context, scope, purpose, and methodology.
  • Background: A summary of the programme or initiative that is being assessed, including its goals, activities, and intended audience(s).
  • Evaluation Questions : A list of the evaluation questions that guided the data collection and analysis.
  • Methodology: A description of the data collection methods used in the evaluation, including the sampling strategy, data sources, and data analysis techniques.
  • Findings: A presentation of the evaluation findings, organized according to the evaluation questions.
  • Conclusions : A summary of the main evaluation findings and conclusions, including an assessment of the program or project’s effectiveness, efficiency, and sustainability.
  • Recommendations : A list of specific recommendations for program or project improvements based on the evaluation findings and conclusions.
  • Lessons Learned : A discussion of the key lessons learned from the evaluation that could be applied to similar programs or projects in the future.
  • Limitations : A discussion of the limitations of the evaluation, including any challenges or constraints encountered during the data collection and analysis.
  • References: A list of references cited in the evaluation report.
  • Appendices : Additional information, such as detailed data tables, graphs, or maps, that support the evaluation findings and conclusions.

The structure of the evaluation report should be clear, logical, and easy to follow, with headings and subheadings used to organize the content and facilitate navigation.

In addition, the presentation of data may be made more engaging and understandable by the use of visual aids such as graphs and charts.
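For teams that assemble reports programmatically, the section list above can double as a completeness checklist. A minimal sketch in Python (the section names come from the list above; the function and the draft headings are hypothetical):

```python
# Hypothetical completeness check: compare a draft report's headings
# against the sections an evaluation report typically includes.
REPORT_SECTIONS = [
    "Executive Summary", "Introduction", "Background", "Evaluation Questions",
    "Methodology", "Findings", "Conclusions", "Recommendations",
    "Lessons Learned", "Limitations", "References", "Appendices",
]

def missing_sections(draft_headings):
    """Return expected sections that are absent from the draft, in order."""
    present = {h.strip().lower() for h in draft_headings}
    return [s for s in REPORT_SECTIONS if s.lower() not in present]

draft = ["Executive Summary", "Introduction", "Findings", "Recommendations"]
print(missing_sections(draft))  # lists the eight sections still to be written
```

A check like this will not judge the quality of a section, but it helps ensure no expected component is omitted before the report is circulated.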

Writing an effective evaluation report requires careful planning and attention to detail. Here are some best practices to consider when writing an evaluation report:

Begin by establishing the report’s purpose, objectives, and target audience. A clear understanding of these elements will help guide the report’s structure and content.

Use clear and concise language throughout the report. Avoid jargon and technical terms that may be difficult for readers to understand.

Use evidence-based findings to support your conclusions and recommendations. Ensure that the findings are clearly presented using data tables, graphs, and charts.

Provide context for the evaluation by including a brief summary of the program being evaluated, its objectives, and intended impact. This will help readers understand the report’s purpose and the findings.

Include limitations and caveats in the report to provide a balanced assessment of the program’s effectiveness. Acknowledge any data limitations or other factors that may have influenced the evaluation’s results.

Organize the report in a logical manner, using headings and subheadings to break up the content. This will make the report easier to read and understand.

Ensure that the report is well-structured and easy to navigate. Use a clear and consistent formatting style throughout the report.

Finally, use the report to make actionable recommendations that will help improve program effectiveness and efficiency. Be specific about the steps that should be taken and the resources required to implement the recommendations.

By following these best practices, you can write an evaluation report that is clear, concise, and actionable, helping program managers and stakeholders to make informed decisions that improve program outcomes.


Writing an evaluation report can be a challenging task, even for experienced evaluators. Here are some common challenges that evaluators may encounter when writing an evaluation report:

  • Data limitations: One of the biggest challenges in writing an evaluation report is dealing with data limitations. Evaluators may find that the data they collected is incomplete, inaccurate, or difficult to interpret, making it challenging to draw meaningful conclusions.
  • Stakeholder disagreements: Another common challenge is stakeholder disagreements over the evaluation’s findings and recommendations. Stakeholders may have different opinions about the program’s effectiveness or the best course of action to improve program outcomes.
  • Technical writing skills: Evaluators may struggle with technical writing skills, which are essential for presenting complex evaluation findings in a clear and concise manner. Writing skills are particularly important when presenting statistical data or other technical information.
  • Time constraints: Evaluators may face time constraints when writing evaluation reports, particularly if the report is needed quickly or the evaluation involved a large amount of data collection and analysis.
  • Communication barriers: Evaluators may encounter communication barriers when working with stakeholders who speak different languages or have different cultural backgrounds. Effective communication is essential for ensuring that the evaluation’s findings are understood and acted upon.

By being aware of these common challenges, evaluators can take steps to address them and produce evaluation reports that are clear, accurate, and actionable. This may involve developing data collection and analysis plans that account for potential data limitations, engaging stakeholders early in the evaluation process to build consensus, and investing time in developing technical writing skills.

Presenting evaluation findings effectively is essential for ensuring that program managers and stakeholders understand the evaluation’s purpose, objectives, and conclusions. Here are some tips for presenting evaluation findings effectively:

  • Know your audience: Before presenting evaluation findings, ensure that you have a clear understanding of your audience’s background, interests, and expertise. This will help you tailor your presentation to their needs and interests.
  • Use visuals: Visual aids such as graphs, charts, and tables can help convey evaluation findings more effectively than written reports. Use visuals to highlight key data points and trends.
  • Be concise: Keep your presentation concise and to the point. Focus on the key findings and conclusions, and avoid getting bogged down in technical details.
  • Tell a story: Use the evaluation findings to tell a story about the program’s impact and effectiveness. This can help engage stakeholders and make the findings more memorable.
  • Provide context: Provide context for the evaluation findings by explaining the program’s objectives and intended impact. This will help stakeholders understand the significance of the findings.
  • Use plain language: Use plain language that is easily understandable by your target audience. Avoid jargon and technical terms that may confuse or alienate stakeholders.
  • Engage stakeholders: Engage stakeholders in the presentation by asking for their input and feedback. This can help build consensus and ensure that the evaluation findings are acted upon.

By following these tips, you can present evaluation findings in a way that engages stakeholders, highlights key findings, and ensures that the evaluation’s conclusions are acted upon to improve program outcomes.

Evaluation reports are crucial tools for program managers and policymakers to assess program effectiveness and make informed decisions about program design, implementation, and funding. By analyzing data collected during the evaluation process, evaluation reports provide evidence-based information that can be used to improve program outcomes and impact.

One of the primary ways that evaluation reports can be used to improve programs and policies is by identifying program strengths and weaknesses. By assessing program effectiveness and efficiency, evaluation reports can help identify areas where programs are succeeding and areas where improvements are needed. This information can inform program redesign and improvement efforts, leading to better program outcomes and impact.

Evaluation reports can also be used to make data-driven decisions about program design, implementation, and funding. By providing decision-makers with data-driven information, evaluation reports can help ensure that programs are designed and implemented in a way that maximizes their impact and effectiveness. This information can also be used to allocate resources more effectively, directing funding towards programs that are most effective and efficient.

Another way that evaluation reports can be used to improve programs and policies is by disseminating best practices in program design and implementation. By sharing information about what works and what doesn’t work, evaluation reports can help program managers and policymakers make informed decisions about program design and implementation, leading to better outcomes and impact.

Finally, evaluation reports can inform policy development and improvement efforts by providing evidence about the effectiveness and impact of existing policies. This information can be used to make data-driven decisions about policy development and improvement efforts, ensuring that policies are designed and implemented in a way that maximizes their impact and effectiveness.

In summary, evaluation reports are critical tools for improving programs and policies. By providing evidence-based information about program effectiveness and efficiency, evaluation reports can help program managers and policymakers make informed decisions, allocate resources more effectively, disseminate best practices, and inform policy development and improvement efforts.

There are many different templates available for creating evaluation reports. Here are some examples of template evaluation reports that can be used as a starting point for creating your own report:

  • The National Science Foundation Evaluation Report Template – This template provides a structure for evaluating research projects funded by the National Science Foundation. It includes sections on project background, research questions, evaluation methodology, data analysis, and conclusions and recommendations.
  • The CDC Program Evaluation Template – This template, created by the Centers for Disease Control and Prevention, provides a framework for evaluating public health programs. It includes sections on program description, evaluation questions, data sources, data analysis, and conclusions and recommendations.
  • The World Bank Evaluation Report Template – This template, created by the World Bank, provides a structure for evaluating development projects. It includes sections on project background, evaluation methodology, data analysis, findings and conclusions, and recommendations.
  • The European Commission Evaluation Report Template – This template provides a structure for evaluating European Union projects and programs. It includes sections on project description, evaluation objectives, evaluation methodology, findings, conclusions, and recommendations.
  • The UNICEF Evaluation Report Template – This template provides a framework for evaluating UNICEF programs and projects. It includes sections on program description, evaluation questions, evaluation methodology, findings, conclusions, and recommendations.

These templates provide a structure for creating evaluation reports that are well-organized and easy to read. They can be customized to meet the specific needs of your program or project and help ensure that your evaluation report is comprehensive and includes all of the necessary components.

  • World Health Organisation Reports
  • Checklist for Assessing USAID Evaluation Reports

In conclusion, evaluation reports are essential tools for program managers and policymakers to assess program effectiveness and make informed decisions about program design, implementation, and funding. By analyzing data collected during the evaluation process, evaluation reports provide evidence-based information that can be used to improve program outcomes and impact.

To make evaluation reports work for you, it is important to plan ahead and establish clear objectives and target audiences. This will help guide the report’s structure and content and ensure that the report is tailored to the needs of its intended audience.

When writing an evaluation report, it is important to use clear and concise language, provide evidence-based findings, and offer actionable recommendations that can be used to improve program outcomes. Including context for the evaluation findings and acknowledging limitations and caveats will provide a balanced assessment of the program’s effectiveness and help build trust with stakeholders.

Presenting evaluation findings effectively requires knowing your audience, using visuals, being concise, telling a story, providing context, using plain language, and engaging stakeholders. By following these tips, you can present evaluation findings in a way that engages stakeholders, highlights key findings, and ensures that the evaluation’s conclusions are acted upon to improve program outcomes.

Finally, using evaluation reports to improve programs and policies requires identifying program strengths and weaknesses, making data-driven decisions, disseminating best practices, allocating resources effectively, and informing policy development and improvement efforts. By using evaluation reports in these ways, program managers and policymakers can ensure that their programs are effective, efficient, and sustainable over time.



Evaluation Research: Definition, Methods and Examples


Content Index

  • What is evaluation research?
  • Why do evaluation research?
  • Quantitative methods
  • Qualitative methods
  • Process evaluation research question examples
  • Outcome evaluation research question examples

What is evaluation research?

Evaluation research, also known as program evaluation, refers to research purpose instead of a specific method. Evaluation research is the systematic assessment of the worth or merit of time, money, effort and resources spent in order to achieve a goal.

Evaluation research is closely related to, but slightly different from, more conventional social research. It uses many of the same methods, but because it takes place within an organizational context, it requires team skills, interpersonal skills, management skills, political savvy, and other skills that conventional social research rarely demands. Evaluation research also requires the researcher to keep the interests of the stakeholders in mind.

Evaluation research is a type of applied research, and so it is intended to have some real-world effect. Many methods, such as surveys and experiments, can be used for evaluation research. The process, from data collection through analysis and reporting, is rigorous and systematic, involving data about organizations, processes, projects, services, and/or resources. Evaluation research enhances knowledge and decision-making, and leads to practical applications.

LEARN ABOUT: Action Research

Why do evaluation research?

The common goal of most evaluations is to extract meaningful information from the audience and provide valuable insights to evaluators such as sponsors, donors, client groups, administrators, staff, and other relevant constituencies. Most often, feedback is perceived as valuable if it helps in decision-making. However, evaluation research does not always produce findings that can be applied elsewhere, and sometimes it fails to influence short-term decisions. It is equally true that research that initially seems to have no influence can have a delayed impact when the situation becomes more favorable. In spite of this, there is general agreement that the major goal of evaluation research should be to improve decision-making through the systematic use of measurable feedback.

Below are some of the benefits of evaluation research

  • Gain insights about a project or program and its operations

Evaluation research lets you understand what works and what doesn’t: where you were, where you are, and where you are headed. You can identify areas of improvement as well as strengths, which helps you figure out what you need to focus on and whether there are any threats to your business. You can also find out whether there are hidden sectors in the market that are still untapped.

  • Improve practice

It is essential to gauge your past performance and understand what went wrong in order to deliver better services to your customers. Unless communication is two-way, there is no way to improve on what you have to offer. Evaluation research gives your employees and customers an opportunity to express how they feel and whether there is anything they would like to change. It also lets you modify or adopt a practice in a way that increases the chances of success.

  • Assess the effects

After evaluating the efforts, you can see how well you are meeting objectives and targets. Evaluations let you measure if the intended benefits are really reaching the targeted audience and if yes, then how effectively.

  • Build capacity

Evaluations help you to analyze the demand pattern and predict if you will need more funds, upgrade skills and improve the efficiency of operations. It lets you find the gaps in the production to delivery chain and possible ways to fill them.

Methods of evaluation research

All market research methods involve collecting and analyzing data, making decisions about the validity of the information, and deriving relevant inferences from it. Evaluation research comprises planning, conducting, and analyzing the results, which includes the use of data collection techniques and the application of statistical methods.

Some of the more popular evaluation methods are input measurement, output or performance measurement, impact or outcomes assessment, quality assessment, process evaluation, benchmarking, standards, cost analysis, organizational effectiveness, program evaluation methods, and LIS-centered methods. There are also a few types of evaluation that do not always result in a meaningful assessment, such as descriptive studies, formative evaluations, and implementation analysis. Evaluation research is concerned more with the information-processing and feedback functions of evaluation.

These methods can be broadly classified as quantitative and qualitative methods.

Quantitative research methods are used to measure anything tangible and produce answers to questions such as:

  • Who was involved?
  • What were the outcomes?
  • What was the price?

The best way to collect quantitative data is through surveys , questionnaires , and polls . You can also create pre-tests and post-tests, review existing documents and databases or gather clinical data.

Surveys are used to gather the opinions, feedback, or ideas of your employees or customers and consist of various question types. They can be conducted face-to-face, by telephone, by mail, or online. Online surveys do not require human intervention and are far more efficient and practical. You can see survey results on the dashboard of research tools and dig deeper using filter criteria based on factors such as age, gender, and location. You can also build survey logic such as branching, quotas, chained surveys, and looping into the survey questions, reducing the time needed both to create and to respond to the survey. In addition, you can generate reports that involve statistical formulae and present data in a form that can be readily absorbed in meetings.
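Survey logic such as branching (skip logic) can be pictured as a small decision table: the answer to one question determines the next question shown. A toy sketch in Python, with invented question IDs and answers:

```python
# Toy skip-logic table: maps (current question, answer) -> next question.
# A key of None acts as the default branch for any other answer.
FLOW = {
    "q1_used_product": {"Yes": "q2_satisfaction", "No": "q3_why_not"},
    "q2_satisfaction": {None: "end"},
    "q3_why_not": {None: "end"},
}

def next_question(current, answer):
    """Look up the next question for a given answer, falling back to the default."""
    branches = FLOW[current]
    return branches.get(answer, branches.get(None))

print(next_question("q1_used_product", "No"))  # respondents who said "No" branch away
```

Real survey platforms implement far richer logic (quotas, piping, looping), but the underlying idea is the same routing table.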


Quantitative data measure the depth and breadth of an initiative, for instance, the number of people who participated in the non-profit event, the number of people who enrolled for a new course at the university. Quantitative data collected before and after a program can show its results and impact.
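As a simple illustration of a before-and-after comparison, the sketch below computes the mean change in a made-up set of participant test scores; the numbers are invented, and a real evaluation would also test whether the change is statistically significant before attributing it to the program:

```python
# Invented pre/post test scores for five program participants.
pre  = [52, 60, 48, 55, 63]
post = [66, 72, 59, 70, 75]

def mean(xs):
    return sum(xs) / len(xs)

change = mean(post) - mean(pre)
pct = 100 * change / mean(pre)
print(f"mean before={mean(pre):.1f}, after={mean(post):.1f}, "
      f"change={change:+.1f} ({pct:+.1f}%)")
```

In practice this would be paired with a significance test (for example, a paired t-test) and a comparison group where one is available.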

The accuracy of quantitative data to be used for evaluation research depends on how well the sample represents the population, the ease of analysis, and their consistency. Quantitative methods can fail if the questions are not framed correctly and not distributed to the right audience. Also, quantitative data do not provide an understanding of the context and may not be apt for complex issues.

Learn more: Quantitative Market Research: The Complete Guide

Qualitative research methods are used where quantitative methods cannot solve the research problem , i.e. they are used to measure intangible values. They answer questions such as

  • What is the value added?
  • How satisfied are you with our service?
  • How likely are you to recommend us to your friends?
  • What will improve your experience?

LEARN ABOUT: Qualitative Interview

Qualitative data is collected through observation, interviews, case studies, and focus groups. The steps in a qualitative study involve examining, comparing and contrasting, and understanding patterns. Analysts draw conclusions after identifying themes, clustering similar data, and finally reducing the data to points that make sense.
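The coding-and-clustering step can be illustrated with a toy keyword tagger: each comment is tagged with the themes it mentions, and theme frequencies are then counted. The themes, keywords, and comments below are invented, and real qualitative coding is far more nuanced:

```python
from collections import Counter

# Invented theme -> keyword mapping for coding open-ended comments.
THEMES = {
    "usability": ["easy", "confusing", "intuitive"],
    "support":   ["helpdesk", "response", "staff"],
    "cost":      ["price", "expensive", "fee"],
}

def code_comment(text):
    """Tag a comment with every theme whose keywords it mentions."""
    text = text.lower()
    return [theme for theme, kws in THEMES.items() if any(k in text for k in kws)]

comments = [
    "The tool was easy to use but the fee surprised us.",
    "Helpdesk response was slow.",
    "Menus were confusing at first.",
]
counts = Counter(tag for c in comments for tag in code_comment(c))
print(counts.most_common())  # which themes recur across comments
```

Automated tagging like this is only a starting point; human analysts still need to read the comments to interpret context, tone, and themes no keyword list anticipates.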

Observations may help explain behaviors as well as the social context that is generally not discovered by quantitative methods. Observations of behavior and body language can be done by watching a participant, recording audio or video. Structured interviews can be conducted with people alone or in a group under controlled conditions, or they may be asked open-ended qualitative research questions . Qualitative research methods are also used to understand a person’s perceptions and motivations.

LEARN ABOUT:  Social Communication Questionnaire

The strength of this method is that group discussion can provide ideas and stimulate memories with topics cascading as discussion occurs. The accuracy of qualitative data depends on how well contextual data explains complex issues and complements quantitative data. It helps get the answer of “why” and “how”, after getting an answer to “what”. The limitations of qualitative data for evaluation research are that they are subjective, time-consuming, costly and difficult to analyze and interpret.

Learn more: Qualitative Market Research: The Complete Guide

Survey software can be used for both evaluation research methods. You can use the sample questions below and send a survey in minutes using research software. Using a tool for research simplifies the whole process, from creating a survey and importing contacts to distributing the survey and generating reports that aid in research.

Examples of evaluation research

Evaluation research questions lay the foundation of a successful evaluation. They define the topics that will be evaluated. Keeping evaluation questions ready not only saves time and money, but also makes it easier to decide what data to collect, how to analyze it, and how to report it.

Evaluation research questions must be developed and agreed on in the planning stage; however, ready-made research templates can also be used.

Process evaluation research question examples:

  • How often do you use our product in a day?
  • Were approvals taken from all stakeholders?
  • Can you report the issue from the system?
  • Can you submit the feedback from the system?
  • Was each task done as per the standard operating procedure?
  • What were the barriers to the implementation of each task?
  • Were any improvement areas discovered?

Outcome evaluation research question examples:

  • How satisfied are you with our product?
  • Did the program produce intended outcomes?
  • What were the unintended outcomes?
  • Has the program increased the knowledge of participants?
  • Were the participants of the program employable before the course started?
  • Do participants of the program have the skills to find a job after the course ended?
  • Is the knowledge of participants better compared to those who did not participate in the program?




NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

National Research Council (US) Panel on the Evaluation of AIDS Interventions; Coyle SL, Boruch RF, Turner CF, editors. Evaluating AIDS Prevention Programs: Expanded Edition. Washington (DC): National Academies Press (US); 1991.


1 Design and Implementation of Evaluation Research

Evaluation has its roots in the social, behavioral, and statistical sciences, and it relies on their principles and methodologies of research, including experimental design, measurement, statistical tests, and direct observation. What distinguishes evaluation research from other social science is that its subjects are ongoing social action programs that are intended to produce individual or collective change. This setting usually engenders a great need for cooperation between those who conduct the program and those who evaluate it. This need for cooperation can be particularly acute in the case of AIDS prevention programs because those programs have been developed rapidly to meet the urgent demands of a changing and deadly epidemic.

Although the characteristics of AIDS intervention programs place some unique demands on evaluation, the techniques for conducting good program evaluation do not need to be invented. Two decades of evaluation research have provided a basic conceptual framework for undertaking such efforts (see, e.g., Campbell and Stanley [1966] and Cook and Campbell [1979] for discussions of outcome evaluation; see Weiss [1972] and Rossi and Freeman [1982] for process and outcome evaluations); in addition, similar programs, such as the antismoking campaigns, have been subject to evaluation, and they offer examples of the problems that have been encountered.

In this chapter the panel provides an overview of the terminology, types, designs, and management of evaluation research. The following chapter provides an overview of program objectives and the selection and measurement of appropriate outcome variables for judging the effectiveness of AIDS intervention programs. These issues are discussed in detail in the subsequent, program-specific Chapters 3-5.

  • Types of Evaluation

The term evaluation implies a variety of different things to different people. The recent report of the Committee on AIDS Research and the Behavioral, Social, and Statistical Sciences defines the area through a series of questions (Turner, Miller, and Moses, 1989:317-318):

Evaluation is a systematic process that produces a trustworthy account of what was attempted and why; through the examination of results—the outcomes of intervention programs—it answers the questions, "What was done?" "To whom, and how?" and "What outcomes were observed?" Well-designed evaluation permits us to draw inferences from the data and addresses the difficult question: "What do the outcomes mean?"

These questions differ in the degree of difficulty of answering them. An evaluation that tries to determine the outcomes of an intervention and what those outcomes mean is a more complicated endeavor than an evaluation that assesses the process by which the intervention was delivered. Both kinds of evaluation are necessary because they are intimately connected: to establish a project's success, an evaluator must first ask whether the project was implemented as planned and then whether its objective was achieved. Questions about a project's implementation usually fall under the rubric of process evaluation . If the investigation involves rapid feedback to the project staff or sponsors, particularly at the earliest stages of program implementation, the work is called formative evaluation . Questions about effects or effectiveness are often variously called summative evaluation, impact assessment, or outcome evaluation, the term the panel uses.

Formative evaluation is a special type of early evaluation that occurs during and after a program has been designed but before it is broadly implemented. Formative evaluation is used to understand the need for the intervention and to make tentative decisions about how to implement or improve it. During formative evaluation, information is collected and then fed back to program designers and administrators to enhance program development and maximize the success of the intervention. For example, formative evaluation may be carried out through a pilot project before a program is implemented at several sites. A pilot study of a community-based organization (CBO), for example, might be used to gather data on problems involving access to and recruitment of targeted populations and the utilization and implementation of services; the findings of such a study would then be used to modify (if needed) the planned program.

Another example of formative evaluation is the use of a "story board" design of a TV message that has yet to be produced. A story board is a series of text and sketches of camera shots that are to be produced in a commercial. To evaluate the effectiveness of the message and forecast some of the consequences of actually broadcasting it to the general public, an advertising agency convenes small groups of people to react to and comment on the proposed design.

Once an intervention has been implemented, the next stage of evaluation is process evaluation, which addresses two broad questions: "What was done?" and "To whom, and how?" Ordinarily, process evaluation is carried out at some point in the life of a project to determine how and how well the delivery goals of the program are being met. When intervention programs continue over a long period of time (as is the case for some of the major AIDS prevention programs), measurements at several times are warranted to ensure that the components of the intervention continue to be delivered by the right people, to the right people, in the right manner, and at the right time. Process evaluation can also play a role in improving interventions by providing the information necessary to change delivery strategies or program objectives in a changing epidemic.

Research designs for process evaluation include direct observation of projects, surveys of service providers and clients, and the monitoring of administrative records. The panel notes that the Centers for Disease Control (CDC) is already collecting some administrative records on its counseling and testing program and community-based projects. The panel believes that this type of evaluation should be a continuing and expanded component of intervention projects to guarantee the maintenance of the projects' integrity and responsiveness to their constituencies.

The purpose of outcome evaluation is to identify consequences and to establish that consequences are, indeed, attributable to a project. This type of evaluation answers the questions, "What outcomes were observed?" and, perhaps more importantly, "What do the outcomes mean?" Like process evaluation, outcome evaluation can also be conducted at intervals during an ongoing program, and the panel believes that such periodic evaluation should be done to monitor goal achievement.

The panel believes that these stages of evaluation (i.e., formative, process, and outcome) are essential to learning how AIDS prevention programs contribute to containing the epidemic. After a body of findings has been accumulated from such evaluations, it may be fruitful to launch another stage of evaluation: cost-effectiveness analysis (see Weinstein et al., 1989). Like outcome evaluation, cost-effectiveness analysis measures program effectiveness, but it extends the analysis by adding a measure of program cost. The panel believes that consideration of cost-effectiveness analysis should be postponed until more experience is gained with formative, process, and outcome evaluation of the CDC AIDS prevention programs.

  • Evaluation Research Design

Process and outcome evaluations require different types of research designs, as discussed below. Formative evaluations, which are intended to both assess implementation and forecast effects, use a mix of these designs.

Process Evaluation Designs

To conduct process evaluations on how well services are delivered, data need to be gathered on the content of interventions and on their delivery systems. Suggested methodologies include direct observation, surveys, and record keeping.

Direct observation designs include case studies, in which participant-observers unobtrusively and systematically record encounters within a program setting, and nonparticipant observation, in which long, open-ended (or "focused") interviews are conducted with program participants. 1 For example, "professional customers" at counseling and testing sites can act as project clients to monitor activities unobtrusively; 2 alternatively, nonparticipant observers can interview both staff and clients. Surveys —either censuses (of the whole population of interest) or samples—elicit information through interviews or questionnaires completed by project participants or potential users of a project. For example, surveys within community-based projects can collect basic statistical information on project objectives, what services are provided, to whom, when, how often, for how long, and in what context.

Record keeping consists of administrative or other reporting systems that monitor use of services. Standardized reporting ensures consistency in the scope and depth of data collected. To use the media campaign as an example, the panel suggests using standardized data on the use of the AIDS hotline to monitor public attentiveness to the advertisements broadcast by the media campaign.

These designs are simple to understand, but they require expertise to implement. For example, observational studies must be conducted by people who are well trained in how to carry out on-site tasks sensitively and to record their findings uniformly. Observers can either write narrative accounts of what occurred in a service setting or complete a data inventory to ensure that multiple aspects of service delivery are covered. These types of studies are time-consuming and benefit from corroboration among several observers. The use of surveys in research is well understood, although surveys, too, require expertise to implement well. As the program chapters reflect, survey data collection must be carefully designed to reduce problems of validity and reliability and, if samples are used, to provide an appropriate sampling scheme. Record keeping and service inventories are probably the easiest research designs to implement, although preparing standardized internal forms requires attention to the salient details of service delivery.

Outcome Evaluation Designs

Research designs for outcome evaluations are meant to assess principal and relative effects. Ideally, to assess the effect of an intervention on program participants, one would like to know what would have happened to the same participants in the absence of the program. Because it is not possible to make this comparison directly, inference strategies that rely on proxies have to be used. Scientists use three general approaches to construct proxies for use in the comparisons required to evaluate the effects of interventions: (1) nonexperimental methods, (2) quasi-experiments, and (3) randomized experiments. The first two are discussed below, and randomized experiments are discussed in the subsequent section.

Nonexperimental and Quasi-Experimental Designs 3

The most common form of nonexperimental design is a before-and-after study. In this design, pre-intervention measurements are compared with equivalent measurements made after the intervention to detect change in the outcome variables that the intervention was designed to influence.

Although the panel finds that before-and-after studies frequently provide helpful insights, the panel believes that these studies do not provide sufficiently reliable information to be the cornerstone for evaluation research on the effectiveness of AIDS prevention programs. The panel's conclusion follows from the fact that the postintervention changes cannot usually be attributed unambiguously to the intervention. 4 Plausible competing explanations for differences between pre- and postintervention measurements will often be numerous, including not only the possible effects of other AIDS intervention programs, news stories, and local events, but also the effects that may result from the maturation of the participants and the educational or sensitizing effects of repeated measurements, among others.
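As a concrete sketch, the before-and-after design reduces to an analysis of paired differences on the same participants. The data below are hypothetical, and the closing comment restates the panel's caveat: detecting change is not the same as attributing it to the intervention.

```python
import statistics

# Hypothetical pre/post scores for the same eight participants
# (e.g., a knowledge test administered before and after a program).
pre = [52, 61, 48, 55, 67, 50, 58, 63]
post = [60, 66, 55, 59, 70, 57, 61, 70]

# Paired differences: the quantity a before-and-after design actually measures.
diffs = [b - a for a, b in zip(pre, post)]
mean_change = statistics.mean(diffs)
sd_change = statistics.stdev(diffs)

# A paired t statistic tests whether the mean change differs from zero.
n = len(diffs)
t_stat = mean_change / (sd_change / n ** 0.5)
print(f"mean change = {mean_change:.2f}, t = {t_stat:.2f}")

# Even a large t only shows that scores changed; history, maturation,
# other programs, and repeated testing remain competing explanations.
```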

Quasi-experimental and matched control designs provide a separate comparison group. In these designs, the control group may be selected by matching nonparticipants to participants in the treatment group on the basis of selected characteristics. It is difficult to ensure the comparability of the two groups even when they are matched on many characteristics because other relevant factors may have been overlooked or mismatched or they may be difficult to measure (e.g., the motivation to change behavior). In some situations, it may simply be impossible to measure all of the characteristics of the units (e.g., communities) that may affect outcomes, much less demonstrate their comparability.

Matched control designs require extraordinarily comprehensive scientific knowledge about the phenomenon under investigation in order for evaluators to be confident that all of the relevant determinants of outcomes have been properly accounted for in the matching. Three types of information or knowledge are required: (1) knowledge of intervening variables that also affect the outcome of the intervention and, consequently, need adjustment to make the groups comparable; (2) measurements on all intervening variables for all subjects; and (3) knowledge of how to make the adjustments properly, which in turn requires an understanding of the functional relationship between the intervening variables and the outcome variables. Satisfying each of these information requirements is likely to be more difficult than answering the primary evaluation question, "Does this intervention produce beneficial effects?"
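A minimal sketch of the matching step itself, with hypothetical participants and a single covariate (age). The chapter's point is precisely that matching on one or a few measured characteristics like this cannot guarantee comparability on unmeasured ones.

```python
# Nearest-neighbor matching on one covariate (age), purely illustrative.
participants = [("p1", 24), ("p2", 31), ("p3", 45)]              # treated group
pool = [("c1", 22), ("c2", 29), ("c3", 44), ("c4", 33)]          # untreated pool

matches = {}
available = dict(pool)
for name, age in participants:
    # Pick the not-yet-matched control closest in age.
    best = min(available, key=lambda c: abs(available[c] - age))
    matches[name] = best
    del available[best]

print(matches)
# Motivation to change behavior, local context, and other unmeasured
# factors are untouched by this procedure -- the core weakness above.
```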

Given the size and the national importance of AIDS intervention programs and given the state of current knowledge about behavior change in general and AIDS prevention, in particular, the panel believes that it would be unwise to rely on matching and adjustment strategies as the primary design for evaluating AIDS intervention programs. With differently constituted groups, inferences about results are hostage to uncertainty about the extent to which the observed outcome actually results from the intervention and is not an artifact of intergroup differences that may not have been removed by matching or adjustment.

Randomized Experiments

A remedy to the inferential uncertainties that afflict nonexperimental designs is provided by randomized experiments . In such experiments, one singly constituted group is established for study. A subset of the group is then randomly chosen to receive the intervention, with the other subset becoming the control. The two groups are not identical, but they are comparable. Because they are two random samples drawn from the same population, they are not systematically different in any respect, which is important for all variables—both known and unknown—that can influence the outcome. Dividing a singly constituted group into two random and therefore comparable subgroups cuts through the tangle of causation and establishes a basis for the valid comparison of respondents who do and do not receive the intervention. Randomized experiments provide for clear causal inference by solving the problem of group comparability, and may be used to answer the evaluation questions "Does the intervention work?" and "What works better?"

Which question is answered depends on whether the controls receive an intervention or not. When the object is to estimate whether a given intervention has any effects, individuals are randomly assigned to the project or to a zero-treatment control group. The control group may be put on a waiting list or simply not get the treatment. This design addresses the question, "Does it work?"

When the object is to compare variations on a project—e.g., individual counseling sessions versus group counseling—then individuals are randomly assigned to these two regimens, and there is no zero-treatment control group. This design addresses the question, "What works better?" In either case, the control groups must be followed up as rigorously as the experimental groups.
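Mechanically, both designs begin the same way: one singly constituted group is randomly split into comparable subgroups. A minimal sketch, with hypothetical participants and a fixed seed only so the illustration is reproducible:

```python
import random

random.seed(42)  # fixed only so the illustration is reproducible

# One singly constituted group of participants.
group = [f"participant_{i}" for i in range(20)]
random.shuffle(group)

treatment = group[:10]  # receives the intervention (or variation A)
control = group[10:]    # zero-treatment control (or variation B)

# Because assignment is random, the subgroups differ only by chance --
# for known and unknown characteristics alike -- so an observed outcome
# difference can be attributed to the intervention.
print(len(treatment), len(control))
```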

A randomized experiment requires that individuals, organizations, or other treatment units be randomly assigned to one of two or more treatments or program variations. Random assignment ensures that the estimated differences between the groups so constituted are statistically unbiased; that is, that any differences in effects measured between them are a result of treatment. The absence of statistical bias in groups constituted in this fashion stems from the fact that random assignment ensures that there are no systematic differences between them, differences that can and usually do affect groups composed in ways that are not random. 5 The panel believes this approach is far superior for outcome evaluations of AIDS interventions than the nonrandom and quasi-experimental approaches. Therefore,

To improve interventions that are already broadly implemented, the panel recommends the use of randomized field experiments of alternative or enhanced interventions.

Under certain conditions, the panel also endorses randomized field experiments with a nontreatment control group to evaluate new interventions. In the context of a deadly epidemic, ethics dictate that treatment not be withheld simply for the purpose of conducting an experiment. Nevertheless, there may be times when a randomized field test of a new treatment with a no-treatment control group is worthwhile. One such time is during the design phase of a major or national intervention.

Before a new intervention is broadly implemented, the panel recommends that it be pilot tested in a randomized field experiment.

The panel considered the use of experiments with delayed rather than no treatment. A delayed-treatment control group strategy might be pursued when resources are too scarce for an intervention to be widely distributed at one time. For example, a project site that is waiting to receive funding for an intervention would be designated as the control group. If it is possible to randomize which projects in the queue receive the intervention, an evaluator could measure and compare outcomes after the experimental group had received the new treatment but before the control group received it. The panel believes that such a design can be applied only in limited circumstances, such as when groups would have access to related services in their communities and that conducting the study was likely to lead to greater access or better services. For example, a study cited in Chapter 4 used a randomized delayed-treatment experiment to measure the effects of a community-based risk reduction program. However, such a strategy may be impractical for several reasons, including:

  • sites waiting for funding for an intervention might seek resources from another source;
  • it might be difficult to enlist the nonfunded site and its clients to participate in the study;
  • there could be an appearance of favoritism toward projects whose funding was not delayed.

Although randomized experiments have many benefits, the approach is not without pitfalls. In the planning stages of evaluation, it is necessary to contemplate certain hazards, such as the Hawthorne effect 6 and differential project dropout rates. Precautions must be taken either to prevent these problems or to measure their effects. Fortunately, there is some evidence suggesting that the Hawthorne effect is usually not very large (Rossi and Freeman, 1982:175-176).

Attrition is potentially more damaging to an evaluation, and it must be limited if the experimental design is to be preserved. If sample attrition is not limited in an experimental design, it becomes necessary to account for the potentially biasing impact of the loss of subjects in the treatment and control conditions of the experiment. The statistical adjustments required to make inferences about treatment effectiveness in such circumstances can introduce uncertainties that are as worrisome as those afflicting nonexperimental and quasi-experimental designs. Thus, the panel's recommendation of the selective use of randomized design carries an implicit caveat: To realize the theoretical advantages offered by randomized experimental designs, substantial efforts will be required to ensure that the designs are not compromised by flawed execution.

Another pitfall to randomization is its appearance of unfairness or unattractiveness to participants and the controversial legal and ethical issues it sometimes raises. Often, what is being criticized is the control of project assignment of participants rather than the use of randomization itself. In deciding whether random assignment is appropriate, it is important to consider the specific context of the evaluation and how participants would be assigned to projects in the absence of randomization. The Federal Judicial Center (1981) offers five threshold conditions for the use of random assignment.

  • Does present practice or policy need improvement?
  • Is there significant uncertainty about the value of the proposed regimen?
  • Are there acceptable alternatives to randomized experiments?
  • Will the results of the experiment be used to improve practice or policy?
  • Is there a reasonable protection against risk for vulnerable groups (i.e., individuals within the justice system)?

The parent committee has argued that these threshold conditions apply in the case of AIDS prevention programs (see Turner, Miller, and Moses, 1989:331-333).

Although randomization may be desirable from an evaluation and ethical standpoint, and acceptable from a legal standpoint, it may be difficult to implement from a practical or political standpoint. Again, the panel emphasizes that questions about the practical or political feasibility of the use of randomization may in fact refer to the control of program allocation rather than to the issues of randomization itself. In fact, when resources are scarce, it is often more ethical and politically palatable to randomize allocation rather than to allocate on grounds that may appear biased.

It is usually easier to defend the use of randomization when the choice has to do with assignment to groups receiving alternative services than when the choice involves assignment to groups receiving no treatment. For example, in comparing a testing and counseling intervention that offered a special "skills training" session in addition to its regular services with a counseling and testing intervention that offered no additional component, random assignment of participants to one group rather than another may be acceptable to program staff and participants because the relative values of the alternative interventions are unknown.

The more difficult issue is the introduction of new interventions that are perceived to be needed and effective in a situation in which there are no services. An argument that is sometimes offered against the use of randomization in this instance is that interventions should be assigned on the basis of need (perhaps as measured by rates of HIV incidence or of high-risk behaviors). But this argument presumes that the intervention will have a positive effect—which is unknown before evaluation—and that relative need can be established, which is a difficult task in itself.

The panel recognizes that community and political opposition to randomization to zero treatments may be strong and that enlisting participation in such experiments may be difficult. This opposition and reluctance could seriously jeopardize the production of reliable results if it is translated into noncompliance with a research design. The feasibility of randomized experiments for AIDS prevention programs has already been demonstrated, however (see the review of selected experiments in Turner, Miller, and Moses, 1989:327-329). The substantial effort involved in mounting randomized field experiments is repaid by the fact that they can provide unbiased evidence of the effects of a program.

Unit of Assignment.

The unit of assignment of an experiment may be an individual person, a clinic (i.e., the clientele of the clinic), or another organizational unit (e.g., the community or city). The treatment unit is selected at the earliest stage of design. Variations of units are illustrated in the following four examples of intervention programs.

(1) Two different pamphlets (A and B) on the same subject (e.g., testing) are distributed in an alternating sequence to individuals calling an AIDS hotline. The outcome to be measured is whether the recipient returns a card asking for more information.

(2) Two instruction curricula (A and B) about AIDS and HIV infections are prepared for use in high school driver education classes. The outcome to be measured is a score on a knowledge test.

(3) Of all clinics for sexually transmitted diseases (STDs) in a large metropolitan area, some are randomly chosen to introduce a change in the fee schedule. The outcome to be measured is the change in patient load.

(4) A coordinated set of community-wide interventions—involving community leaders, social service agencies, the media, community associations and other groups—is implemented in one area of a city. Outcomes are knowledge as assessed by testing at drug treatment centers and STD clinics and condom sales in the community's retail outlets.

In example (1), the treatment unit is an individual person who receives pamphlet A or pamphlet B. If either "treatment" is applied again, it would be applied to a person. In example (2), the high school class is the treatment unit; everyone in a given class experiences either curriculum A or curriculum B. If either treatment is applied again, it would be applied to a class. The treatment unit is the clinic in example (3), and in example (4), the treatment unit is a community .
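Randomizing at the clinic level, as in the fee-schedule example, might be sketched as follows (clinic names hypothetical). Note that the number of repetitions of the treatment is the number of clinics assigned, not the much larger number of patients they serve.

```python
import random

random.seed(1)  # fixed only so the illustration is reproducible

# The clinic -- not the individual patient -- is the treatment unit.
clinics = ["clinic_A", "clinic_B", "clinic_C",
           "clinic_D", "clinic_E", "clinic_F"]
random.shuffle(clinics)

new_fee_schedule = clinics[:3]  # clinics that introduce the fee change
unchanged = clinics[3:]         # clinics that keep the current schedule

# Every patient at a chosen clinic experiences the same condition, so the
# effective sample size for inference is the number of clinics (here 6).
print(sorted(new_fee_schedule))
```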

The consistency of the effects of a particular intervention across repetitions justly carries a heavy weight in appraising the intervention. It is important to remember that repetitions of a treatment or intervention are the number of treatment units to which the intervention is applied. This is a salient principle in the design and execution of intervention programs as well as in the assessment of their results.

The adequacy of the proposed sample size (number of treatment units) has to be considered in advance. Adequacy depends mainly on two factors:

  • How much variation occurs from unit to unit among units receiving a common treatment? If that variation is large, then the number of units needs to be large.
  • What is the minimum size of a possible treatment difference that, if present, would be practically important? That is, how small a treatment difference is it essential to detect if it is present? The smaller this quantity, the larger the number of units that are necessary.

Many formal methods for considering and choosing sample size exist (see, e.g., Cohen, 1988). Practical circumstances occasionally allow choosing between designs that involve units at different levels; thus, a classroom might be the unit if the treatment is applied in one way, but an entire school might be the unit if the treatment is applied in another. When both approaches are feasible, the use of a power analysis for each approach may lead to a reasoned choice.
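The two factors above appear directly in the standard normal-approximation formula for comparing two group means. This sketch uses conventional defaults (two-sided alpha = 0.05, power = 0.80) that are illustrative conventions, not values taken from the panel's report.

```python
from math import ceil

def n_per_group(sigma, delta, z_alpha=1.96, z_beta=0.84):
    """Approximate units needed per group to detect a mean difference of
    delta, given unit-to-unit standard deviation sigma, at two-sided
    alpha = 0.05 and power = 0.80 (normal approximation)."""
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Larger unit-to-unit variation -> more units needed.
print(n_per_group(sigma=10, delta=5))  # 63
# Smaller minimum important difference -> more units needed.
print(n_per_group(sigma=10, delta=2))  # 392
```

This is the same reasoning behind the classroom-versus-school choice: larger units (schools) usually vary more among themselves and are available in smaller numbers, so a power analysis at each level makes the trade-off explicit.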

Choice of Methods

There is some controversy about the advantages of randomized experiments in comparison with other evaluative approaches. It is the panel's belief that when a (well executed) randomized study is feasible, it is superior to alternative kinds of studies in the strength and clarity of whatever conclusions emerge, primarily because the experimental approach avoids selection biases. 7 Other evaluation approaches are sometimes unavoidable, but ordinarily the accumulation of valid information will go more slowly and less securely than in randomized approaches.

Experiments in medical research shed light on the advantages of carefully conducted randomized experiments. The Salk vaccine trials are a successful example of a large, randomized study. In a double-blind test 8 of the polio vaccine, children in various communities were randomly assigned to two treatments, either the vaccine or a placebo. By this method, the effectiveness of the Salk vaccine was demonstrated in one summer of research (Meier, 1957).

A sufficient accumulation of relevant, observational information, especially when collected in studies using different procedures and sample populations, may also clearly demonstrate the effectiveness of a treatment or intervention. The process of accumulating such information can be a long one, however. When a (well-executed) randomized study is feasible, it can provide evidence that is subject to less uncertainty in its interpretation, and it can often do so in a more timely fashion. In the midst of an epidemic, the panel believes it proper that randomized experiments be one of the primary strategies for evaluating the effectiveness of AIDS prevention efforts. In making this recommendation, however, the panel also wishes to emphasize that the advantages of the randomized experimental design can be squandered by poor execution (e.g., by compromised assignment of subjects, significant subject attrition rates, etc.). To achieve the advantages of the experimental design, care must be taken to ensure that the integrity of the design is not compromised by poor execution.

In proposing that randomized experiments be one of the primary strategies for evaluating the effectiveness of AIDS prevention programs, the panel also recognizes that there are situations in which randomization will be impossible or, for other reasons, cannot be used. In its next report the panel will describe at length appropriate nonexperimental strategies to be considered in situations in which an experiment is not a practical or desirable alternative.

  • The Management of Evaluation

Conscientious evaluation requires a considerable investment of funds, time, and personnel. Because the panel recognizes that resources are not unlimited, it suggests that they be concentrated on the evaluation of a subset of projects to maximize the return on investment and to enhance the likelihood of high-quality results.

Project Selection

Deciding which programs or sites to evaluate is by no means a trivial matter. Selection should be carefully weighed so that projects that are not replicable or that have little chance for success are not subjected to rigorous evaluations.

The panel recommends that any intensive evaluation of an intervention be conducted on a subset of projects selected according to explicit criteria. These criteria should include the replicability of the project, the feasibility of evaluation, and the project's potential effectiveness for prevention of HIV transmission.

If a project is replicable, it means that the particular circumstances of service delivery in that project can be duplicated. In other words, for CBOs and counseling and testing projects, the content and setting of an intervention can be duplicated across sites. Feasibility of evaluation means that, as a practical matter, the research can be done: that is, the research design is adequate to control for rival hypotheses, it is not excessively costly, and the project is acceptable to the community and the sponsor. Potential effectiveness for HIV prevention means that the intervention is at least based on a reasonable theory (or mix of theories) about behavioral change (e.g., social learning theory [Bandura, 1977], the health belief model [Janz and Becker, 1984], etc.), if it has not already been found to be effective in related circumstances.

In addition, since it is important to ensure that the results of evaluations will be broadly applicable,

The panel recommends that evaluation be conducted and replicated across major types of subgroups, programs, and settings. Attention should be paid to geographic areas with low and high AIDS prevalence, as well as to subpopulations at low and high risk for AIDS.

Research Administration

The sponsoring agency interested in evaluating an AIDS intervention should consider the mechanisms through which the research will be carried out as well as the desirability of both independent oversight and agency in-house conduct and monitoring of the research. The appropriate entities and mechanisms for conducting evaluations depend to some extent on the kinds of data being gathered and the evaluation questions being asked.

Oversight and monitoring are important to keep projects fully informed about the other evaluations relevant to their own and to render assistance when needed. Oversight and monitoring are also important because evaluation is often a sensitive issue for project and evaluation staff alike. The panel is aware that evaluation may appear threatening to practitioners and researchers because of the possibility that evaluation research will show that their projects are not as effective as they believe them to be. These needs and vulnerabilities should be taken into account as evaluation research management is developed.

Conducting the Research

To conduct some aspects of a project's evaluation, it may be appropriate to involve project administrators, especially when the data will be used to evaluate delivery systems (e.g., to determine when and which services are being delivered). To evaluate outcomes, the services of an outside evaluator 9 or evaluation team are almost always required because few practitioners have the necessary professional experience or the time and resources necessary to do evaluation. The outside evaluator must have relevant expertise in evaluation research methodology and must also be sensitive to the fears, hopes, and constraints of project administrators.

Several evaluation management schemes are possible. For example, a prospective AIDS prevention project group (the contractor) can bid on a contract for project funding that includes an intensive evaluation component. The actual evaluation can be conducted either by the contractor alone or by the contractor working in concert with an outside independent collaborator. This mechanism has the advantage of involving project practitioners in the work of evaluation as well as building separate but mutually informing communities of experts around the country. Alternatively, a contract can be let with a single evaluator or evaluation team that will collaborate with the subset of sites that is chosen for evaluation. This variation would be managerially less burdensome than awarding separate contracts, but it would require greater dependence on the expertise of a single investigator or investigative team. ( Appendix A discusses contracting options in greater depth.) Both of these approaches accord with the parent committee's recommendation that collaboration between practitioners and evaluation researchers be ensured. Finally, in the more traditional evaluation approach, independent principal investigators or investigative teams may respond to a request for proposal (RFP) issued to evaluate individual projects. Such investigators are frequently university-based or are members of a professional research organization, and they bring to the task a variety of research experiences and perspectives.

Independent Oversight

The panel believes that coordination and oversight of multisite evaluations are critical because of the variability in investigators' expertise and in the results of the projects being evaluated. Oversight can provide quality control for individual investigators and can be used to review and integrate findings across sites for developing policy. The independence of an oversight body is crucial to ensure that project evaluations do not succumb to the pressures for positive findings of effectiveness.

When evaluation is to be conducted by a number of different evaluation teams, the panel recommends establishing an independent scientific committee to oversee project selection and research efforts, corroborate the impartiality and validity of results, conduct cross-site analyses, and prepare reports on the progress of the evaluations.

The composition of such an independent oversight committee will depend on the research design of a given program. For example, the committee ought to include statisticians and other specialists in randomized field tests when that approach is being taken. Specialists in survey research and case studies should be recruited if either of those approaches is to be used. Appendix B offers a model for an independent oversight group that has been successfully implemented in other settings—a project review team, or advisory board.

Agency In-House Team

As the parent committee noted in its report, evaluations of AIDS interventions require skills that may be in short supply for agencies invested in delivering services (Turner, Miller, and Moses, 1989:349). Although this situation can be partly alleviated by recruiting professional outside evaluators and retaining an independent oversight group, the panel believes that an in-house team of professionals within the sponsoring agency is also critical. The in-house experts will interact with the outside evaluators and provide input into the selection of projects, outcome objectives, and appropriate research designs; they will also monitor the progress and costs of evaluation. These functions require not just bureaucratic oversight but appropriate scientific expertise.

This is not intended to preclude the direct involvement of CDC staff in conducting evaluations. However, given the great amount of work to be done, it is likely that a considerable portion of the work will have to be contracted out. The quality and usefulness of the evaluations done under contract can be greatly enhanced by ensuring that there are an adequate number of CDC staff trained in evaluation research methods to monitor these contracts.

The panel recommends that CDC recruit and retain behavioral, social, and statistical scientists trained in evaluation methodology to facilitate the implementation of the evaluation research recommended in this report.

Interagency Collaboration

The panel believes that the federal agencies that sponsor the design of basic research, intervention programs, and evaluation strategies would profit from greater interagency collaboration. The evaluation of AIDS intervention programs would benefit from a coherent program of studies that should provide models of efficacious and effective interventions to prevent further HIV transmission, the spread of other STDs, and unwanted pregnancies (especially among adolescents). A marriage could then be made of basic and applied science, from which the best evaluation is born. Exploring the possibility of interagency collaboration and CDC's role in such collaboration is beyond the scope of this panel's task, but it is an important issue that we suggest be addressed in the future.

Costs of Evaluation

In view of the dearth of current evaluation efforts, the panel believes that vigorous evaluation research must be undertaken over the next few years to build up a body of knowledge about what interventions can and cannot do. Dedicating no resources to evaluation will virtually guarantee that high-quality evaluations will be infrequent and the data needed for policy decisions will be sparse or absent. Yet, evaluating every project is not feasible simply because there are not enough resources and, in many cases, evaluating every project is not necessary for good science or good policy.

The panel believes that evaluating only some of a program's sites or projects, selected under the criteria noted in Chapter 4 , is a sensible strategy. Although we recommend that intensive evaluation be conducted on only a subset of carefully chosen projects, we believe that high-quality evaluation will require a significant investment of time, planning, personnel, and financial support. The panel's aim is to be realistic—not discouraging—when it notes that the costs of program evaluation should not be underestimated. Many of the research strategies proposed in this report require investments that are perhaps greater than has been previously contemplated. This is particularly the case for outcome evaluations, which are ordinarily more difficult and expensive to conduct than formative or process evaluations. And those costs will be additive with each type of evaluation that is conducted.

Panel members have found that the cost of an outcome evaluation sometimes equals or even exceeds the cost of actual program delivery. For example, it was reported to the panel that randomized studies used to evaluate recent manpower training projects cost as much as the projects themselves (see Cottingham and Rodriguez, 1987). In another case, the principal investigator of an ongoing AIDS prevention project told the panel that the cost of randomized experimentation was approximately three times higher than the cost of delivering the intervention (although the study was quite small, involving only 104 participants) (Kelly et al., 1989). Fortunately, only a fraction of a program's projects or sites need to be intensively evaluated to produce high-quality information, and not all will require randomized studies.

Because of the variability in the kinds of evaluation that will be done as well as in the costs involved, there is no set standard or rule for judging what fraction of a total program budget should be invested in evaluation. Based upon very limited data 10 and assuming that only a small sample of projects would be evaluated, the panel suspects that program managers might reasonably anticipate spending 8 to 12 percent of their intervention budgets to conduct high-quality evaluations (i.e., formative, process, and outcome evaluations). 11 Larger investments seem politically infeasible and unwise in view of the need to put resources into program delivery. Smaller investments in evaluation risk studying an inadequate sample of program types and may also invite compromises in research quality.
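As a rough illustration of that guideline, the arithmetic can be sketched as follows. The dollar figure is hypothetical, and the 8-12 percent band is the panel's own tentative estimate rather than a firm rule:

```python
def evaluation_budget_range(intervention_budget, low=0.08, high=0.12):
    """Dollar range implied by spending 8-12 percent of an intervention
    budget on formative, process, and outcome evaluation combined."""
    return intervention_budget * low, intervention_budget * high

lo, hi = evaluation_budget_range(5_000_000)  # hypothetical $5M program budget
# lo, hi == (400000.0, 600000.0)
```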

The nature of the HIV/AIDS epidemic mandates an unwavering commitment to prevention programs, and the prevention activities require a similar commitment to the evaluation of those programs. The magnitude of what can be learned from doing good evaluations will more than balance the magnitude of the costs required to perform them. Moreover, it should be realized that the costs of shoddy research can be substantial, both in their direct expense and in the lost opportunities to identify effective strategies for AIDS prevention. Once the investment has been made, however, and a reservoir of findings and practical experience has accumulated, subsequent evaluations should be easier and less costly to conduct.

  • Bandura, A. (1977) Self-efficacy: Toward a unifying theory of behavioral change . Psychological Review 84:191-215. [ PubMed : 847061 ]
  • Campbell, D. T., and Stanley, J. C. (1966) Experimental and Quasi-Experimental Design and Analysis . Boston: Houghton-Mifflin.
  • Centers for Disease Control (CDC) (1988) Sourcebook presented at the National Conference on the Prevention of HIV Infection and AIDS Among Racial and Ethnic Minorities in the United States (August).
  • Cohen, J. (1988) Statistical Power Analysis for the Behavioral Sciences . 2nd ed. Hillsdale, NJ.: L. Erlbaum Associates.
  • Cook, T., and Campbell, D. T. (1979) Quasi-Experimentation: Design and Analysis for Field Settings . Boston: Houghton-Mifflin.
  • Federal Judicial Center (1981) Experimentation in the Law . Washington, D.C.: Federal Judicial Center.
  • Janz, N. K., and Becker, M. H. (1984) The health belief model: A decade later . Health Education Quarterly 11 (1):1-47. [ PubMed : 6392204 ]
  • Kelly, J. A., St. Lawrence, J. S., Hood, H. V., and Brasfield, T. L. (1989) Behavioral intervention to reduce AIDS risk activities . Journal of Consulting and Clinical Psychology 57:60-67. [ PubMed : 2925974 ]
  • Meier, P. (1957) Safety testing of poliomyelitis vaccine . Science 125(3257): 1067-1071. [ PubMed : 13432758 ]
  • Roethlisberger, F. J. and Dickson, W. J. (1939) Management and the Worker . Cambridge, Mass.: Harvard University Press.
  • Rossi, P. H., and Freeman, H. E. (1982) Evaluation: A Systematic Approach . 2nd ed. Beverly Hills, Cal.: Sage Publications.
  • Turner, C. F., Miller, H. G., and Moses, L. E., eds. (1989) AIDS, Sexual Behavior, and Intravenous Drug Use . Report of the NRC Committee on AIDS Research and the Behavioral, Social, and Statistical Sciences. Washington, D.C.: National Academy Press. [ PubMed : 25032322 ]
  • Weinstein, M. C., Graham, J. D., Siegel, J. E., and Fineberg, H. V. (1989) Cost-effectiveness analysis of AIDS prevention programs: Concepts, complications, and illustrations . In C. F. Turner, H. G. Miller, and L. E. Moses, eds., AIDS, Sexual Behavior, and Intravenous Drug Use . Report of the NRC Committee on AIDS Research and the Behavioral, Social, and Statistical Sciences. Washington, D.C.: National Academy Press. [ PubMed : 25032322 ]
  • Weiss, C. H. (1972) Evaluation Research . Englewood Cliffs, N.J.: Prentice-Hall, Inc.

On occasion, nonparticipants observe behavior during or after an intervention. Chapter 3 introduces this option in the context of formative evaluation.

The use of professional customers can raise serious concerns in the eyes of project administrators at counseling and testing sites. The panel believes that site administrators should receive advance notification that professional customers may visit their sites for testing and counseling services and provide their consent before this method of data collection is used.

Parts of this section are adapted from Turner, Miller, and Moses (1989:324-326).

This weakness has been noted by CDC in a sourcebook provided to its HIV intervention project grantees (CDC, 1988:F-14).

The significance tests applied to experimental outcomes calculate the probability that any observed differences between the sample estimates might result from random variations between the groups.

Research participants' knowledge that they were being observed had a positive effect on their responses in a series of famous studies made at General Electric's Hawthorne Works in Chicago (Roethlisberger and Dickson, 1939); the phenomenon is referred to as the Hawthorne effect.

Participants who self-select into a program are likely to differ from nonrandom comparison groups in interests, motivations, values, abilities, and other attributes that can bias the outcomes.

A double-blind test is one in which neither the person receiving the treatment nor the person administering it knows which treatment is being given (or whether any treatment is being given at all).

As discussed under ''Agency In-House Team,'' the outside evaluator might be one of CDC's personnel. However, given the large amount of research to be done, it is likely that non-CDC evaluators will also need to be used.

See, for example, Chapter 3, which presents cost estimates for evaluations of media campaigns. Similar estimates are not readily available for other program types.

For example, the U. K. Health Education Authority (that country's primary agency for AIDS education and prevention programs) allocates 10 percent of its AIDS budget for research and evaluation of its AIDS programs (D. McVey, Health Education Authority, personal communication, June 1990). This allocation covers both process and outcome evaluation.

  • Cite this Page National Research Council (US) Panel on the Evaluation of AIDS Interventions; Coyle SL, Boruch RF, Turner CF, editors. Evaluating AIDS Prevention Programs: Expanded Edition. Washington (DC): National Academies Press (US); 1991. 1, Design and Implementation of Evaluation Research.

Grad Coach

Research Aims, Objectives & Questions

The “Golden Thread” Explained Simply (+ Examples)

By: David Phair (PhD) and Alexandra Shaeffer (PhD) | June 2022

The research aims , objectives and research questions (collectively called the “golden thread”) are arguably the most important thing you need to get right when you’re crafting a research proposal , dissertation or thesis . We receive questions almost every day about this “holy trinity” of research and there’s certainly a lot of confusion out there, so we’ve crafted this post to help you navigate your way through the fog.

Overview: The Golden Thread

  • What is the golden thread
  • What are research aims ( examples )
  • What are research objectives ( examples )
  • What are research questions ( examples )
  • The importance of alignment in the golden thread

What is the “golden thread”?  

The golden thread simply refers to the collective research aims , research objectives , and research questions for any given project (i.e., a dissertation, thesis, or research paper ). These three elements are bundled together because it’s extremely important that they align with each other, and that the entire research project aligns with them.

Importantly, the golden thread needs to weave its way through the entirety of any research project , from start to end. In other words, it needs to be very clearly defined right at the beginning of the project (the topic ideation and proposal stage) and it needs to inform almost every decision throughout the rest of the project. For example, your research design and methodology will be heavily influenced by the golden thread (we’ll explain this in more detail later), as well as your literature review.

The research aims, objectives and research questions (the golden thread) define the focus and scope ( the delimitations ) of your research project. In other words, they help ringfence your dissertation or thesis to a relatively narrow domain, so that you can “go deep” and really dig into a specific problem or opportunity. They also help keep you on track , as they act as a litmus test for relevance. In other words, if you’re ever unsure whether to include something in your document, simply ask yourself the question, “does this contribute toward my research aims, objectives or questions?”. If it doesn’t, chances are you can drop it.

Alright, enough of the fluffy, conceptual stuff. Let’s get down to business and look at what exactly the research aims, objectives and questions are and outline a few examples to bring these concepts to life.


Research Aims: What are they?

Simply put, the research aim(s) is a statement that reflects the broad overarching goal (s) of the research project. Research aims are fairly high-level (low resolution) as they outline the general direction of the research and what it’s trying to achieve .

Research Aims: Examples  

True to the name, research aims usually start with the wording “this research aims to…”, “this research seeks to…”, and so on. For example:

“This research aims to explore employee experiences of digital transformation in retail HR.”

“This study sets out to assess the interaction between student support and self-care on well-being in engineering graduate students.”

As you can see, these research aims provide a high-level description of what the study is about and what it seeks to achieve. They’re not hyper-specific or action-oriented, but they’re clear about what the study’s focus is and what is being investigated.


Research Objectives: What are they?

The research objectives take the research aims and make them more practical and actionable . In other words, the research objectives showcase the steps that the researcher will take to achieve the research aims.

The research objectives need to be far more specific (higher resolution) and actionable than the research aims. In fact, it’s always a good idea to craft your research objectives using the “SMART” criteria. In other words, they should be specific, measurable, achievable, relevant and time-bound.

Research Objectives: Examples  

Let’s look at two examples of research objectives. We’ll stick with the topic and research aims we mentioned previously.  

For the digital transformation topic:

To observe the retail HR employees throughout the digital transformation.

To assess employee perceptions of digital transformation in retail HR.

To identify the barriers and facilitators of digital transformation in retail HR.

And for the student wellness topic:

To determine whether student self-care predicts the well-being score of engineering graduate students.

To determine whether student support predicts the well-being score of engineering students.

To assess the interaction between student self-care and student support when predicting well-being in engineering graduate students.

  As you can see, these research objectives clearly align with the previously mentioned research aims and effectively translate the low-resolution aims into (comparatively) higher-resolution objectives and action points . They give the research project a clear focus and present something that resembles a research-based “to-do” list.

The research objectives detail the specific steps that you, as the researcher, will take to achieve the research aims you laid out.

Research Questions: What are they?

Finally, we arrive at the all-important research questions. The research questions are, as the name suggests, the key questions that your study will seek to answer . Simply put, they are the core purpose of your dissertation, thesis, or research project. You’ll present them at the beginning of your document (either in the introduction chapter or literature review chapter) and you’ll answer them at the end of your document (typically in the discussion and conclusion chapters).  

The research questions will be the driving force throughout the research process. For example, in the literature review chapter, you’ll assess the relevance of any given resource based on whether it helps you move towards answering your research questions. Similarly, your methodology and research design will be heavily influenced by the nature of your research questions. For instance, research questions that are exploratory in nature will usually make use of a qualitative approach, whereas questions that relate to measurement or relationship testing will make use of a quantitative approach.  

Let’s look at some examples of research questions to make this more tangible.

Research Questions: Examples  

Again, we’ll stick with the research aims and research objectives we mentioned previously.  

For the digital transformation topic (which would be qualitative in nature):

How do employees perceive digital transformation in retail HR?

What are the barriers and facilitators of digital transformation in retail HR?

And for the student wellness topic (which would be quantitative in nature):

Does student self-care predict the well-being scores of engineering graduate students?

Does student support predict the well-being scores of engineering students?

Do student self-care and student support interact when predicting well-being in engineering graduate students?

You’ll probably notice that there’s quite a formulaic approach to this. In other words, the research questions are basically the research objectives “converted” into question format. While that is true most of the time, it’s not always the case. For example, the first research objective for the digital transformation topic was more or less a step on the path toward the other objectives, and as such, it didn’t warrant its own research question.  

So, don’t rush your research questions and sloppily reword your objectives as questions. Carefully think about what exactly you’re trying to achieve (i.e. your research aim) and the objectives you’ve set out, then craft a set of well-aligned research questions . Also, keep in mind that this can be a somewhat iterative process , where you go back and tweak research objectives and aims to ensure tight alignment throughout the golden thread.

The importance of strong alignment 

Alignment is the keyword here and we have to stress its importance . Simply put, you need to make sure that there is a very tight alignment between all three pieces of the golden thread. If your research aims and research questions don’t align, for example, your project will be pulling in different directions and will lack focus . This is a common problem students face and can cause many headaches (and tears), so be warned.

Take the time to carefully craft your research aims, objectives and research questions before you run off down the research path. Ideally, get your research supervisor/advisor to review and comment on your golden thread before you invest significant time into your project, and certainly before you start collecting data .  

Recap: The golden thread

In this post, we unpacked the golden thread of research, consisting of the research aims , research objectives and research questions . You can jump back to any section using the links below.

As always, feel free to leave a comment below – we always love to hear from you. Also, if you’re interested in 1-on-1 support, take a look at our private coaching service here.



39 Comments

Isaac Levi

Thank you very much for your great effort put. As an Undergraduate taking Demographic Research & Methodology, I’ve been trying so hard to understand clearly what is a Research Question, Research Aim and the Objectives in a research and the relationship between them etc. But as for now I’m thankful that you’ve solved my problem.

Hatimu Bah

Well appreciated. This has helped me greatly in doing my dissertation.

Dr. Abdallah Kheri

Am so delighted with this wonderful information. Thank you a lot.

So impressive. I have benefited a lot and look forward to learning more about research.

Ekwunife, Chukwunonso Onyeka Steve

I am very happy to have carefully gone through this well researched article.

In fact, I used to have a phobia about anything research-related, because of my poor understanding of the concepts.

Now,I get to know that my research question is the same as my research objective(s) rephrased in question format.

Please, I would need a follow-up on the subject, as I intend to join the team of researchers. Thanks once again.

Tosin

Thanks so much. This was really helpful.

Ishmael

I know you people have tried to break things into a more understandable and easy format. And God bless you. Keep it up.

sylas

i found this document so useful towards my study in research methods. thanks so much.

Michael L. Andrion

This is my 2nd read topic in your course and I should commend the simplified explanations of each part. I’m beginning to understand and absorb the use of each part of a dissertation/thesis. I’ll keep on reading your free course and might be able to avail the training course! Kudos!

Scarlett

Thank you! Better put than my lecturer, and it helped me easily understand the basics, which I feel often get brushed over when beginning dissertation work.

Enoch Tindiwegi

This is quite helpful. I like how the Golden thread has been explained and the needed alignment.

Sora Dido Boru

This is quite helpful. I really appreciate!

Chulyork

The article made it simple for research students to differentiate between the three concepts.

Afowosire Wasiu Adekunle

Very innovative and educational in approach to conducting research.

Sàlihu Abubakar Dayyabu

I am very impressed with all these terminology, as I am a fresh student for post graduate, I am highly guided and I promised to continue making consultation when the need arise. Thanks a lot.

Mohammed Shamsudeen

A very helpful piece. thanks, I really appreciate it .

Sonam Jyrwa

Very well explained, and it might be helpful to many people like me.

JB

Wish i had found this (and other) resource(s) at the beginning of my PhD journey… not in my writing up year… 😩 Anyways… just a quick question as i’m having some issues ordering my “golden thread”…. does it matter in what order you mention them? i.e., is it always first aims, then objectives, and finally the questions? or can you first mention the research questions and then the aims and objectives?

UN

Thank you for a very simple explanation that builds upon the concepts in a very logical manner. Just prior to this, I read the research hypothesis article, which was equally very good. This met my primary objective.

My secondary objective was to understand the difference between research questions and research hypothesis, and in which context to use which one. However, I am still not clear on this. Can you kindly please guide?

Derek Jansen

In research, a research question is a clear and specific inquiry that the researcher wants to answer, while a research hypothesis is a tentative statement or prediction about the relationship between variables or the expected outcome of the study. Research questions are broader and guide the overall study, while hypotheses are specific and testable statements used in quantitative research. Research questions identify the problem, while hypotheses provide a focus for testing in the study.

Saen Fanai

Exactly what I need in this research journey, I look forward to more of your coaching videos.

Abubakar Rofiat Opeyemi

This helped a lot. Thanks so much for the effort put into explaining it.

Lamin Tarawally

What data source in writing dissertation/Thesis requires?

What is data source covers when writing dessertation/thesis

Latifat Muhammed

This is quite useful thanks

Yetunde

I’m excited and thankful. I got so much value which will help me progress in my thesis.

Amer Al-Rashid

where are the locations of the reserch statement, research objective and research question in a reserach paper? Can you write an ouline that defines their places in the researh paper?

Webby

Very helpful and important tips on Aims, Objectives and Questions.

Refiloe Raselane

Thank you so much for making research aim, research objectives and research question so clear. This will be helpful to me as i continue with my thesis.

Annabelle Roda-Dafielmoto

Thanks much for this content. I learned a lot. And I am inspired to learn more. I am still struggling with my preparation for dissertation outline/proposal. But I consistently follow contents and tutorials and the new FB of GRAD Coach. Hope to really become confident in writing my dissertation and successfully defend it.

Joe

As a researcher and lecturer, I find splitting research goals into research aims, objectives, and questions is unnecessarily bureaucratic and confusing for students. For most biomedical research projects, including ‘real research’, 1-3 research questions will suffice (numbers may differ by discipline).

Abdella

Awesome! Very important resources and presented in an informative way to easily understand the golden thread. Indeed, thank you so much.

Sheikh

Well explained

New Growth Care Group

The blog article on research aims, objectives, and questions by Grad Coach is a clear and insightful guide that aligns with my experiences in academic research. The article effectively breaks down the often complex concepts of research aims and objectives, providing a straightforward and accessible explanation. Drawing from my own research endeavors, I appreciate the practical tips offered, such as the need for specificity and clarity when formulating research questions. The article serves as a valuable resource for students and researchers, offering a concise roadmap for crafting well-defined research goals and objectives. Whether you’re a novice or an experienced researcher, this article provides practical insights that contribute to the foundational aspects of a successful research endeavor.

yaikobe

A great thanks for you. it is really amazing explanation. I grasp a lot and one step up to research knowledge.

UMAR SALEH

I really found these tips helpful. Thank you very much Grad Coach.

Rahma D.

I found this article helpful. Thanks for sharing this.

Juhaida

thank you so much, the explanation and examples are really helpful

Submit a Comment Cancel reply

Your email address will not be published. Required fields are marked *

Save my name, email, and website in this browser for the next time I comment.

  • Print Friendly

Sitewide search

UCLA CHPR PNG Logo Black Text

The UCLA Center for Health Policy Research (CHPR) is one of the nation's leading health policy research centers and the premier source of health policy information for California.


[Cover image: Parks After Dark evaluation brief, showing girls wearing PAD shirts and medals, with an infographic in the background]

Parks After Dark Evaluation Brief, May 2024

Summary: In this infographic brief, the UCLA Center for Health Policy Research summarizes information from their evaluation of the 2022 Parks After Dark (PAD) program in Los Angeles County. PAD is a county initiative led by the Department of Parks and Recreation in partnership with other county departments and community-based organizations. PAD programming — including sports, entertainment, activities, and more — was offered for eight weeks on Thursday, Friday, and Saturday evenings at 34 parks between June and August 2023.

Findings: Evaluators found that PAD has made significant progress toward its intended goals through the provision of quality recreational programming in a safe and family-friendly environment. Besides supporting participants' sense of safety at parks while attending PAD programming, evidence indicates that PAD may have reduced crime in PAD parks and their surrounding areas since its inception in 2010. In addition, PAD encouraged meaningful collaboration between participating county departments and community-based organizations; contributed to participants' feelings of well-being, family togetherness, and social cohesion; and involved a diverse range of participants in community-driven programming in a meaningful way. PAD may also have reduced the burden of disease for those who engaged in exercise opportunities.

Read the Publications:

  • Parks After Dark Evaluation Brief, May 2024 ( English )  
  • Parks After Dark Evaluation Brief, May 2024 ( Spanish )
  • Full Evaluation Report: Parks After Dark Evaluation Report, May 2024  

Previous years:

  • Parks After Dark Evaluation Brief, July 2023  
  • Parks After Dark Evaluation Report, July 2023  
  • Parks After Dark Evaluation Brief, July 2018  
  • Parks After Dark Evaluation Report, July 2018  
  • Parks After Dark Evaluation Brief, May 2017  
  • Parks After Dark Evaluation Report, May 2017  


What are adverse childhood experiences?

Adverse childhood experiences, or ACEs, are potentially traumatic events that occur in childhood (0-17 years). Examples include: 1

  • Experiencing violence, abuse, or neglect.
  • Witnessing violence in the home or community.
  • Having a family member attempt or die by suicide.

Also included are aspects of the child’s environment that can undermine their sense of safety, stability, and bonding. Examples can include growing up in a household with: 1

  • Substance use problems.
  • Mental health problems.
  • Instability due to parental separation.
  • Instability due to household members being in jail or prison.

The examples above are not a complete list of adverse experiences. Many other traumatic experiences could impact health and well-being. This can include not having enough food to eat, experiencing homelessness or unstable housing, or experiencing discrimination. 2 3 4 5 6

Quick facts and stats

ACEs are common. About 64% of adults in the United States reported they had experienced at least one type of ACE before age 18. Nearly one in six (17.3%) adults reported they had experienced four or more types of ACEs. 7

Preventing ACEs could potentially reduce many health conditions. Estimates show up to 1.9 million heart disease cases and 21 million depression cases potentially could have been avoided by preventing ACEs. 1

Some people are at greater risk of experiencing one or more ACEs than others. While all children are at risk of ACEs, numerous studies show inequities in such experiences. These inequalities are linked to the historical, social, and economic environments in which some families live. 5 6 ACEs were highest among females, non-Hispanic American Indian or Alaska Native adults, and adults who are unemployed or unable to work. 7

ACEs are costly. The economic burden of ACEs-related health consequences is estimated at $748 billion annually in Bermuda, Canada, and the United States. 8

ACEs can have lasting effects on health and well-being in childhood and life opportunities well into adulthood. 9 Life opportunities include things like education and job potential. These experiences can increase the risks of injury, sexually transmitted infections, and involvement in sex trafficking. They can also increase risks for maternal and child health problems including teen pregnancy, pregnancy complications, and fetal death. Also included are a range of chronic diseases and leading causes of death, such as cancer, diabetes, heart disease, and suicide. 1 10 11 12 13 14 15 16 17

ACEs and associated social determinants of health, such as living in under-resourced or racially segregated neighborhoods, can cause toxic stress. Toxic stress, or extended or prolonged stress, from ACEs can negatively affect children’s brain development, immune systems, and stress-response systems. These changes can affect children’s attention, decision-making, and learning. 18

Children growing up with toxic stress may have difficulty forming healthy and stable relationships. They may also have unstable work histories as adults and struggle with finances, jobs, and depression throughout life. 18 These effects can also be passed on to their own children. 19 20 21 Some children may face further exposure to toxic stress from historical and ongoing traumas. These historical and ongoing traumas refer to experiences of racial discrimination or the impacts of poverty resulting from limited educational and economic opportunities. 1 6

Adverse childhood experiences can be prevented. Certain factors may increase or decrease the risk of experiencing adverse childhood experiences.

Preventing adverse childhood experiences requires understanding and addressing the factors that put people at risk for or protect them from violence.

Creating safe, stable, nurturing relationships and environments for all children can prevent ACEs and help all children reach their full potential. We all have a role to play.

  • Merrick MT, Ford DC, Ports KA, et al. Vital Signs: Estimated Proportion of Adult Health Problems Attributable to Adverse Childhood Experiences and Implications for Prevention — 25 States, 2015–2017. MMWR Morb Mortal Wkly Rep 2019;68:999-1005. DOI: http://dx.doi.org/10.15585/mmwr.mm6844e1 .
  • Cain KS, Meyer SC, Cummer E, Patel KK, Casacchia NJ, Montez K, Palakshappa D, Brown CL. Association of Food Insecurity with Mental Health Outcomes in Parents and Children. Academic Pediatrics. 2022;22(7):1105-1114. DOI: https://doi.org/10.1016/j.acap.2022.04.010 .
  • Smith-Grant J, Kilmer G, Brener N, Robin L, Underwood M. Risk Behaviors and Experiences Among Youth Experiencing Homelessness—Youth Risk Behavior Survey, 23 U.S. States and 11 Local School Districts. Journal of Community Health. 2022; 47: 324-333.
  • Shonkoff JP, Slopen N, Williams DR. Early Childhood Adversity, Toxic Stress, and the Impacts of Racism on the Foundations of Health. Annual Review of Public Health. 2021. https://doi.org/10.1146/annurev-publhealth-090419-101940 .
  • Sedlak A, Mettenburg J, Basena M, et al. Fourth National Incidence Study of Child Abuse and Neglect (NIS-4): Report to Congress, Executive Summary. Washington, DC: U.S. Department of Health and Human Services, Administration for Children and Families; 2010.
  • Font S, Maguire-Jack K. Pathways from childhood abuse and other adversities to adult health risks: The role of adult socioeconomic conditions. Child Abuse Negl. 2016;51:390-399.
  • Swedo EA, Aslam MV, Dahlberg LL, et al. Prevalence of Adverse Childhood Experiences Among U.S. Adults — Behavioral Risk Factor Surveillance System, 2011–2020. MMWR Morb Mortal Wkly Rep 2023;72:707–715. DOI: http://dx.doi.org/10.15585/mmwr.mm7226a2 .
  • Bellis MA, et al. Life Course Health Consequences and Associated Annual Costs of Adverse Childhood Experiences Across Europe and North America: A Systematic Review and Meta-Analysis. Lancet Public Health. 2019.
  • Adverse Childhood Experiences During the COVID-19 Pandemic and Associations with Poor Mental Health and Suicidal Behaviors Among High School Students — Adolescent Behaviors and Experiences Survey, United States, January–June 2021. MMWR.
  • Hillis SD, Anda RF, Dube SR, Felitti VJ, Marchbanks PA, Marks JS. The association between adverse childhood experiences and adolescent pregnancy, long-term psychosocial consequences, and fetal death. Pediatrics. 2004 Feb;113(2):320-7.
  • Miller ES, Fleming O, Ekpe EE, Grobman WA, Heard-Garris N. Association Between Adverse Childhood Experiences and Adverse Pregnancy Outcomes. Obstetrics & Gynecology . 2021;138(5):770-776. https://doi.org/10.1097/AOG.0000000000004570 .
  • Sulaiman S, Premji SS, Tavangar F, et al. Total Adverse Childhood Experiences and Preterm Birth: A Systematic Review. Matern Child Health J . 2021;25(10):1581-1594. https://doi.org/10.1007/s10995-021-03176-6 .
  • Ciciolla L, Shreffler KM, Tiemeyer S. Maternal Childhood Adversity as a Risk for Perinatal Complications and NICU Hospitalization. Journal of Pediatric Psychology . 2021;46(7):801-813. https://doi.org/10.1093/jpepsy/jsab027 .
  • Mersky JP, Lee CP. Adverse childhood experiences and poor birth outcomes in a diverse, low-income sample. BMC pregnancy and childbirth. 2019;19(1). https://doi.org/10.1186/s12884-019-2560-8 .
  • Reid JA, Baglivio MT, Piquero AR, Greenwald MA, Epps N. No youth left behind to human trafficking: Exploring profiles of risk. American journal of orthopsychiatry. 2019;89(6):704.
  • Diamond-Welch B, Kosloski AE. Adverse childhood experiences and propensity to participate in the commercialized sex market. Child Abuse & Neglect. 2020 Jun 1;104:104468.
  • Shonkoff, J. P., Garner, A. S., Committee on Psychosocial Aspects of Child and Family Health, Committee on Early Childhood, Adoption, and Dependent Care, & Section on Developmental and Behavioral Pediatrics (2012). The lifelong effects of early childhood adversity and toxic stress. Pediatrics, 129(1), e232–e246. https://doi.org/10.1542/peds.2011-2663
  • Narayan AJ, Kalstabakken AW, Labella MH, Nerenberg LS, Monn AR, Masten AS. Intergenerational continuity of adverse childhood experiences in homeless families: unpacking exposure to maltreatment versus family dysfunction. Am J Orthopsych. 2017;87(1):3. https://doi.org/10.1037/ort0000133 .
  • Schofield TJ, Donnellan MB, Merrick MT, Ports KA, Klevens J, Leeb R. Intergenerational continuity in adverse childhood experiences and rural community environments. Am J Public Health. 2018;108(9):1148-1152. https://doi.org/10.2105/AJPH.2018.304598 .
  • Schofield TJ, Lee RD, Merrick MT. Safe, stable, nurturing relationships as a moderator of intergenerational continuity of child maltreatment: a meta-analysis. J Adolesc Health. 2013;53(4 Suppl):S32-38. https://doi.org/10.1016/j.jadohealth.2013.05.004 .

Adverse Childhood Experiences (ACEs)

ACEs can have a tremendous impact on lifelong health and opportunity. CDC works to understand ACEs and prevent them.

