Program Assessment

Program evaluation is the process of systematically collecting, analyzing, and using data to review the effectiveness and efficiency of programs. In educational contexts, program evaluations are used to: identify methods of improving the quality of higher education; provide feedback to students, faculty, and administrators; and ensure that programs, policies, curricula, departments, and/or institutions are functioning as intended and producing desirable outcomes. The Poorvu Center is available to provide program evaluation support to Yale faculty and administrators.

Common Types of Educational Program Evaluations

1. Needs assessment – This type of evaluation identifies whether there is a difference between the performance of a program and its desired objectives or outcomes. The purpose is to identify the existing ‘needs’ within the targeted audience that can be addressed with supplemental instruction or programming.

2. Curriculum mapping – This type of evaluation identifies when and how various skills, content, and objectives are addressed across multiple courses. A curriculum map helps instructors and administrators determine how to modify instruction or program requirements to ensure that the curriculum has the appropriate breadth and depth (a minimal sketch of such a map appears after item 3 below).

3. Program review – This type of evaluation occurs on a regular schedule (often every five to seven years). The evaluation allows faculty and administrators to examine how the program has changed over time and to periodically (re)assess programmatic goals.
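
To make the curriculum-mapping idea (item 2 above) concrete, the short Python sketch below represents a map as courses paired with the objectives they address and flags objectives that no course covers. All course and objective names are hypothetical placeholders; real maps usually also record the level at which each objective is taught.

```python
# Minimal curriculum-map sketch: hypothetical courses mapped to the
# objectives each one addresses, plus a check for objectives with no coverage.
curriculum_map = {
    "BIO 101": {"scientific reasoning", "written communication"},
    "BIO 210": {"data analysis", "scientific reasoning"},
    "BIO 330": {"data analysis", "oral communication"},
}

program_objectives = {
    "scientific reasoning",
    "data analysis",
    "written communication",
    "oral communication",
    "ethical practice",  # hypothetical objective not yet covered by any course
}

covered = set().union(*curriculum_map.values())
print("Objectives with no course coverage:", program_objectives - covered)

# Breadth check: how many courses address each objective?
for objective in sorted(program_objectives):
    count = sum(objective in skills for skills in curriculum_map.values())
    print(f"{objective}: addressed in {count} course(s)")
```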

Program Evaluation Process

Several models of program evaluation exist, differing slightly in both purpose and process. These include the Kirkpatrick model, the CIPP model, and the SEP. Each model has its own terminology and may be more applicable in some contexts than in others. Despite these differences, each follows a similar general process.

1.    Planning

  • Identify purpose  – Determine why an evaluation is needed and what the information gathered during the evaluation process would be used for. 
  • Identify stakeholders – This includes identifying key personnel, participants, and audiences of the program.  During this process, determine who should be involved in the evaluation process.  Think about whose perspective would improve the evaluation and whose voices would be absent if not included in the process.
  • Identify resources of the program – Determine all of the components that contribute to the functioning of the program (e.g., time, money, human resources, facilities, etc.).

2.    Understanding Program Design

  • Describe the goals and outcomes of the program  – Write objectives and classify goals as being either short term, medium term, or long term.  
  • Identify programmatic activities – This includes all of the tasks that the staff need to complete, as well as all of the courses, assignments, co-curriculars, mandatory meetings, etc. that participants of the program will be asked to complete.
  • Connect the goals of the program with the activities and then the outcomes – This will help identify any gaps (i.e., goals and/or outcomes that are not associated with an activity) and ensures that all activities align with their intended purpose (see the sketch following this list).
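
As a rough illustration of this goal–activity–outcome mapping (hypothetical names throughout, not a prescribed format), a few lines of Python can record which goals and outcomes each activity supports and flag any that are left unsupported:

```python
# Sketch: record which program goals and outcomes each activity supports,
# then flag goals or outcomes with no supporting activity. Names are hypothetical.
activities = {
    "orientation workshop": {
        "goals": {"build community"},
        "outcomes": {"higher first-year retention"},
    },
    "weekly writing seminar": {
        "goals": {"improve writing"},
        "outcomes": {"stronger capstone papers"},
    },
}

program_goals = {"build community", "improve writing", "increase research skills"}
program_outcomes = {"higher first-year retention", "stronger capstone papers",
                    "more conference presentations"}

goals_covered = set().union(*(a["goals"] for a in activities.values()))
outcomes_covered = set().union(*(a["outcomes"] for a in activities.values()))

print("Goals with no supporting activity:", program_goals - goals_covered)
print("Outcomes with no supporting activity:", program_outcomes - outcomes_covered)
```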

3.    Design the Evaluation Plan

  • Determine the scope of the evaluation – Identify which goals, outcomes, and activities will be included in the evaluation. Some outcomes may not be realized until long after participants complete the program and may require an ongoing evaluation.
  • Find or develop measures to collect data – Perform a literature search to determine whether measures of your constructs already exist and are being implemented in other programs. If not, you can systematically develop your own measures to address your initial questions about your program.
  • Write an evaluation plan – The plan describes the data collection, data analysis, and reporting processes and outlines the responsibilities of each member of the evaluation team. An evaluation plan often includes an examination of the fidelity of implementation, which evaluates whether the program was implemented exactly as written so that results can be based on the actual occurrences of the program rather than on assumptions about how program components took place. [1] A minimal sketch of such a plan follows this list.
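
One way to keep these elements together is to draft the plan as structured data before writing the narrative document. The sketch below is purely illustrative; the field names, instruments, and owners are hypothetical, not a required template:

```python
# Purely illustrative skeleton of an evaluation plan as structured data;
# field names, instruments, and owners are hypothetical.
evaluation_plan = {
    "scope": ["short-term outcomes", "medium-term outcomes"],
    "measures": [
        {"construct": "content knowledge", "instrument": "pre/post test", "owner": "J. Smith"},
        {"construct": "engagement", "instrument": "observation rubric", "owner": "A. Lee"},
    ],
    "analysis": {
        "pre/post test": "paired comparison of scores",
        "observation rubric": "descriptive summary",
    },
    "reporting": {"audience": "program stakeholders", "schedule": "end of each semester"},
    "fidelity_check": "document whether each activity was delivered as designed",
}

for measure in evaluation_plan["measures"]:
    print(f'{measure["construct"]}: {measure["instrument"]} (owner: {measure["owner"]})')
```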

4.    Conduct the evaluation

  • Gather the data – Collect the data according to the plan you developed.
  • Analyze the data – Depending on the type of data you collected, select appropriate statistical analyses that allow you to understand what the data show (see the sketch following this list).
  • Report the results to program stakeholders – Identify the audience and write an evaluation report that includes a description of the program as well as the results.
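
The appropriate analysis depends on the kind of data and the study design. The snippet below is a loose, illustrative pairing of common situations with commonly used analyses, not a complete decision rule; the categories and suggestions are assumptions made for the sake of the example:

```python
# Loose, illustrative pairing of data types and designs with commonly used
# analyses; a starting point for discussion, not a decision rule.
suggested_analyses = {
    ("continuous", "pre/post, same participants"): "paired t-test or Wilcoxon signed-rank test",
    ("continuous", "two independent groups"): "independent-samples t-test or Mann-Whitney U test",
    ("categorical", "two or more groups"): "chi-square test of independence",
    ("open-ended text", "any design"): "qualitative coding and thematic analysis",
}

def suggest(data_type: str, design: str) -> str:
    """Return a commonly used analysis for the data type and design, if listed."""
    return suggested_analyses.get((data_type, design), "consult a methodologist")

print(suggest("continuous", "pre/post, same participants"))
print(suggest("categorical", "two or more groups"))
```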

5.    Revise the program and/or evaluation plan for continuous improvement

  • Evaluating a program is not a one-time event. Done well, it establishes a continuous process of measurement and improvement within the program.

Internal vs. External Evaluations

Depending on the reasons an evaluation is being considered, a program may pursue an internal or an external evaluation. [2] Internal evaluations are conducted by individuals within the program or institution, often as part of a formative effort at quality monitoring or continuous improvement. Although the methodology may be similar, external evaluations are conducted by individuals outside the institution and are often focused on summative assessment. Either type of review can help determine where resources should be allocated for program improvement and can provide a comprehensive picture of the services offered.

One benefit of internal self-evaluation is that it can be directed toward specific goals or intentionally aligned with program outcomes. Internal colleagues are also familiar with the culture and context of the program being evaluated and may be less intimidating to those involved. Internal evaluation can promote collegiality and collaboration among units, and because the evaluator is close to the program, it is often feasible to oversee how recommendations are implemented. The main drawback of internal evaluation is the potential for subjectivity or bias.

A benefit of using external evaluators is that the evaluation can often be broader in scope and design. Additionally, external review can lend greater credibility to the results and increase accountability for those being evaluated. [3] Some grant funding agencies require an external evaluator for these reasons.

Relationship to Educational Research

Evaluators in educational settings use similar data collection methods and encounter many of the same methodological challenges as education researchers. However, the purpose underlying the work differs. Educational researchers are often trying to understand and explain how learning takes place in order to expand the wider knowledge base. Because researchers usually try to identify phenomena that may occur or be applicable across different contexts, they place greater emphasis on sampling methods to ensure that results are not limited to a specific group of participants, lab, classroom, or environment. Evaluations are usually context specific, and there is less of an attempt to generalize results beyond a particular context. The audience for an evaluation is usually one or more stakeholders in a program, and the aim is to identify ways of improving that program, though, depending on the scope of the evaluation, the results may also inform similar programs.

[1] Carroll, C., Patterson, M., Wood, S., Booth, A., Rick, J., & Balain, S. (2007). A conceptual framework for implementation fidelity. Implementation Science, 2, 40.

[2] Goldstein, I. L., & Ford, J. K. (2002). Training in organizations: Needs assessment, development, and evaluation (4th ed.). Belmont, CA: Wadsworth/Thomson Learning.


Creating a Program Evaluation in 10 Steps

A 10-Step Guide for Incorporating Program Evaluation in Education into the Design of Educational Interventions

Juan D'Brot

Proving What You Think You Know

In two recent CenterLine posts, my colleague Chris Brandt and I made a case for the importance of incorporating program evaluation in education into the design and implementation of any program intended to improve educational outcomes and have a positive impact on student learning. In this post, I offer a 10-step guide to designing a solid program evaluation from the beginning of a program. I also propose an abridged version of the guide that can be used to evaluate existing programs, or when you are short on the resources (e.g., time, money, and personnel) needed to carry out the full 10-step process.

From Solving to Mastering the Rubik’s Cube 

If you’ve read any of my previous posts, you know that I have to tie my message to some kind of analogy, metaphor, or (hopefully) related story drawn from my own everyday experiences or sometimes those of my son. I saw this post as an opportunity to offer a multi-generational parable to demonstrate the importance of having a well-planned guide to help you know that what you are doing is working and prove you know what you think you know. 

Recently, my son has transitioned away from Fortnite (see post here ) to mastering a Rubik’s Cube. Apparently, one does not simply try to solve a Rubik’s cube anymore. 

When I first encountered a Rubik’s Cube in the 1980s, it was a colorful puzzle you tried to solve, most often unsuccessfully. There was a well-defined problem: get one solid color on each face of the cube, but no road map to guide you from the starting point to the desired outcome. For as long as the cube held your interest, you adopted a strategy like solving one face of the cube at a time or starting with the corners, but you never had any way of knowing whether you were getting closer to a solution. Eventually, you forgot about the problem you were trying to solve and started creating pretty patterns on the faces of the cube that friends and family found appealing. But you were no closer to the solution, as far as you knew, and eventually, the cube ended up unsolved on a shelf or as a paperweight on your desk.

My son’s experience with the Rubik’s Cube, in contrast, began with training. How does one go about getting that training? Through YouTube, of course. There are an astounding number of how-to videos coaching you on how to solve a Rubik’s Cube using various move sets (aka, “algorithms”). These algorithms vary in complexity, skill, and when they should be applied. Some are very repetitive and can be applied to most cubes, while others are a mix-and-match set that requires greater skill and awareness but solves the cube much faster. He’s gotten pretty quick with the standard-sized cubes and now wants to try his hand at a 4×4.


Consider solving the Rubik’s Cube as the initial problem. That problem has been solved. Actually, it seems that every kid at the bus stop plays with Rubik’s cubes as fidget toys, and they all seem to be solving them better than I can (another great example of the availability of information on the web). My son would not have learned how to solve the cube if he hadn’t had access to how-to videos. And he wouldn’t have gotten better at solving it if he hadn’t measured his progress regularly against the solutions provided by those videos. 

Now he can move on to the next problem, getting faster and more efficient at solving the cube and solving the cube under certain conditions, time bounds, and other constraints.  Speed and efficiency (number of moves) are now the measures he applies as part of the evaluative process. Do you see where I am going? 

My son’s example of solving and mastering a Rubik’s Cube highlights the need for a well-defined problem, a solution (i.e., how-to videos), and an evaluative process to determine efficacy both along the way and over time.  

We have well-defined problems in education, and we even have some research-based solutions (e.g., embedding formative assessment processes, curriculum, and instruction based on the science of reading). What we often lack is the evaluative process to determine the efficacy of our attempts to implement those solutions in our context both along the way and over time. 

Identifying the Problem and Planning for an Educational Program Evaluation

Before I describe the steps to conducting a high-quality evaluation, recall that it is important to first know what we’re even trying to evaluate (e.g., solving a Rubik’s Cube, completing an obstacle course race, or improving student learning). My colleague Chris Brandt described in a recent post the importance of applying the principles of improvement science with a focus on identifying and defining the problem to be solved. He also noted that once we have identified the problem and its root causes, we should apply an improvement framework to answer the following three core questions (Langley et al., 2009):

  • What is our goal? What are we trying to accomplish?
  • What change might we introduce and why? 
  • How will we know that a change is actually an improvement?

At the end of my recent post (after comparing educational program evaluation to obstacle course racing), I suggested the need to describe the steps in planning a well-designed evaluation based on a solid program theory and logic model. As I mentioned, program evaluation is a systematic method for collecting, analyzing, and using information to answer questions like the third one above: How will we know that a change is actually an improvement?

Clearly defining the theory of action and program logic are two of the most critical steps in developing an evaluation plan. The more detailed a logic model, the clearer the evidence for defining program success. 

As Chris explained in his post and I demonstrated in a previous post about my experience building a patio , it is essential to ensure the proper evidence is aligned with the program’s resources, outputs, short-, mid-, and long-term outcomes. A robust evaluation is based on a solid plan and data elements, or indicators of proof defined a priori. These indicators inform what evidence should be collected, documented, and used to make interpretations.

The logic model identifies the components that are important to be aware of when connecting the program to an evaluation. Think of it more as an organizer for your program that can be used to inform the evaluation. It makes creating the evaluation much more efficient in the same way that organizing puzzle pieces by edges and center pieces makes for an easier puzzle-making experience. 

Once you have organized your program, you can use the following 10 steps to conduct your own evaluation. They are presented in the table below.

[Table: the 10 steps for designing and conducting a program evaluation]

When Good Design Meets Real-World Constraints

I’ve been involved in a lot of educational program evaluations, and not all of them had the budget, personnel, and time to include all of the steps in the table above. Many times, particularly when working with states and districts, evaluations are added to the existing program rather than embedded from the beginning of the design. When that is the case, we need to narrow our scope and focus our limited resources. If I were trying to identify the bare-bones number of steps to evaluate a program, I would suggest focusing on the following three:

  • Build out the program logic model. Theories of action are important, and even if not documented, many program designers and implementers have an implicit theory of action. A logic model forces us to be more specific about the actual activities that are intended to bring about change, the resources that are necessary, the parties responsible, and the outputs that will lead to certain intended outcomes.
  • Connect evidence (measures) to each activity in the logic model. This step translates the outputs into measurable data elements. In some cases, evidence may be direct observations of counts, completed tasks, or getting individuals to attend some training. In other cases, the evidence is a little more difficult to capture, like surveys, interviews, or document reviews. A coherent connection between the evidence (i.e., data elements) and the activities is essential for making judgments during the next step.
  • Collect data and determine evidence quality. The collection of high-quality evidence that speaks to whether you are making incremental progress along the way is critical. This evidence helps justify that the intermediate activities will positively impact intended outcomes. Even if the evidence is not analyzed across all activities, evaluating the intermediate steps can help us identify where we may need to go back and redeploy a step in the process to ensure that the program doesn’t break down (a sketch of these last two steps appears after this list).
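
As a sketch of the second and third steps above (with hypothetical activities, measures, and missing-data rates), the evidence-to-activity connection and a basic data-quality check can be expressed in a few lines of Python:

```python
# Sketch: connect measures (evidence) to logic-model activities and flag
# activities with no evidence or with high missing-data rates.
# Activities, measures, and rates are hypothetical.
activities_to_measures = {
    "teacher coaching cycles": ["coaching logs", "classroom observation rubric"],
    "formative assessment training": ["attendance records", "post-training survey"],
    "curriculum adoption": [],  # no evidence defined yet
}

missing_rate = {  # hypothetical proportion of records missing per measure
    "coaching logs": 0.05,
    "classroom observation rubric": 0.40,
    "attendance records": 0.00,
    "post-training survey": 0.12,
}

for activity, measures in activities_to_measures.items():
    if not measures:
        print(f"GAP: '{activity}' has no evidence attached")
    for measure in measures:
        if missing_rate.get(measure, 1.0) > 0.25:
            print(f"QUALITY: '{measure}' ({activity}) missing {missing_rate[measure]:.0%} of records")
```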

Ideally, we would want to engage in all 10 steps, but that’s often not practical or reasonable. I offer these as the three most critical steps I believe can help program designers and implementers monitor progress along the way rather than relying on waiting for changes in the outcome. 

Without a solid evaluation plan, we increase the risk of duplicating efforts, applying the wrong program, or simply wasting resources. By monitoring progress as we go, rather than only in retrospect, we are better equipped to identify and correct activities along the way.


Educational Technology, pp. 165–177

Educational Project Design and Evaluation

Ronghuai Huang, J. Michael Spector & Junfeng Yang

First Online: 28 February 2019


Part of the book series: Lecture Notes in Educational Technology (LNET)

Nowadays, the available and affordable resources and technologies that could support learning and instruction are plentiful. However, choosing the best resources for instruction in various situations is an increasingly challenging task for designers, teachers, administrators, and others. According to Spector and Yuen (2016), the use of educational technology requires attention to (a) effective and efficient design, development, and deployment and (b) providing the best results for the relevant constituencies. In terms of how to make sure educational technology is best used, educational project design and evaluation provide an innovative approach to dealing with educational problems.


ILO. (2010). Project design manual: A step-by-step tool to support the development of cooperatives and other forms of self-help organizations. Retrieved from http://www.ilo.org/public/english/employment/ent/coop/africa/download/coopafricaprojectdesignmanual.pdf

Prabhakar, G. P. (2009). Projects and their management: A literature review. International Journal of Biometrics, 3 (8), 3–9.


Spector, J. M., & Yuen, A. H. K. (2016). Educational technology program and project evaluation . New York: Routledge.


Stufflebeam, D. L. (1971). The relevance of the CIPP evaluation model for educational accountability. Journal of Research and Development in Education, 5 (1), 19–25.

Stufflebeam, D. L. (2003). The CIPP model for evaluation. In T. Kellaghan & D. L. Stufflebeam (Eds.), International handbook of educational evaluation (pp. 31–62). Dordrecht: Springer Netherlands. Retrieved from https://doi.org/10.1007/978-94-010-0309-4_4 .



J Grad Med Educ, 12(3), June 2020

Program Evaluation: Getting Started and Standards

To guide GME educators through the general process of a formal evaluation, we have launched a Rip Out series to highlight some of the key steps in designing effective evaluations. Our first Rip Out explores how 4 accepted program evaluation standards—accuracy, utility, integrity, and feasibility—can optimize the quality of your evaluation. Subsequent Rip Outs highlight other aspects of effective evaluations. Please share your reactions and evaluation examples by tagging @JournalofGME on Twitter.

The Challenge

You have just left an Annual Program Evaluation committee meeting and your report is ready for submission to the program director (PD). Areas that the committee targeted for improvement seem to be progressing well. However, you are worried about how to present the report to the teaching faculty, who typically focus on the quality of the data: the Accreditation Council for Graduate Medical Education annual survey of residents and fellows, program-specific annual surveys, and end-of-rotation evaluations. The faculty discussion always ends with critiques such as “We don't really know what this data means” due to “small numbers,” confusion over what the Likert scale questions “really asked,” the statistical validity of the surveys, and concerns that there is “no control group.”

PDs and other graduate medical education (GME) educators [1] routinely evaluate their educational programs and then make judgments about what to keep, improve, or discontinue. Some may engage in program evaluation as if it were research. This is not surprising: faculty are trained in systematic inquiry focused on quality improvement or research activities, which serve different purposes and have varying assumptions and intended outcomes as compared with program evaluation. As a result, the faculty's grasp of program evaluation's underlying assumptions, aims/intended outcomes, methods, and reporting is often limited and leads to difficult discussions.

Rip Out Action Items

GME educators should:

  • Identify the purpose of your evaluation(s) and how results inform your decisions.
  • If evaluation data will not be used for decision-making, then do not collect the data.
  • Assure that your evaluations meet the standards for program evaluation.
  • Convene your Annual Program Evaluation committee (or similar group) to review your current sources of evaluation information.

What Is Known

In the mid-20th century, program evaluation evolved into its own field. Today, the purpose of program evaluation typically falls into one of two orientations: using data to (1) determine the overall value or worth of an education program (summative judgments about a program) or (2) plan program improvement (formative improvements to a program, project, or activity). Regardless of orientation, program evaluation can enhance the quality of GME and may ultimately improve accountability to the public through better quality of care.

Program evaluation standards help to ensure the quality of evaluations. [2] PDs and GME educators tend to focus on only one of these standards: accuracy. Less often, they consider the other standards associated with program evaluation: utility, integrity (fairness to diverse stakeholders), and feasibility. The Table displays these program evaluation standards and aligns each one with an evaluation question and action steps.

[Table: Program Evaluation Standards, Evaluation Questions, and Action Steps. Abbreviation: ERE, end-of-rotation evaluation.]

How You Can Start TODAY

  • Apply the evaluation standards. The standards should be applied to every evaluation discussion—to assure the integrity of your progress, process, and outcomes.
  • Clarify the purpose of the evaluation. Be clear on what you are evaluating and why. Are you evaluating if the stated goals of the educational program are consistent with the needs of the community or the mission of the sponsoring institution? Are you aiming to improve the learning environment in ambulatory settings?
  • Always discuss feasibility and utility early on. It can be an awesome approach but impossible to do! Do not overlook the cost and politics of evaluation. Before you begin to collect your data, be clear about how you will actually use the information and who will have access to the findings.
  • Consider multiple stakeholders. For most evaluations, trainees and faculty members are key stakeholders. Patients, community members, and leadership from your hospitals, clinics, and quality and safety committees may also have a stake in educational programs.

What You Can Do LONG TERM

  • Convene your workgroup. Convene your Annual Program Evaluation committee (or similar group) and review high-priority decisions. Apply the evaluation standards and determine if you have sufficient and accurate information to make informed decisions from all contributors.
  • Adopt, adapt, author. Adopt or adapt existing evaluation tools that align with your aim before authoring your own. Optimally, these tools have been vetted and can provide comparison data.
  • Familiarize yourself. Learn about the field of evaluation and evaluation resources (eg, American Evaluation Association) as well as program evaluation resources in health professions education. [2, 3]

Certificate in Education Program Evaluation

Develop expertise in program evaluation theory, methods, and skills to effectively measure and report on the success of programming and add value to your organization.

Curriculum & Schedule

How to Register, Tuition & Funding

The Certificate in Education Program Evaluation prepares you with an advanced understanding of program evaluation theory, methods, and applications for the 21st century. Through case studies and hands-on exercises, you’ll develop the well-rounded skills and expertise needed to support and influence programs across not only the education sector, but also non-profit organizations, government, and associations.

In the classroom, you’ll learn from academics and advanced practitioners as you work toward designing and presenting your own program evaluation. Upon completing the program, you’ll be able to effectively measure, evaluate, and report on the success of programming within your organization.

Ideal for: Professionals in education, non-profits, the public or private sector

Duration: 3 months to 2 years

Tuition: $2,797

Format: Online & On-Campus

Schedule: Saturdays

Semester of Entry: Fall, spring, summer

Upon successful completion of the certificate, you'll be able to:

  • Compare evaluation theories and techniques
  • Identify design structure of an evaluation tool
  • Apply appropriate research methodology to program evaluations
  • Design a program or policy evaluation outline
  • Leverage evaluation findings to influence future change

Testimonials from current students and alumni.

“Amy did a really exceptional job keeping the conversations within scope of relevance. The activities also helped me develop the mindset to critically think about programs I develop and how I would eventually want to evaluate them.” – Larry Thi

You must successfully complete the three required courses, for a total of 4.9 Continuing Education Units (CEUs), which is equivalent to 49 contact hours (1 CEU corresponds to 10 contact hours, and the three courses carry 1.4 + 2.1 + 1.4 = 4.9 CEUs). All three courses must be completed within a two-year period.

  • Program Planning, Analysis and Evaluation (Required; 1.4 CEUs)
  • Research Methods (Required; 2.1 CEUs)
  • Program Evaluation Design (Required; 1.4 CEUs)

What Is Live Online Learning? Live online instruction is enhanced by incorporating various instructional practices and technology tools. Features such as Zoom video conferencing, breakout rooms, and chat allow for real-time interaction and collaboration among learners. Tools like Google Docs, Slides, Sheets, and Canvas Groups facilitate teamwork and information sharing within the learning community. Polling, surveys, and threaded discussion boards promote active engagement and the expression of opinions. It is important to foster social respect and privacy and to incorporate Jesuit values to create a supportive and inclusive online environment. By utilizing these practices and tools effectively, live online instruction can be engaging, interactive, and conducive to meaningful learning experiences.

What Is On-Campus Learning? On-Campus programs combine traditional classroom learning with interactive experiential methodology. Classes typically meet for two or three consecutive days once a month at our downtown Washington, D.C. campus.

Course Schedule

Brandon Daniels

Brandon Daniels is currently the Performance Management Officer in the Department of General Services with the Government of the District of Columbia. Dr. Daniels currently serves as the Performance Management ... Read more

Kristen Hodge-Clark

Dr. Kristen N. Hodge-Clark serves as senior assistant dean for program planning within the School of Continuing Studies. In this capacity, she oversees several strategic functions related to the development ... Read more

Mona Levine

Dr. Mona Levine has over thirty years of experience in leadership, administration and instruction in both research universities and community colleges. At Georgetown, she serves as Subject Matter Expert and ... Read more

Please review the refund policies in our Student Handbook before completing your registration.

Degree Requirement

You must hold a bachelor's degree or the equivalent in order to enroll in our certificate programs.

Registration

This certificate is an open-enrollment program. No application is required. Click the "Register Now" button, select your courses, and then click "Add to Cart". Course registration is complete when your payment is processed. You will receive a confirmation email when your payment is received. Please retain the payment confirmation message for your records.

You can combine on-campus and online courses (if available) to complete your certificate. Depending on the certificate program, we may suggest taking courses in a specific order, but this is not a requirement.

Most students register for all courses at the same time and complete their certificate within a few months. However, you may choose to register for courses one by one over time. Once you begin a certificate, you have up to two years from the time you start your first course to complete all required courses.

International Students

International students who enter the U.S. on a valid visa are eligible to enroll in certificate courses. However, Georgetown University cannot sponsor student visas for noncredit professional certificate programs.

A TOEFL examination is not required for non-native English speakers but students are expected to read, write, and comprehend English at the graduate level to fully participate in and gain from the program.

Students from most countries may register for our online certificate programs; however, due to international laws, residents of certain countries are prohibited from registering.

Tuition varies by course. Total program tuition for all 3 courses is $2,797. Most course materials are included.

Noncredit professional certificates do not qualify for federal financial aid, scholarships, grants, or needs-based aid. However, several finance and funding options do exist, as listed below.

Some employers offer funding for employee education or professional development. If an employer guarantees payment for employee education and training, Georgetown will accept an Intent to Pay form . If you are using employer sponsorship or training authorizations, you must submit an Intent to Pay form with your registration.

If your employer will pay for your tuition, select “Third-Party Billing” as your method of payment when you register for courses online. Please submit an Intent to Pay form indicating that your employer or another third party should be billed for tuition. Invoices will not be generated without this form on file.

Federal government agencies may:

  • Pay training and education expenses from appropriated funds or other available funds for training needed to support program functions
  • Reimburse employees for all or part of the costs of training or education
  • Share training and education costs with employees
  • Pay travel expenses for employees assigned to training
  • Adjust an employee's normal work schedule for educational purposes not related to official duties

Georgetown accepts Standard Form-182 (SF-182) for training authorizations from the federal government.

*Federal employees should ask the appropriate budget officer about training budgets available.

Eligible Georgetown employees may use their Tuition Assistance Program (TAP) benefits to fund 90% of the certificate program tuition—employees will be invoiced for the remaining 10% of tuition and must pay any other charges associated with their certificate program. Employees using TAP benefits may work directly with the HR Benefits Office to ensure payment prior to the start of any course. This payment option is only valid if registration occurs at least 10–14 business days prior to the start date of the first course. Any fees incurred due to course withdrawal are the student’s responsibility and are not funded by Georgetown University TAP. For questions regarding TAP benefits, please contact the HR/Benefits Office at [email protected] or (202) 687-2500.

SCS is registered with GoArmyEd.com to accept SF-182 training authorization forms. GoArmyEd.com is the virtual gateway for all eligible active duty, National Guard, and Army Reserve soldiers to request Tuition Assistance (TA) online. GoArmyEd.com is also the virtual gateway for Army Civilians to apply for their Civilian education, training, and leadership development events.

The professional certificate programs offer an interest-free payment plan for certificate programs that are more than one month in duration and for which the total tuition is greater than or equal to $4,000. The payment plan is structured in the following manner:

  • Payment #1: A down payment of 25% of the total tuition balance must be paid online (within 72 hours after you register and select Payment Plan) via the Noncredit Student Portal . Please submit your down payment as soon as possible.
  • Payments #2, #3, and #4: Your remaining balance will be due in three (3) equal monthly installments beginning 30 calendar days after your down payment is processed. Your monthly payments must be paid via credit card in the Noncredit Student Portal . You will be able to access each invoice and payment due date in your student account.

PLEASE NOTE: Automatic Payment Service is not available. You must make each subsequent payment via the Noncredit Student Portal .

A number of tuition benefits are available through the Department of Veterans Affairs and under various parts of the GI Bill ® . Please visit the Resources for Military Students page for additional information and instructions.

Some students choose to finance certificate programs with private education loans. Students are responsible for contacting lenders directly to find out if a noncredit professional certificate program is eligible for a loan. While Georgetown University will not recommend specific lenders, it will certify loans for eligible programs from approved lenders.

For eligible noncredit professional certificate programs, Georgetown University will certify loan amounts up to the full cost of tuition for the program. Tuition does not cover books, travel, or living expenses. Please see individual program pages for tuition rates.

Georgetown University has a unique campus code for Sallie Mae. Our Sallie Mae branch code is 001445-99.

You must be approved for a loan before registering for courses. Follow these steps to pursue a loan option:

  • Check the list of lenders that have offered private education loans in the past to Georgetown University students.
  • Contact the lender and confirm your program is eligible for a private education loan.
  • Obtain the necessary paperwork and apply for the loan.
  • Georgetown will certify loan amounts based on the information below. Please note that our branch code is 001445-99.
  • Payment sent to Georgetown: Select “Third-Party Payment” at the time of registration if the lender is sending funds directly to Georgetown.
  • Enter the information about the lender and then contact Noncredit Student Accounts at [email protected] .

Note: It is your responsibility to contact Georgetown University Noncredit Student Accounts at [email protected] to ensure that your loan is processed.

While you may choose to complete your certificate program in one semester, many programs (but not all) allow up to two years to complete all requirements. As a result, you may choose to register for required and elective courses over several semesters to spread out the cost of tuition over time. We generally offer every course in a program each semester, so you'll have many opportunities to enroll in required and elective courses within the two-year time frame.

Tuition Discounts

Only one tuition discount may be applied at the time of registration. Tuition discounts cannot be combined. Tuition discounts are not applied retroactively. Please contact [email protected] with any questions.

Georgetown University alumni and SCS certificate completers are eligible to receive a 30% tuition discount for many certificates offered within SCS’s Professional Development & Certificates (PDC) portfolio. When registering for an eligible certificate through the SCS website, you will see the "30% Georgetown Alumni Discount" as an option. The Enrollment Team will then verify your eligibility status as a Georgetown University alumnus or certificate completer.

Georgetown SCS offers a 20% discount for eligible certificates to organizations that register 5 or more employees for the same certificate cohort at the same time. Eligible organizations include government agencies, nonprofit agencies, and for-profit businesses. Please contact [email protected] for steps and procedures to ensure your group has access to the discount.

Employees of Boeing receive a 10% tuition discount on select programs and courses

Employees of companies that belong to the EdAssist education network may receive a 10% tuition discount on select programs and courses. Contact EdAssist directly to find out if you qualify.

Eligible federal employees across the country receive a 10% scholarship applied to the current tuition rate for all SCS degree programs and professional certificate programs each academic semester. Please contact [email protected] for steps to be added to this discount group.


Still Have Questions?

Certificate Admissions and Enrollment Email: [email protected] Phone: (202) 687-7000

Student Accounts Email: [email protected] Phone: (202) 687-7696

Certifying Military Benefits Email: [email protected] Phone: (202) 784-7321


Shapiro Library

Higher Education Administration (HEA) Guide

A review of quantitative and qualitative analysis.

Need a refresher on Quantitative and Qualitative Analysis? Click below to get a review of both research methodologies.

  • Quantitative and Qualitative Analysis

Program Evaluation and Planning


From data analysis to program management methods and more, evaluating and planning for the success of each program is a crucial aspect of Higher Education Administration. Below you will find some useful articles and reports to help bring context to this important element of higher education leadership. 

Useful Articles

Below you will find a sample of reports, case studies and articles that outline the process of program evaluation, planning and analysis. Click through and read on for more information. 

  • The Feasibility of Program-Level Accountability in Higher Education: Guidance for Policymakers (Research Report). Policymakers have expressed increased interest in program-level higher education accountability measures as a supplement to, or in place of, institution-level metrics. But it is unclear what these measures should look like. In this report, the authors assess the ways program-level data could be developed to facilitate federal accountability.
  • Improving Institutional Evaluation Methods: Comparing Three Evaluations Using PSM, Exact and Coarsened Exact Matching. Policymakers and institutional leaders in higher education too often make decisions based on descriptive data analyses or even anecdote when better analysis options could produce more nuanced and more valuable results. Employing the setting of higher education program evaluation at a midwestern regional public university, for this study we compared analysis approaches using basic descriptive analyses, regression, standard propensity score matching (PSM), and a mixture of PSM with continuous variables, coarsened exact matching, and exact matching on categorical variables. We used three examples of program evaluations: a freshman seminar, an upper division general education program intended to improve cultural awareness and respect for diverse groups, and multiple living learning communities. We describe how these evaluations were conducted, compare the different results for each type of method employed, and discuss the strengths and weaknesses of each in the context of program evaluation.
  • Data-Informed Policy Innovations in Tennessee: Effective Use of State Data Systems. Analysis of student-level data to inform policy and promote student success is a core function of executive higher education agencies. Postsecondary data systems have expanded their collection of data elements for use by policymakers, institutional staff and the general public. State coordinating and governing boards use these data systems for strategic planning, to allocate funding, establish performance metrics, evaluate academic programs, and inform students and their families. This report discusses efforts at the Tennessee Higher Education Commission to support policy innovation with data and information resources.




Education Project Evaluation - Plan an Evaluation


These are the California B-WET grant requirements for your project evaluation.

Project Evaluation Criteria and Technical Merit: Evaluation (10 points)

Tool: Evaluation Plan Assessment (Reviewers' Rubric)

Use this rubric to assess the evaluation section of your California B-WET proposal. The rubric is based on the five questions in the evaluation section assessment from the B-WET RFP and is the one that reviewers will use to score the evaluation section of your grant. To use the rubric, read through your proposal's evaluation section, then read each question below and the directions for scoring on a scale from 3 to 0, and tally your score. A top score is 10 points.

Reviewers' Rubric (PDF)

Examples: Proposal Evaluation Section

These are provided as examples of a thoughtful and appropriate evaluation section for a B-WET grant proposal.

Project Evaluation

We will evaluate our project to improve its design and assess its effectiveness in reaching our education objectives. At the start of the grant we will conduct a front-end evaluation (likely via a survey) of program participants (high-school and college students) to determine their prior knowledge and attitudes about the topics they will study. At the end of the summer workshop and twice during the school year, all participants will repeat relevant portions of the front-end survey so we can track changes in knowledge, attitudes and possibly stewardship actions immediately after their experiences as well as over time (a time series evaluation design). At the end of the grant we will report the summative impact of this project on participants.

The broad questions we will attempt to answer with this evaluation are:

  • Does participants' understanding of the local watershed system improve, and do they feel better connected to the local system? Are the topics presented during the workshop relevant to participants? Do they inspire stewardship interests/actions?
  • What impact does the program have on participants' aspirations toward advanced schooling, interest in/pursuit of science careers, and stewardship actions?
  • What aspects of this program (content, pedagogy, field experiences, and/or mentoring) do participants report as having the greatest impact on them?

The target audience for this evaluation is teachers participating in a weeklong summer workshop. The main questions for the evaluation are:

  • Which aspects of the workshop worked well? Which didn't work? What changes would improve the workshop?
  • What's the impact of the workshop on the teachers who attend? Does it change their knowledge or how they teach the content, and in what ways?
  • What impact does the workshop have on classroom practice? Do the teachers use what they learned in the workshop?

All teacher participants would complete a pre-workshop survey. At the end of each day, teachers would provide feedback on the day's events and activities via a feedback form. On the last day of the workshop, teachers would complete a post-workshop survey (one similar to the pre-workshop survey).

The survey and feedback forms would use a mix of questions to collect qualitative and quantitative data. Data from the pre-workshop survey would be compared to responses on the post-workshop survey to track changes in participants over the week. Responses on the feedback forms will help us improve the delivery of the program.

Approximately 6 months after the workshop, teacher participants will be asked to complete an online or telephone survey to determine the workshop's impacts on their teaching and their use of the materials we provided.

Our main evaluation question is: What's the impact of the program on students? We want to know if, at the end of the school year, students know/understand more about watersheds than they did at the beginning of the year.

This evaluation will focus on program participants—middle school students. We plan to collect data using two methods:  1) student concept maps on the concept "watershed" and 2) a survey or interview of teachers about the program and any other watershed-related activities they may have used in the classroom, as well as background information on their classes, teaching experience and experience with concept maps.

A concept map is a knowledge representation tool developed by Joseph Novak and associates in the 1970s (Novak, 1998, p. 27). Concept maps "met a need for an evaluation tool that can show easily and precisely changes in students' conceptual understanding" (Novak, 1998, p. 192). This evaluation method has been found to be reliable and valid by Novak and other researchers (Novak, 1998).

We plan to use the concept mapping technique as a pre-test in the winter, before students have experienced the program, and then use the same technique late in the school year to post-test students on the same concept. We will gain parents' permission for students to participate in this evaluation. For students whose parents do not grant permission, we have an art activity for them to complete at test time. Each test session should take about an hour. During that time we plan to train students on how to create a concept map using a non-watershed concept, then we'll ask them to construct a concept map on the concept "watershed."

We intend to use Novak's scoring criteria (Novak & Gowin, 1984, pp. 105–108) to derive a numerical score for the concept maps of each student. Then we'll use statistical analyses to compare students' pre-test and post-test scores (statistical tests to be determined). Our hypothesis is that the program will have a positive impact on students' knowledge about the local watershed.
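
The proposal leaves the exact statistical tests to be determined. As one hedged possibility, with made-up scores standing in for real concept-map data, a paired comparison of pre- and post-test scores might look like the following; the paired t-test assumes roughly normal score differences, and the Wilcoxon signed-rank test is a common nonparametric alternative:

```python
# Hypothetical pre/post concept-map scores for the same eight students;
# a paired t-test (and a nonparametric alternative) compares the two occasions.
from scipy import stats

pre_scores = [12, 15, 9, 20, 14, 11, 18, 13]
post_scores = [19, 22, 14, 25, 21, 15, 24, 17]

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"Paired t = {t_stat:.2f}, p = {p_value:.4f}")

# Wilcoxon signed-rank test if the score differences are clearly non-normal.
w_stat, w_p = stats.wilcoxon(post_scores, pre_scores)
print(f"Wilcoxon W = {w_stat:.2f}, p = {w_p:.4f}")
```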

Novak, J.D. (1998).  Learning, Creating, and Using Knowledge: Concept maps as facilitative tools in schools and corporations . Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Novak, J.D. & Gowin, D.B. (1984).  Learning How to Learn . Cambridge: Cambridge University Press.

Of our project objectives, our evaluation will focus on assessing our success at increasing participants' knowledge about the local watershed, engaging them in quality scientific investigations and increasing their stewardship actions toward the watershed. We will use a mixed-methods evaluation strategy (surveys and observations) to provide us with formative data so we can improve our program and summative data to show our program's impact. The principal investigators will implement the evaluation with Dr. XYZ serving as our advisor on the development of assessment instruments, the analysis of data and reporting of results.

We will start on the first day of the program with a pre-program survey of participants' knowledge of local watersheds and their actions regarding wetland conservation. We will conduct a nearly identical post-program survey during the final day of the program. During each field work session, two trained interns will use an observation instrument to assess the quality of participants' field investigations. The observers will share their findings with the staff after each session so we can improve how we conduct the field work with participants. Finally, at the end of the program, we will ask participants to complete a survey about their intended actions regarding conservation of the local watershed.

Pre- and post-survey data will be compiled and compared to determine the impact of the program on participants. All the evaluation data collected will be summarized and included in our final report.

Non-Examples: Proposal Evaluation Section

These are provided as examples of a poorly planned or articulated evaluation section for a B-WET grant proposal.

SAMPLE 1 We will evaluate our project by observing the students during the program and discussing our observations at regular staff meetings. Parents will also complete an assessment tool to provide feedback.

SAMPLE 2 We will hire an outside evaluator to develop online surveys that both teachers and students will complete. Students will also keep journals and present final projects, which will encompass all they've learned during the program. Evaluation will be both formative and summative with the evaluator compiling and reporting on the results.

SAMPLE 3 By documenting the number of people who attend the program each month we will assess its success. At the end of each program we will ask participants to tell us what they thought about the program and suggest ways to improve it. We will also ask them to tell us one thing they learned from that program. This information will help us improve the program so that it better meets our objectives and participants' needs.

Logic Model: A Planning Tool

Attached is a logic model template to help you plan your project and its evaluation. It is a good tool for thinking about your project as a whole. The first column includes your objectives, that is, how you believe your audience will be different after they participate in your project. The next three columns are where you list what you will provide and do in order to meet your objectives. The final two columns are your outcomes: how your audience will be different immediately after participating in your project and in the not-too-distant future.
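
As a concrete rendering of those columns, the template can be sketched as a simple structure; the entries below are hypothetical placeholders, not content from the attached template:

```python
# Hypothetical logic-model entries mirroring the template's columns:
# objectives; inputs, activities, and outputs; then short- and longer-term outcomes.
logic_model = {
    "objectives": ["students can describe how land use affects the local watershed"],
    "inputs": ["two educators", "field equipment", "B-WET grant funds"],
    "activities": ["three field investigations", "classroom data-analysis sessions"],
    "outputs": ["120 students complete field work", "one student data report"],
    "short_term_outcomes": ["gain in watershed knowledge on the post-survey"],
    "long_term_outcomes": ["increased stewardship actions at home and school"],
}

for column, entries in logic_model.items():
    print(column.replace("_", " ").title() + ":")
    for entry in entries:
        print("  -", entry)
```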

What is a logic model? A logic model is a systematic and visual way to present and share your understanding of the relationships among the resources you have to operate your program, the activities you plan to do, and the changes or results you hope to achieve. (see Online Resources: W.K. Kellogg Foundation, 2001, p. 1)

A logic model is a "picture" of how your project will work. Logic models link project outcomes (short-, intermediate- and long-term) with project activities, outputs and inputs (or resources)… Logic models provide a road map of your project, showing how it is expected to work, the logical order of activities and how the desired outcomes will be achieved. (NOAA Coastal Services Center, Project Design and Evaluation Workshop Handbook, p. 63)

Why use it? The purpose of a logic model is to provide stakeholders with a road map describing the sequence of related events connecting the need for the planned program with the program's desired results. Mapping a project helps you visualize and understand how human and financial investments can contribute to achieving your intended program goals and can lead to program improvements. (See Online Resources: W.K. Kellogg Foundation, 2001, p. 3)

  • Helps you and colleagues/partners link all the components together on the same page
  • Helps project/program designers differentiate between objectives and activities, between outputs and outcomes
  • Helps managers/stakeholders see how all the components fit together
  • Aids decision making about resources and activities
  • Helps managers determine where resources will go to achieve impacts
  • Sets up project/program so that it's easier to evaluate
  • Helps individuals see how they contribute to the project/program
  • Funders are starting to request them.

Potential drawbacks:

  • Something else to do
  • Something new to learn
  • Takes time and thought
  • Have to think through the project/program before jumping into doing activities
  • Could make you more accountable for what you do
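To make the columns described above concrete, the sketch below (a hypothetical illustration in Python, with invented watershed-program content) records one logic model as a single structured object whose fields follow the template columns: objectives, inputs, activities, outputs, and short- and long-term outcomes.

    # Hypothetical sketch: one logic model captured as a structured record.
    # Field names follow the template columns; the example content is invented.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class LogicModel:
        objectives: List[str]           # how the audience should be different after the project
        inputs: List[str]               # resources available to operate the program
        activities: List[str]           # what you will do
        outputs: List[str]              # direct products of those activities
        short_term_outcomes: List[str]  # changes soon after participation
        long_term_outcomes: List[str]   # changes in the longer run

    watershed_model = LogicModel(
        objectives=["Increase participants' knowledge of the local watershed"],
        inputs=["Program staff", "Field equipment", "Grant funding"],
        activities=["Field investigations", "Pre- and post-program surveys"],
        outputs=["Field sessions delivered", "Participants served"],
        short_term_outcomes=["Higher post-program knowledge scores"],
        long_term_outcomes=["Increased stewardship actions toward the watershed"],
    )

    print(watershed_model.objectives)

Writing the model down in one place, whatever the format, is what forces the distinction between activities, outputs and outcomes that the list above describes.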

Logic Model


How to use student evaluation of teaching to improve learning

Effective SET is the start, not the end, of a conversation about student learning, writes Saranne Weller


Student evaluation of teaching (SET) is widely used to measure the quality of teaching and, often controversially, for teacher recruitment, promotion and tenure. Institution-led SET is largely quantitative and increasingly carried out via online surveys. This approach systematises the collection and analysis of data but also disconnects evaluation from the day-to-day interaction between students and their teachers that can help to explain, contextualise or prompt action in response to feedback. This may limit the feedback's ability to improve teaching.

Using SET to prompt student reflection and discussion about their learning is less common. Yet, if used purposefully to evaluate learning as well as teaching, SET can deepen student understanding of their learning environment and develop self-awareness and regulation of their learning processes.

Evaluate early to set expectations for effective learning

Conducting surveys at the end of a session or module is typical, but at this point, it is often too late to help the students who have provided feedback because it cannot be acted upon retrospectively. Formative evaluation during teaching is critical. However, SET should not be used to just identify and head off problems. Well-chosen questions can initiate early discussion about student expectations, awareness of their learning processes and goal setting. Ask students to rate their confidence in completing typical tasks that will be used in the session or module (such as independent pre-reading, taking notes in a lecture, group work, conducting an experiment or accessing resources) to initiate reflection on the learning activities. The following questions can help students preview the session or module in advance and plan their approaches to their learning:

  • What are the teaching objectives and how do they relate to your own goals for this session or module?
  • What are the learning strategies you need to use?
  • How will you organise your time or prioritise tasks to achieve your goals?
  • How will you help your peers learn during this session or module?

Use SET to promote student self-evaluation

Developing students’ ability to self-assess their learning is often a neglected outcome of SET. A well-timed SET can act as a metacognitive pause during teaching to help students stop and reflect on what they and others are doing that is either helping or hindering their learning. The following questions can prompt students to monitor their learning: 

  • What did you already know, want to know and learn in this session or module so far?
  • How well have we achieved the session or module objectives so far and why?
  • What questions are arising for you and what aspects are challenging or confusing?
  • What can the teacher, you and your peers do differently to improve your learning in the session or module?

Prepare students to give high-quality feedback

SET can be ineffective if students don’t know how to give meaningful feedback. Students may provide incomplete, inappropriate or general comments that are difficult to interpret or respond to. Alternatively, student evaluation may focus disproportionately on aspects of their experience that are irrelevant to the quality of teaching. SET has been widely criticised, for example, for its lack of fairness reflecting teacher popularity, preferential grading practices of individual teachers or bias against certain demographic groups. 

Knowing how to provide high-quality feedback, however, is an essential outcome for graduates. So, before introducing SET, teach students the principles of good feedback: be specific, focus on observable examples and suggest solutions to close the gap between their expectations and current experience. Collaborative discussions help students to reflect and calibrate their experiences with peers. 


Before completing SET, snowball an evaluation discussion from individuals to pairs to groups of four; at each stage, ask students to up-vote and prioritise their most important shared feedback issues and to add detail that strengthens the feedback. Be alert, though, to the possibility that reaching a group consensus on what is working and what should change can silence some outlier views. Make sure you also pause to revisit feedback that has not been prioritised through this process and consider all viewpoints, even if they are not shared by the majority.
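A small, hypothetical sketch of the tallying step: after the snowball discussion, up-votes are counted, items above an assumed threshold are prioritised, and low-vote items are kept aside so outlier views are still revisited. The feedback items, counts and threshold below are invented.

    # Hypothetical sketch: tally up-voted feedback but keep low-vote items visible.
    from collections import Counter

    votes = Counter({
        "More worked examples before the group task": 11,
        "Post lecture notes earlier in the week": 9,
        "Lab instructions were unclear in week 3": 2,
        "Room is too cold": 1,
    })

    threshold = 3  # assumed cut-off separating prioritised items from outliers
    prioritised = [item for item, n in votes.most_common() if n >= threshold]
    outliers = [item for item, n in votes.most_common() if n < threshold]

    print("Prioritised feedback:", prioritised)
    print("Revisit individually:", outliers)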

Get creative and inclusive

Survey fatigue is often cited as a reason for the lack of student engagement with SET. Creative evaluation methods not only have the novelty factor to increase engagement but also give students the space for reflection and for making unexpected connections between ideas. They can also be more inclusive, offering alternative ways to represent perspectives, experiences and feelings, as well as encouraging diverse and personalised views that may be difficult to express in standard SET surveys. Visual and multimedia evaluation methods, such as drawing, collaging or digital storytelling of their module learning journey, taking photographs or choosing artefacts to represent a learning experience, all provide opportunities to share and discuss the pleasure, fear, curiosity or frustrations of learning experiences with peers and teachers.

As teachers, we need to reconnect with the role of evaluation for improvement, not only in our own practice but also in how our students learn. Effective SET, then, is the start, not the end, of a conversation about student learning.

Saranne Weller is a reader in higher education practice and development at St George’s, University of London.




Program Evaluation Graduate Certificate

Acquire the skills you need to conduct ethical, systematic, feasible, useful, and socially just program evaluation.

Did you “stumble” into the field of program evaluation? Many people charged with evaluating programs have little or no training in the competencies or standards of the field. Students in the 15-credit UMass Amherst Program Evaluation Certificate program study the fundamental practices of program evaluation, including evaluation contracting, logic modeling, mixed methods of data collection, data visualization, and facilitating use.

If you are looking to “become” an evaluator, or are already “doing” program evaluation, this certificate program is for you.

For more information, please email  evaluation [at] umass [dot] edu .

Related offerings

Students interested in our Program Evaluation Graduate Certificate may also be interested in these other offerings.

  • Data Analysis, Assessment, & Research in Education MEd
  • Research, Educational Measurement, & Psychometrics PhD



Curriculum Mapping and Inventory

What is a Curriculum Inventory (CI)?

  • Definition: the CI is a comprehensive record of all the courses, lectures, and other learning activities in a program's curriculum.
  • Importance: the CI is an essential tool for program evaluation, transparency, and curriculum alignment.
  • Scope: the CI captures various learning activities, such as courses, clerkships, electives, and other educational experiences.
  • Benefits: the CI can be used to improve program evaluation, promote transparency, and create curriculum maps.

But Really...Why?

  • It's an accreditation requirement.  Although there are many other reasons, this one usually trumps all others when articulating the importance of mapping. The Liaison Committee on Medical Education (LCME) Data Collection Instrument (DCI) refers to this documentation as a curriculum database, and the American Osteopathic Association (AOA) Commission on Osteopathic College Accreditation (COCA) refers to it as a curriculum map. Beyond simply having a curriculum map, reports sourced from your curriculum mapping data may support aspects of the accreditation process.
  • It's needed to complete the AAMC/AACOM Curriculum SCOPE Survey. Schools rely on their local curriculum maps to be able to complete this survey, especially around curriculum topics (e.g., advocacy, clinical decision-making, nutrition). Benchmarking reports will only be useful with reliable, accurate data; thus, sound curriculum maps are needed to inform schools' survey responses.
  • Students need it . The curriculum map helps students understand the trajectory of their learning, where they are, where they are going, and where they have been. It helps put their learning into context, so how their learning fits into long-term goals is clear. 
  • Faculty need it. With numerous faculty involved in the educational enterprise, some with very brief touches with students, it is critical that faculty know where all the "touches" on a given topic occur, so that they understand what students have already experienced and what they need to be prepared for. Faculty need to know how the content is structured and delivered, what gaps or unintentional redundancies may exist, and how their areas of expertise fit into the students' educational objectives.
  • The school needs it. Whether it's responding to inquiries, outlining expectations for students, ensuring content has a logical sequence and progressive complexity, ensuring content is aligned and integrated where appropriate, supporting continuous quality improvement and program evaluation, or providing reports to the curriculum committee, a healthy curriculum map is the foundation of educational program alignment and sound decision-making.

Using CI/Maps to Improve Curriculum

  • Ensuring compliance with accreditation standards
  • Identifying gaps and redundancies (see the sketch after this list)
  • Clarifying course/learning objectives
  • Improving instructional alignment
  • Supporting course-level reviews
  • Promoting curriculum research
  • Facilitating instructional collaboration amongst faculty
  • Tracking student progress
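As a hypothetical illustration of the "gaps and redundancies" use, the sketch below treats the inventory as a mapping from courses to the topics they cover, then flags required topics with no coverage and topics covered in more than one course for review. The course names, topics, and required-topic list are invented for the example.

    # Hypothetical sketch: flag curriculum gaps and redundancies from a simple inventory.
    curriculum_inventory = {
        "Foundations I":  {"anatomy", "nutrition", "clinical decision-making"},
        "Foundations II": {"pharmacology", "nutrition"},
        "Clerkship A":    {"clinical decision-making", "advocacy"},
    }
    required_topics = {"anatomy", "nutrition", "advocacy", "health policy"}

    covered = {topic for topics in curriculum_inventory.values() for topic in topics}
    gaps = required_topics - covered  # required topics no course addresses

    # Topics taught in more than one course; not necessarily bad, but worth reviewing
    # for unintentional redundancy versus deliberate reinforcement.
    redundancies = {
        topic: [course for course, topics in curriculum_inventory.items() if topic in topics]
        for topic in covered
        if sum(topic in topics for topics in curriculum_inventory.values()) > 1
    }

    print("Topics with no coverage:", gaps)
    print("Topics covered in more than one course:", redundancies)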

Gathering Feedback

We will use this page to communicate more information related to the Curriculum Inventory (CI), curriculum mapping, use of Leo by DaVinci Ed, and more. In our effort to develop a plan for streamlined data collection/curation, we are gathering formative feedback from course leaders that will define and shape the CI structure and how we will use it collectively. Please complete and submit the form below to share your ideas.

Monitoring, Evaluation and Learning Lead - Inclusive Quality Education Project

Date: 9 Apr 2024

Company: Plan International

The advertising for this role is now closed


Monitoring, Evaluation & Learning Manager, USDA McGovern-Dole Food for Education and Child Nutrition Program - Malawi

  • Counterpart International

Counterpart International is seeking a Monitoring, Evaluation & Learning (MEL) Manager for its anticipated USDA-funded McGovern-Dole Food for Education Project in Malawi. Under the supervision of the Chief of Party, the MEL Manager will be responsible for developing Counterpart's systems, skills and approach to planning, monitoring, evaluation and learning. He/she will promote, enable and improve our culture and practice of measuring progress against plans, and of learning from evidence of the successes, or otherwise, of our work. He/she will champion monitoring, evaluation, accountability and learning (MEL) within Counterpart International Malawi, providing leadership and technical oversight.

The location of this position has not been determined, but is likely to be based in Blantyre. The position will be engaged through a host-country national (Malawian) contract with no assistance for relocation to Malawi and/or work permit.

Responsibilities

The MEL Manager will work under the direction of the Chief of Party to manage all aspects of the Monitoring, Evaluation, Accountability and Learning (MEL) plan for the McGovern-Dole Food for Education and Child Nutrition project. The Manager will work to develop MEL strategies (in line with the approved MEL plan) outlining M&E systems for data collection, targeting and monitoring of USDA indicators, knowledge management, impact evaluations, learning activities and reporting. He/she will ensure that the project delivers high quality programming and reporting, while continuously improving the impact of its programming and advancing Counterpart's technical vision. The Manager will be responsible for:

  • Developing and overseeing monitoring systems to report on program implementation.
  • Designing research and evaluation efforts to assess the impact of the McGovern Dole program.
  • Improving data systems -- collection, storage, use -- to increase the efficiency and scalability of M&E systems.
  • Acting as the key technical lead for MEL quality design, including tools, methods and budget, tailored to the domain, context and technical requirements of project design and proposal development.
  • Exploring and providing recommendations for innovative technologies to streamline monitoring, evaluation and other operational functions.
  • Promoting learning with project staff and partner teams and facilitating Collaboration, Learning and Adaptation (CLA) strategies, including oversight of partner learning and research staff.
  • Managing and implementing all MEL activities throughout the relevant project cycles -- project design, start-up, implementation and closure -- to ensure efficient and effective implementation in line with Counterpart's MEL policies and procedures, donor requirements, program quality principles and standards, best practice and budget.
  • Ensuring project team and partner staff use appropriate MEL systems and tools, including adaptive learning and management.
  • Coordinating with Counterpart-led consortium partners to ensure quality implementation of project performance and impact evaluations, in line with the donor-approved evaluation plan and terms of reference.
  • Ensuring the quality and proper functioning of data management systems and practices, with specific attention to compliance with data security and privacy regulations and standards.
  • Analyzing MEL data (both qualitative and quantitative) and working with the Chief of Party to integrate analysis, reflection, interpretation and use of data into ongoing project activities for evidence-based decision-making.
  • Preparing high-quality reports for both internal and external audiences, including donor and host government.
  • Building an organizational knowledge base through research and partnerships.
  • Developing and maintaining partnerships with international academic and research institutions to provide additional technical support for evaluation.
  • Effectively managing talent and supervising the Field M&E Manager and data analysts in data collection and analysis and ensuring that M&E activities are completed on schedule.
  • Mentoring and strategically adapting individual development plans, contributing to the recruitment process for MEL staff, and completing performance management for direct reports.

Qualifications

  • Master's degree in international development, international relations, statistics or education required. Additional experience may substitute for some of the required education.
  • Minimum 8 years' MEL experience required, particularly with systems mapping, activity monitoring, data collection, training data quality assessments, third-party monitoring and evaluation, and adaptive learning and management methods.
  • Minimum 7 years' relevant field experience in coordinating or managing light to moderately complex projects, preferably with an international NGO.
  • Knowledge of the main quantitative and qualitative monitoring methodologies and proven ability to design monitoring instrumentation tools.
  • Advanced data analysis presentation and report writing skills combined with a proactive, energetic approach to problem solving.
  • Experience with a variety of partner organizations is an asset.
  • Previous experience working with USAID, USDA, or other international donors required.
  • High level of proficiency in Windows, Excel, and/or statistical software (SPSS, STATA).
  • Professional command of written and spoken English required.
  • Malawian national, or significant experience in Malawi with Malawian residency.

How to apply

To apply, please use the following link:

https://careers2-counterpart.icims.com/jobs/1815/monitoring%2c-evaluation-%26-learning-manager%2c-usda-mcgovern-dole-food-for-education-and-child-nutrition-program--malawi/job?mode=view

Counterpart is an equal opportunity/affirmative action employer and does not discriminate in its selection and employment practices. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, political affiliation, sexual orientation, gender identity, marital status, disability, protected veteran status, genetic information, age, or other legally protected characteristics.


Only finalists will be contacted. No phone calls, please.


Assistant, Associate or Professor (CHS) - OB/Gyn Education Research and Development

  • Madison, Wisconsin
  • SCHOOL OF MEDICINE AND PUBLIC HEALTH/OBSTETRICS & GYNECOLOGY-GEN
  • Instructional Category
  • Faculty-Full Time
  • Partially Remote
  • Opening at: Apr 16 2024 at 16:35 CDT

Job Summary:

The Assistant, Associate, or Professor (CHS Track), OB/Gyn Education Research and Development will play an integral role in scholarship, assessment, and innovation in curriculum and instruction. The faculty member will implement educational programs designed to equip faculty, residents and students with the knowledge and skills necessary to be effective teachers and learners, as well as foster and facilitate educational research within the department. The duties of this position include research and grant writing to support medical education program development, program evaluation, faculty development, and implementation of innovative curricula, methods of instruction, and assessment of learning.

Responsibilities:

The chosen applicant will participate in administrative and committee work to support the clinical and scholarly missions of UW Health and the School of Medicine and Public Health. An essential part of these duties will be working in a collegial relationship with other faculty members. Responsibilities may include but are not limited to:

1) Educational Research (50%)

  • Collaborate on and lead initiatives to promote scholarship resulting from program innovations while developing an independent research program
  • Take on a leadership role in advancing the progress of education research and development projects by supporting data collection and analysis, writing abstracts and papers for publication, and composing grant applications for project funding
  • Acquire grants which support medical education program development and research
  • Conduct systematic literature reviews in support of education research projects
  • Support the dissemination of scholarly work through publication
  • Collaborate on the design and analysis of qualitative and quantitative education research studies and the design of mixed-method studies (including a variety of qualitative approaches)
  • Collaborate with students, residents, fellows, and faculty in developing educational scholarship, presentations, and publications

2) Educational Program Development and Instructional Design (50%)

  • Develop educational processes that align with the needs of learners (including medical students, residents, fellows, graduate students and faculty) and institute effective, theory- and research-based training strategies
  • Collaborate with other faculty on educational innovation opportunities
  • Participate in meetings with faculty and researchers engaged in educational innovation
  • Establish relationships within and across the institution to advance the quality and reputation of the department's educational work
  • Develop departmental teaching skills by creating trainee and faculty development programs in education
  • Advise faculty, residents, and fellows in the development and implementation of highly effective evaluation methods

Institutional Statement on Diversity:

Diversity is a source of strength, creativity, and innovation for UW-Madison. We value the contributions of each person and respect the profound ways their identity, culture, background, experience, status, abilities, and opinion enrich the university community. We commit ourselves to the pursuit of excellence in teaching, research, outreach, and diversity as inextricably linked goals. The University of Wisconsin-Madison fulfills its public mission by creating a welcoming and inclusive community for people from every background - people who as students, faculty, and staff serve Wisconsin and the world. For more information on diversity and inclusion on campus, please visit: Diversity and Inclusion

Education: PhD required. Preferred focus in Educational Psychology, Learning Sciences, or a related field.

Qualifications:

Experience with publishing in medical education journals and applying for grant funding in medical education required. Experience with presenting at national medical education meetings and participating on medical education committees at a national level preferred. Candidate should also have experience designing and teaching medical student courses. For an appointment at Associate Professor or Professor rank on CHS Track, candidates will meet criteria established by the department and as outlined in the School of Medicine and Public Health guidelines for promotion or appointment to Associate or Professor on the CHS Track.

Full Time: 100%. This position may require some work to be performed in-person, onsite, at a designated campus work location. Some work may be performed remotely, at an offsite, non-campus work location.

Appointment Type, Duration:

Ongoing/Renewable

Anticipated Begin Date:

July 1, 2024

Minimum $56,302 annual (12 months), depending on qualifications. Employees in this position can expect to receive benefits such as generous vacation, holidays, and sick leave; competitive insurances and savings accounts; and retirement benefits. Benefits information can be found at ( https://hr.wisc.edu/benefits/ ). SMPH Academic Staff Benefits flyer: ( https://uwmadison.box.com/s/r50myohfvfd15bqltljn0g4laubuz7t0 )

How to Apply:

The deadline for assuring full consideration is May 18, 2024; however, this position will remain open and applications may be considered until it is filled. Your application must be received through the Jobs at UW portal to be considered as a candidate; applications submitted outside of this system will not be considered. To apply for this position, please click on the "Apply Now" button and use the online UW Job Application system to submit the following:

  • Current Curriculum Vitae (CV)
  • A cover letter briefly describing your qualifications and experience
  • List of contact information for three (3) references, including your current/most recent supervisor (references will not be contacted without prior notice)

Kirsten Gragg, [email protected], 608-265-3357. Relay Access (WTRS): 7-1-1. See RELAY_SERVICE for further information.

Official Title:

Professor (CHS)(IC014) or Associate Professor (CHS)(IC015) or Assistant Professor (CHS)(IC016)

Department(s):

A53-MEDICAL SCHOOL/OB-GYN/OB-GYN

Employment Class:

Academic Staff-Renewable

Job Number:

The University of Wisconsin-Madison is an equal opportunity and affirmative action employer.

