Population Health for Nurses

20.3 Evaluation Strategies

Learning Outcomes

By the end of this section, you should be able to:

  • 20.3.1 Describe an intervention evaluation plan for public health services for individuals, families, and groups.
  • 20.3.2 Utilize a systematic process to direct the evaluation of public health interventions.
  • 20.3.3 Evaluate outcomes of action plans and interventions, considering implications for practice.

Evaluation of community programs occurs throughout implementation and at the conclusion of a program to improve processes and outcomes. The evaluation provides evidence for decisions regarding the program, such as whether the program should continue, whether revisions are needed, or whether the program should be discontinued. Additionally, evaluation may be mandated by external funders, driven by a need to determine program effectiveness, or both (Centers for Disease Control and Prevention [CDC], 2012). The program team develops a plan for evaluation during the planning phases of community health programming, determining the evaluation methods before beginning intervention activities. Using a systematic process ensures all components of the program are evaluated in an evidence-based way. Whatever process and methods are used, the program team evaluates whether the goals and objectives of the program are met. If a health program fails to meet its goals and objectives or the community’s needs, the team should carefully consider the program’s future.

Evaluation Planning for Public Health Programs

Program evaluation is the ongoing, systematic collection, analysis, and use of data to examine program efficacy, effectiveness, and efficiency to make decisions about current and future health programs (CDC, 2012; Issel & Wells, 2018). Efficacy is the “maximum potential effect under ideal conditions” (Issel & Wells, 2018, p. 222). Ideal conditions are difficult to create in community health programs, so efficacy is rarely evaluated. Effectiveness is the community program’s ability to achieve the desired outcome in real-life settings (Issel & Wells, 2018). It is usually measured using statistical data and comparisons to benchmarks. Efficiency occurs when the effect of program interventions, or outputs, is greater than the resources, or inputs, used to provide the intervention (Issel & Wells, 2018). The program team plans for evaluation in order to:

  • monitor progress toward program goals and objectives,
  • decide if program activities and components are leading to the desired results,
  • make comparisons among program participants and other populations,
  • provide rationale for further funding and support,
  • ensure continuous quality improvement,
  • verify program maintenance and efficient use of resources,
  • document accountability that the program is fulfilling its purpose and meeting goals, and
  • justify sustaining, revising, or discontinuing the program (CDC, 2021).
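Efficiency, defined earlier as program outputs exceeding the inputs used to produce them, can be illustrated with a simple cost-per-outcome calculation. This is a minimal sketch; every figure below is hypothetical and invented for illustration, not drawn from any actual program:

```python
# Hypothetical sketch of a program efficiency comparison: relating inputs
# (total program cost) to outputs (participants reached, outcomes achieved).
# All numbers are invented for illustration.

def cost_per_outcome(total_cost: float, outcomes_achieved: int) -> float:
    """Return the input cost required to produce one unit of output."""
    return total_cost / outcomes_achieved

total_cost = 12_500.00       # inputs: staff time, materials, venue (dollars)
participants_reached = 250   # output: number of participants
new_helmet_users = 100       # output: participants newly wearing helmets

print(f"Cost per participant: ${cost_per_outcome(total_cost, participants_reached):.2f}")
print(f"Cost per new helmet user: ${cost_per_outcome(total_cost, new_helmet_users):.2f}")
```

A lower cost per outcome than a comparable program, or than a prior cycle of the same program, would suggest the intervention’s outputs justify its inputs.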

As noted, the program team plans for evaluation during the program planning process and prior to implementing interventions. Steps to planning for program evaluation include the following:

  1. Identify individuals and groups to plan and assist with evaluation.
  2. Meet with the program team to determine how to evaluate the program.
  3. Examine evaluation types and processes used in the literature.
  4. Choose the type of evaluation to be used and a systematic process that aligns with evaluation needs and program goals.
  5. Determine what program goals and objectives will be measured, how they will be measured, who will be responsible for collecting data, and what resources are available.
  6. Write the program evaluation plan using the types of evaluation and chosen systematic process.

The program team determines which type of community health program evaluation will be conducted. The most common types are formative, process, outcome, and impact evaluations. The choice of type depends upon program activities, organizational needs, funder requirements, and the program’s developmental stage.

Formative Evaluation

Formative evaluation occurs during program development to confirm that program interventions are feasible and appropriate (CDC, 2014). Most often, formative evaluation occurs during new program development or when an existing program is revised. Formative evaluation includes a community health needs assessment as discussed in Assessment, Analysis, and Diagnosis.

Process Evaluation

Process evaluation focuses on program implementation processes to determine if the program has been implemented efficiently and as planned. It occurs throughout program implementation, allowing for mid-program revisions, and following the program, providing direction for future program improvement. As such, process evaluation should occur to some extent for all community health programs. During process evaluation, the program team describes the program’s inputs and outputs. Program inputs are the resources required to carry out the program; examples are the number and experience of personnel, volunteers, informational and technological resources, financial resources and budget, physical location and resources, transportation needs, leadership, time, marketing needs, and other resources needed to complete activities (Issel & Wells, 2018). Program outputs are the things accomplished using inputs. Examples of outputs are population reach; number of participants; intervention dose and amount; equipment or incentives distributed; partnerships developed; staff and volunteer hours worked; extent to which the budget was followed; quality of informational, technological, and physical resources; and staff, volunteer, and participant satisfaction (Issel & Wells, 2018). Most often, the team describes inputs and outputs using qualitative data, but they may use some quantitative data. The program team explains what and how much was accomplished during the program and determines strengths, areas for improvement, and recommendations for ongoing and future program implementation.

Outcome Evaluation

Outcome evaluation assesses the extent to which the program achieves its objectives within the target population and its effect on the target population’s knowledge, attitudes, and behaviors (CDC, 2014). This is an evaluation of the SMART objectives developed during the program planning stages (Figure 20.2). Planning Health Promotion and Disease Prevention Interventions guides the nurse in the development of SMART objectives. Outcome evaluation should be completed for all community health programs regardless of developmental level. Typically, outcome evaluations are quantitative and include short-, medium-, and long-term measures of change. At times, the team may use qualitative data to support quantitative results. It is recommended that process and outcome evaluation occur simultaneously because, if a program objective is not met, the cause could be implementation process issues (CDC, 2014).

Figure 20.2 During outcome evaluation, the team assesses the extent to which the program’s SMART objectives have been met within the target population. (credit: “Maryland Rural Health Day Calvert County Health Department” by Anthony DePanise/Flickr, CC BY 2.0)

Impact Evaluation

Impact evaluation determines the degree to which the community health program has achieved its primary goal (CDC, 2014). It may occur during an existing program, if appropriate, and at the end of a program, and it most often uses data collected over the long term, including community health assessment data and benchmarks. Most impact evaluations are quantitative. For example, an evaluation might compare pre-program and post-program morbidity, mortality, and health behavior data for the target population and community as a whole.

Theory in Action

Types of Program Evaluation

Various program evaluation types are available to determine if a community health program has been effectively and efficiently implemented. This video describes formative, process, impact, and outcome evaluations.

Watch the video, and then respond to the following questions.

  1. How do the nurse and program team determine which type of program evaluation should be used?
  2. What evaluation designs are used to conduct program outcome evaluations?

Systematic Processes to Direct Program Evaluation

The nurse in collaboration with the program planning team chooses an evaluation framework or tool to guide evaluation planning. Frameworks and tools provide systematic, evidence-based resources to organize important program evaluation components. Commonly used frameworks and tools include the CDC Framework for Program Evaluation in Public Health (CDC, 1999), Public Health Ontario’s steps for evaluating health promotion programs (Ontario Agency for Health Protection and Promotion [OAHPP] et al., 2016), and logic models.

CDC’s Framework for Program Evaluation in Public Health

The CDC Framework for Program Evaluation in Public Health is commonly used to summarize elements of program evaluation to assign value and judge a community health program based on evidence. The program team assigns value related to program quality, cost-effectiveness, and significance of the health problem. The framework contains two elements: six steps of program evaluation and standards to assess the quality of evaluation (Figure 20.3) (CDC, 1999). While the program team does not need to conduct the evaluation in a linear sequence, they must thoroughly address each step. Table 20.5 provides examples of activities that occur during each step of program evaluation.

The CDC’s Framework for Program Evaluation is presented as 4 circles nested inside each other. The outermost circle says Steps in Evaluation. The next circle shows the following steps connected by arrows: Engage stakeholders, Describe the program, Focus the evaluation design, Gather credible evidence, Justify conclusions, Ensure use and share lessons learned. The next circle says Standards for Evaluation. The innermost circle is divided into 4 quadrants that say Utility, Feasibility, Propriety, and Accuracy.
Figure 20.3 The Framework for Program Evaluation in Public Health is commonly used to evaluate public health programs. (See CDC, 1999; attribution: Copyright Rice University, OpenStax, under CC BY 4.0 license)
Steps of Program Evaluation (with Examples of Activities)
Engage interested parties
  • Invite community members and partners identified during the community assessment and program planning process to assist with program evaluation
  • Make a list of what evaluation data would be useful to each interested party and partner
Describe the program
  • Write the first portion of the evaluation plan, which includes program mission, objectives, activities, intended outcomes, and program maturity
  • Create a logic model, if desired, that includes program inputs, activities, outputs, and short-, intermediate-, and long-term outcomes
Focus the evaluation design
  • Choose the evaluation design (process, outcome, and/or impact evaluation), considering the program’s purpose and maturity
Gather credible evidence
  • Determine the quantity and quality of data to collect, using multiple data sources to increase the accuracy of evaluation results
  • Prepare data collection methods, which could include surveys, interviews, focus groups, retrospective document reviews, and observation
  • Collect data from participants, staff, volunteers, and other relevant parties, and use secondary data sources for benchmarking
Justify conclusions
  • Analyze and review results, comparing program objectives, benchmarks, literature, and previous implementations, if applicable
  • Summarize strengths and areas for improvement of the program
  • Meet with partners to review data and make conclusions regarding the program
Ensure use and share lessons learned
  • Share results with program partners and the community
Table 20.5 Activities Completed During Program Evaluation (See CDC, 1999.)

The program team incorporates the standards of utility, feasibility, propriety, and accuracy throughout program evaluation. Utility standards include determining who needs evaluation information, what information they need, the evaluation’s purpose, and how the information will be used (CDC, 1999). Feasibility standards involve considering resources available to conduct program evaluation, including money, time, and effort (CDC, 1999). Propriety standards confirm that program evaluation is fair and ethical (CDC, 1999). Accuracy standards substantiate that program evaluation methods, data, and documentation are appropriate and contain accurate information (CDC, 1999). The CDC (2011) provides a workbook to guide program teams through the evaluation process.

Public Health Ontario’s Steps for Evaluating Health Promotion Programs

Public Health Ontario (OAHPP et al., 2016), a scientific and technical public health organization in Ontario, Canada, recommends 10 systematic steps to evaluate health promotion programs (Table 20.6). As in other public health evaluation frameworks, the program team conducts the first steps of program evaluation planning concurrently with program development. The program team engages interested parties, develops the program goals and objectives, determines the target population, creates program strategies and activities, and locates program resources. The organization recommends developing a logic model that represents the program, summarizing its main components and aligning evaluation questions with program activities (OAHPP et al., 2016). Planning Health Promotion and Disease Prevention Interventions discusses using logic models in health program planning. Both process and outcome evaluation measures should be used. The program team plans to gather data using quantitative and qualitative measures to have substantial information to determine program effectiveness and make decisions regarding health programs. The program team shares findings with interested parties to solicit recommendations and make program decisions. An introductory workbook (OAHPP et al., 2016) is available to assist the program team through evaluation planning and through gathering, analyzing, and reporting program data.

Planning
  • Step 1: Clarify the program
  • Step 2: Engage interested parties
  • Step 3: Assess resources and evaluability
  • Step 4: Determine your evaluation questions
  • Step 5: Determine appropriate methods of measurement and procedures
  • Step 6: Develop an evaluation plan
Implementation
  • Step 7: Collect data
  • Step 8: Process data and analyze results
Utilization
  • Step 9: Interpret and disseminate the results
  • Step 10: Apply evaluation findings
For more information, see Evaluating health promotion programs: introductory workbook.
Table 20.6 Public Health Ontario’s 10 Steps to Systematically Evaluate Public Health Programs

Logic Models

Logic models are tools used to visually present the relationships among resources that are used to implement a program, the activities planned, and the intended results of a program (W. K. Kellogg Foundation, 2004). Logic models are also used to map evaluation questions and indicators. If the program team did not create a logic model during the program planning process, it is recommended that the program team create one to assist in evaluation efforts. Planning Health Promotion and Disease Prevention Interventions describes how to create a health program logic model. After creating a logic model, the team can use it to decide on process and/or outcome evaluation methods and link evaluation questions to logic model components.
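A logic model’s link between program components and evaluation questions can be sketched as a simple mapping. This is a minimal sketch; the component names and questions below are hypothetical placeholders, not taken from any specific program:

```python
# Hypothetical sketch: mapping logic model components to an evaluation type
# and an evaluation question. All names and questions are illustrative only.

logic_model = {
    "inputs":     ("process", "Were budgeted resources sufficient?"),
    "activities": ("process", "Were sessions delivered as planned?"),
    "outputs":    ("process", "How many participants were reached?"),
    "short_term": ("outcome", "Did participant knowledge increase?"),
    "long_term":  ("outcome", "Did participant behavior change?"),
    "impact":     ("impact",  "Did community health indicators improve?"),
}

# Group questions by evaluation type, as a team might when deciding which
# evaluation methods to plan for.
by_type: dict[str, list[str]] = {}
for component, (eval_type, question) in logic_model.items():
    by_type.setdefault(eval_type, []).append(question)

for eval_type, questions in by_type.items():
    print(f"{eval_type} evaluation questions: {questions}")
```

Grouping questions this way mirrors how the team links logic model components to process, outcome, or impact evaluation methods.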

Theory in Action

Logic Models in Program Planning and Evaluation

Logic models are used during program planning, implementation, and evaluation. This video demonstrates how to develop a logic model and provides an example using a parent training program.

Watch the video, and then respond to the following questions.

  1. What are the components of a logic model?
  2. How does the nurse connect the logic model to program evaluation and evaluation methods?
  3. Using what you have learned regarding types of program evaluation, which components of the logic model align with process evaluation, which components align with outcome evaluation, and which components align with impact evaluation?

Evaluating Outcomes of Action Plans and Interventions

A community health program’s outcomes should always be evaluated to determine if program goals and objectives have been met. Data regarding program interventions and activities should be evaluated and analyzed individually and as a whole. The SMART objectives and logic model written during the planning phase, as discussed in Planning Health Promotion and Disease Prevention Interventions, are used to develop evaluation questions and determine data collection techniques.

Data collection techniques include questionnaires to measure knowledge, attitudes, or behavior; observation; interviews; focus groups; and epidemiologic data. The team may collect data from participants, staff, volunteers, and community partners. Participants should always be evaluated to determine knowledge, attitude, or behavior changes. The team may collect pre-implementation or baseline data from epidemiological data, community health assessments, and participant surveys prior to program interventions.

Short-term objectives are often measured immediately following program intervention. Intermediate objectives are usually measured within 3 to 6 months following the program. Long-term objectives are usually measured at least one year following the program. The team evaluates impact using community health data. Most often, the nurse and program team use annual epidemiological data or community health assessment data, which is collected at minimum every three years. Benchmarks help determine the impact of programs. For example, in Planning Health Promotion and Disease Prevention Interventions, the nurse and Kenton Hardin County Family Bike Program (KHCFBP) team determined evaluation questions and data techniques from the outcome and impact sections of the logic model. Table 20.7 describes the outcome evaluation of the program.

Each entry below lists the outcome as stated on the logic model, the evaluation question, and the data collection techniques.

Short term—Increase participant bike safety knowledge post-program
Evaluation question: What was the effect of the KHCFBP on participants’ bike safety knowledge?
  • Pre-survey: five questions to determine baseline bike safety knowledge
  • Post-survey: the same questions at completion of activities to determine change in knowledge

Short term—Increase participant bike helmet use 30 days post-program
Evaluation question: What was the effect of the KHCFBP on participants’ report of bike helmet use?
  • Pre-survey: one Likert-scale question asking frequency of bike helmet use to determine baseline
  • 30-day post-survey: same question asking frequency of bike helmet use
  • Pre-survey: one question asking if the participant owned a bike helmet

Short term—Increase participant biking frequency 30 days post-program
Evaluation question: What was the effect of the KHCFBP on participants’ report of bike riding?
  • Pre-survey: questionnaire asking days per week, average time per day, and intensity of biking for leisure and commuting to determine baseline
  • 30-day post-survey: same questionnaire

Long term—Increase incidence of biking in Hardin County over the next 5 years
Evaluation question: Did the incidence of bike riding increase in Hardin County?
  • 2017 CHA data to determine community baseline
  • 2023 CHA data to be used for comparison
  • Healthy People data to use for benchmarking

Impact—Increase physical activity of Hardin County residents
Evaluation question: Did physical activity of Hardin County residents increase?
  • 2017 CHA data to determine community baseline
  • 2023 CHA data to be used for comparison
  • Healthy People data and National Physical Activity Guidelines to use for benchmarking
Table 20.7 KHCFBP Evaluation Questions and Data Collection Techniques (See Hunsicker, 2020.)

The program team analyzes data after collection. Pre-implementation and post-implementation data are compared. Program evaluation data are also compared to similar program evaluations and national benchmarks. The program team uses this information to evaluate the program’s strengths and weaknesses, determine whether it has achieved desired outcomes, and examine its efficacy, effectiveness, and efficiency. The program team develops recommendations regarding the program and shares findings and recommendations with community members and partners. Ongoing evaluation of community health programs is necessary to ensure program success, support program continuation, and confirm that community needs are being met.
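The pre- and post-implementation comparison described above can be sketched in a few lines of Python. The survey scores and the improvement benchmark below are hypothetical, invented for illustration only:

```python
# Hypothetical sketch of a pre/post outcome comparison. The scores are
# invented results from a five-question knowledge survey (0-5 correct)
# answered by the same participants before and after a program.
from statistics import mean

pre_scores = [2, 3, 1, 4, 2, 3, 2, 3]    # baseline knowledge scores
post_scores = [4, 5, 3, 5, 4, 4, 3, 5]   # scores after the program

pre_mean, post_mean = mean(pre_scores), mean(post_scores)
pct_change = (post_mean - pre_mean) / pre_mean * 100

print(f"Mean pre-score: {pre_mean:.2f}, mean post-score: {post_mean:.2f}")
print(f"Improvement: {pct_change:.1f}%")

# Compare against a hypothetical SMART objective: mean knowledge scores
# should increase by at least 25% post-program.
objective_met = pct_change >= 25
print("SMART objective met:", objective_met)
```

In practice the team would also compare such results against external benchmarks and similar program evaluations before drawing conclusions.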


© Apr 26, 2024 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License. The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.