Evaluating Public and Community Health Programs
Evaluating Public and Community Health Programs combines an introduction to public and community health program evaluation with a detailed survey of methods in community assessment, planning, program design, quantitative and qualitative data collection, data analysis, and reporting of findings. The book presents an approach built on the two evaluation frameworks most common in public and community health: the Donaldson three-step program theory-driven evaluation model and the CDC's six-step Framework for Program Evaluation in Public Health. The author emphasizes practical, ongoing evaluation strategies that involve all program stakeholders, not just evaluation experts, and presents a simple, effective, standards-based four-step model that produces rich and useful results. The book's resources (scenarios, worksheets, and guidelines) can be used throughout the planning, implementation, and evaluation process. In addition, each chapter includes a list of learning objectives, key terms, and ideas for review, as well as summaries and discussion questions that reinforce each chapter's lessons.
1 An Introduction to Public and Community Health Evaluation.
The Links Among Community Assessment, Program Implementation, and Evaluation.
Overview of Evaluation.
Community-Based Participatory Research.
The Participatory Model for Evaluation.
Cultural Considerations in Evaluation.
2 The Community Assessment: An Overview.
The Ecological Model.
Reviewing the Scientific Literature.
Stakeholder Participation in Community Assessments.
3 Developing Initiatives: An Overview.
The Organization's Mission.
Planning the Initiative.
Goals and Objectives.
The Initiative Activities.
Using Existing Evidence-Based Programs.
The Program's Theory of Change.
The Logic Model Depicting the Theory of Change.
Criteria for Successful Initiatives.
4 Planning for Evaluation: Purpose and Processes.
The Timing of the Evaluation.
The Purpose of Evaluation.
Establishing the Contract for Evaluation.
The Evaluation Team.
Creating and Maintaining Effective Partnerships.
Managing the Evaluation Process.
Factors That Influence the Evaluation Process.
5 Designing the Evaluation: Describing the Program.
Justifications for the Initiative.
The Initiative's Goals, Objectives, and Activities.
The Initiative's Theory of Change and Logic Model.
6 Designing the Evaluation: Determining the Evaluation Questions and the Evaluation Design.
Bases for Selecting the Evaluation Questions.
Approaches to Selecting the Evaluation Questions.
Types of Evaluations.
7 Collecting the Data: Quantitative.
Choosing a Data-Collection Approach.
Designing Survey Instruments.
Institutional Review Boards.
8 Analyzing and Interpreting the Data: Quantitative.
Analyzing and Reporting Quantitative Data.
Steps in Quantitative-Data Analysis and Interpretation.
9 Collecting the Data: Qualitative.
Ensuring Validity and Reliability.
Document and Record Review.
Geographic Information Systems.
Training Data Collectors.
Managing and Storing Qualitative Data.
10 Analyzing and Interpreting the Data: Qualitative.
Analyzing Qualitative Data.
Interpreting the Data and Reaching Conclusions.
The Role of Stakeholders.
11 Reporting Evaluation Findings.
The Content of the Report.
The Timing of the Report.
The Audience for the Report.
The Format of the Report.
12 Case Study.
The Community Assessment.
Design the Evaluation.
Collect the Data.
Analyze and Interpret the Data.
Report the Results.
Muriel J. Harris, PhD, MPH, is an assistant professor at the University of Louisville School of Public Health and Information Sciences. Dr. Harris was a 2004 co-recipient of the CDC/ATSDR Honor Award and in 2000 was awarded membership in the national Delta Omega Honorary Society in Public Health. In 2008 she won the Delta Omega Innovative Curriculum Award for her program evaluation course.