Evaluation Essentials is an indispensable text that offers an introduction to program evaluation. Examples of program descriptions from a variety of sectors including public policy, public health, non-profit management, social work, arts management, education, international assistance, and labor illustrate the book's step-by-step approach to the process and methods of program evaluation. Perfect for students as well as new evaluators, Evaluation Essentials offers a comprehensive foundation in the core concepts, theories, and methods of program evaluation.
Beth Osborne Daponte, a leading authority in program evaluation, clearly shows how to form evaluation questions, describe programs using program theory and program logic models, understand causation as it relates to evaluation, use quasi-experimental design, and create meaningful outcome measures. The book offers appropriate approaches to collecting data and introduces readers to survey design and sampling. Daponte explores what it means to say that a program "causes" change to occur. Evaluation Essentials provides a rigorous introduction to quasi-experimental design, helps determine which designs are most appropriate for given situations, and explains the trade-offs between designs.
The Evaluation Framework.
TWO: DESCRIBING THE PROGRAM.
Motivations for Describing the Program.
Common Mistakes Evaluators Make When Describing the Program.
Conducting the Initial Informal Interviews.
Pitfalls in Describing Programs.
The Program Is Alive, and So Is Its Description.
The Program Logic Model.
Challenges of Programs with Multiple Sites.
Program Implementation Model.
Program Theory and Program Logic Model Examples.
THREE: LAYING THE EVALUATION GROUNDWORK.
Framing Evaluation Questions.
Insincere Reasons for Evaluation.
Who Will Do the Evaluation?
Confidentiality and Ownership of Evaluation Ethics.
Building a Knowledge Base from Evaluations.
High Stakes Testing.
The Evaluation Report.
Necessary and Sufficient.
Types of Effects.
Permanency of Effects.
Functional Form of Impact.
FIVE: THE PRISMS OF VALIDITY.
Statistical Conclusion Validity.
Small Sample Sizes.
Unreliable Treatment Implementation.
Threat of History.
Threat of Maturation.
Diffusion of Treatments.
Compensatory Equalization of Treatments.
Compensatory Rivalry and Resentful Demoralization.
SIX: ATTRIBUTING OUTCOMES TO THE PROGRAM: QUASI-EXPERIMENTAL DESIGN.
Frequently Used Designs That Do Not Show Causation.
Posttest-Only with Nonequivalent Groups.
Designs That Generally Permit Causal Inferences.
Untreated Control Group Design with Pretest and Posttest.
Delayed Treatment Control Group.
Different Samples Design.
Nonequivalent Observations Drawn from One Group.
Nonequivalent Groups Using Switched Measures.
Time Series Designs.
SEVEN: COLLECTING DATA.
Ways to Collect Survey Data.
Anonymity and Confidentiality.
Using Evaluation Tools to Develop Grant Proposals.
Hiring an Evaluation Consultant.
Appendix A: American Community Survey.