Evaluation Essentials: Methods for Conducting Sound Research (Research Methods for the Social Sciences)

  • ID: 2243949
  • Book
  • 326 Pages
  • John Wiley and Sons Ltd

Evaluation Essentials

Evaluation Essentials is an indispensable introduction to program evaluation. Program descriptions drawn from a variety of sectors, including public policy, public health, non-profit management, social work, arts management, education, international assistance, and labor, illustrate the book's step-by-step approach to the process and methods of evaluation. Perfect for students as well as new evaluators, Evaluation Essentials offers a comprehensive foundation in the field's core concepts, theories, and methods.

Beth Osborne Daponte, a leading authority in program evaluation, clearly shows how to frame evaluation questions, describe programs using program theory and program logic models, understand causation as it relates to evaluation, use quasi-experimental design, and create meaningful outcome measures. The book covers appropriate approaches to collecting data and introduces readers to survey design and sampling. Daponte explores what it means to say that a program "causes" change to occur, provides a rigorous introduction to quasi-experimental design, helps readers determine which designs are most appropriate for given situations, and explains the trade-offs between designs.
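
For readers new to the subject, the quasi-experimental designs catalogued in Chapter Six are conventionally written in the O/X shorthand of Campbell and Stanley, in which O denotes an observation (a measurement of the outcome) and X denotes exposure to the program. As a rough sketch of that convention (not reproduced from the book itself), the untreated control group design with pretest and posttest looks like this:

    Treatment group:   O   X   O
    Comparison group:  O       O

Time runs left to right, and each row represents one group; comparing the two groups' pretest-to-posttest changes is what supports a causal inference.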

Table of Contents

Figures and Tables.

Preface.

Acknowledgments.

The Author.

ONE: INTRODUCTION.

Learning Objectives.

The Evaluation Framework.

Summary.

Key Terms.

Discussion Questions.

TWO: DESCRIBING THE PROGRAM.

Learning Objectives.

Motivations for Describing the Program.

Common Mistakes Evaluators Make When Describing the Program.

Conducting the Initial Informal Interviews.

Pitfalls in Describing Programs.

The Program Is Alive, and So Is Its Description.

Program Theory.

The Program Logic Model.

Challenges of Programs with Multiple Sites.

Program Implementation Model.

Program Theory and Program Logic Model Examples.

Summary.

Key Terms.

Discussion Questions.

THREE: LAYING THE EVALUATION GROUNDWORK.

Learning Objectives.

Evaluation Approaches.

Framing Evaluation Questions.

Insincere Reasons for Evaluation.

Who Will Do the Evaluation?

External Evaluators.

Internal Evaluators.

Confidentiality and Ownership of Evaluation.

Ethics.

Building a Knowledge Base from Evaluations.

High-Stakes Testing.

The Evaluation Report.

Summary.

Key Terms.

Discussion Questions.

FOUR: CAUSATION.

Learning Objectives.

Necessary and Sufficient.

Types of Effects.

Lagged Effects.

Permanency of Effects.

Functional Form of Impact.

Summary.

Key Terms.

Discussion Questions.

FIVE: THE PRISMS OF VALIDITY.

Learning Objectives.

Statistical Conclusion Validity.

Small Sample Sizes.

Measurement Error.

Unclear Questions.

Unreliable Treatment Implementation.

Fishing.

Internal Validity.

Threat of History.

Threat of Maturation.

Selection.

Mortality.

Testing.

Statistical Regression.

Instrumentation.

Diffusion of Treatments.

Compensatory Equalization of Treatments.

Compensatory Rivalry and Resentful Demoralization.

Construct Validity.

Mono-Operation Bias.

Mono-Method Bias.

External Validity.

Summary.

Key Terms.

Discussion Questions.

SIX: ATTRIBUTING OUTCOMES TO THE PROGRAM: QUASI-EXPERIMENTAL DESIGN.

Learning Objectives.

Quasi-Experimental Notation.

Frequently Used Designs That Do Not Show Causation.

One-Group Posttest-Only.

Posttest-Only with Nonequivalent Groups.

Participants Pretest-Posttest.

Designs That Generally Permit Causal Inferences.

Untreated Control Group Design with Pretest and Posttest.

Delayed Treatment Control Group.

Different Samples Design.

Nonequivalent Observations Drawn from One Group.

Nonequivalent Groups Using Switched Measures.

Cohort Designs.

Time Series Designs.

Archival Data.

Summary.

Key Terms.

Discussion Questions.

SEVEN: COLLECTING DATA.

Learning Objectives.

Informal Interviews.

Focus Groups.

Survey Design.

Sampling.

Ways to Collect Survey Data.

Anonymity and Confidentiality.

Summary.

Key Terms.

Discussion Questions.

EIGHT: CONCLUSIONS.

Learning Objectives.

Using Evaluation Tools to Develop Grant Proposals.

Hiring an Evaluation Consultant.

Summary.

Key Terms.

Discussion Questions.

Appendix A: American Community Survey.

Glossary.

References.

Index.

Author: Beth Osborne Daponte