The field has not, until now, addressed how being asked or required to participate in such evaluations affects the people who play a critical role in multisite evaluations. This issue does so in two ways.
The first six chapters present data and related analyses from research on four multisite evaluations, documenting the patterns of involvement in these evaluation projects and the extent to which different levels of involvement in program evaluations resulted in different patterns of evaluation use and influence. The remaining chapters offer reflections on the results of the cases or their implications, some by people who were part of the original research and some by those who were not. The goal is to encourage readers to think actively about ways to improve multisite evaluation practice.
This is the 129th volume of the Jossey-Bass quarterly report series New Directions for Evaluation, an official publication of the American Evaluation Association.
1. The Upside of an Annual Survey in Light of Involvement and Use: Evaluating the Advanced Technological Education Program (Stacie A. Toal, Arlen R. Gullickson).
The first of four case descriptions highlights a large-scale evaluation directed by external program evaluators and the surprising effect of a required annual survey on project staff who completed it.
2. Compulsory Project-Level Involvement and the Use of Program-Level Evaluations: Evaluating the Local Systemic Change for Teacher Enhancement Program (Kelli Johnson, Iris R. Weiss).
The second case description, in which the program evaluation mandated project-level staff to participate in specific ways, details the relationship between project-level involvement in the core evaluation and the use of that evaluation by project leaders and evaluators.
3. Tensions and Trade-Offs in Voluntary Involvement: Evaluating the Collaboratives for Excellence in Teacher Preparation (Lija O. Greenseid, Frances Lawrenz).
The third case description examines the tensions and trade-offs that arose from attempting to balance voluntary involvement in the evaluation by project principal investigators and evaluators with the need to collect complete and comparable data across sites.
4. The Effect of Technical Assistance on Involvement and Use: The Case of a Research, Evaluation, and Technical Assistance Project (Denise Roseland, Boris B. Volkov, Catherine Callow-Heusser).
In contrast to the other case descriptions, the fourth documents the effects of direct technical assistance and professional development on involvement and use.
5. Documenting the Impact of Multisite Evaluations on the Science, Technology, Engineering, and Mathematics Field (Denise Roseland, Lija O. Greenseid, Boris B. Volkov, Frances Lawrenz).
With the four case evaluation projects used as examples, this chapter discusses the impact of specific evaluations on the broader field of science, technology, engineering, and mathematics education and evaluation.
6. The Role of Involvement and Use in Multisite Evaluations (Frances Lawrenz, Jean A. King, Ann Ooms).
This cross-case analysis of the four case studies identifies both unique details and common themes related to promoting the use and influence of multisite evaluations.
7. Reflecting on Multisite Evaluation Practice (Jean A. King, Patricia A. Ross, Catherine Callow-Heusser, Arlen R. Gullickson, Frances Lawrenz, Iris R. Weiss).
The four lead evaluators for the large-scale evaluations included as case descriptions discuss their experiences and what they have learned about multisite evaluation practice.
8. Culture and Influence in Multisite Evaluation (Karen E. Kirkhart).
This chapter explores the basic premise that evaluation influence must be understood and studied as a cultural phenomenon, especially in the complex environments that characterize multisite evaluation.
9. Reflection on Four Multisite Evaluation Case Studies (Paul R. Brandon).
What do the findings of the four evaluation case studies suggest to an evaluation scholar who was not part of the research team that created them? This chapter reviews the cases and summarizes their comparative findings.
10. Building a Community of Evaluation Practice Within a Multisite Program (Leslie K. Goodyear).
Using a programmatic example, this chapter articulates how the provision of evaluation technical assistance to a large, multisite program and its funded projects can contribute to evaluation use.
11. Toward Better Research On and Thinking About Evaluation Influence, Especially in Multisite Evaluations (Melvin M. Mark).
The final chapter provides a review of the concepts of evaluation use, influence, and influence pathways, then discusses approaches and challenges to studying evaluation influence and influence pathways, including the special challenges of multisite settings.
Jean A. King is a professor and director of graduate studies in the Department of Organizational Leadership, Policy, and Development at the University of Minnesota.
Frances Lawrenz is the Wallace Professor of Teaching and Learning in the Department of Educational Psychology and the associate vice president for research at the University of Minnesota.