New US government requirements state that federally funded grants and school programs must prove they are based on scientifically proven improvements in teaching and learning. All new grants must show they rest on scientifically sound research to be funded, and school budgets must likewise demonstrate such a basis. However, the movement in education over the past several years has been toward qualitative rather than quantitative measures. The new legislation arrives at a time when researchers are ill trained to measure results or even to frame questions in an empirical way, and when school administrators and teachers no longer remember, or were never trained, how to prove statistically that their programs are effective.
Experimental Methods for Evaluating Educational Interventions is a tutorial on what it means to frame a question empirically, how to test that a method works, which statistics to use to measure effectiveness, and how to document these findings in a way that complies with the new empirically based requirements. The book is accessible enough for teaching and administrative professionals long out of schooling, yet comprehensive and sophisticated enough to serve researchers who know experimental design and statistics but do not know how to apply that knowledge to write acceptable grant proposals or to secure government funding for their programs.
* Provides an overview of interpreting empirical data in education
* Reviews data analysis techniques: use and interpretation
* Discusses research on learning, instruction, and curriculum
* Explores importance of showing progress as well as cause and effect
* Identifies obstacles to translating research into practice
* Examines policy development for states, nations, and countries
Part II: Basic Issues When Addressing Human Behavior: An Experimental Research Perspective. J.C. Valentine and H.M. Cooper, Can We Measure the Quality of Causal Research in Education? J. D'Agostino, Measuring Learning Outcomes: Reliability and Validity Issues. J.T. Behrens and D.H. Robinson, The Micro and Macro in the Analysis and Conceptualization of Experimental Data.
Part III: Producing Credible Applied Educational Research. R. Boruch, Beyond the Laboratory or Classroom: The Empirical Basis of Educational Policy. G.D. Phye, Academic Learning and Academic Achievement: Correspondence Issues. A.M. O'Donnell, Experimental Research in Classrooms. S. Graham, K.H. Harris and J. Zito, Promoting Internal and External Validity: A Synergism of Laboratory-Like Experiments and Classroom-Based Self-Regulated Strategy Development Research.
Gary D. Phye, Past President of the Iowa Educational Research and Evaluation Association, is the new editor of the Academic Press Educational Psychology Book Series. He has published numerous research articles and book chapters in the areas of classroom learning and transfer. He previously co-edited two of the bestselling volumes in the book series: School Psychology with Dan Reschly and Cognitive Classroom Learning with Tom Andre. In addition to co-authoring an undergraduate educational psychology text, Dr. Phye has co-authored (with K. Josef Klauer) a newly published program designed to teach and assess the inductive reasoning and metacognitive strategies of young children. Dr. Phye is currently working with the Ames Community public schools on the training and assessment of inductive reasoning strategies of special needs children in primary and intermediate grades.
Daniel H. Robinson