1 Collecting data by experiments.
1.3 Measurements of yield or response.
1.4 Natural variation in data.
1.5 Initial data analysis.
1.6 General applications of experimentation.
2 Basic statistical methods: the normal distribution.
2.1 Statistical inference for one sample of normally distributed data.
2.2 Hypothesis test.
2.3 Comparison of two samples of normally distributed data.
2.4 The F-test for comparing two estimated variances.
2.5 Confidence interval for the difference between two means.
2.6 'Paired data' t-test when samples are not independent.
2.7 Linear functions of normally distributed variables.
2.8 Linear models including normal random variation.
3 Principles of experimental design.
3.2 Treatment structure.
3.3 Changing background conditions – the need for comparison.
3.7 Sources of variation.
3.8 Planning the size of an experiment.
4 The analysis of data from orthogonal designs.
4.2 Comparing treatments.
4.3 Confidence intervals.
4.4 Homogeneity of variance.
4.5 The randomized complete block.
4.6 Duncan's multiple range test.
4.7 Extra replication of important treatments.
4.8 Contrasts among treatments.
4.9 Latin squares and other orthogonal designs.
4.10 Graeco-Latin squares.
4.11 Two fallacies.
4.12 Assumptions in analysis: using residuals to examine them.
4.14 Theory of variance stabilization.
4.15 Missing data in block designs.
Appendix 4A Cochran's Theorem on Quadratic Forms.
5 Factorial experiments.
5.2 Notation for factors at two levels.
5.3 Definition of main effect and interaction.
5.4 Three factors each at two levels.
5.5 A single factor at more than two levels.
5.6 General method for computing coefficients for orthogonal polynomials.
6 Experiments with many factors: confounding and fractional replication.
6.2 The principal block in confounding.
6.3 Single replicate.
6.4 Small experiments: partial confounding.
6.5 Very large experiments: fractional replication.
6.6 Replicates smaller than half size.
6.7 Confounding with fractional replication.
6.8 Confounding three-level factors.
6.9 Fractional replication in three-level experiments.
Appendix 6A Methods of confounding in 2^p factorial experiments.
7 Confounding main effects – split-plot designs.
7.2 Linear model and analysis.
7.3 Studying interactions.
7.4 Repeated splitting.
7.5 Confounding in split-plot experiments.
7.6 Other designs for main plots.
7.7 Criss-cross design.
8 Industrial experimentation.
8.2 Taguchi methods in statistical quality control.
8.3 Loss functions.
8.4 Sources of variation.
8.5 Orthogonal arrays.
8.6 Choice of design.
9 Response surfaces and mixture designs.
9.2 Are experimental conditions ‘constant’?
9.3 Response surfaces.
9.4 Experiments with three factors, x₁, x₂ and x₃.
9.5 Second-order surfaces.
9.6 Contour diagrams in analysis.
9.8 Mixture designs.
9.9 Other types of response surface.
10 The analysis of covariance.
10.2 Analysis for a design in randomized blocks: general theory.
10.3 Individual contrasts.
10.4 Dummy covariance.
10.5 Systematic trend not removed by blocking.
10.6 Accidents in recording.
10.7 Assumptions in covariance analysis.
10.8 Missing values.
10.9 Double covariance.
11 Balanced incomplete blocks and general non-orthogonal block designs.
11.2 Definition and existence of a balanced incomplete block.
11.3 Methods of construction.
11.4 Linear model and analysis.
11.5 Row and column design: the Youden square.
11.6 General block designs.
11.7 Linear model and analysis.
11.8 Generalized inverse.
11.9 Application to designs with special patterns.
Appendix 11A Generalized inverse matrix by spectral decomposition.
Appendix 11B Natural contrasts and effective replication.
12 More advanced designs.
12.2 Crossover designs.
12.4 Alpha designs.
12.5 Partially balanced incomplete blocks (PBIBs).
13 Random effects models: variance components and sampling schemes.
13.2 Two stages of sampling: between and within units.
13.3 Assessing alternative sampling schemes.
13.4 Using variance components in planning when sampling costs are given.
13.5 Three levels of variation.
13.6 Costs in a three-stage scheme.
13.7 Example where one estimate is negative.
14 Computer output using SAS.
Bibliography and references.
Robert E. Kempson Formerly of the Applied Statistics Research Unit, University of Kent at Canterbury and of Wye College, University of London.