The text is addressed to readers who haven't used mathematics since school, who were perhaps more confused than enlightened by their undergraduate lectures in statistics and who have never used a computer for much more than word processing and data entry. From this starting point, it slowly but surely instils an understanding of mathematics, statistics and programming, sufficient for initiating research in ecology. The book's practical value is enhanced by extensive use of biological examples and the computer language R for graphics, programming and data analysis.
- Provides a complete introduction to mathematics, statistics and computing for ecologists.
- Presents a wealth of ecological examples demonstrating the applied relevance of abstract mathematical concepts, showing how a little technique can go a long way in answering interesting ecological questions.
- Covers elementary topics, including the rules of algebra, logarithms, geometry, calculus, descriptive statistics, probability, hypothesis testing and linear regression.
- Explores more advanced topics including fractals, non-linear dynamical systems, likelihood and Bayesian estimation, generalised linear, mixed and additive models, and multivariate statistics.
- R boxes provide step-by-step recipes for implementing the graphical and numerical techniques outlined in each section.
- A companion website ([external URL]) is specifically designed for independent study.
How to be a Quantitative Ecologist provides a comprehensive introduction to mathematics, statistics and computing and is the ideal textbook for late undergraduate and postgraduate courses in environmental biology.
'With a book like this, there is no excuse for people to be afraid of maths, and to be ignorant of what it can do.' Professor Tim Benton, Faculty of Biological Sciences, University of Leeds, UK
0. How to start a meaningful relationship with your computer.
Introduction to R.
0.1 What is R?
0.2 Why use R for this book?
0.3 Computing with a scientific package like R.
0.4 Installing and interacting with R.
0.5 Style conventions.
0.6 Valuable R accessories.
0.7 Getting help.
0.8 Basic R usage.
0.9 Importing data from a spreadsheet.
0.10 Storing data in data frames.
0.11 Exporting data from R.
0.12 Quitting R.
1. How to make mathematical statements.
Numbers, equations and functions.
1.1 Qualitative and quantitative scales.
1.4 Logical operations.
1.5 Algebraic operations.
1.6 Manipulating numbers.
1.7 Manipulating units.
1.8 Manipulating expressions.
1.11 First order polynomial equations.
1.12 Proportionality and scaling: a special kind of first order polynomial equation.
1.13 Second and higher order polynomial equations.
1.14 Systems of polynomial equations.
1.16 Coordinate systems.
1.17 Complex numbers.
1.18 Relations and functions.
1.19 The graph of a function.
1.20 First order polynomial functions.
1.21 Higher order polynomial functions.
1.22 The relationship between equations and functions.
1.23 Other useful functions.
1.24 Inverse functions.
1.25 Functions of more than one variable.
2. How to describe regular shapes and patterns.
Geometry and trigonometry.
2.1 Primitive elements.
2.2 Axioms of Euclidean geometry.
2.4 Distance between two points.
2.5 Areas and volumes.
2.6 Measuring angles.
2.7 The trigonometric circle.
2.8 Trigonometric functions.
2.9 Polar coordinates.
2.10 Graphs of trigonometric functions.
2.11 Trigonometric identities.
2.12 Inverses of trigonometric functions.
2.13 Trigonometric equations.
2.14 Modifying the basic trigonometric graphs.
2.15 Superimposing trigonometric functions.
2.16 Spectral analysis.
2.17 Fractal geometry.
3. How to change things, one step at a time.
Sequences, difference equations and logarithms.
3.2 Difference equations.
3.3 Higher order difference equations.
3.4 Initial conditions and parameters.
3.5 Solutions of a difference equation.
3.6 Equilibrium solutions.
3.7 Stable and unstable equilibria.
3.8 Investigating stability.
3.10 Exponential function.
3.11 Logarithmic function.
3.12 Logarithmic equations.
4. How to change things, continuously.
Derivatives and their applications.
4.1 Average rate of change.
4.2 Instantaneous rate of change.
4.4 The derivative of a function.
4.5 Differentiating polynomials.
4.6 Differentiating other functions.
4.7 The chain rule.
4.8 Higher order derivatives.
4.9 Derivatives of functions of many variables.
4.11 Local stability for difference equations.
4.12 Series expansions.
5. How to work with accumulated change.
Integrals and their applications.
5.2 Indefinite integrals.
5.3 Three analytical methods of integration.
5.5 Area under a curve.
5.6 Definite integrals.
5.7 Some properties of definite integrals.
5.8 Improper integrals.
5.9 Differential equations.
5.10 Solving differential equations.
5.11 Stability analysis for differential equations.
6. How to keep stuff organised in tables.
Matrices and their applications.
6.2 Matrix operations.
6.3 Geometric interpretation of vectors and square matrices.
6.4 Solving systems of equations with matrices.
6.5 Markov chains.
6.6 Eigenvalues and eigenvectors.
6.7 Leslie matrix models.
6.8 Analysis of linear dynamical systems.
6.9 Analysis of nonlinear dynamical systems.
7. How to visualise and summarise data.
7.1 Overview of statistics.
7.2 Statistical variables.
7.3 Populations and samples.
7.4 Single-variable samples.
7.5 Frequency distributions.
7.6 Measures of centrality.
7.7 Measures of spread.
7.8 Skewness and kurtosis.
7.9 Graphical summaries.
7.10 Data sets with more than one variable.
7.11 Association between qualitative variables.
7.12 Association between quantitative variables.
7.13 Joint frequency distributions.
8. How to put a value on uncertainty.
8.1 Random experiments and event spaces.
8.3 Frequentist probability.
8.4 Equally likely events.
8.5 The union of events.
8.6 Conditional probability.
8.7 Independent events.
8.8 Total probability.
8.9 Bayesian probability.
9. How to identify different kinds of randomness.
9.1 Probability distributions.
9.2 Discrete probability distributions.
9.3 Continuous probability distributions.
9.5 Named distributions.
9.6 Equally likely events: the uniform distribution.
9.7 Hit or miss: the Bernoulli distribution.
9.8 Count of occurrences in a given number of trials: the binomial distribution.
9.9 Counting different types of occurrences: the multinomial distribution.
9.10 Number of occurrences in a unit of time or space: the Poisson distribution.
9.11 The gentle art of waiting: geometric, negative binomial, exponential and gamma distributions.
9.12 Assigning probabilities to probabilities: the beta and Dirichlet distributions.
9.13 Perfect symmetry: the normal distribution.
9.14 Because it looks right: using probability distributions empirically.
9.15 Mixtures, outliers and the t-distribution.
9.16 Joint, conditional and marginal probability distributions.
9.17 The bivariate normal distribution.
9.18 Sums of random variables: the central limit theorem.
9.19 Products of random variables: the log-normal distribution.
9.20 Modelling residuals: the chi-square distribution.
9.21 Stochastic simulation.
10. How to see the forest from the trees.
Estimation and testing.
10.1 Estimators and their properties.
10.2 Normal theory.
10.3 Estimating the population mean.
10.4 Estimating the variance of a normal population.
10.5 Confidence intervals.
10.6 Inference by bootstrapping.
10.7 More general estimation methods.
10.8 Estimation by least squares.
10.9 Estimation by maximum likelihood.
10.10 Bayesian estimation.
10.11 Link between maximum likelihood and Bayesian estimation.
10.12 Hypothesis testing: rationale.
10.13 Tests for the population mean.
10.14 Tests comparing two different means.
10.15 Hypotheses about qualitative data.
10.16 Hypothesis testing debunked.
11. How to separate the signal from the noise.
11.1 Comparing the means of several populations.
11.2 Simple linear regression.
11.4 How good is the best-fit line?
11.5 Multiple linear regression.
11.6 Model selection.
11.7 Generalised linear models.
11.8 Evaluation, diagnostics and model selection for GLMs.
11.9 Modelling dispersion.
11.10 Fitting more complicated models to data: polynomials, interactions, nonlinear regression.
11.11 Letting the data suggest more complicated models: smoothing.
11.12 Partitioning variation: mixed effects models.
12. How to measure similarity.
12.1 The problem with multivariate data.
12.2 Ordination in general.
12.3 Principal components analysis.
12.4 Clustering in general.
12.5 Agglomerative hierarchical clustering.
12.6 Nonhierarchical clustering: k-means analysis.
12.7 Classification in general.
12.8 Logistic regression: two classes.
12.9 Logistic regression: many classes.