Taylor's law


Taylor's power law is an empirical law in ecology that relates the variance of the number of individuals of a species per unit area of habitat to the corresponding mean by a power law relationship. It is named after the ecologist who first proposed it in 1961, Lionel Roy Taylor. Taylor's original name for this relationship was the law of the mean.

Definition

This law was originally defined for ecological systems, specifically to assess the spatial clustering of organisms. For a population count Y with mean µ and variance var(Y), Taylor's law is written

var(Y) = a µ^b,

where a and b are both positive constants. Taylor proposed this relationship in 1961, suggesting that the exponent b be considered a species-specific index of aggregation. This power law has subsequently been confirmed for many hundreds of species.
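In practice the constants a and b are usually estimated by an ordinary least squares regression of log(variance) on log(mean) across samples. The short Python sketch below is only an illustration of that procedure; the negative binomial simulation, seed and parameter values are assumptions, not data from Taylor's studies.

# Minimal sketch: estimate Taylor's law coefficients a and b by
# regressing log(variance) on log(mean) across simulated samples.
import numpy as np

rng = np.random.default_rng(1)
k = 2.0                                     # negative binomial clumping parameter (assumed)

means, variances = [], []
for mu in np.logspace(-1, 2, 30):           # mean densities spanning 3 orders of magnitude
    p = k / (k + mu)
    counts = rng.negative_binomial(k, p, size=200)   # 200 quadrats per density
    means.append(counts.mean())
    variances.append(counts.var(ddof=1))

means = np.array(means)
variances = np.array(variances)
ok = (means > 0) & (variances > 0)

# log-log regression: log s^2 = log a + b log m
b, log_a = np.polyfit(np.log(means[ok]), np.log(variances[ok]), 1)
print(f"estimated a = {np.exp(log_a):.2f}, b = {b:.2f}")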
Taylor's law has also been applied to assess the time-dependent changes of population distributions. Related variance to mean power laws have also been demonstrated in several non-ecological systems.
The first use of a double logarithmic (log-log) plot was by Reynolds in 1879 in the study of thermal aerodynamics. Pareto used a similar plot to study the relationship between the proportion of a population and their income.
The term variance was coined by Fisher in 1918.

Biology

Fisher in 1921 proposed the equation
Neyman studied the relationship between the sample mean and variance in 1926, and Bartlett proposed a relationship between the sample mean and variance in 1936.
Smith in 1938, while studying crop yields, proposed a relationship similar to Taylor's. This relationship was

Vx = V1 x^(−b),

where Vx is the variance of yield for plots of x units, V1 is the variance of yield per unit area and x is the size of plots. The slope b is the index of heterogeneity. The value of b in this relationship lies between 0 and 1. Where the yields are highly correlated b tends to 0; when they are uncorrelated b tends to 1.
Bliss in 1941, Fracker and Brischle in 1941 and Hayman & Lowe in 1961 also described what is now known as Taylor's law, but in the context of data from single species.
L. R. Taylor was an English entomologist who worked on the Rothamsted Insect Survey for pest control. His 1961 paper used data from 24 papers published between 1936 and 1960. These papers considered a variety of biological settings: virus lesions, macro-zooplankton, worms and symphylids in soil, insects in soil, on plants and in the air, mites on leaves, ticks on sheep and fish in the sea. In these papers the b value lay between 1 and 3. Taylor proposed the power law as a general feature of the spatial distribution of these species. He also proposed a mechanistic hypothesis to explain this law. Among the papers cited were those of Bliss and Yates and Finney.
Initial attempts to explain the spatial distribution of animals had been based on approaches like Bartlett's stochastic population models and the negative binomial distribution that could result from birth-death processes. Taylor's novel explanation was based on the assumption of a balanced migratory and congregatory behavior of animals. His hypothesis was initially qualitative, but as it evolved it became semi-quantitative and was supported by simulations. In proposing that animal behavior was the principal mechanism behind the clustering of organisms, Taylor, though, appeared to have ignored his own report of clustering seen with tobacco necrosis virus plaques.
Following Taylor's initial publications several alternative hypotheses for the power law were advanced. Hanski proposed a random walk model, modulated by the presumed multiplicative effect of reproduction. Hanski's model predicted that the power law exponent would be constrained to range closely about the value of 2, which seemed inconsistent with many reported values.
Anderson et al formulated a simple stochastic birth, death, immigration and emigration model that yielded a quadratic variance function. As a response to this model Taylor argued that such a Markov process would predict that the power law exponent would vary considerably between replicate observations, and that such variability had not been observed.
About this time, however, concerns were raised by Downing regarding the statistical variability of measurements of the power law exponent, and the possibility that observations of a power law might reflect a mathematical artifact rather than a mechanistic process. Taylor et al responded with an additional publication of extensive observations which, he claimed, refuted Downing's concerns.
In addition, Thórarinsson published a detailed critique of the animal behavioral model, noting that Taylor had modified his model several times in response to concerns raised, and that some of these modifications were inconsistent with earlier versions. Thórarinsson also claimed that Taylor confounded animal numbers with density and that Taylor had incorrectly treated simulations constructed to demonstrate his model as validating it.
Kemp reviewed a number of discrete stochastic models based on the negative binomial, Neyman type A, and Polya–Aeppli distributions that with suitable adjustment of parameters could produce a variance to mean power law. Kemp, however, did not explain the parameterizations of his models in mechanistic terms. Other relatively abstract models for Taylor's law followed.
A number of additional statistical concerns were raised regarding Taylor's law, based on the difficulty with real data of distinguishing between Taylor's law and other variance to mean functions, as well as the inaccuracy of standard regression methods.
Reports also began to accumulate where Taylor's law had been applied to time series data. Perry showed how simulations based on chaos theory could yield Taylor's law, and Kilpatrick & Ives provided simulations which showed how interactions between different species might lead to Taylor's law.
Other reports appeared where Taylor's law had been applied to the spatial distribution of plants and bacterial populations. As with the observations of tobacco necrosis virus mentioned earlier, these observations were not consistent with Taylor's animal behavioral model.
Earlier it was mentioned that variance to mean power functions had been applied to non-ecological systems under the rubric of Taylor's law. To provide a more general explanation for the range of manifestations of the power law, a hypothesis was proposed based on the Tweedie distributions, a family of probabilistic models that express an inherent power function relationship between the variance and the mean. Details regarding this hypothesis are provided in a later section.
A further alternative explanation for Taylor's law was proposed by Cohen et al, derived from the Lewontin Cohen growth model. This model was successfully used to describe the spatial and temporal variability of forest populations.
Another paper, by Cohen and Xu, showed that random sampling in blocks where the underlying distribution is skewed with the first four moments finite gives rise to Taylor's law. Approximate formulae for the parameters and their variances were also derived. These estimates were tested against data from the Black Rock Forest and found to be in reasonable agreement.
Variation in the exponents of Taylor's law applied to ecological populations cannot, however, be explained or predicted on statistical grounds alone. Research has shown that the Taylor's law exponents for the North Sea fish community vary with the external environment, suggesting that ecological processes at least partially determine the form of Taylor's law.

Physics

In the physics literature Taylor's law has been referred to as fluctuation scaling. Eisler et al, in a further attempt to find a general explanation for fluctuation scaling, proposed a process they called impact inhomogeneity in which frequent events are associated with larger impacts. In appendix B of the Eisler article, however, the authors noted that the equations for impact inhomogeneity yielded the same mathematical relationships as found with the Tweedie distributions.
Another group of physicists, Fronczak and Fronczak, derived Taylor's power law for fluctuation scaling from principles of equilibrium and non-equilibrium statistical physics. Their derivation was based on assumptions of physical quantities like free energy and an external field that caused the clustering of biological organisms. Direct experimental demonstration of these postulated physical quantities in relationship to animal or plant aggregation has yet to be achieved, though. Shortly thereafter, an analysis of Fronczak and Fronczak's model was presented that showed their equations directly lead to the Tweedie distributions, a finding that suggested that Fronczak and Fronczak had possibly provided a maximum entropy derivation of these distributions.

Mathematics

Taylor's law has been shown to hold for prime numbers not exceeding a given real number. This result has been shown to hold for the first 11 million primes. If the Hardy–Littlewood twin primes conjecture is true then this law also holds for twin primes.
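The construction used in that work is not reproduced here, but a rough numerical sketch of a variance to mean relationship for prime counts can be obtained by binning the primes below increasing limits and regressing log variance on log mean; the binning scheme and limits below are purely illustrative assumptions.

# Rough sketch: variance and mean of counts of primes per equal-width bin,
# for a range of upper limits, followed by a log-log regression.
import numpy as np
from sympy import primerange

means, variances = [], []
for upper in [10_000, 30_000, 100_000, 300_000]:
    primes = np.fromiter(primerange(2, upper), dtype=np.int64)
    counts, _ = np.histogram(primes, bins=100, range=(0, upper))  # 100 equal-width bins
    means.append(counts.mean())
    variances.append(counts.var(ddof=1))

b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
print(f"slope (Taylor exponent) ~= {b:.2f}")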

Naming of law

The law itself is named after the ecologist Lionel Roy Taylor. The name Taylor's law was coined by Southwood in 1966. Taylor's original name for this relationship was the law of the mean.

The Tweedie hypothesis

About the time that Taylor was substantiating his ecological observations, MCK Tweedie, a British statistician and medical physicist, was investigating a family of probabilistic models that are now known as the Tweedie distributions. As mentioned above, these distributions are all characterized by a variance to mean power law mathematically identical to Taylor's law.
The Tweedie distribution most applicable to ecological observations is the compound Poisson-gamma distribution, which represents the sum of N independent and identically distributed random variables with a gamma distribution, where N is a random variable distributed in accordance with a Poisson distribution. In the additive form its cumulant generating function (CGF) is

K*(s) = λ [κb(θ + s) − κb(θ)],

where κb(θ) = [(α − 1)/α] [θ/(α − 1)]^α is the cumulant function, α = (b − 2)/(b − 1) is the Tweedie exponent, s is the generating function variable, and θ and λ are the canonical and index parameters, respectively.
These last two parameters are analogous to the scale and shape parameters used in probability theory. The cumulants of this distribution can be determined by successive differentiations of the CGF and then substituting s = 0 into the resultant equations. The first and second cumulants are the mean and variance, respectively, and thus the compound Poisson-gamma CGF yields Taylor's law with the proportionality constant

a = λ^(1 − b).
The compound Poisson-gamma cumulative distribution function has been verified for limited ecological data through the comparison of the theoretical distribution function with the empirical distribution function. A number of other systems, demonstrating variance to mean power laws related to Taylor's law, have been similarly tested for the compound Poisson-gamma distribution.
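The power law behaviour of the compound Poisson-gamma model can also be checked numerically. The Python sketch below (parameter values and the simulation design are illustrative assumptions, not taken from the studies cited above) draws compound Poisson-gamma variables parameterised by their mean µ, dispersion φ and index p, and confirms that the fitted variance to mean exponent is close to p.

# Sketch: simulate compound Poisson-gamma (Tweedie) variables parameterised by
# mean mu, dispersion phi and index p, and check that var(Y) ~ phi * mu**p.
import numpy as np

rng = np.random.default_rng(7)
p, phi = 1.5, 2.0                      # Tweedie index and dispersion (assumed values)
alpha = (2 - p) / (p - 1)              # gamma shape implied by p

means, variances = [], []
for mu in np.logspace(0, 2, 20):
    lam = mu**(2 - p) / (phi * (2 - p))        # Poisson rate
    tau = phi * (p - 1) * mu**(p - 1)          # gamma scale
    n_events = rng.poisson(lam, size=20000)
    # sum of N gamma(alpha, tau) variables is gamma(N * alpha, tau); zero when N == 0
    y = rng.gamma(n_events * alpha + 1e-12, tau)
    y[n_events == 0] = 0.0
    means.append(y.mean())
    variances.append(y.var(ddof=1))

b_hat, log_a = np.polyfit(np.log(means), np.log(variances), 1)
print(f"fitted exponent {b_hat:.2f} (Tweedie index p = {p}), prefactor {np.exp(log_a):.2f} (phi = {phi})")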
The main justification for the Tweedie hypothesis rests with the mathematical convergence properties of the Tweedie distributions. The Tweedie convergence theorem requires the Tweedie distributions to act as foci of convergence for a wide range of statistical processes. As a consequence of this convergence theorem, processes based on the sum of multiple independent small jumps will tend to express Taylor's law and obey a Tweedie distribution. A limit theorem for independent and identically distributed variables, as with the Tweedie convergence theorem, might then be considered as being fundamental relative to the ad hoc population models, or models proposed on the basis of simulation or approximation.
This hypothesis remains controversial; more conventional population dynamic approaches seem preferred amongst ecologists, despite the fact that the Tweedie compound Poisson distribution can be directly applied to population dynamic mechanisms.
One difficulty with the Tweedie hypothesis is that no Tweedie distribution exists with an exponent b between 0 and 1; values of b < 1 are rare but have been reported.

Mathematical formulation

In symbols

si^2 = a mi^b,

where si^2 is the variance of the density of the ith sample, mi is the mean density of the ith sample and a and b are constants.
In logarithmic form

log(si^2) = log(a) + b log(mi).

Scale invariance

Taylor's law is scale invariant: if the unit of measurement is changed by a constant factor c, the exponent b remains unchanged.
To see this let y = cx. Then the mean and variance transform as my = c mx and var(y) = c^2 var(x).
Taylor's law expressed in the original variable is

var(x) = a mx^b,

and in the rescaled variable it is

var(y) = c^2 a mx^b = (a c^(2−b)) my^b,

so the exponent b is unchanged while the constant becomes a c^(2−b).
It has been shown that Taylor's law is the only relationship between the mean and variance that is scale invariant.

Extensions and refinements

A refinement in the estimation of the slope b has been proposed by Rayner,
where r is the Pearson moment correlation coefficient between log s^2 and log m, f is the ratio of the sample variances of log s^2 and log m and φ is the ratio of the errors in log s^2 and log m.
Ordinary least squares regression assumes that φ = ∞. This tends to underestimate the value of b because the estimates of both log s^2 and log m are subject to error.
An extension of Taylor's law has been proposed by Ferris et al when multiple samples are taken
where s2 and m are the variance and mean respectively, b, c and d are constants and n is the number of samples taken. To date, this proposed extension has not been verified to be as applicable as the original version of Taylor's law.

Small samples

An extension to this law for small samples has been proposed by Hanski. For small samples the Poisson variation - the variation that can be ascribed to sampling - may be significant. Let S be the total variance and let V be the biological (real) variance. Then

S = V + P,

where P is the Poisson (sampling) variance.
Assuming the validity of Taylor's law, we have

V = a m^b.

Because in the Poisson distribution the mean equals the variance, we have

P = m.

This gives us

S = a m^b + m.

This closely resembles Bartlett's original suggestion.
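A minimal sketch of this small-sample correction (the function name and the toy numbers are assumptions): subtract the Poisson component m from each total variance before fitting Taylor's law.

# Sketch: separate the Poisson sampling component from the total variance
# before fitting Taylor's law, as suggested for small samples.
import numpy as np

def taylor_fit_small_samples(sample_means, total_variances):
    """Fit log V = log a + b log m, with V = S - m (biological variance)."""
    m = np.asarray(sample_means, dtype=float)
    v = np.asarray(total_variances, dtype=float) - m     # remove Poisson component
    ok = (m > 0) & (v > 0)
    b, log_a = np.polyfit(np.log(m[ok]), np.log(v[ok]), 1)
    return np.exp(log_a), b

# toy usage with made-up sample statistics
a_hat, b_hat = taylor_fit_small_samples([0.5, 2.0, 8.0, 30.0], [0.9, 5.0, 40.0, 400.0])
print(a_hat, b_hat)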

Interpretation

Slope values significantly > 1 indicate clumping of the organisms.
In Poisson-distributed data, b = 1. If the population follows a lognormal or gamma distribution, then b = 2.
For populations that are experiencing constant per capita environmental variability, the regression of log(variance) versus log(mean) should give a line with b = 2.
Most populations that have been studied have b < 2, but values of b = 2 have been reported. Occasionally cases with b > 2 have been reported. b values below 1 are uncommon but have also been reported.
It has been suggested that the exponent of the law is proportional to the skewness of the underlying distribution. This proposal has been criticised; additional work seems to be indicated.

Extension to cluster sampling of binary data

A form of Taylor's law applicable to binary data in clusters has been proposed. In a binomial distribution, the theoretical variance is

varbin = p(1 − p)/n,

where varbin is the binomial variance, n is the sample size per cluster, and p is the proportion of individuals with a trait (an estimate of the probability of an individual having that trait).
One difficulty with binary data is that the mean and variance, in general, have a particular relationship: as the mean proportion of individuals infected increases above 0.5, the variance decreases.
It is now known that the observed variance changes as a power function of the binomial variance.
Hughes and Madden noted that if the distribution is Poisson, the mean and variance are equal. As this is clearly not the case in many observed proportion samples, they instead assumed a binomial distribution. They replaced the mean in Taylor's law with the binomial variance and then compared this theoretical variance with the observed variance. For binomially distributed (random) data varobs = varbin; with overdispersion, varobs > varbin.
In symbols, Hughes and Madden's modification of Taylor's law was

varobs = a (varbin)^b.

In logarithmic form this relationship is

log(varobs) = log(a) + b log(varbin).
This latter version is known as the binary power law.
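As a rough illustration of how the binary power law can be fitted in practice, the Python sketch below (the beta-binomial simulation and all parameter values are illustrative assumptions, not data from the studies cited here) computes, for each simulated data set, the observed variance of the per-cluster proportions and the binomial variance p(1 − p)/n, and then regresses one log variance on the other.

# Sketch: fit the binary power law log(var_obs) = log(a) + b*log(var_bin)
# across simulated data sets of clustered binary (incidence) data.
import numpy as np

rng = np.random.default_rng(3)
n = 20                                  # individuals per cluster (assumed)
log_vobs, log_vbin = [], []

for mean_p in np.linspace(0.05, 0.6, 25):       # data sets with different mean incidence
    rho = 0.2                                   # within-cluster correlation (assumed)
    a_beta = mean_p * (1 - rho) / rho
    b_beta = (1 - mean_p) * (1 - rho) / rho
    cluster_p = rng.beta(a_beta, b_beta, size=300)
    counts = rng.binomial(n, cluster_p)          # beta-binomial counts per cluster
    props = counts / n
    p_bar = props.mean()
    log_vobs.append(np.log(props.var(ddof=1)))
    log_vbin.append(np.log(p_bar * (1 - p_bar) / n))   # binomial (random) variance

b_hat, log_a = np.polyfit(log_vbin, log_vobs, 1)
print(f"binary power law: a = {np.exp(log_a):.2f}, b = {b_hat:.2f}")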
A key step in the derivation of the binary power law by Hughes and Madden was the observation made by Patil and Stiteler that the variance-to-mean ratio used for assessing over-dispersion of unbounded counts in a single sample is actually the ratio of two variances: the observed variance and the theoretical variance for a random distribution. For unbounded counts, the random distribution is the Poisson. Thus, the Taylor power law for a collection of samples can be considered as a relationship between the observed variance and the Poisson variance.
More broadly, Madden and Hughes considered the power law as the relationship between two variances, the observed variance and the theoretical variance for a random distribution. With binary data, the random distribution is the binomial. Thus the Taylor power law and the binary power law are two special cases of a general power-law relationships for heterogeneity.
When both a and b are equal to 1, a small-scale random spatial pattern is suggested and is best described by the binomial distribution. When b = 1 and a > 1, there is over-dispersion. When b is > 1, the degree of aggregation varies with p. Turechek et al have shown that the binary power law describes numerous data sets in plant pathology. In general, b is greater than 1 and less than 2.
The fit of this law has been tested by simulations. These results suggest that rather than a single regression line for the data set, a segmental regression may be a better model for genuinely random distributions. However, this segmentation only occurs for very short-range dispersal distances and large quadrat sizes. The break in the line occurs only at p very close to 0.
An extension to this law has been proposed. The original form of this law is symmetrical but it can be extended to an asymmetrical form. In simulations the symmetrical form fits the data when there is a positive correlation of the disease status of neighbours. Where there is a negative correlation between the likelihoods of neighbours being infected, the asymmetrical version is a better fit to the data.

Applications

Because of the ubiquitous occurrence of Taylor's law in biology it has found a variety of uses, some of which are listed here.

Recommendations as to use

On the basis of simulation studies, it has been recommended that in applications testing the validity of Taylor's law for a data sample:
the total number of organisms studied be > 15

the minimum number of groups of organisms studied be > 5

the density of the organisms should vary by at least 2 orders of magnitude within the sample

Randomly distributed populations

It is commonly assumed that a population is randomly distributed in the environment. If a population is randomly distributed then the mean (m) and variance of the population are equal, and the proportion of samples that contain at least one individual is

p = 1 − e^(−m).
When a species with a clumped pattern is compared with one that is randomly distributed with equal overall densities, p will be less for the species having the clumped distribution pattern. Conversely when comparing a uniformly and a randomly distributed species but at equal overall densities, p will be greater for the randomly distributed population. This can be graphically tested by plotting p against m.
Wilson and Room developed a binomial model that incorporates Taylor's law. The basic relationship is

p = 1 − exp[−m ln(s^2/m) / (s^2/m − 1)],

where the log is taken to the base e.
Incorporating Taylor's law this relationship becomes

p = 1 − exp[−m ln(a m^(b−1)) / (a m^(b−1) − 1)].

Dispersion parameter estimator

The common dispersion parameter (k) of the negative binomial distribution is

k = m^2 / (s^2 − m),

where m is the sample mean and s^2 is the variance. If 1/k is > 0 the population is considered to be aggregated; if 1/k = 0 (s^2 = m) the population is considered to be randomly distributed; and if 1/k is < 0 the population is considered to be uniformly distributed. No comment on the distribution can be made if k = 0.
Wilson and Room, assuming that Taylor's law applied to the population, gave an alternative estimator for k:

k = m^2 / (a m^b − m),

where a and b are the constants from Taylor's law.
Jones using the estimate for k above along with the relationship Wilson and Room developed for the probability of finding a sample having at least one individual
derived an estimator for the probability of a sample containing x individuals per sampling unit. Jones's formula is
where P is the probability of finding x individuals per sampling unit, k is estimated from the Wilson and Room equation and m is the sample mean. The probability of finding zero individuals, P(0), is estimated with the negative binomial distribution.
Jones also gives confidence intervals for these probabilities.
where CI is the confidence interval, t is the critical value taken from the t distribution and N is the total sample size.

Katz family of distributions

Katz proposed a family of distributions (the Katz family) with two parameters, w1 and w2. This family of distributions includes the Bernoulli, geometric, Pascal and Poisson distributions as special cases. The mean and variance of a Katz distribution are

m = w1 / (1 − w2) and s^2 = w1 / (1 − w2)^2,

where m is the mean and s^2 is the variance of the sample. The parameters can be estimated by the method of moments, from which we have

w1 = m^2 / s^2 and w2 = 1 − m / s^2.

For a Poisson distribution w2 = 0 and w1 = λ, the parameter of the Poisson distribution. This family of distributions is also sometimes known as the Panjer family of distributions.
The Katz family is related to the Sundt–Jewell family of distributions, defined by the recurrence

p_k = (a + b / k) p_(k−1) for k = 1, 2, ...,

where a and b here are constants. The only members of the Sundt–Jewell family are the Poisson, binomial, negative binomial, extended truncated negative binomial and logarithmic series distributions.
If the population obeys a Katz distribution then the coefficients of Taylor's law are

b = 1 and a = 1 / (1 − w2).

Katz also introduced a statistical test

Jn = (n / 2)^(1/2) (s^2 / m − 1),

where Jn is the test statistic, s^2 is the variance of the sample, m is the mean of the sample and n is the sample size. Jn is asymptotically normally distributed with a zero mean and unit variance. If the sample is Poisson distributed Jn = 0; values of Jn < 0 and > 0 indicate under- and over-dispersion respectively. Overdispersion is often caused by latent heterogeneity - the presence of multiple sub-populations within the population the sample is drawn from.
This statistic is related to the Neyman–Scott statistic
which is known to be asymptotically normal and the conditional chi-squared statistic
which is known to have an asymptotic chi squared distribution with n − 1 degrees of freedom when the population is Poisson distributed.
If the population obeys Taylor's law then

Jn = (n / 2)^(1/2) (a m^(b−1) − 1).

Time to extinction

If Taylor's law is assumed to apply it is possible to determine the mean time to local extinction. This model assumes a simple random walk in time and the absence of density dependent population regulation.
Let r = log(N_(t+1) / N_t), where N_(t+1) and N_t are the population sizes at time t + 1 and t respectively and r is a parameter equal to the annual rate of increase. Then
where var is the variance of r.
Let K be a measure of the species abundance. Then
where TE is the mean time to local extinction.
The probability of extinction by time t is

Minimum population size required to avoid extinction

If a population is lognormally distributed then the harmonic mean of the population size is related to the arithmetic mean
Given that H must be > 0 for the population to persist then rearranging we have
is the minimum size of population for the species to persist.
The assumption of a lognormal distribution appears to apply to about half of a sample of 544 species, suggesting that it is at least a plausible assumption.

Sampling size estimators

The degree of precision (D) is defined to be s / m where s is the standard deviation and m is the mean. The degree of precision is known as the coefficient of variation in other contexts. In ecology research it is recommended that D be in the range 10–25%. The desired degree of precision is important in estimating the required sample size where an investigator wishes to test if Taylor's law applies to the data. The required sample size has been estimated for a number of simple distributions, but where the population distribution is not known, or cannot be assumed, more complex formulae may be needed to determine the required sample size.
Where the population is Poisson distributed the sample size (n) needed is

n = (t / D)^2 / m,

where t is the critical value of the t distribution for the type 1 error with the degrees of freedom with which the mean (m) was calculated.
If the population is distributed as a negative binomial distribution then the required sample size is

n = (t / D)^2 (1 / m + 1 / k),

where k is the parameter of the negative binomial distribution.
A more general sample size estimator has also been proposed:

n = (t / D)^2 a m^(b − 2),

where a and b are derived from Taylor's law.
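A small helper implementing the Taylor's law based estimator, using the formula as reconstructed above (the function name, the default degrees of freedom and the example values are assumptions):

# Sketch: required sample size from Taylor's law coefficients, assuming
# n = (t/D)^2 * a * m**(b-2) as given above (D = desired precision).
from scipy import stats

def sample_size_taylor(a, b, mean, precision, df=30, alpha=0.05):
    t = stats.t.ppf(1 - alpha / 2, df)           # critical value of the t distribution
    return (t / precision) ** 2 * a * mean ** (b - 2)

# e.g. a = 2, b = 1.5, expected mean density 10, 10% precision
print(round(sample_size_taylor(2.0, 1.5, 10.0, 0.10)))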
An alternative has been proposed by Southwood
where n is the required sample size, a and b are the Taylor's law coefficients and D is the desired degree of precision.
Karandinos proposed two similar estimators for n. The first was modified by Ruesink to incorporate Taylor's law.
where d is the ratio of half the desired confidence interval to the mean. In symbols
The second estimator is used in binomial sampling. The desired sample size is
where dp is the ratio of half the desired confidence interval to the proportion of sample units with individuals, p is the proportion of samples containing individuals and q = 1 − p. In symbols
For binary sampling, Schulthess et al modified Karandinos' equation
where N is the required sample size, p is the proportion of units containing the organisms of interest, t is the chosen level of significance and Dip is a parameter derived from Taylor's law.

Sequential sampling

Sequential sampling is a method of statistical analysis in which the sample size is not fixed in advance. Instead samples are taken in accordance with a predefined stopping rule. Taylor's law has been used to derive a number of stopping rules.
A formula for fixed precision in serial sampling to test Taylor's law was derived by Green in 1970:

log(T_n) = [(b − 1) log(n) + log(D^2 / a)] / (b − 2),

where T_n is the cumulative sample total, D is the level of precision, n is the sample size and a and b are obtained from Taylor's law.
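A sketch of how such a stop line might be used in the field, based on the formula as reconstructed above (the function name and all numerical values are illustrative assumptions); sampling continues until the running total reaches the stop line for the current number of samples.

# Sketch: fixed-precision sequential sampling using a Taylor's law stop line.
# Sampling stops once the running total T_n reaches the stop line for n samples.
import numpy as np

def green_stop_line(n, a, b, D):
    """Cumulative total required after n samples for precision D (b != 2 assumed)."""
    return (a * n ** (1 - b) / D ** 2) ** (1.0 / (2 - b))

rng = np.random.default_rng(11)
a, b, D = 2.0, 1.4, 0.2          # Taylor coefficients and desired precision (assumed)

total = 0
for n in range(1, 1000):
    total += rng.poisson(5)       # stand-in for a new field count
    if total >= green_stop_line(n, a, b, D):
        break
print(f"stopped after {n} samples with cumulative count {total}")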
As an aid to pest control Wilson et al developed a test that incorporated a threshold level where action should be taken. The required sample size is
where a and b are the Taylor coefficients, || is the absolute value, m is the sample mean, T is the threshold level and t is the critical level of the t distribution. The authors also provided a similar test for binomial sampling
where p is the probability of finding a sample with pests present and q = 1 − p.
Green derived another sampling formula for sequential sampling based on Taylor's law
where D is the degree of precision, a and b are the Taylor's law coefficients, n is the sample size and T is the total number of individuals sampled.
Serra et al have proposed a stopping rule based on Taylor's law.
where a and b are the parameters from Taylor's law, D is the desired level of precision and Tn is the total sample size.
Serra et al also proposed a second stopping rule based on Iwao's regression
where α and β are the parameters of the regression line, D is the desired level of precision and Tn is the total sample size.
The authors recommended that D be set at 0.1 for studies of population dynamics and D = 0.25 for pest control.

Related analyses

It is considered to be good practice to carry out at least one additional analysis of aggregation because the use of only a single index may be misleading. Although a number of other methods for detecting relationships between the variance and mean in biological samples have been proposed, to date none have achieved the popularity of Taylor's law. The most popular analysis used in conjunction with Taylor's law is probably Iwao's patchiness regression test, but all the methods listed here have been used in the literature.

Bartlett–Iwao model

Bartlett in 1936, and later Iwao independently in 1968, both proposed an alternative relationship between the variance and the mean. In symbols

s_i^2 = a m_i + b m_i^2,

where s_i^2 is the variance of the ith sample and m_i is the mean of the ith sample.
When the population follows a negative binomial distribution, a = 1 and b = 1/k (where k is the exponent of the negative binomial distribution).
This alternative formulation has not been found to be as good a fit as Taylor's law in most studies.

Nachman model

Nachman proposed a relationship between the mean density and the proportion of samples with zero counts:
where p0 is the proportion of the sample with zero counts, m is the mean density, a is a scale parameter and b is a dispersion parameter. If a = b = 0 the distribution is random. This relationship is usually tested in its logarithmic form
Allsop used this relationship along with Taylor's law to derive an expression for the proportion of infested units in a sample
where
where D2 is the degree of precision desired, zα/2 is the upper α/2 of the normal distribution, a and b are the Taylor's law coefficients, c and d are the Nachman coefficients, n is the sample size and N is the number of infested units.

Kono–Sugino equation

Binary sampling is not uncommonly used in ecology. In 1958 Kono and Sugino derived an equation that relates the proportion of samples without individuals to the mean density of the samples.
where p0 is the proportion of the sample with no individuals, m is the mean sample density, a and b are constants. Like Taylor's law this equation has been found to fit a variety of populations including ones that obey Taylor's law. Unlike the negative binomial distribution this model is independent of the mean density.
The derivation of this equation is straightforward. Let the proportion of empty units be p0 and assume that these are distributed exponentially. Then
Taking logs twice and rearranging, we obtain the equation above. This model is the same as that proposed by Nachman.
The advantage of this model is that it does not require counting the individuals but rather their presence or absence. Counting individuals may not be possible in many cases particularly where insects are the matter of study.
Note:
The equation was derived while examining the relationship between the proportion P of a series of rice hills infested and the mean severity of infestation m. The model studied was
where a and b are empirical constants. Based on this model the constants a and b were derived and a table prepared relating the values of P and m
Uses:
The predicted estimates of m from this equation are subject to bias and it is recommended that the adjusted mean be used instead
where var is the variance of the sample unit means mi and m is the overall mean.
An alternative adjustment to the mean estimates is
where MSE is the mean square error of the regression.
This model may also be used to estimate stop lines for enumerative sampling. The variance of the estimated means is
where
where MSE is the mean square error of the regression, α and β are the constant and slope of the regression respectively, sβ2 is the variance of the slope of the regression, N is the number of points in the regression, n is the number of sample units and p is the mean value of p0 in the regression. The parameters a and b are estimated from Taylor's law:

Hughes–Madden equation

Hughes and Madden have proposed testing a similar relationship applicable to binary observations in clusters, where each cluster contains from 0 to n individuals:

varobs = a p^b (1 − p)^c,

where a, b and c are constants, varobs is the observed variance, and p is the proportion of individuals with a trait (an estimate of the probability of an individual having that trait). In logarithmic form, this relationship is

log(varobs) = log(a) + b log(p) + c log(1 − p).

In most cases, it is assumed that b = c, leading to the simpler model

varobs = a [p(1 − p)]^b.

This relationship has been subjected to less extensive testing than Taylor's law. However, it has accurately described over 100 data sets, and there are no published examples reporting that it does not work.
A variant of this equation was proposed by Shiyomi et al. who suggested testing the regression
where varobs is the variance, a and b are the constants of the regression, n here is the sample size and p is the probability of a sample containing at least one individual.

Negative binomial distribution model

A negative binomial model has also been proposed. The dispersion parameter (k) estimated by the method of moments is m^2 / (s^2 − m), and p_i is the proportion of samples with counts > 0. The s^2 used in the calculation of k are the values predicted by Taylor's law. p_i is plotted against 1 − [k / (k + m)]^k (the negative binomial probability of a count > 0) and the fit of the data is visually inspected.
Perry and Taylor have proposed an alternative estimator of k based on Taylor's law.
A better estimate of the dispersion parameter can be made with the method of maximum likelihood. For the negative binomial it can be estimated from the equation
where Ax is the total number of samples with more than x individuals, N is the total number of individuals, x is the number of individuals in a sample, m is the mean number of individuals per sample and k is the exponent. The value of k has to be estimated numerically.
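Since k has to be estimated numerically, one practical route is to maximise the negative binomial log-likelihood directly. The sketch below does this with scipy's parameterisation of the negative binomial (size k and success probability k/(k + m)); it is an illustration rather than the specific iterative scheme used in the literature cited above.

# Sketch: numerical maximum likelihood estimate of the negative binomial
# dispersion parameter k for a sample of counts.
import numpy as np
from scipy import stats, optimize

def estimate_k_mle(counts):
    counts = np.asarray(counts)
    m = counts.mean()
    def neg_loglik(log_k):
        k = np.exp(log_k)                       # keep k positive
        return -stats.nbinom.logpmf(counts, k, k / (k + m)).sum()
    res = optimize.minimize_scalar(neg_loglik, bounds=(-5, 10), method="bounded")
    return np.exp(res.x)

rng = np.random.default_rng(0)
true_k, mean = 1.5, 8.0
sample = rng.negative_binomial(true_k, true_k / (true_k + mean), size=500)
print(f"moment estimate  k = {sample.mean()**2 / (sample.var(ddof=1) - sample.mean()):.2f}")
print(f"ML estimate      k = {estimate_k_mle(sample):.2f}")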
Goodness of fit of this model can be tested in a number of ways including using the chi square test. As these may be biased by small samples an alternative is the U statistic – the difference between the variance expected under the negative binomial distribution and that of the sample. The expected variance of this distribution is m + m^2 / k and

U = s^2 − (m + m^2 / k),

where s^2 is the sample variance, m is the sample mean and k is the negative binomial parameter.
The variance of U is
where p = m / k, q = 1 + p, R = p / q and N is the total number of individuals in the sample. The expected value of U is 0. For large sample sizes U is distributed normally.
Note: The negative binomial is actually a family of distributions defined by the relation of the mean to the variance

s^2 = m + a m^p,

where a and p are constants. When a = 0 this defines the Poisson distribution. With p = 1 and p = 2, the distribution is known as the NB1 and NB2 distribution respectively.
This model is a version of that proposed earlier by Bartlett.

Tests for a common dispersion parameter

The dispersion parameter (k) is

k = m^2 / (s^2 − m),

where m is the sample mean and s^2 is the variance. If k^(−1) is > 0 the population is considered to be aggregated; if k^(−1) = 0 the population is considered to be random; and if k^(−1) is < 0 the population is considered to be uniformly distributed.
Southwood has recommended regressing k against the mean and a constant
where ki and mi are the dispersion parameter and the mean of the ith sample respectively to test for the existence of a common dispersion parameter. A slope value significantly > 0 indicates the dependence of k on the mean density.
An alternative method was proposed by Elliot, who suggested plotting s^2 − m against m^2 − s^2/n. The common dispersion parameter kc is equal to 1/slope of this regression.

Charlier coefficient

This coefficient (C) is defined as

C = 100 (s^2 − m)^(1/2) / m.

If the population can be assumed to be distributed in a negative binomial fashion, then C = 100 k^(−1/2), where k is the dispersion parameter of the distribution.

Cole's index of dispersion

This index is defined as
The usual interpretation of this index is as follows: values of Ic < 1, = 1, > 1 are taken to mean a uniform distribution, a random distribution or an aggregated distribution.
Because s^2 can be written in terms of the sum of the squared counts Σ x^2, the index can also be written in terms of the sample sums.
If Taylor's law can be assumed to hold, then

Lloyd's indexes

Lloyd's index of mean crowding (m*) is the average number of other points contained in the sample unit that contains a randomly chosen point:

m* = m + (s^2 / m − 1),

where m is the sample mean and s^2 is the variance.
Lloyd's index of patchiness (IP) is

IP = m* / m.
It is a measure of pattern intensity that is unaffected by thinning. This index was also proposed by Pielou in 1988 and is sometimes known by this name also.
Because the variance of IP is extremely difficult to estimate from the formula itself, Lloyd suggested fitting a negative binomial distribution to the data. This method gives a parameter k.
Then
where SE is the standard error of the index of patchiness, var(k) is the variance of the parameter k and q is the number of quadrats sampled.
If the population obeys Taylor's law then

m* = m + (a m^(b−1) − 1)

and

IP = 1 + a m^(b−2) − 1/m.

Patchiness regression test

Iwao proposed a patchiness regression to test for clumping
Let

y_i = m_i + (s_i^2 / m_i − 1).

Here y_i is Lloyd's index of mean crowding for the ith sample. Perform an ordinary least squares regression of y_i against m_i.
In this regression the value of the slope (b) is an indicator of clumping: the slope = 1 if the data are Poisson-distributed. The constant (a) is the number of individuals that share a unit of habitat at infinitesimal density and may be < 0, 0 or > 0. These values represent regularity, randomness and aggregation of populations in spatial pattern respectively. A value of a < 1 is taken to mean that the basic unit of the distribution is a single individual.
Where the statistic s^2/m is not constant it has been recommended instead to regress Lloyd's index against a m + b m^2, where a and b are constants.
The sample size for a given degree of precision for this regression is given by
where a is the constant in this regression, b is the slope, m is the mean and t is the critical value of the t distribution.
Iwao has proposed a sequential sampling test based on this regression. The upper and lower limits of this test are based on critical densities mc at which control of a pest requires action to be taken.
where Nu and Nl are the upper and lower bounds respectively, a is the constant from the regression, b is the slope and i is the number of samples.
Kuno has proposed an alternative sequential stopping test also based on this regression.
where Tn is the total sample size, D is the degree of precision, n is the number of samples units, a is the constant and b is the slope from the regression respectively.
Kuno's test is subject to the condition that n ≥ (b − 1) / D^2.
Parrella and Jones have proposed an alternative but related stop line
where a and b are the parameters from the regression, N is the maximum number of sampled units and n is the individual sample size.

Morisita’s index of dispersion

Morisita's index of dispersion is the scaled probability that two points chosen at random from the whole population are in the same sample. Higher values indicate a more clumped distribution.
An alternative formulation is
where n is the total sample size, m is the sample mean and x are the individual values with the sum taken over the whole sample.
It is also equal to
where IMC is Lloyd's index of crowding.
This index is relatively independent of the population density but is affected by the sample size. Values > 1 indicate clumping; values < 1 indicate a uniformity of distribution and a value of 1 indicates a random sample.
Morisita showed that the statistic

I_d (Σx − 1) + n − Σx

is distributed as a chi squared variable with n − 1 degrees of freedom.
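A small sketch of the index and its chi-square test in Python, using the standard form of Morisita's index (the formulas coded here are the usual textbook ones and are assumptions to the extent that the equations above are not reproduced):

# Sketch: Morisita's index of dispersion and its large-sample chi-square test,
# using the standard form of the index.
import numpy as np
from scipy import stats

def morisita(counts):
    x = np.asarray(counts, dtype=float)
    n = x.size                       # number of sampling units
    total = x.sum()
    i_d = n * np.sum(x * (x - 1)) / (total * (total - 1))
    chi2 = i_d * (total - 1) + n - total      # statistic with n - 1 degrees of freedom
    p_value = stats.chi2.sf(chi2, df=n - 1)
    return i_d, p_value

counts = [0, 3, 1, 7, 0, 0, 12, 2, 1, 0, 5, 0]    # toy quadrat counts
i_d, p = morisita(counts)
print(f"I_d = {i_d:.2f}, p = {p:.3f}  (I_d > 1 suggests clumping)")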
An alternative significance test for this index has been developed for large samples.
where m is the overall sample mean, n is the number of sample units and z is the normal distribution abscissa. Significance is tested by comparing the value of z against the values of the normal distribution.
A function for its calculation is available in the statistical R language.
Note: this index is not to be confused with Morisita's overlap index.

Standardised Morisita’s index

Smith-Gill developed a statistic based on Morisita's index which is independent of both sample size and population density and bounded by −1 and +1. This statistic is calculated as follows
First determine Morisita's index in the usual fashion. Then let k be the number of units the population was sampled from. Calculate the two critical values
where χ2 is the chi square value for n − 1 degrees of freedom at the 97.5% and 2.5% levels of confidence.
The standardised index is then calculated from one of the formulae below.
When Id ≥ Mc > 1
When Mc > Id ≥ 1
When 1 > Id > Mu
When 1 > Mu > Id
Ip ranges between +1 and −1 with 95% confidence intervals of ±0.5. Ip has the value of 0 if the pattern is random; if the pattern is uniform, Ip < 0 and if the pattern shows aggregation, Ip > 0.

Southwood's index of spatial aggregation

Southwood's index of spatial aggregation is defined as
where m is the mean of the sample and m* is Lloyd's index of crowding.

Fisher's index of dispersion

Fisher's index of dispersion is

I = (n − 1) s^2 / m.

This index may be used to test for over-dispersion of the population. It is recommended that in applications n > 5 and that the sample total divided by the number of samples is > 3. In symbols

I = Σ(x − m)^2 / m,

where x is an individual sample value. The index is distributed as the chi-square distribution with n − 1 degrees of freedom when the population is Poisson distributed, and its expectation is then n − 1. The variance to mean ratio s^2/m is equal to the scale parameter when the population obeys the gamma distribution.
It can be applied both to the overall population and to the individual areas sampled. The use of this test on the individual sample areas should also include the use of a Bonferroni correction factor.
If the population obeys Taylor's law then

I = (n − 1) a m^(b−1).
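A minimal sketch of the dispersion test using the index I = Σ(x − m)^2 / m as given above (toy data; the two-sided treatment of the p value is a choice, not prescribed above):

# Sketch: Fisher's index of dispersion with a chi-square test against the
# Poisson (random) expectation.
import numpy as np
from scipy import stats

def dispersion_test(counts):
    x = np.asarray(counts, dtype=float)
    n, m = x.size, x.mean()
    index = np.sum((x - m) ** 2) / m          # equals (n - 1) * s^2 / m
    # two-sided test: large values -> aggregation, small values -> regularity
    p_upper = stats.chi2.sf(index, df=n - 1)
    p_lower = stats.chi2.cdf(index, df=n - 1)
    return index, 2 * min(p_upper, p_lower)

counts = [1, 0, 4, 2, 9, 0, 0, 3, 1, 6]
index, p = dispersion_test(counts)
print(f"index = {index:.1f}, two-sided p = {p:.3f}")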

Index of cluster size

The index of cluster size (ICS) was created by David and Moore. Under a random (Poisson) distribution ICS is expected to equal 0. Positive values indicate a clumped distribution; negative values indicate a uniform distribution.

ICS = s^2 / m − 1,

where s^2 is the variance and m is the mean.
If the population obeys Taylor's law then

ICS = a m^(b−1) − 1.

The ICS is also equal to Katz's test statistic divided by (n / 2)^(1/2), where n is the sample size. It is also related to Clapham's test statistic. It is also sometimes referred to as the clumping index.

Green’s index

Green's index (GI) is a modification of the index of cluster size that is independent of n, the number of sample units:

GI = (s^2 / m − 1) / (Σx − 1).

This index equals 0 if the distribution is random, 1 if it is maximally aggregated and −1 / (Σx − 1) if it is uniform.
The distribution of Green's index is not currently known so statistical tests have been difficult to devise for it.
If the population obeys Taylor's law then

GI = (a m^(b−1) − 1) / (Σx − 1).

Binary dispersal index

Binary sampling is frequently used where it is difficult to obtain accurate counts. The dispersal index (D) is used when the study population is divided into a series of equal samples. The theoretical variance of a sample from a population with a binomial distribution is

s^2 = p(1 − p) / n,

where s^2 is the variance, n is the number of units sampled and p is the mean proportion of sampling units with at least one individual present. The dispersal index is defined as the ratio of observed variance to the expected variance. In symbols

D = varobs / varbin,
where varobs is the observed variance and varbin is the expected variance. The expected variance is calculated with the overall mean of the population. Values of D > 1 are considered to suggest aggregation. D is distributed as the chi squared variable with n − 1 degrees of freedom where n is the number of units sampled.
An alternative test is the C test.
where D is the dispersal index, n is the number of units per sample and N is the number of samples. C is distributed normally. A statistically significant value of C indicates overdispersion of the population.
D is also related to intraclass correlation which is defined as
where T is the number of organisms per sample, p is the likelihood of the organism having the sought-after property, and xi is the number of organisms in the ith unit with this property. T must be the same for all sampled units. In this case with n constant
If the data can be fitted with a beta-binomial distribution then
where θ is the parameter of the distribution.

Ma's population aggregation critical density

Ma has proposed a parameter − the population aggregation critical density - to relate population density to Taylor's law.

Related statistics

A number of statistical tests are known that may be of use in applications.

de Oliveria's statistic

A related statistic suggested by de Oliveria is the difference of the variance and the mean. If the population is Poisson distributed then
where t is the Poisson parameter, s2 is the variance, m is the mean and n is the sample size. The expected value of s2 - m is zero. This statistic is distributed normally.
If the Poisson parameter in this equation is estimated by putting t = m, after a little manipulation this statistic can be written
This is almost identical to Katz's statistic, with a different term replacing n. Again OT is normally distributed with mean 0 and unit variance for large n. This statistic is the same as the Neyman–Scott statistic.
Note:
de Oliveria suggested an expression for the variance of s^2 − m in terms of the Poisson parameter t and the sample size n. He suggested that t could be estimated by putting it equal to the mean of the sample. Further investigation by Bohning showed that this estimate of the variance was incorrect. Bohning's correction is given in the equations above.

Clapham's test

In 1936 Clapham proposed using the ratio of the variance to the mean as a test statistic. In symbols

s^2 / m.

For a Poisson distribution this ratio equals 1. To test for deviations from this value he proposed testing its value against the chi square distribution with n degrees of freedom, where n is the number of sample units. The distribution of this statistic was studied further by Blackman, who noted that it was approximately normally distributed with a mean of 1 and a variance of
The derivation of the variance was re analysed by Bartlett who considered it to be
For large samples these two formulae are in approximate agreement. This test is related to the later Katz's Jn statistic.
If the population obeys Taylor's law then

s^2 / m = a m^(b−1).
Note:
A refinement of this test has also been published. These authors noted that the original test tends to detect overdispersion at higher scales even when this was not present in the data. They noted that the use of the multinomial distribution may be more appropriate than the use of a Poisson distribution for such data. The statistic θ is distributed
where N is the number of sample units, n is the total number of samples examined and xi are the individual data values.
The expectation and variance of θ are
For large N, E(θ) is approximately 1 and
If the number of individuals sampled is large this estimate of the variance is in agreement with those derived earlier. However, for smaller samples these latter estimates are more precise and should be used.