In statistics, identifiability is a property which a model must satisfy for precise inference to be possible. A model is identifiable if it is theoretically possible to learn the true values of the model's underlying parameters after obtaining an infinite number of observations from it. Mathematically, this is equivalent to saying that different values of the parameters must generate different probability distributions of the observable variables. Usually the model is identifiable only under certain technical restrictions, in which case the set of these requirements is called the identification conditions.

A model that fails to be identifiable is said to be non-identifiable or unidentifiable: two or more parametrizations are observationally equivalent. In some cases, even though a model is non-identifiable, it is still possible to learn the true values of a certain subset of the model parameters. In this case we say that the model is partially identifiable. In other cases it may be possible to learn the location of the true parameter up to a certain finite region of the parameter space, in which case the model is set identifiable.

Aside from strictly theoretical exploration of the model properties, identifiability can also be considered in a wider scope, when a model is tested against experimental data sets using identifiability analysis.
Definition
Let 𝒫 = { Pθ : θ ∈ Θ } be a statistical model where the parameter space Θ is either finite- or infinite-dimensional. We say that 𝒫 is identifiable if the mapping θ ↦ Pθ is one-to-one:

    Pθ1 = Pθ2  ⇒  θ1 = θ2  for all θ1, θ2 ∈ Θ.

This definition means that distinct values of θ should correspond to distinct probability distributions: if θ1 ≠ θ2, then also Pθ1 ≠ Pθ2. If the distributions are defined in terms of probability density functions (pdfs), then two pdfs should be considered distinct only if they differ on a set of non-zero measure (for example, the two functions ƒ1(x) = 1{0 ≤ x < 1} and ƒ2(x) = 1{0 ≤ x ≤ 1} differ only at the single point x = 1, a set of measure zero, and thus cannot be considered distinct pdfs).

Identifiability of the model in the sense of invertibility of the map θ ↦ Pθ is equivalent to being able to learn the model's true parameter if the model can be observed indefinitely long. Indeed, if {X1, X2, …} ⊆ S is the sequence of observations from the model, then by the strong law of large numbers,

    (1/T) Σt=1..T 1{Xt ∈ A}  →  Pr[Xt ∈ A]  almost surely as T → ∞,

for every measurable set A ⊆ S (here 1{…} is the indicator function). Thus, with an infinite number of observations we will be able to find the true probability distribution P0 in the model, and since the identifiability condition above requires that the map θ ↦ Pθ be invertible, we will also be able to find the true value of the parameter which generated the distribution P0.
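This convergence can be checked numerically. The following is a minimal sketch (my own illustration, not part of the article), taking an arbitrary normal model for Pθ0 and the measurable set A = [0, 1): the empirical frequency of A approaches its true probability as the sample grows.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# True parameter theta0 = (mu0, sigma0) of a normal model (arbitrary choice).
mu0, sigma0 = 1.5, 2.0

# A measurable set A = [0, 1) and its true probability under P_theta0.
true_prob = norm.cdf(1, loc=mu0, scale=sigma0) - norm.cdf(0, loc=mu0, scale=sigma0)

# Empirical frequency (1/T) * sum 1{X_t in A} for growing sample sizes T.
for T in (100, 10_000, 1_000_000):
    sample = rng.normal(mu0, sigma0, size=T)
    freq = np.mean((sample >= 0) & (sample < 1))
    print(f"T={T:>9,}: empirical={freq:.4f}   true={true_prob:.4f}")
```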
Examples
Example 1
Let 𝒫 be the normal location-scale family:

    𝒫 = { ƒθ(x) = (1/(√(2π)σ)) e^(−(x−μ)²/(2σ²)) | θ = (μ, σ) : μ ∈ ℝ, σ > 0 }.

Then

    ƒθ1 = ƒθ2
    ⇔ (1/(√(2π)σ1)) e^(−(x−μ1)²/(2σ1²)) = (1/(√(2π)σ2)) e^(−(x−μ2)²/(2σ2²))
    ⇔ (1/σ1² − 1/σ2²) x² − 2 (μ1/σ1² − μ2/σ2²) x + (μ1²/σ1² − μ2²/σ2² + 2 ln(σ1/σ2)) = 0.

This expression is equal to zero for almost all x only when all its coefficients are equal to zero, which is only possible when |σ1| = |σ2| and μ1 = μ2. Since the scale parameter σ is restricted to be greater than zero, we conclude that the model is identifiable: ƒθ1 = ƒθ2 ⇔ θ1 = θ2.
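The coefficient argument can be verified symbolically. Here is a sketch of that check (my own illustration; sympy and the variable names are assumptions, not part of the article): expand the difference of the two log-densities as a quadratic in x and confirm that its coefficients vanish only when σ1 = σ2 and μ1 = μ2.

```python
import sympy as sp

x = sp.symbols('x', real=True)
mu1, mu2 = sp.symbols('mu1 mu2', real=True)
s1, s2 = sp.symbols('sigma1 sigma2', positive=True)

# Difference of the two log-densities; f_theta1 = f_theta2 almost everywhere
# iff this quadratic polynomial in x vanishes identically.
diff = (-(x - mu1)**2 / (2*s1**2) - sp.log(s1)) \
     - (-(x - mu2)**2 / (2*s2**2) - sp.log(s2))
coeffs = sp.Poly(sp.expand(diff), x).all_coeffs()   # [x^2, x^1, x^0] coefficients

# The leading two coefficients force sigma1 = sigma2 (using sigma > 0) and mu1 = mu2.
sol = sp.solve([coeffs[0], coeffs[1]], [s1, mu1], dict=True)[0]
print(sol)                                   # {sigma1: sigma2, mu1: mu2}
print(sp.simplify(coeffs[2].subs(sol)))      # 0: the constant term also vanishes
```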
Example 2
Let 𝒫 be the standard linear regression model:

    y = β′x + ε,  E[ε | x] = 0

(where ′ denotes matrix transpose). Then the parameter β is identifiable if and only if the matrix E[xx′] is invertible. Thus, this is the identification condition in the model.
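To make the identification condition concrete, here is a small numpy sketch (my own, with arbitrary numbers): when one regressor is an exact multiple of another, E[xx′] is singular and two different values of β generate the same distribution of the observables, so β cannot be identified.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000

# Collinear design: the second regressor is an exact multiple of the first,
# so E[xx'] has rank 1 and is not invertible.
x1 = rng.normal(size=n)
X = np.column_stack([x1, 2.0 * x1])
eps = rng.normal(size=n)

# Two different parameter vectors producing the SAME distribution of y:
beta_a = np.array([1.0, 1.0])        # y = x1 + x2 + eps = 3*x1 + eps
beta_b = np.array([3.0, 0.0])        # y = 3*x1 + eps
y = X @ beta_a + eps

print(np.allclose(X @ beta_a, X @ beta_b))    # True: observationally equivalent
print(np.linalg.matrix_rank(X.T @ X))         # 1 < 2: identification fails
```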
Example 3
Suppose 𝒫 is the classical errors-in-variables linear model:

    y = βx* + ε,
    x = x* + η,

where (ε, η, x*) are jointly normal independent random variables with zero expected value and unknown variances, and only the variables (x, y) are observed. Then this model is not identifiable: only the product βσ²∗ is (where σ²∗ is the variance of the latent regressor x*). This is also an example of a set identifiable model: although the exact value of β cannot be learned, we can guarantee that it must lie somewhere in the interval (βyx, 1/βxy), where βyx is the coefficient in OLS regression of y on x, and βxy is the coefficient in OLS regression of x on y. If we abandon the normality assumption and require that x* not be normally distributed, retaining only the independence condition ε ⊥ η ⊥ x*, then the model becomes identifiable.
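These bounds can be illustrated with a short simulation (my own sketch; the parameter values are arbitrary): the OLS slope of y on x is attenuated toward zero by the measurement error, the reciprocal of the reverse-regression slope overshoots, and the true β lies between them.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

beta = 1.0                          # true slope (not identifiable from (x, y))
x_star = rng.normal(0.0, 1.0, n)    # latent regressor, variance sigma*^2 = 1
eps = rng.normal(0.0, 0.5, n)
eta = rng.normal(0.0, 0.5, n)       # measurement error

y = beta * x_star + eps             # outcome equation
x = x_star + eta                    # observed, error-contaminated regressor

C = np.cov(x, y)
b_yx = C[0, 1] / C[0, 0]            # OLS slope of y on x (biased toward zero)
b_xy = C[0, 1] / C[1, 1]            # OLS slope of x on y
print(f"{b_yx:.3f} <= beta <= {1 / b_xy:.3f}")   # brackets the true beta = 1
```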