Fisher kernel


In statistical classification, the Fisher kernel, named after Ronald Fisher, is a function that measures the similarity of two objects on the basis of sets of measurements for each object and a statistical model. In a classification procedure, the class for a new object can be estimated by minimising, across classes, an average of the Fisher kernel distance from the new object to each known member of the given class.
The Fisher kernel was introduced in 1998. It combines the advantages of generative statistical models (such as hidden Markov models, which can handle data of variable length) with those of discriminative methods (such as support vector machines, which often yield better classification performance).

Fisher score

The Fisher kernel makes use of the Fisher score, defined as

$$U_X = \nabla_{\theta} \log P(X \mid \theta)$$

with θ being a set (vector) of parameters. The function θ ↦ log P(X | θ) is the log-likelihood of the probabilistic model.
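For concreteness, the score has a simple closed form when the model is a univariate Gaussian with parameters θ = (μ, σ). The sketch below is a minimal illustration of that special case (the function name is illustrative, not from any library):

```python
import numpy as np

def fisher_score_gaussian(x, mu, sigma):
    """Fisher score U_x: gradient of log p(x | mu, sigma) w.r.t. (mu, sigma)
    for a univariate Gaussian, using the closed-form derivatives."""
    d_mu = (x - mu) / sigma**2                      # d log p / d mu
    d_sigma = ((x - mu)**2 - sigma**2) / sigma**3   # d log p / d sigma
    return np.array([d_mu, d_sigma])

# Score of the observation x = 0.5 under the model N(0, 1)
print(fisher_score_gaussian(x=0.5, mu=0.0, sigma=1.0))  # [ 0.5  -0.75]
```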

Fisher kernel

The Fisher kernel is defined as

$$K(X_i, X_j) = U_{X_i}^{\top} \, \mathcal{I}^{-1} \, U_{X_j}$$

with $\mathcal{I}$ being the Fisher information matrix.
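Continuing the Gaussian example above, the Fisher information matrix is diag(1/σ², 2/σ²), so the kernel reduces to a weighted inner product of the two score vectors. A minimal sketch, reusing fisher_score_gaussian from the previous snippet (names are illustrative):

```python
import numpy as np

def fisher_kernel(u_i, u_j, information):
    """K(X_i, X_j) = U_{X_i}^T I^{-1} U_{X_j}, with I the Fisher information matrix."""
    return u_i @ np.linalg.solve(information, u_j)

mu, sigma = 0.0, 1.0
info = np.diag([1.0 / sigma**2, 2.0 / sigma**2])   # Fisher information of N(mu, sigma^2)
u_a = fisher_score_gaussian(0.5, mu, sigma)
u_b = fisher_score_gaussian(-1.2, mu, sigma)
print(fisher_kernel(u_a, u_b, info))
```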

Applications

Information retrieval

The Fisher kernel is the kernel for a generative probabilistic model. As such, it constitutes a bridge between generative and discriminative models of documents. Fisher kernels exist for numerous models, notably tf–idf, Naive Bayes and probabilistic latent semantic analysis.

Image classification and retrieval

The Fisher kernel can also be applied to image representation for classification or retrieval problems. Currently, the most popular bag-of-visual-words representation suffers from sparsity and high dimensionality. The Fisher kernel can result in a compact and dense representation, which is more desirable for image classification and retrieval problems.
The Fisher Vector (FV), a special, approximate, and improved case of the general Fisher kernel, is an image representation obtained by pooling local image features. The FV encoding stores, for each component k of a Gaussian mixture model, the mean and covariance deviation vectors accumulated over all local feature descriptors. In a systematic comparison, FV outperformed all compared encoding methods (Kernel Codebook encoding, Locality-constrained Linear Coding, Vector of Locally Aggregated Descriptors), showing that encoding second-order information indeed benefits classification performance.
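The following sketch illustrates this encoding under simplifying assumptions: a diagonal-covariance GMM fitted with scikit-learn's GaussianMixture, and only the mean and variance deviation statistics described above. Function names and the final power- and L2-normalisation (a common post-processing step for Fisher Vectors) are illustrative choices, not a definitive implementation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """Simplified Fisher Vector: first- and second-order deviation statistics
    of local descriptors w.r.t. a fitted diagonal-covariance GMM.
    descriptors: (N, D) array of local features; gmm: fitted GaussianMixture."""
    N, D = descriptors.shape
    gamma = gmm.predict_proba(descriptors)          # (N, K) soft assignments
    mu = gmm.means_                                  # (K, D) component means
    sigma = np.sqrt(gmm.covariances_)                # (K, D) diagonal std devs
    w = gmm.weights_                                 # (K,)  mixture weights

    fv = []
    for k in range(gmm.n_components):
        diff = (descriptors - mu[k]) / sigma[k]      # normalized residuals
        g_k = gamma[:, [k]]                          # (N, 1)
        # Deviation w.r.t. the means (first-order statistics)
        d_mu = (g_k * diff).sum(axis=0) / (N * np.sqrt(w[k]))
        # Deviation w.r.t. the standard deviations (second-order statistics)
        d_sigma = (g_k * (diff**2 - 1)).sum(axis=0) / (N * np.sqrt(2 * w[k]))
        fv.extend([d_mu, d_sigma])
    fv = np.concatenate(fv)
    # Power- and L2-normalisation, as commonly applied to Fisher Vectors
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    return fv / (np.linalg.norm(fv) + 1e-12)

# Toy usage: 500 local descriptors of dimension 8, encoded with a 4-component GMM
rng = np.random.default_rng(0)
local_feats = rng.normal(size=(500, 8))
gmm = GaussianMixture(n_components=4, covariance_type='diag', random_state=0).fit(local_feats)
print(fisher_vector(local_feats, gmm).shape)  # (64,) = 2 * K * D
```

Note that the resulting representation is dense and of fixed dimension 2·K·D regardless of the number of local descriptors, which is what makes it attractive compared to sparse, high-dimensional bag-of-visual-words histograms.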