Pointwise mutual information


Pointwise mutual information (PMI), or point mutual information, is a measure of association used in information theory and statistics. In contrast to mutual information (MI), which builds upon PMI, PMI refers to single events, whereas MI refers to the average over all possible events.

Definition

The PMI of a pair of outcomes x and y belonging to discrete random variables X and Y quantifies the discrepancy between the probability of their coincidence given their joint distribution and their individual distributions, assuming independence. Mathematically:

\operatorname{pmi}(x;y) \equiv \log_2 \frac{p(x,y)}{p(x)\,p(y)} = \log_2 \frac{p(x \mid y)}{p(x)} = \log_2 \frac{p(y \mid x)}{p(y)}

(the last two forms follow from the first by Bayes' theorem).
The mutual information (MI) of the random variables X and Y is the expected value of the PMI over all possible outcomes.
The measure is symmetric: \operatorname{pmi}(x;y) = \operatorname{pmi}(y;x). It can take positive or negative values, but is zero if X and Y are independent. Note that even though PMI may be negative or positive, its expected value over all joint events (the mutual information) is non-negative. PMI is maximized when X and Y are perfectly associated (i.e., p(x|y) = 1 or p(y|x) = 1), yielding the following bounds:

-\infty \le \operatorname{pmi}(x;y) \le \min\left[-\log_2 p(x),\; -\log_2 p(y)\right]

Finally, \operatorname{pmi}(x;y) will increase if p(x|y) is fixed but p(x) decreases.
Here is an example to illustrate:
x   y   p(x, y)
0   0   0.10
0   1   0.70
1   0   0.15
1   1   0.05

Using this table we can marginalize to get the following additional table for the individual distributions:

p(x = 0) = 0.8    p(y = 0) = 0.25
p(x = 1) = 0.2    p(y = 1) = 0.75
With this example, we can compute four values for \operatorname{pmi}(x;y). Using base-2 logarithms:

\operatorname{pmi}(x{=}0;\, y{=}0) = \log_2 \frac{0.1}{0.8 \times 0.25} = -1
\operatorname{pmi}(x{=}0;\, y{=}1) = \log_2 \frac{0.7}{0.8 \times 0.75} \approx 0.222392
\operatorname{pmi}(x{=}1;\, y{=}0) = \log_2 \frac{0.15}{0.2 \times 0.25} \approx 1.584963
\operatorname{pmi}(x{=}1;\, y{=}1) = \log_2 \frac{0.05}{0.2 \times 0.75} \approx -1.584963
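To make the computation concrete, here is a minimal Python sketch that reproduces these four values from the joint table and then averages them to get the mutual information; all variable and function names here are illustrative, not from any particular library:

from math import log2

# Joint distribution p(x, y) from the example table above.
p_xy = {(0, 0): 0.10, (0, 1): 0.70, (1, 0): 0.15, (1, 1): 0.05}

# Marginals obtained by summing out the other variable.
p_x = {x: sum(p for (xi, _), p in p_xy.items() if xi == x) for x in (0, 1)}
p_y = {y: sum(p for (_, yi), p in p_xy.items() if yi == y) for y in (0, 1)}

def pmi(x, y):
    """Pointwise mutual information in bits (base-2 logarithm)."""
    return log2(p_xy[(x, y)] / (p_x[x] * p_y[y]))

for x, y in sorted(p_xy):
    print(f"pmi(x={x}; y={y}) = {pmi(x, y):+.6f}")

# The mutual information is the expected value of the PMI over the joint
# distribution; as noted above, it is non-negative (about 0.214 bits here).
mi = sum(p * pmi(x, y) for (x, y), p in p_xy.items())
print(f"MI(X;Y) = {mi:.6f}")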

Similarities to mutual information

Pointwise mutual information has many of the same relationships as the mutual information. In particular,

\operatorname{pmi}(x;y) = h(x) + h(y) - h(x,y) = h(x) - h(x \mid y) = h(y) - h(y \mid x)

where h(x) is the self-information, or -\log_2 p(x).
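Continuing the sketch above, the first of these identities can be checked numerically on the example distribution; h is an illustrative helper, not a standard library function:

from math import log2

def h(p):
    """Self-information in bits of an event with probability p."""
    return -log2(p)

# Check pmi(x;y) = h(x) + h(y) - h(x,y) for x = 1, y = 0 of the earlier
# example: p(x=1) = 0.2, p(y=0) = 0.25, p(x=1, y=0) = 0.15.
assert abs(log2(0.15 / (0.2 * 0.25)) - (h(0.2) + h(0.25) - h(0.15))) < 1e-12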

Normalized pointwise mutual information (npmi)

Pointwise mutual information can be normalized between [-1, +1], resulting in -1 for never occurring together, 0 for independence, and +1 for complete co-occurrence:

\operatorname{npmi}(x;y) = \frac{\operatorname{pmi}(x;y)}{h(x,y)}

where h(x,y) is the joint self-information, which is estimated as -\log_2 p(x,y).
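A minimal sketch of this normalization, assuming the probabilities have already been estimated (the function name npmi is illustrative):

from math import log2

def npmi(p_x, p_y, p_xy):
    """Normalized PMI: pmi(x;y) divided by the joint self-information h(x,y)."""
    return log2(p_xy / (p_x * p_y)) / -log2(p_xy)

print(npmi(0.5, 0.5, 0.25))  # 0.0: independence, since p(x,y) = p(x)p(y)
print(npmi(0.1, 0.1, 0.1))   # ~1.0: complete co-occurrence, p(x) = p(y) = p(x,y)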

PMI variants

In addition to the above-mentioned npmi, PMI has many other interesting variants; comparative studies of these variants can be found in the literature.

Chain rule for PMI

Like mutual information, pointwise mutual information follows the chain rule, that is,

\operatorname{pmi}(x;yz) = \operatorname{pmi}(x;y) + \operatorname{pmi}(x;z \mid y)

This is easily proven by:

\begin{align}
\operatorname{pmi}(x;y) + \operatorname{pmi}(x;z \mid y)
  &= \log_2 \frac{p(x,y)}{p(x)\,p(y)} + \log_2 \frac{p(x,z \mid y)}{p(x \mid y)\,p(z \mid y)} \\
  &= \log_2 \left[ \frac{p(x,y)}{p(x)\,p(y)} \cdot \frac{p(x,z \mid y)}{p(x \mid y)\,p(z \mid y)} \right] \\
  &= \log_2 \frac{p(x \mid y)\,p(y)\,p(x,z \mid y)}{p(x)\,p(y)\,p(x \mid y)\,p(z \mid y)} \\
  &= \log_2 \frac{p(x,y,z)}{p(x)\,p(y,z)} \\
  &= \operatorname{pmi}(x;yz)
\end{align}
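As a quick sanity check, the identity can also be verified numerically on an arbitrary three-variable distribution; everything below (the random weights, the marg helper) is made up for illustration:

from itertools import product
from math import log2
import random

random.seed(0)

# An arbitrary joint distribution p(x, y, z) over three binary variables.
weights = {k: random.random() for k in product((0, 1), repeat=3)}
total = sum(weights.values())
p_xyz = {k: w / total for k, w in weights.items()}

def marg(keep):
    """Marginalize p(x, y, z) onto the index positions listed in keep."""
    out = {}
    for k, p in p_xyz.items():
        kk = tuple(k[i] for i in keep)
        out[kk] = out.get(kk, 0.0) + p
    return out

p_x, p_y, p_xy, p_yz = marg([0]), marg([1]), marg([0, 1]), marg([1, 2])

x, y, z = 1, 0, 1
pmi_x_yz = log2(p_xyz[(x, y, z)] / (p_x[(x,)] * p_yz[(y, z)]))
pmi_x_y = log2(p_xy[(x, y)] / (p_x[(x,)] * p_y[(y,)]))
# Conditional PMI: log2 of p(x,z|y) / (p(x|y) p(z|y)), with conditionals
# obtained by dividing joint probabilities by p(y).
pmi_x_z_given_y = log2(
    (p_xyz[(x, y, z)] / p_y[(y,)])
    / ((p_xy[(x, y)] / p_y[(y,)]) * (p_yz[(y, z)] / p_y[(y,)]))
)

# Chain rule: pmi(x; yz) = pmi(x; y) + pmi(x; z|y).
assert abs(pmi_x_yz - (pmi_x_y + pmi_x_z_given_y)) < 1e-9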

Applications

In computational linguistics, PMI has been used for finding collocations and associations between words. For instance, counts of occurrences and co-occurrences of words in a text corpus can be used to approximate the probabilities p(x) and p(x,y), respectively. The following table shows counts of pairs of words that get the most and the least PMI scores in the first 50 million words in Wikipedia, filtering by 1,000 or more co-occurrences. The frequency of each count can be obtained by dividing its value by 50,000,952, the total number of words.
word 1    word 2       count word 1    count word 2    count of co-occurrences    PMI
puerto    rico         1938            1311            1159                       10.0349081703
hong      kong         2438            2694            2205                       9.72831972408
los       angeles      3501            2808            2791                       9.56067615065
carbon    dioxide      4265            1353            1032                       9.09852946116
prize     laureate     5131            1676            1210                       8.85870710982
san       francisco    5237            2477            1779                       8.83305176711
nobel     prize        4098            5131            2498                       8.68948811416
ice       hockey       5607            3002            1933                       8.6555759741
star      trek         8264            1594            1489                       8.63974676575
car       driver       5578            2749            1384                       8.41470768304
it        the          283891          3293296         3347                       -1.72037278119
are       of           234458          1761436         1019                       -2.09254205335
this      the          199882          3293296         1211                       -2.38612756961
is        of           565679          1761436         1562                       -2.54614706831
and       of           1375396         1761436         2949                       -2.79911817902
a         and          984442          1375396         1457                       -2.92239510038
in        and          1187652         1375396         1537                       -3.05660070757
to        and          1025659         1375396         1286                       -3.08825363041
to        in           1025659         1187652         1066                       -3.12911348956
of        and          1761436         1375396         1190                       -3.70663100173

Good collocation pairs have high PMI because the probability of co-occurrence is only slightly lower than the probabilities of occurrence of each word. Conversely, a pair of words whose probabilities of occurrence are considerably higher than their probability of co-occurrence gets a small PMI score.
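For illustration, the same score can be computed directly from corpus counts; the helper below is a sketch, and note that reproducing the exact PMI values in the table above requires the natural logarithm rather than base 2 (the choice of base only rescales the scores and leaves the ranking unchanged):

from math import log

def pmi_from_counts(c_x, c_y, c_xy, n_words):
    """PMI estimated from counts: p(x) ~ c_x/N, p(y) ~ c_y/N, p(x,y) ~ c_xy/N."""
    return log((c_xy / n_words) / ((c_x / n_words) * (c_y / n_words)))

N = 50_000_952  # total number of words in the corpus above

# "puerto rico": co-occurrence nearly as frequent as either word alone.
print(pmi_from_counts(1938, 1311, 1159, N))        # ~10.0349
# "of and": both very frequent, yet they rarely co-occur.
print(pmi_from_counts(1761436, 1375396, 1190, N))  # ~-3.7066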