IBM alignment models


IBM alignment models are a sequence of increasingly complex models used in statistical machine translation to train a translation model and an alignment model, starting with lexical translation probabilities and moving on to reordering and word duplication. They underpinned the majority of statistical machine translation systems for almost twenty years starting in the early 1990s, until neural machine translation began to dominate. These models offer a principled probabilistic formulation and tractable inference.
The original work on statistical machine translation at IBM proposed five models, and a model 6 was proposed later. The sequence of the six models can be summarized as:

Model 1: lexical translation
Model 2: additional absolute alignment model
Model 3: extra fertility model
Model 4: added relative alignment model
Model 5: fixed deficiency
Model 6: Model 4 combined with an HMM alignment model in a log-linear way
IBM Model 1 is weak at modeling reordering and at adding or dropping words. In most cases, words that follow each other in one language would have a different order after translation, but IBM Model 1 treats all kinds of reordering as equally possible.
Another problem when aligning is fertility: in most cases one input word is translated into a single word, but some words produce multiple words or even get dropped. Modeling the fertility of words addresses this aspect of translation. While adding these components increases the complexity of the models, the main principles of IBM Model 1 remain unchanged.
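The order-blindness of Model 1 can be seen in its scoring formula, $p(e \mid f) = \frac{\epsilon}{(l_f + 1)^{l_e}} \prod_{j=1}^{l_e} \sum_{i=0}^{l_f} t(e_j \mid f_i)$, which depends only on which lexical pairs occur, not on their positions. The following minimal Python sketch makes this concrete; the translation table and the toy German–English sentence are invented for illustration.

```python
def model1_prob(e, f, t, epsilon=1.0):
    """IBM Model 1: p(e | f) = eps / (l_f + 1)^l_e * prod_j sum_i t(e_j | f_i),
    where position 0 of the input is the NULL word."""
    f_null = ["NULL"] + f
    prob = epsilon / (len(f_null) ** len(e))
    for e_j in e:
        # Each output word may align to any input position; Model 1 sums over all of them.
        prob *= sum(t.get((e_j, f_i), 0.0) for f_i in f_null)
    return prob

# Toy lexical translation table t(e | f); the numbers are made up for illustration.
t = {("the", "das"): 0.7, ("house", "Haus"): 0.8,
     ("is", "ist"): 0.9, ("small", "klein"): 0.6}

f = ["das", "Haus", "ist", "klein"]
print(model1_prob(["the", "house", "is", "small"], f, t))   # fluent order
print(model1_prob(["small", "is", "house", "the"], f, t))   # scrambled order, same score
```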

Model 2

The IBM Model 2 has an additional model for alignment that is not present in Model 1. For example, using only IBM Model 1, a correctly ordered translation and a translation containing the same output words in a scrambled order would receive the same probability.
The IBM Model 2 addressed this issue by modeling the translation of a foreign input word in position $i$ to a native language word in position $j$ using an alignment probability distribution defined as:

$a(i \mid j, l_e, l_f)$

In the above, the length of the input sentence $f$ is denoted as $l_f$, and the length of the translated sentence $e$ as $l_e$. The translation done by IBM Model 2 can be presented as a process divided into two steps: lexical translation and alignment.
Assuming $t(e \mid f)$ is the translation probability and $a(i \mid j, l_e, l_f)$ is the alignment probability, IBM Model 2 can be defined as:

$p(e, a \mid f) = \epsilon \prod_{j=1}^{l_e} t(e_j \mid f_{a(j)}) \, a(a(j) \mid j, l_e, l_f)$

In this equation, the alignment function $a(\cdot)$ maps each output word position $j$ to a foreign input position $a(j)$.
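A minimal sketch of how a (translation, alignment) pair would be scored under Model 2, assuming toy dictionaries for the translation table $t(e \mid f)$ and the alignment table $a(i \mid j, l_e, l_f)$; the tables and the example sentence are hypothetical.

```python
def model2_prob(e, f, align, t, a, epsilon=1.0):
    """p(e, a | f) = eps * prod_j t(e_j | f_a(j)) * a(a(j) | j, l_e, l_f).
    `align[j]` gives the input position a(j) for output position j (both 1-based)."""
    l_e, l_f = len(e), len(f)
    prob = epsilon
    for j in range(1, l_e + 1):
        i = align[j]
        prob *= t[(e[j - 1], f[i - 1])]      # lexical translation step
        prob *= a[(i, j, l_e, l_f)]          # alignment (position) step
    return prob

# Hypothetical two-word example: f = "Haus klein" -> e = "small house".
t = {("small", "klein"): 0.6, ("house", "Haus"): 0.8}
a = {(2, 1, 2, 2): 0.4, (1, 2, 2, 2): 0.4}   # made-up alignment probabilities
print(model2_prob(["small", "house"], ["Haus", "klein"], {1: 2, 2: 1}, t, a))
```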

Model 3

The fertility problem is addressed in IBM Model 3. The fertility is modeled using a probability distribution defined as:

$n(\phi \mid f)$

For each foreign word $f$, such a distribution indicates how many output words $\phi$ it usually translates to. This model deals with dropping input words because it allows $\phi = 0$. But there is still an issue when adding words. For example, the English word do is often inserted when negating. This issue is handled by introducing a special NULL token, whose fertility can also be modeled using a conditional distribution defined as:

$n(\phi \mid \mathrm{NULL})$

The number of inserted words depends on the sentence length. This is why the NULL token insertion is modeled as an additional step: the fertility step. It increases the IBM Model 3 translation process to four steps: fertility, NULL insertion, lexical translation, and distortion.
The last step is called distortion instead of alignment because it is possible to produce the same translation with the same alignment in different ways.
IBM Model 3 can be mathematically expressed as:

$p(e, a \mid f) = \prod_{i=1}^{l_f} \phi_i! \, n(\phi_i \mid f_i) \prod_{j=1}^{l_e} t(e_j \mid f_{a(j)}) \, d(j \mid a(j), l_e, l_f)$

where $\phi_i$ represents the fertility of $f_i$, each source word $f_i$ is assigned a fertility distribution $n$, and $l_e$ and $l_f$ refer to the absolute lengths of the target and source sentences, respectively.
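The following sketch scores a sentence pair and alignment under the simplified expression above, assuming toy dictionaries n, t and d for the fertility, translation and distortion tables; the NULL-related terms are left out, as in the formula.

```python
from math import factorial

def model3_prob(e, f, align, n, t, d):
    """Simplified Model 3 score: fertility, lexical translation and distortion factors.
    `align[j]` gives the input position a(j) for output position j (both 1-based)."""
    l_e, l_f = len(e), len(f)
    # Fertility phi_i: number of output words aligned to input word f_i.
    phi = {i: sum(1 for j in align if align[j] == i) for i in range(1, l_f + 1)}
    prob = 1.0
    for i in range(1, l_f + 1):                       # fertility factors
        prob *= factorial(phi[i]) * n[(phi[i], f[i - 1])]
    for j in range(1, l_e + 1):                       # translation and distortion factors
        i = align[j]
        prob *= t[(e[j - 1], f[i - 1])]
        prob *= d[(j, i, l_e, l_f)]
    return prob
```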

Model 4

In IBM Model 4, each word is dependent on the previously aligned word and on the word classes of the surrounding words. Some words tend to get reordered during translation more than others; for example, an adjective often moves to a position before the noun that precedes it in the source sentence. The word classes introduced in Model 4 address this by conditioning the distortion probability distributions on these classes. The result is a lexicalized model. Such a distribution can be defined as follows:

For the initial word in the cept:

$d_1(j - \odot_{[i-1]} \mid A(f_{[i-1]}), B(e_j))$

For additional words:

$d_{>1}(j - \pi_{i,k-1} \mid B(e_j))$

where the functions $A(\cdot)$ and $B(\cdot)$ map words to their word classes, $d_1$ and $d_{>1}$ are the distortion probability distributions of the words, $\odot_{[i-1]}$ is the center of the previous cept, and $\pi_{i,k-1}$ is the position of the previous word of the same cept. The cept is formed by aligning each input word $f_i$ to at least one output word.
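A sketch of how the two distributions above would be applied when scoring the placement of one cept's output words. The word-class functions A and B, the distortion tables d1 and d_gt1, and the choice of the ceiling of the average position as the cept center are assumptions made for illustration.

```python
import math

def cept_distortion(prev_positions, positions, prev_f_word, e, A, B, d1, d_gt1):
    """Model 4 distortion score for one cept.
    prev_positions: output positions of the previous cept ([] if there is none).
    positions: sorted output positions of the current cept.
    A, B: functions mapping source/target words to their word classes.
    d1, d_gt1: distortion tables keyed by (relative displacement, class info)."""
    # Center of the previous cept, taken here as the ceiling of its average position.
    center = math.ceil(sum(prev_positions) / len(prev_positions)) if prev_positions else 0
    prob = 1.0
    for k, j in enumerate(positions):
        if k == 0:
            # Initial word: placed relative to the previous cept's center, conditioned
            # on the class of the previous source word and of the output word e_j.
            prob *= d1[(j - center, A(prev_f_word), B(e[j - 1]))]
        else:
            # Additional words: placed relative to the previous word of the same cept.
            prob *= d_gt1[(j - positions[k - 1], B(e[j - 1]))]
    return prob
```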
Both Model 3 and Model 4 ignore whether an input position has already been chosen and may reserve probability mass for input positions outside the sentence boundaries. This is why the probabilities of all correct alignments do not sum up to unity in these two models: they are deficient.

Model 5

IBM Model 5 reformulates IBM Model 4 by enhancing the alignment model with more training parameters in order to overcome the model deficiency. During translation in Model 3 and Model 4 there are no heuristics that would prohibit the placement of an output word in a position that is already taken. In Model 5 it is important to place words only in free positions. This is done by tracking the number of free positions and allowing placement only in such positions. The distortion model is similar to that of IBM Model 4, but it is based on free positions. If $v_j$ denotes the number of free positions in the output, the IBM Model 5 distortion probabilities are defined as:

For the initial word in the cept:

$d_1(v_j \mid B(e_j), v_{\odot_{i-1}}, v_{\max})$

For additional words:

$d_{>1}(v_j - v_{\pi_{i,k-1}} \mid B(e_j), v_{\max}')$
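A small sketch of the vacancy bookkeeping on which Model 5's distortion is based: it tracks which output positions are still free and only allows a word to be placed into one of them. The helper names are hypothetical.

```python
def vacancies(taken, upto):
    """v_j: number of free output positions among 1..upto."""
    return sum(1 for pos in range(1, upto + 1) if pos not in taken)

def place_word(taken, j):
    """Model 5 only ever places a word into a free position."""
    if j in taken:
        raise ValueError(f"output position {j} is already occupied")
    taken.add(j)

taken = {2, 5}                 # positions already filled in a 6-word output
print(vacancies(taken, 3))     # v_3 = 2 free positions up to and including position 3
place_word(taken, 4)           # allowed: position 4 was still vacant
```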

Model 6

The alignment models that use first-order dependencies, such as the HMM or IBM Models 4 and 5, produce better results than the other alignment methods. The main idea of the HMM is to predict the distance between subsequent source language positions, while IBM Model 4 tries to predict the distance between subsequent target language positions. Since better alignment quality was expected from using both types of dependencies, the HMM and Model 4 were combined in a log-linear manner in Model 6 as follows:

$p_6(f, a \mid e) = \frac{p_4(f, a \mid e)^{\alpha} \cdot p_{HMM}(f, a \mid e)}{\sum_{a', f'} p_4(f', a' \mid e)^{\alpha} \cdot p_{HMM}(f', a' \mid e)}$

where the interpolation parameter $\alpha$ is used to weight Model 4 relative to the hidden Markov model. A log-linear combination of several models $p_m(f, a \mid e)$ with $m = 1, \dots, M$ can be defined as:

$p(f, a \mid e) = \frac{\prod_{m=1}^{M} p_m(f, a \mid e)^{\alpha_m}}{\sum_{a', f'} \prod_{m=1}^{M} p_m(f', a' \mid e)^{\alpha_m}}$

The log-linear combination is used instead of a linear combination because the $p(f, a \mid e)$ values are typically different in terms of their orders of magnitude for the HMM and IBM Model 4.
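A sketch of the log-linear combination over a small candidate set, assuming the two component models are available as scoring functions; the normalization here runs only over the supplied candidates, whereas the formula above sums over all alignments and sentences.

```python
def combine_model6(candidates, p4, p_hmm, alpha):
    """p6(f, a | e) is proportional to p4(f, a | e)^alpha * p_hmm(f, a | e),
    normalized over the given list of (f, a) candidates."""
    scores = {idx: (p4(f, a) ** alpha) * p_hmm(f, a)
              for idx, (f, a) in enumerate(candidates)}
    z = sum(scores.values())
    return {idx: s / z for idx, s in scores.items()}
```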