Transdichotomous model
In computational complexity theory, and more specifically in the analysis of algorithms with integer data, the transdichotomous model is a variation of the random access machine in which the machine word size is assumed to match the problem size. The model was proposed by Michael Fredman and Dan Willard, who chose its name "because the dichotomy between the machine model and the problem size is crossed in a reasonable manner."
In a problem such as integer sorting in which there are n integers to be sorted, the transdichotomous model assumes that each integer may be stored in a single word of computer memory, that operations on single words take constant time per operation, and that the number of bits that can be stored in a single word is at least log₂ n. The goal of complexity analysis in this model is to find time bounds that depend only on n and not on the actual size of the input values or the machine words. In modeling integer computation, it is necessary to assume that machine words are limited in size, because models with unlimited precision are unreasonably powerful. The transdichotomous model makes a minimal assumption of this type: that there is some limit, and that it is large enough to allow random-access indexing into the input data.
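To illustrate the kind of constant-time word operation the model permits, the following C sketch shows word-level parallelism in the style of Fredman and Willard's fusion trees: several short keys are packed into one machine word, and a single subtraction compares all of them against a query at once. The field width, key count, and values here are illustrative choices for a 64-bit word, not part of the model itself.

    #include <stdint.h>
    #include <stdio.h>

    #define F 16   /* bits per field: 15 data bits plus 1 "test" bit */
    #define K 4    /* fields packed into one 64-bit word */

    /* H has the test (top) bit of every field set: bits 15, 31, 47, 63. */
    static const uint64_t H = 0x8000800080008000ULL;

    /* Pack K 15-bit keys into one word, lowest field first. */
    static uint64_t pack(const uint16_t keys[K]) {
        uint64_t w = 0;
        for (int i = 0; i < K; i++)
            w |= (uint64_t)(keys[i] & 0x7FFF) << (i * F);
        return w;
    }

    /* Replicate one 15-bit query into every field of a word. */
    static uint64_t replicate(uint16_t q) {
        uint64_t w = 0;
        for (int i = 0; i < K; i++)
            w |= (uint64_t)(q & 0x7FFF) << (i * F);
        return w;
    }

    int main(void) {
        const uint16_t keys[K] = {3, 100, 29999, 7};
        uint64_t X = pack(keys);
        uint64_t Y = replicate(100);

        /* One word-sized subtraction compares all K keys against 100:
           field i of (X | H) holds keys[i] + 0x8000, so after subtracting
           100 from every field, the test bit of field i survives exactly
           when keys[i] >= 100.  No borrow crosses field boundaries because
           each field of (X | H) is at least as large as the field of Y. */
        uint64_t ge = ((X | H) - Y) & H;

        for (int i = 0; i < K; i++)
            printf("keys[%d] = %5u  >= 100 : %s\n", i, keys[i],
                   (ge >> (i * F + 15)) & 1 ? "yes" : "no");
        return 0;
    }

Because the subtraction, OR, and AND each act on a whole word in constant time, this sketch decides K comparisons in O(1) word operations, which is the sort of speedup over comparison-based bounds that transdichotomous algorithms exploit.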
Besides integer sorting, the transdichotomous model has been applied to the design of priority queues and to problems in computational geometry and graph algorithms.