Parallel computation thesis


In computational complexity theory, the parallel computation thesis is a hypothesis stating that the time used by a parallel machine is polynomially related to the space used by a sequential machine. It was set forth by Chandra and Stockmeyer in 1976.
In other words, for a computational model which allows computations to branch and run in parallel without bound, a formal language which is decidable under the model using no more than T(n) steps for inputs of length n is decidable by a non-branching machine using no more than T(n)^k units of storage for some constant k. Similarly, if a machine in the unbranching model decides a language using no more than S(n) storage, a machine in the parallel model can decide the language in no more than S(n)^k steps for some constant k.
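Stated in complexity-class terms, and using the conventional names PARALLEL-TIME and SPACE for the relevant resource-bounded classes (a standard formulation rather than a quotation from the original paper), the thesis asserts the following equality for any time bound T(n) ≥ log n:

\[
  % The thesis as an equality of resource-bounded classes (illustrative notation).
  \bigcup_{k \ge 1} \mathrm{PARALLEL\text{-}TIME}\bigl(T(n)^{k}\bigr)
  \;=\;
  \bigcup_{k \ge 1} \mathrm{SPACE}\bigl(T(n)^{k}\bigr).
\]

In particular, taking T(n) polynomial in the input length gives the familiar consequence that the problems solvable in polynomial time on such an unbounded-parallelism machine are exactly those in PSPACE.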
The parallel computation thesis is not a rigorous formal statement, as it does not clearly define what constitutes an acceptable parallel model. A parallel machine must be sufficiently powerful to emulate the sequential machine in time polynomially related to the sequential space, yet not so powerful that it escapes this polynomial relationship; compare Turing machine, non-deterministic Turing machine, and alternating Turing machine. N. Blum introduced a model for which the thesis does not hold.
However, the model allows 2^(2^(T(n))) parallel threads of computation after T(n) steps. Parberry suggested a more "reasonable" bound would be 2^(T(n)) or 2^(T(n)^(O(1))), in defense of the thesis.
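The single-exponential bound reflects a standard counting argument (sketched here as background, not taken from Blum's or Parberry's papers): a sequential machine with q states, a work tape of length S(n) over an alphabet of constant size c, and an input of length n has at most

\[
  % Configuration count: state, input-head position, work-head position, tape contents.
  q \cdot n \cdot S(n) \cdot c^{S(n)} \;=\; 2^{O(S(n))}
  \qquad \text{(for } S(n) \ge \log n\text{)}
\]

distinct configurations, so exponentially many parallel threads in the space bound already suffice to explore its entire configuration graph; a model permitted doubly exponentially many threads is granted power beyond what the polynomial time-space relationship of the thesis can account for.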
Goldschlager proposed a model that is sufficiently universal to emulate all "reasonable" parallel models and that adheres to the thesis.
Chandra and Stockmeyer originally formalized and proved results related to the thesis for deterministic and alternating Turing machines, the setting in which the thesis originated.
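The standard alternation results (stated here in conventional notation as background, not as a verbatim quotation from the original paper) make the polynomial relationship explicit, with alternating time playing the role of parallel time: for T(n) ≥ n,

\[
  % Alternating time is sandwiched between space bounds that are polynomially related to it.
  \mathrm{ATIME}\bigl(T(n)\bigr) \subseteq \mathrm{DSPACE}\bigl(T(n)\bigr)
  \subseteq \mathrm{NSPACE}\bigl(T(n)\bigr) \subseteq \mathrm{ATIME}\bigl(T(n)^{2}\bigr),
\]

so, in particular, alternating polynomial time coincides with PSPACE.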