Hypercomputation
Hypercomputation or super-Turing computation refers to models of computation that can provide outputs that are not Turing-computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that could correctly evaluate every statement in Peano arithmetic.
The Church–Turing thesis states that any "computable" function, that is, any function a mathematician could compute with pen and paper by following a finite set of simple rules, can be computed by a Turing machine. Hypercomputers compute functions that a Turing machine cannot; such functions are therefore not computable in the Church–Turing sense.
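To make the boundary concrete, the following minimal Python sketch reproduces the classical diagonal argument for the halting problem; the `halts` stub is hypothetical and merely stands in for the oracle a hypercomputer would supply.

```python
# Sketch of the classical diagonal argument. The function `halts` is a
# hypothetical stub standing in for a hypercomputer's oracle; no
# Turing-computable implementation of it can exist.

def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("not Turing-computable")

def diagonal(program):
    # Do the opposite of what the oracle predicts about self-application.
    if halts(program, program):
        while True:      # loop forever if the oracle says "halts"
            pass
    return "halted"      # halt if the oracle says "loops"

# Applying `diagonal` to itself is contradictory either way:
#   halts(diagonal, diagonal) == True  -> diagonal(diagonal) loops forever
#   halts(diagonal, diagonal) == False -> diagonal(diagonal) halts
# So `halts` cannot be realized by any Turing machine; a device that
# nonetheless computed it would, by definition, be a hypercomputer.
```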
Technically, the output of a random Turing machine is uncomputable; however, most hypercomputing literature focuses on the computation of deterministic, rather than random, uncomputable functions.
History
A computational model going beyond Turing machines was introduced by Alan Turing in his 1938 PhD dissertation, Systems of Logic Based on Ordinals. This paper investigated mathematical systems in which an oracle was available that could compute a single arbitrary function from naturals to naturals. He used this device to prove that even in those more powerful systems, undecidability is still present. Turing's oracle machines are mathematical abstractions and are not physically realizable.
State space
In a sense, most functions are uncomputable: there are only countably many (ℵ₀) computable functions, but uncountably many (2^ℵ₀) possible super-Turing functions.
Hypercomputer models
Hypercomputer models range from useful but probably unrealizable ones to less useful random-function generators that are more plausibly "realizable".
Hypercomputers with uncomputable inputs or black-box components
A system granted knowledge of the uncomputable, oracular Chaitin's constant as an input can solve a large number of useful undecidable problems; a system granted an uncomputable random-number generator as an input can create random uncomputable functions, but is generally not believed to be able to meaningfully solve "useful" uncomputable problems such as the halting problem (a toy sketch of how an oracle for Chaitin's constant decides halting follows this list). There is no limit to the variety of conceivable hypercomputers, including:
- Turing's original oracle machines, defined by Turing in 1939.
- A real computer can perform hypercomputation if physics admits general real variables, and these are in some way "harnessable" for useful computation. This might require quite bizarre laws of physics, and would require the ability to measure the real-valued physical value to arbitrary precision.
  - Similarly, a neural net that somehow had Chaitin's constant exactly embedded in its weight function would be able to solve the halting problem, though constructing such an infinitely precise neural net, even if one somehow knew Chaitin's constant beforehand, is impossible under the laws of quantum mechanics.
- Certain fuzzy logic-based "fuzzy Turing machines" can, by definition, accidentally solve the halting problem, but only because their ability to solve the halting problem is indirectly assumed in the specification of the machine; this tends to be viewed as a "bug" in the original specification.
  - Similarly, a proposed model known as fair nondeterminism can accidentally allow the oracular computation of noncomputable functions, because some such systems, by definition, have the oracular ability to identify and reject inputs that would "unfairly" cause a subsystem to run forever.
- Dmytro Taranovsky has proposed a finitistic model of traditionally non-finitistic branches of analysis, built around a Turing machine equipped with a rapidly increasing function as its oracle. By this and more complicated models he was able to give an interpretation of second-order arithmetic. These models require an uncomputable input, such as a physical event-generating process in which the interval between events grows at an uncomputably fast rate.
  - Similarly, one unorthodox interpretation of a model of unbounded nondeterminism posits, by definition, that the length of time required for an "Actor" to settle is fundamentally unknowable, and that it therefore cannot be proven, within the model, that it does not take an uncomputably long period of time.
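As a concrete gloss on the first item in the list above, the following toy Python sketch shows why knowing Chaitin's constant would decide the halting problem. Everything in it is an assumption made for illustration: the real constant is defined over a universal prefix-free machine and is uncomputable, so the declared three-program "machine" is a stand-in that merely lets the dovetailing bookkeeping run.

```python
from fractions import Fraction

# Toy stand-in machine: program name -> (length in bits,
# step at which it halts, or None = never halts). These behaviours
# are declared by fiat purely so the example can execute.
TOY_PROGRAMS = {
    "p0": (2, 5),
    "p1": (3, None),
    "p2": (3, 12),
}

# "Omega" for this toy machine: the sum of 2**-length over halting programs.
OMEGA = sum(Fraction(1, 2**n) for n, h in TOY_PROGRAMS.values() if h is not None)

def halts(program: str) -> bool:
    """Decide halting by dovetailing all programs until the observed
    halting probability reaches Omega. With only an n-bit prefix of a
    real Omega, one would stop at the prefix bound instead, which
    settles all programs of length up to n."""
    observed = Fraction(0)
    halted = set()
    step = 0
    while observed < OMEGA:
        step += 1
        for name, (length, h) in TOY_PROGRAMS.items():
            if name not in halted and h is not None and h <= step:
                halted.add(name)
                observed += Fraction(1, 2**length)
    # Every program that will ever halt has halted by now.
    return program in halted

assert halts("p0") and halts("p2") and not halts("p1")
```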
"Infinite computational steps" models
- A Turing machine that can complete infinitely many steps in finite time, a feat known as a supertask. Simply being able to run for an unbounded number of steps does not suffice. One mathematical model is the Zeno machine, which performs its first computation step in 1 minute, the second step in ½ minute, the third step in ¼ minute, and so on. By summing 1 + ½ + ¼ + ⋯, a geometric series converging to 2, the machine performs infinitely many steps in a total of 2 minutes; its state is thus undefined at exactly 2 minutes after the beginning of the computation (see the short calculation after this list).
- It seems natural that the possibility of time travel (the existence of closed timelike curves, or CTCs) makes hypercomputation possible by itself. However, this is not so, since a CTC does not provide the unbounded amount of storage that an infinite computation would require. Nevertheless, there are spacetimes in which the CTC region can be used for relativistic hypercomputation. According to a 1992 paper, a computer operating in a Malament–Hogarth spacetime or in orbit around a rotating black hole could theoretically perform non-Turing computations for an observer inside the black hole. Access to a CTC may also allow the rapid solution of PSPACE-complete problems, a complexity class which, while Turing-decidable, is generally considered computationally intractable.
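The Zeno machine's 2-minute bound referenced above is nothing more than a geometric series; this small illustrative snippet shows the partial sums approaching it.

```python
# Step k of the Zeno machine takes 2**-(k-1) minutes, so the elapsed time
# after n steps is the geometric partial sum 2 - 2**-(n-1): all infinitely
# many steps fit within 2 minutes.
elapsed = 0.0
for k in range(1, 21):
    elapsed += 2.0 ** -(k - 1)
print(elapsed)                      # 1.9999980926513672, approaching 2
assert abs(elapsed - 2.0) < 1e-5
```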
Quantum models
"Eventually correct" systems
Some physically realizable systems will always eventually converge to the correct answer, but have the defect that they will often output an incorrect answer and stick with it for an uncomputably long period of time before eventually going back and correcting the mistake.
- In the mid-1960s, E Mark Gold and Hilary Putnam independently proposed models of inductive inference (a minimal sketch of such a trial-and-error procedure follows this list). These models enable some nonrecursive sets of numbers or languages to be "learned in the limit", whereas, by definition, only recursive sets of numbers or languages could be identified by a Turing machine. While the machine will stabilize to the correct answer on any learnable set in some finite time, it can only identify the answer as correct if the set is recursive; otherwise, correctness is established only by running the machine forever and noting that it never revises its answer. Putnam identified this new interpretation as the class of "empirical" predicates, stating: "if we always 'posit' that the most recently generated answer is correct, we will make a finite number of mistakes, but we will eventually get the correct answer." L. K. Schubert's 1974 paper "Iterated Limiting Recursion and the Program Minimization Problem" studied the effects of iterating the limiting procedure, which allows any arithmetic predicate to be computed. Schubert wrote, "Intuitively, iterated limiting identification might be regarded as higher-order inductive inference performed collectively by an ever-growing community of lower order inductive inference machines."
- A symbol sequence is computable in the limit if there is a finite, possibly non-halting program on a universal Turing machine that incrementally outputs every symbol of the sequence. This includes the dyadic expansion of π and of every other computable real, but still excludes all noncomputable reals. The "monotone Turing machines" traditionally used in description size theory cannot edit their previous outputs; generalized Turing machines, as defined by Jürgen Schmidhuber, can. He defines the constructively describable symbol sequences as those that have a finite, non-halting program running on a generalized Turing machine such that any output symbol eventually converges; that is, it does not change any more after some finite initial time interval. Due to limitations first exhibited by Kurt Gödel, it may be impossible to predict the convergence time itself by a halting program; otherwise the halting problem could be solved. Schmidhuber uses this approach to define the set of formally describable or constructively computable universes or constructive theories of everything. Generalized Turing machines can eventually converge to a correct solution of the halting problem by evaluating a Specker sequence.
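The following is a minimal Python sketch of the trial-and-error behaviour described above, specialised to the halting problem; the `step` callback is a stand-in for advancing a simulated program by one step.

```python
from typing import Callable, Iterator

def limit_decide_halting(step: Callable[[int], bool]) -> Iterator[bool]:
    """Yield an infinite sequence of guesses for "does the program halt?".
    The guess starts at False and flips (once, permanently) to True if the
    simulation ever halts, so the sequence converges to the correct answer,
    but no single guess ever comes with a certificate that it is final."""
    halted = False
    t = 0
    while True:
        t += 1
        halted = halted or step(t)
        yield halted

# A toy "program" that halts at step 7: six wrong guesses, then True forever.
guesses = limit_decide_halting(lambda t: t >= 7)
print([next(guesses) for _ in range(10)])   # [False]*6 + [True]*4
```

The same revise-but-converge pattern is what Schmidhuber's generalized Turing machines exhibit at the level of output symbols: any particular output may later be edited, but each eventually stops changing.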
Analysis of capabilities
| Model | Computable predicates | Notes |
| --- | --- | --- |
| supertasking | tt(Σ⁰₁, Π⁰₁) | dependent on outside observer |
| limiting/trial-and-error | Δ⁰₂ | |
| iterated limiting (k times) | Δ⁰ₖ₊₁ | |
| Blum–Shub–Smale machine | | incomparable with traditional computable real functions |
| Malament–Hogarth spacetime | HYP | dependent on spacetime structure |
| analog recurrent neural network | Δ⁰₁[f] | f is an advice function giving connection weights; size is bounded by runtime |
| infinite time Turing machine | AQI | AQI stands for arithmetical quasi-inductive sets |
| classical fuzzy Turing machine | Σ⁰₁ ∪ Π⁰₁ | for any computable t-norm |
| increasing function oracle | Δ¹₁ | for the one-sequence model; Π¹₁ sets are r.e. |
Criticism
Martin Davis, in his writings on hypercomputation, refers to the subject as "a myth" and offers counter-arguments to the physical realizability of hypercomputation. As for its theory, he argues against the claim that this is a new field founded in the 1990s; this point of view relies on the history of computability theory, as also mentioned above. In his argument, he remarks that all of hypercomputation is little more than: "if non-computable inputs are permitted, then non-computable outputs are attainable."