Random self-reducibility (RSR) is the rule that a good algorithm for the average case implies a good algorithm for the worst case. Equivalently, RSR is the ability to solve all instances of a problem by solving a large fraction of the instances.
Definition
If evaluating a function f on any instance x can be reduced in polynomial time to the evaluation of f on one or more random instances yi, then f is randomly self-reducible. In a random self-reduction, an arbitrary worst-case instance x in the domain of f is mapped to a random set of instances y1, ..., yk. This is done so that f(x) can be computed in polynomial time, given the coin-toss sequence from the mapping, x, and f(y1), ..., f(yk). Therefore, taking the average with respect to the induced distribution on the yi, the average-case complexity of f is the same as the worst-case randomized complexity of f.

One special case of note is when each random instance yi is distributed uniformly over the entire set of elements in the domain of f that have length |x|. In this case f is as hard on average as it is in the worst case. This approach contains two key restrictions. First, the generation of y1, ..., yk is performed non-adaptively: y2 is picked before f(y1) is known. Second, the points y1, ..., yk need not be uniformly distributed.
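As a toy illustration of this definition, any linear function over Z_m admits a two-query non-adaptive random self-reduction: split x into a uniformly random y1 and the matching y2, so that each individual query is uniformly distributed. The modulus, the linear function, and the "bad set" of the unreliable oracle below are all invented for this sketch, not taken from the article.

```python
import random
from collections import Counter

m = 997   # modulus (illustrative)
c = 123   # the target linear function is f(x) = c*x mod m

def flaky_f(x):
    """Stand-in for an average-case algorithm: correct on roughly 90%
    of inputs, wrong on an arbitrary bad set (multiples of 10)."""
    return (c * x) % m if x % 10 != 0 else 0

def f_one_shot(x):
    """Non-adaptive self-reduction: write x = y1 + y2 mod m with y1
    uniform; each query point is then uniformly distributed, and
    f(x) = f(y1) + f(y2) mod m by linearity."""
    y1 = random.randrange(m)   # both queries fixed before any answer is seen
    y2 = (x - y1) % m
    return (flaky_f(y1) + flaky_f(y2)) % m

def f_reduced(x, trials=25):
    """Repeat and take the majority answer to suppress the error rate."""
    votes = Counter(f_one_shot(x) for _ in range(trials))
    return votes.most_common(1)[0][0]

random.seed(1)
assert f_reduced(40) == (c * 40) % m   # worst-case input recovered correctly
```

Note that both y1 and y2 are individually uniform even though their sum is fixed; this is exactly the sense in which a worst-case instance is mapped to random instances.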
Discrete logarithm

Theorem: Given a cyclic group G of size |G|, if a deterministic polynomial-time algorithm A computes the discrete logarithm for a 1/poly fraction of all inputs, then there is a randomized polynomial-time algorithm for the discrete logarithm on all inputs.

Given a generator g of a cyclic group G = ⟨g⟩ and an x ∈ G, the discrete log of x to the base g is the integer k with x = g^k. Take B to be distributed uniformly on {0, ..., |G| − 1}; then xg^B = g^(k+B) is also distributed uniformly on G. Therefore xg^B is independent of x, and its logarithm can be computed with probability 1/poly in polynomial time. Then log_g x ≡ log_g(xg^B) − B (mod |G|), and the discrete logarithm is self-reducible. Since a candidate answer a can be verified by checking g^a = xg^B, repeating with fresh random B a polynomial number of times yields the correct logarithm with high probability.
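The reduction above can be sketched as follows. The tiny group (the multiplicative group mod 101, generated by 2), the "flaky" oracle that only answers on an arbitrary half of the instances, and all names are illustrative assumptions, not part of the theorem.

```python
import random

p, g = 101, 2            # 2 generates the multiplicative group mod 101
order = p - 1            # |G| = 100

# Lookup table used only to simulate the partially correct algorithm A.
_table = {pow(g, k, p): k for k in range(order)}

def flaky_oracle(y):
    """Stand-in for A: correct on an arbitrary 'good' half of the
    group elements, silent on the rest."""
    return _table[y] if y < p // 2 else None

def discrete_log(x):
    """Worst-case log_g(x) via the self-reduction: blind the instance,
    query the oracle, verify, undo the shift; retry on failure."""
    while True:
        B = random.randrange(order)      # uniform blinding exponent
        y = (x * pow(g, B, p)) % p       # y = g^(k+B), uniform over G
        a = flaky_oracle(y)
        if a is not None and pow(g, a, p) == y:   # verify before trusting
            return (a - B) % order       # log_g(x) = log_g(y) - B mod |G|

k = 77
x = pow(g, k, p)
assert discrete_log(x) == k
```

Because y is uniform on each attempt, the expected number of queries is the reciprocal of the oracle's success fraction, which is polynomial by assumption.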
Permanent of a matrix
Given the definition of the permanent of a matrix, it is clear that PERM(M) for any n-by-n matrix M is a multivariate polynomial of degree n over the entries of M. Calculating the permanent of a matrix is a difficult computational task: PERM has been shown to be #P-complete. Moreover, the ability to compute PERM(M) for most matrices implies the existence of a random program that computes PERM(M) for all matrices. This demonstrates that PERM is random self-reducible. The discussion below considers the case where the matrix entries are drawn from a finite field Fp for some prime p, and where all arithmetic is performed in that field.

Let X be a random n-by-n matrix with entries from Fp. Since all the entries of the matrix M + kX are linear functions of k, composing those linear functions with the degree-n multivariate polynomial that calculates PERM gives another polynomial of degree n in k, which we will call p(k). Clearly, p(0) is equal to the permanent of M. Suppose we know a program that computes the correct value of PERM(A) for most n-by-n matrices A with entries from Fp, specifically for a 1 − 1/(3(n + 1)) fraction of them. For each fixed k ≠ 0, the matrix M + kX is uniformly distributed, so by a union bound over the n + 1 queries, with probability of approximately two-thirds we can calculate PERM(M + kX) for k = 1, 2, ..., n + 1. Once we have those n + 1 values, we can solve for the coefficients of p(k) using interpolation (this requires p > n + 1, so that the evaluation points are distinct). Once we know p(k) exactly, we evaluate p(0), which is equal to PERM(M). If we do so, we run the risk of being wrong 1/3 of the time, but by picking multiple random matrices X, repeating the above procedure, and providing only the majority answer, we can drive the error rate down very low.
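The whole procedure can be sketched as follows. The brute-force permanent (feasible only for tiny n), the simulated program that errs on a small fraction of matrices, and the concrete parameters are all illustrative assumptions made for this sketch.

```python
import random
from itertools import permutations

P = 10007   # prime field size; P > n + 1 so interpolation points are distinct

def perm_exact(M):
    """Permanent over F_P by brute force over permutations (tiny n only)."""
    n = len(M)
    total = 0
    for s in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod = prod * M[i][s[i]] % P
        total = (total + prod) % P
    return total

def noisy_perm(M):
    """Stand-in for a program correct on most matrices: here it errs
    with probability 5% (illustrative, not part of the argument)."""
    v = perm_exact(M)
    return (v + 1) % P if random.random() < 0.05 else v

def interpolate_at_zero(pts):
    """Lagrange interpolation over F_P, evaluated at 0: recovers p(0)
    from the n + 1 points (k, p(k))."""
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P   # Fermat inverse
    return total

def perm_rsr(M, trials=15):
    """Random self-reduction: p(k) = PERM(M + kX) is a degree-n polynomial
    in k, and PERM(M) = p(0); repeat and take the majority answer."""
    n = len(M)
    votes = {}
    for _ in range(trials):
        X = [[random.randrange(P) for _ in range(n)] for _ in range(n)]
        pts = []
        for k in range(1, n + 2):     # k = 1, ..., n + 1
            Mk = [[(M[i][j] + k * X[i][j]) % P for j in range(n)]
                  for i in range(n)]  # uniform random matrix for each k
            pts.append((k, noisy_perm(Mk)))
        guess = interpolate_at_zero(pts)
        votes[guess] = votes.get(guess, 0) + 1
    return max(votes, key=votes.get)

random.seed(0)
M = [[1, 2], [3, 4]]
assert perm_rsr(M) == perm_exact(M)   # permanent of M is 1*4 + 2*3 = 10
```

Each query M + kX individually hides M completely, which is why a program that is only correct on random matrices suffices.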