Quasi-Monte Carlo method


In numerical analysis, the quasi-Monte Carlo method is a method for numerical integration and solving some other problems using low-discrepancy sequences. This is in contrast to the regular Monte Carlo method or Monte Carlo integration, which are based on sequences of pseudorandom numbers.
Monte Carlo and quasi-Monte Carlo methods are stated in a similar way.
The problem is to approximate the integral of a function f as the average of the function evaluated at a set of points x1, ..., xN:

\frac{1}{N} \sum_{i=1}^{N} f(x_i) \approx \int_{[0,1]^s} f(u)\,du.

Since we are integrating over the s-dimensional unit cube, each xi is a vector of s elements. The difference between quasi-Monte Carlo and Monte Carlo is the way the xi are chosen. Quasi-Monte Carlo uses a low-discrepancy sequence such as the Halton sequence, the Sobol sequence, or the Faure sequence, whereas Monte Carlo uses a pseudorandom sequence. The advantage of using low-discrepancy sequences is a faster rate of convergence: quasi-Monte Carlo has a rate of convergence close to O((log N)^s / N), whereas the rate for the Monte Carlo method is O(N^{-1/2}).
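As a minimal sketch of the idea, the following generates points of the Halton sequence (using the radical-inverse, or van der Corput, construction in coprime bases) and uses them to approximate a two-dimensional integral. The function names and the test integrand f(x, y) = x·y are illustrative choices, not from the source.

```python
def van_der_corput(n, base):
    """Radical inverse of the integer n in the given base (one Halton coordinate)."""
    q, denom = 0.0, 1.0
    while n > 0:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def halton(n_points, dim):
    """First n_points of the Halton sequence in dim dimensions (dim <= 6 here)."""
    primes = [2, 3, 5, 7, 11, 13][:dim]
    return [[van_der_corput(i, p) for p in primes] for i in range(1, n_points + 1)]

# Approximate the integral of f(x, y) = x*y over [0,1]^2 (exact value: 0.25)
# by averaging f over a low-discrepancy point set instead of random points.
f = lambda x: x[0] * x[1]
pts = halton(4096, 2)
estimate = sum(f(x) for x in pts) / len(pts)
```

With 4096 Halton points the estimate is already accurate to a few decimal places, whereas a pseudorandom sample of the same size typically carries an error on the order of N^{-1/2}.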
The Quasi-Monte Carlo method recently became popular in the area of mathematical finance or computational finance. In these areas, high-dimensional numerical integrals, where the integral should be evaluated within a threshold ε, occur frequently. Hence, the Monte Carlo method and the quasi-Monte Carlo method are beneficial in these situations.

Approximation error bounds of quasi-Monte Carlo

The approximation error of the quasi-Monte Carlo method is bounded by a term proportional to the discrepancy of the set x1, ..., xN. Specifically, the Koksma–Hlawka inequality states that the error

\varepsilon = \left| \frac{1}{N} \sum_{i=1}^{N} f(x_i) - \int_{[0,1]^s} f(u)\,du \right|

is bounded by

|\varepsilon| \le V(f)\, D_N^*,

where V(f) is the Hardy–Krause variation of the function f, and D_N^* is the so-called star discrepancy of the set (x1, ..., xN), defined as

D_N^* = \sup_{Q \subset [0,1]^s} \left| \frac{\#\{\, x_i \in Q \,\}}{N} - \operatorname{vol}(Q) \right|,

where Q is a rectangular solid in [0,1]^s with sides parallel to the coordinate axes and one vertex at the origin, and vol(Q) is its volume. The inequality can be used to show that the error of the approximation by the quasi-Monte Carlo method is O((log N)^s / N), whereas the Monte Carlo method has a probabilistic error of O(N^{-1/2}). Though we can only state the upper bound of the approximation error, the convergence rate of the quasi-Monte Carlo method in practice is usually much faster than its theoretical bound. Hence, in general, the accuracy of the quasi-Monte Carlo method increases faster than that of the Monte Carlo method. However, this advantage is only guaranteed if N is large enough and if the variation is finite.
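In one dimension the star discrepancy can be computed exactly for a finite point set: after sorting the points, it is the largest deviation between the empirical fraction of points in [0, x) and the length x, checked at each point. The sketch below uses this closed form (the function name is an illustrative choice) and verifies it on the centered grid {(2k+1)/(2N)}, which attains the minimal 1-D discrepancy 1/(2N).

```python
def star_discrepancy_1d(points):
    """Exact star discrepancy of a finite 1-D point set.

    For sorted points x_1 <= ... <= x_N, the supremum over anchored
    intervals [0, t) is attained at the points themselves:
    D_N* = max_i max(i/N - x_i, x_i - (i-1)/N).
    """
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

# The centered grid {(2k+1)/(2N)} achieves the optimal discrepancy 1/(2N).
grid = [(2 * k + 1) / 20 for k in range(10)]
d = star_discrepancy_1d(grid)
```

A pseudorandom sample of the same size would typically have a noticeably larger discrepancy, which is exactly what the Koksma–Hlawka bound penalizes.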

Monte Carlo and quasi-Monte Carlo for multidimensional integrations

For one-dimensional integration, quadrature methods such as the trapezoidal rule, Simpson's rule, or Newton–Cotes formulas are known to be efficient if the function is smooth. These approaches can also be used for multidimensional integration by applying the one-dimensional rule repeatedly along each dimension. However, the number of function evaluations grows exponentially as s, the number of dimensions, increases. Hence, a method that can overcome this curse of dimensionality should be used for multidimensional integration. The standard Monte Carlo method is frequently used when the quadrature methods are difficult or expensive to implement. Monte Carlo and quasi-Monte Carlo methods are accurate and relatively fast when the dimension is high, up to 300 or higher.
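The exponential growth is easy to make concrete: a tensor-product quadrature rule with m nodes per axis needs m^s evaluations in s dimensions, while Monte Carlo and quasi-Monte Carlo use a budget of N points regardless of s. A small illustrative sketch (the function name is an assumption for this example):

```python
def tensor_product_evals(nodes_per_axis, dim):
    """Evaluations required by a tensor-product quadrature rule in dim dimensions."""
    return nodes_per_axis ** dim

# Even a coarse 10-node-per-axis rule explodes with dimension,
# while a (quasi-)Monte Carlo budget N is independent of dimension.
counts = {s: tensor_product_evals(10, s) for s in (1, 2, 6, 10)}
```

For s = 10 this already demands ten billion evaluations of the integrand, which is why quadrature rules are abandoned in favor of (quasi-)Monte Carlo in high dimensions.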
Morokoff and Caflisch studied the performance of Monte Carlo and quasi-Monte Carlo methods for integration. In the paper, Halton, Sobol, and Faure sequences for quasi-Monte Carlo are compared with the standard Monte Carlo method using pseudorandom sequences. They found that the Halton sequence performs best for dimensions up to around 6; the Sobol sequence performs best for higher dimensions; and the Faure sequence, while outperformed by the other two, still performs better than a pseudorandom sequence.
However, Morokoff and Caflisch gave examples where the advantage of the quasi-Monte Carlo is less than expected theoretically. Still, in the examples studied by Morokoff and Caflisch, the quasi-Monte Carlo method did yield a more accurate result than the Monte Carlo method with the same number of points. Morokoff and Caflisch remark that the advantage of the quasi-Monte Carlo method is greater if the integrand is smooth, and the number of dimensions s of the integral is small.

Drawbacks of quasi-Monte Carlo

Lemieux mentioned drawbacks of quasi-Monte Carlo: the advantage over Monte Carlo is guaranteed only when N is large enough and the Hardy–Krause variation of the integrand is finite, and because the point set is deterministic, the method gives only a worst-case error bound rather than a practical error estimate.
In order to overcome some of these difficulties, we can use a randomized quasi-Monte Carlo method.

Randomization of quasi-Monte Carlo

Since low-discrepancy sequences are not random but deterministic, the quasi-Monte Carlo method can be seen as a deterministic, or derandomized, algorithm. In this case, we only have a deterministic bound on the error, and the error is hard to estimate in practice. In order to recover our ability to analyze and estimate the variance, we can randomize the method. The resulting method is called the randomized quasi-Monte Carlo method and can also be viewed as a variance reduction technique for the standard Monte Carlo method. Among several methods, the simplest transformation procedure is random shifting. Let {x1, ..., xN} be the point set from the low-discrepancy sequence. We sample an s-dimensional random vector U and mix it with {x1, ..., xN}. In detail, for each xj, create

y_j = (x_j + U) \bmod 1

and use the sequence (y1, ..., yN) instead of (x1, ..., xN). If we have R replications for Monte Carlo, we sample a new s-dimensional random vector U for each replication. Randomization allows us to estimate the variance while still using quasi-random sequences. Compared to pure quasi-Monte Carlo, the number of samples of the quasi-random sequence is divided by R for an equivalent computational cost, which reduces the theoretical convergence rate. Compared to standard Monte Carlo, the variance and the computation speed are slightly better, according to the experimental results in Tuffin.
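The random-shift procedure can be sketched as follows: the same low-discrepancy point set is shifted modulo 1 by a fresh uniform vector U in each of R replications, and the spread of the R estimates yields a variance estimate. The helper names, the 2-D Halton point set, and the test integrand are illustrative choices, not from the source.

```python
import random

def halton_2d(n):
    """First n points of the 2-D Halton sequence (bases 2 and 3)."""
    def vdc(i, base):
        q, denom = 0.0, 1.0
        while i > 0:
            denom *= base
            i, rem = divmod(i, base)
            q += rem / denom
        return q
    return [[vdc(i, 2), vdc(i, 3)] for i in range(1, n + 1)]

def rqmc(f, points, reps, rng):
    """Randomly shifted QMC: mean of R shifted estimates and their sample variance."""
    dim = len(points[0])
    estimates = []
    for _ in range(reps):
        u = [rng.random() for _ in range(dim)]  # fresh shift U per replication
        est = sum(f([(x + s) % 1.0 for x, s in zip(p, u)])
                  for p in points) / len(points)
        estimates.append(est)
    mean = sum(estimates) / reps
    var = sum((e - mean) ** 2 for e in estimates) / (reps - 1)
    return mean, var

rng = random.Random(7)
mean, var = rqmc(lambda x: x[0] * x[1], halton_2d(1024), 10, rng)
```

Each shifted estimate remains unbiased because the shift preserves the uniform distribution of each point, while the low-discrepancy structure of the set keeps the per-replication error far below that of an i.i.d. sample of the same size.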