Reducing Sample Complexity in Stochastic Derivative-Free Optimization via Tail Bounds and Hypothesis Testing
Abstract
We introduce and analyze new probabilistic strategies for enforcing sufficient decrease conditions in stochastic derivative-free optimization, with the goal of reducing sample complexity and simplifying convergence analysis. First, we develop a new tail-bound condition on the estimated reduction in function value, which permits flexible selection of the power q in (1,2] used in the sufficient decrease test. This approach reduces the number of samples per iteration from the standard O(delta^{-4}) to O(delta^{-2q}), provided the noise moment of order q/(q-1) is bounded. Second, we formulate the sufficient decrease condition as a sequential hypothesis testing problem, in which the algorithm adaptively collects samples until the evidence suffices to accept or reject a candidate step. This test provides statistical guarantees on decision errors and can further reduce the required sample size, particularly in the Gaussian noise setting, where the sample size can approach O(delta^{-2-r}) when the decrease is of order delta^r. We incorporate both techniques into stochastic direct-search and trust-region methods for potentially non-smooth, noisy objective functions, and we establish their global convergence properties and rates.
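To make the sequential-testing idea concrete, the following is a minimal, hypothetical Python sketch of an adaptive acceptance test for a sufficient decrease condition under an assumed Gaussian-noise setting. The noise level sigma, the decrease constant c, the power q = 2 in the threshold c*delta^2, the confidence parameter alpha, and the batching rule are all illustrative assumptions, not the paper's prescribed choices.

```python
# Illustrative sketch only: a sequential test that keeps sampling until the
# estimated decrease is confidently above or below a sufficient-decrease
# threshold.  All constants and the Gaussian-style confidence width are
# assumptions for exposition, not the paper's exact test.
import math
import random


def sequential_decrease_test(f_noisy, x, x_trial, delta,
                             sigma=1.0, c=1e-2, alpha=0.05,
                             batch=16, max_samples=10_000):
    """Adaptively sample f(x) - f(x_trial) until the evidence suffices to
    accept (estimated decrease clearly >= c*delta**2) or reject it."""
    threshold = c * delta ** 2
    diffs = []
    while len(diffs) < max_samples:
        # Collect a new batch of paired noisy evaluations.
        diffs.extend(f_noisy(x) - f_noisy(x_trial) for _ in range(batch))
        n = len(diffs)
        mean = sum(diffs) / n
        # Gaussian-style confidence half-width; a stand-in for the paper's
        # error control on the two decision errors.
        half_width = sigma * math.sqrt(2.0 * math.log(2.0 / alpha) / n)
        if mean - half_width >= threshold:
            return True, n        # decrease confirmed with high confidence
        if mean + half_width < threshold:
            return False, n       # decrease ruled out with high confidence
    # Inconclusive within the budget: fall back to the point estimate.
    return (sum(diffs) / len(diffs)) >= threshold, len(diffs)


if __name__ == "__main__":
    rng = random.Random(0)
    noisy_f = lambda x: x * x + rng.gauss(0.0, 0.1)   # hypothetical objective
    accepted, n_used = sequential_decrease_test(noisy_f, 1.0, 0.9,
                                                delta=0.1, sigma=0.1)
    print(f"accepted={accepted} after {n_used} samples")
```

The key design point this sketch illustrates is that the sample size is not fixed in advance: when the true decrease is large relative to delta (e.g., of order delta^r with small r), the confidence interval separates from the threshold quickly and the test stops early, which is the mechanism behind the reduced sample requirements described in the abstract.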