Thu, 28 May 2026

14:00 - 15:00
Lecture Room 3

Reducing Sample Complexity in Stochastic Derivative-Free Optimization via Tail Bounds and Hypothesis Testing

Prof Luis Nunes Vicente
(Lehigh University)
Abstract


We introduce and analyze new probabilistic strategies for enforcing sufficient decrease conditions in stochastic derivative-free optimization, with the goal of reducing sample complexity and simplifying convergence analysis. First, we develop a new tail bound condition imposed on the estimated reduction in function value, which permits flexible selection of the power used in the sufficient decrease test, $q \in (1,2]$. This approach allows us to reduce the number of samples per iteration from the standard ${\cal O}(\delta^{-4})$ to ${\cal O}(\delta^{-2q})$, assuming that the noise moment of order $q/(q-1)$ is bounded. Second, we formulate the sufficient decrease condition as a sequential hypothesis testing problem, in which the algorithm adaptively collects samples until the evidence suffices to accept or reject a candidate step. This test provides statistical guarantees on decision errors and can further reduce the required sample size, particularly in the Gaussian noise setting, where it can approach ${\cal O}(\delta^{-2-r})$ when the decrease is of the order of $\delta^r$. We incorporate both techniques into stochastic direct-search and trust-region methods for potentially non-smooth, noisy objective functions, and establish their global convergence rates and properties.

This is joint work with Anjie Ding, Francesco Rinaldi, and Damiano Zeffiro.
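The sequential-testing idea described in the abstract can be sketched as follows. This is a hedged, illustrative toy (all names, the fixed normal quantile, and the Gaussian-noise assumption are ours, not the authors'): noisy observations of the decrease $f(x) - f(x+s)$ are collected one at a time until a confidence interval cleanly accepts or rejects the candidate step.

```python
# Toy sketch of a sequential sufficient-decrease test (illustrative only).
import math
import random

def sequential_decrease_test(sample_decrease, target, sigma, max_samples=10_000):
    """Accept or reject a candidate step by sequential sampling.

    sample_decrease: draws one noisy observation of f(x) - f(x + s)
    target: required sufficient decrease (e.g. a constant times delta**q)
    sigma:  assumed noise standard deviation (Gaussian setting)
    """
    z = 1.96  # ~97.5% normal quantile; a rigorous sequential test would
              # inflate this to control the error over the whole sequence
    total, n = 0.0, 0
    while n < max_samples:
        total += sample_decrease()
        n += 1
        mean = total / n
        half_width = z * sigma / math.sqrt(n)
        if mean - half_width >= target:
            return True, n            # evidence of sufficient decrease
        if mean + half_width <= target:
            return False, n           # evidence the step is too poor
    return mean >= target, n          # fall back after the sample cap

# Toy usage: true decrease 0.3 against threshold 0.1, unit noise.
random.seed(0)
accept, n_used = sequential_decrease_test(
    lambda: 0.3 + random.gauss(0.0, 1.0), target=0.1, sigma=1.0)
```

Note how the stopping time adapts to the evidence: a step whose true decrease sits far from the threshold is resolved with few samples, while borderline steps demand more.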

 

Thu, 16 Feb 2017

14:00 - 15:00
L5

STORM: Stochastic Trust Region Framework with Random Models

Prof. Katya Scheinberg
(Lehigh University)
Abstract

We will present a very general framework for unconstrained stochastic optimization which is based on the standard trust region framework using random models. In particular, this framework retains desirable features such as the step acceptance criterion, trust region adjustment, and the ability to utilize second order models. We make assumptions on the stochasticity that are different from the typical assumptions of stochastic and simulation-based optimization. In particular, we assume that our models and function values satisfy some good quality conditions with some fixed probability, but can be arbitrarily bad otherwise. We will analyze the convergence and convergence rates of this general framework and discuss the requirements on the models and function values. We will contrast our results with existing results from the stochastic approximation literature. We will finish with examples of applications arising in the area of machine learning.
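A minimal one-dimensional sketch of such an iteration, assuming Gaussian evaluation noise (this is our illustration, not the speaker's code): the gradient model and the function estimates are each built from noisy samples, so both are only accurate with some probability, and the familiar acceptance-ratio test and radius update are kept.

```python
# Illustrative trust-region step with random models (1-d toy, not STORM itself).
import random

def noisy_f(x, sigma=0.01):
    # toy objective f(x) = x**2 observed with Gaussian noise
    return x * x + random.gauss(0.0, sigma)

def storm_like_step(x, delta, eta=0.1, gamma=2.0, n_samples=30):
    """One trust-region iteration using sampled models and estimates."""
    # random model: gradient estimated by averaged finite differences
    h = delta
    g = sum(noisy_f(x + h) - noisy_f(x - h)
            for _ in range(n_samples)) / (2 * h * n_samples)
    s = -delta if g > 0 else delta        # Cauchy-like step of length delta
    pred = abs(g) * delta                 # model-predicted decrease
    # noisy estimates of the actual decrease
    f0 = sum(noisy_f(x) for _ in range(n_samples)) / n_samples
    f1 = sum(noisy_f(x + s) for _ in range(n_samples)) / n_samples
    rho = (f0 - f1) / max(pred, 1e-12)
    if rho >= eta:                        # successful: accept step, expand
        return x + s, gamma * delta
    return x, delta / gamma               # unsuccessful: shrink the radius
```

Because the estimates can be arbitrarily bad on occasion, any single iteration may accept a poor step or reject a good one; the analysis in the talk shows convergence holds when such failures occur with bounded probability.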
 

Thu, 14 May 2015

14:00 - 15:00
L5

A Trust Region Algorithm with Improved Iteration Complexity for Nonconvex Smooth Optimization

Frank Curtis
(Lehigh University)
Abstract

We present a trust region algorithm for solving nonconvex optimization problems that, in the worst-case, is able to drive the norm of the gradient of the objective below a prescribed threshold $\epsilon > 0$ after at most ${\cal O}(\epsilon^{-3/2})$ function evaluations, gradient evaluations, or iterations.  Our work has been inspired by the recently proposed Adaptive Regularisation framework using Cubics (i.e., the ARC algorithm), which attains the same worst-case complexity bound.  Our algorithm is modeled after a traditional trust region algorithm, but employs modified step acceptance criteria and a novel trust region updating mechanism that allows it to achieve this desirable property.  Importantly, our method also maintains standard global and fast local convergence guarantees.
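To give a feel for what the improved bound buys (an illustrative calculation, not from the abstract), the following compares the classical ${\cal O}(\epsilon^{-2})$ worst-case evaluation count of standard trust region methods with the ${\cal O}(\epsilon^{-3/2})$ rate above, with all constants ignored:

```python
# Illustrative only: classical O(eps^-2) versus improved O(eps^-3/2),
# constants ignored, for two gradient-norm thresholds.
for eps in (1e-2, 1e-4):
    classic = eps ** -2       # basic trust region / steepest descent rate
    improved = eps ** -1.5    # ARC-type / this algorithm's rate
    print(f"eps={eps:g}: eps^-2 = {classic:.0e}, eps^-1.5 = {improved:.0e}")
```

At $\epsilon = 10^{-4}$ the gap is a factor of $10^2$ in the worst-case count.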

Tue, 05 May 2015

14:00 - 15:00
L3

Alternating direction methods for structured nonconvex optimization with applications in risk parity portfolio selection

Katya Scheinberg
(Lehigh University)
Abstract

We will begin by discussing the risk parity portfolio selection problem, which aims to find portfolios for which the contributions of risk from all assets are equally weighted. The risk parity may be satisfied over either individual assets or groups of assets. We show how convex optimization techniques can find a risk parity solution in the nonnegative orthant; however, in general cases the number of such solutions can be anywhere between zero and exponential in the dimension. We then propose a nonconvex least-squares formulation which allows us to consider and possibly solve the general case.
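In one common notation (assumed here for illustration, not taken from the talk), asset $i$'s risk contribution in a portfolio $x$ with covariance $S$ is $x_i (Sx)_i$, and risk parity asks that all contributions be equal; a least-squares formulation then penalizes the pairwise gaps between contributions.

```python
# Risk contributions and a least-squares parity residual (illustrative).
def risk_contributions(x, S):
    Sx = [sum(S[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
    return [x[i] * Sx[i] for i in range(len(x))]

def parity_residual(x, S):
    # nonconvex least-squares objective: sum of squared pairwise gaps
    rc = risk_contributions(x, S)
    return sum((rc[i] - rc[j]) ** 2
               for i in range(len(rc)) for j in range(i + 1, len(rc)))

# For a diagonal covariance, weights proportional to inverse volatility
# give exact risk parity: S = diag(4, 1) -> x = (1/3, 2/3).
S = [[4.0, 0.0], [0.0, 1.0]]
x = [1 / 3, 2 / 3]
```

For this diagonal example both contributions equal $4/9$, so the residual vanishes; with general covariances and sign-unrestricted weights the residual landscape is nonconvex, which is where the formulation in the talk comes in.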

Motivated by this problem, we present several alternating direction schemes for specially structured nonlinear nonconvex problems. The problem structure allows a convenient 2-block variable splitting. Our methods rely on solving convex subproblems at each iteration and converge to a local stationary point. Specifically, we discuss the alternating direction method of multipliers and the alternating linearization method, and we provide convergence rate results for both classes of methods. Moreover, global optimization techniques from the polynomial optimization literature are applied to complement our local methods and to provide lower bounds.
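The 2-block splitting pattern can be sketched on a toy consensus problem (our example, not the speakers' application): minimize $(x-a)^2 + (z-b)^2$ subject to $x = z$, whose solution is $x = z = (a+b)/2$. Each block update is an easy convex subproblem, followed by a multiplier update.

```python
# Toy 2-block ADMM for min (x-a)^2 + (z-b)^2 s.t. x = z (illustrative).
def admm_consensus(a, b, rho=1.0, iters=200):
    x = z = u = 0.0                               # u is the scaled multiplier
    for _ in range(iters):
        x = (2 * a + rho * (z - u)) / (2 + rho)   # x-block convex subproblem
        z = (2 * b + rho * (x + u)) / (2 + rho)   # z-block convex subproblem
        u += x - z                                # dual (multiplier) update
    return x, z
```

On convex problems like this one ADMM converges globally; the point of the talk is that for the structured nonconvex case the same convenient splitting still yields convergence to a local stationary point.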

Tue, 20 Jan 2015

14:30 - 15:00
L3

Completely Positive Relaxations of Quadratically Constrained Quadratic Programs

Luis Zuluaga
(Lehigh University)
Abstract

There is a well established body of research on quadratic optimization problems based on reformulations of the original problem as a conic program over the cone of completely positive matrices, or its conic dual, the cone of copositive matrices. As a result of this reformulation approach, novel solution schemes for quadratic polynomial optimization problems have been designed by drawing on conic programming tools and the extensively studied cones of completely positive and of copositive matrices. In particular, this approach has been applied to address key combinatorial optimization problems. Along this line of research, we consider quadratically constrained quadratic programs and provide sufficient and necessary conditions for this type of problem to be reformulated as a conic program over the cone of completely positive matrices. Thus, recent related results for quadratic problems can be further strengthened. Moreover, these results can be generalized to optimization problems involving higher order polynomials.
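For orientation, the reformulation approach is exemplified by Burer's well-known result for linearly constrained quadratic programs over the nonnegative orthant (stated here as background; the talk concerns extensions of such conditions to quadratically constrained problems): the nonconvex program $\min\{x^{\top}Qx + 2c^{\top}x : a_i^{\top}x = b_i,\ x \geq 0\}$ equals the linear conic program

$$
\min_{x,\,X}\; \langle Q, X\rangle + 2c^{\top}x
\quad \text{s.t.} \quad
a_i^{\top}x = b_i,\;\;
a_i^{\top} X a_i = b_i^{2} \;\; (i=1,\dots,m),\;\;
\begin{pmatrix} 1 & x^{\top}\\ x & X \end{pmatrix} \in \mathcal{CP}_{n+1},
$$

where $\mathcal{CP}_{n+1}$ denotes the cone of $(n+1)\times(n+1)$ completely positive matrices. The nonconvexity is absorbed into the cone constraint, which is then the object attacked with conic programming tools.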
