Plenary Speakers and Round Table
Plenary speakers
The following distinguished researchers will be the plenary speakers of Brazopt 2026:
- 🇺🇸 Boris Mordukhovich (Wayne State University, USA)
- 🇧🇷 Maicon Marques Alves (UFSC, Brazil)
- 🇺🇸 Katya Scheinberg (Georgia Institute of Technology, USA)
- 🇧🇷 Mikhail Solodov (IMPA, Brazil)
- 🇯🇵 Ellen Hidemi Fukuda (Kyoto University, Japan)
- 🇧🇷 Orizon Ferreira (UFG, Brazil)
- 🇩🇪 Andreas Fischer (TU Dresden, Germany)
- 🇺🇸 George Lan (Georgia Institute of Technology, USA)
- 🇧🇷 Ernesto Birgin (USP, Brazil)
- 🇫🇷 Samir Adly (Université de Limoges, France)
- 🇧🇷 Claudia Sagastizábal (Unicamp, Brazil)
Round table: Real-World Optimization Challenges
We are pleased to announce a special round table discussion mediated by Prof. Paulo J. S. Silva. The session will feature experts from both academia and industry.
Mediator
- 🇧🇷 Paulo J. S. Silva (Unicamp, Brazil)
Panelists
- 🇧🇷 Luis Felipe Bueno (Unifesp, Brazil)
- 🇧🇷 Daniel Haro (Venturus, Brazil)
- 🇧🇷 André Luiz Diniz (Eletrobras Cepel, Brazil)
- 🇧🇷 Andressa Schiessl (AmbevTech, Brazil)
Plenary abstracts
Boris Mordukhovich
Convergence of Descent Methods under Polyak-Lojasiewicz-Kurdyka Properties
This talk presents a novel convergence analysis of a generic class of descent methods in nonsmooth and nonconvex optimization under several versions of the Polyak-Lojasiewicz-Kurdyka (PLK) properties. Along with other results, we prove the finite termination of generic algorithms under the PLK property with lower exponents. Specifications are given to convergence rates of some particular algorithms, including inexact reduced gradient methods and the boosted algorithm in DC programming. It is revealed, e.g., that the lower-exponent PLK property in the DC framework is incompatible with gradient Lipschitz continuity of the plus function around a local minimizer. On the other hand, we show that this incompatibility may fail to hold if Lipschitz continuity is replaced by mere continuity of the gradient.
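For reference, one common form of the Polyak-Lojasiewicz inequality and of its Kurdyka-Lojasiewicz generalization reads as follows (the talk's PLK versions may differ in detail):

```latex
% Polyak-Lojasiewicz (PL) inequality near a minimizer with value f^*:
\|\nabla f(x)\|^2 \;\ge\; 2\mu\,\bigl(f(x) - f^*\bigr), \qquad \mu > 0.
% Kurdyka-Lojasiewicz (KL) form with exponent \theta \in [0,1),
% stated with the subdifferential for nonsmooth f:
\operatorname{dist}\bigl(0, \partial f(x)\bigr) \;\ge\; c\,\bigl(f(x) - f^*\bigr)^{\theta}.
```

The "lower exponent" regime mentioned in the abstract refers to small values of the exponent, which yield the strongest conclusions (up to finite termination).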
Chair: Luiz-Rafael Santos.
Mon 23/02, 9:30–10:30
Maicon Alves
A general framework for inexact splitting algorithms with relative errors and applications to Chambolle-Pock and Davis-Yin methods
In this work we apply the recently introduced framework of degenerate preconditioned proximal point algorithms to the hybrid proximal extragradient (HPE) method for maximal monotone inclusions. The latter is a method that allows inexact proximal (or resolvent) steps in which the error is controlled by a relative-error criterion. Recently, the HPE framework was extended to the Douglas-Rachford method by Eckstein and Yao. Here we further extend the applicability of the HPE framework to splitting methods. To this end we use the framework of degenerate preconditioners, which makes it possible to write a large class of splitting methods as preconditioned proximal point algorithms. In this way, we modify many splitting methods so that one or more of the resolvents can be computed inexactly, with an error controlled by an adaptive criterion. Further, we illustrate the algorithmic framework in the case of Chambolle-Pock’s primal-dual hybrid gradient method and Davis-Yin’s forward Douglas-Rachford method. In both cases, the inexact computation of the resolvent shows clear advantages in computing time and accuracy. Joint work with D. A. Lorenz and E. Naldi.
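The relative-error criterion of the HPE method, in the form introduced by Solodov and Svaiter (notation is ours), can be sketched as:

```latex
% One HPE iteration for 0 \in T(x): given x^k, stepsize \lambda_k > 0
% and tolerance \sigma \in [0,1), find y^k, \varepsilon_k \ge 0 and
% v^k \in T^{\varepsilon_k}(y^k) satisfying
\|\lambda_k v^k + y^k - x^k\|^2 + 2\lambda_k \varepsilon_k
  \;\le\; \sigma^2\,\|y^k - x^k\|^2,
% then update x^{k+1} = x^k - \lambda_k v^k.
```

Setting \(\sigma = 0\) and \(\varepsilon_k = 0\) recovers the exact proximal point iteration.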
Chair: Benar Svaiter.
Mon 23/02, 11:00–12:00
Katya Scheinberg
Stochastic and noisy oracles in unconstrained continuous optimization
Classical unconstrained continuous optimization algorithms, such as gradient descent, rely on the ability to compute the gradient and, sometimes, the value of the objective at the iterates. Many practical algorithms today relax the requirement of access to exact derivatives or function values and work with their approximations. We refer to these inexact computations as zeroth- and first-order oracles. They appear in stochastic optimization, where the true quantities are expectations over a distribution, and thus cannot be computed exactly, but can be approximated by sample averages. Inexact oracles also appear in derivative-free optimization, where first-order derivatives are approximated using function values. There are many other examples arising from extensions of these settings as well as from the use of randomization. We will classify the different types of error of such oracles with specific examples. We will then show how these different types of error affect algorithmic behavior. Our results recover many existing algorithms within a simple unified framework.
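As a toy illustration (our own, not from the talk), here is a stochastic first-order oracle built from sample averages and plugged into plain gradient descent; the objective f(x) = E[(x - xi)^2]/2 with xi ~ N(1, 1) is an assumption made for the example:

```python
import random

def stochastic_grad_oracle(x, batch_size, rng):
    # Inexact first-order oracle: sample-average approximation of
    # grad f(x) for f(x) = E[(x - xi)^2] / 2 with xi ~ N(1, 1),
    # whose exact gradient is x - 1.
    samples = [rng.gauss(1.0, 1.0) for _ in range(batch_size)]
    return sum(x - xi for xi in samples) / batch_size

def sgd(x0, steps, lr, batch_size, seed=0):
    # Gradient descent that only ever sees the noisy oracle.
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x -= lr * stochastic_grad_oracle(x, batch_size, rng)
    return x

x_hat = sgd(5.0, 500, 0.1, 32)  # approaches the true minimizer x = 1
```

Larger batches shrink the oracle error at the cost of more samples per step, which is exactly the kind of error/effort trade-off the talk's framework quantifies.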
Chair: Alfredo Iusem.
Mon 23/02, 16:00–17:00
Mikhail Solodov
Descent sequences in weakly convex optimization
We present a framework for analyzing convergence and local rates of convergence of a class of descent algorithms, assuming the objective function is weakly convex.
The framework is general, in the sense that it accommodates explicit iterations (based on the gradient or a subgradient at the current iterate), implicit iterations (using a subgradient at the next iterate, as in proximal schemes), as well as iterations in which the associated subgradient is specially constructed and corresponds neither to the current nor to the next point (this is the case of descent steps in bundle methods).
Under the subdifferential-based error bound on the distance to critical points, linear rates of convergence are established. Our analysis applies, among other techniques, to prox-descent for decomposable functions, the proximal-gradient method for a sum of functions, redistributed bundle methods, and a class of algorithms that can be cast in the feasible descent framework for constrained optimization.
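The error bound invoked above is commonly written as follows (standard form; the constant is local):

```latex
% Subdifferential-based error bound near the set \bar S of critical points:
\operatorname{dist}(x, \bar S) \;\le\; \tau\,\operatorname{dist}\bigl(0, \partial f(x)\bigr)
\quad \text{for all } x \text{ near } \bar S, \qquad \tau > 0.
```

Intuitively, small subgradients certify proximity to criticality, which is what converts per-step descent into a linear rate.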
Chair: Geovani Nunes Grapiglia.
Tue 24/02, 9:00–10:00
Ellen Hidemi Fukuda
Recent developments in vector optimization
In vector optimization, we aim to minimize several objective functions simultaneously, under the order induced by a well-defined cone and possibly subject to constraints. Many methods have been proposed for both continuous and combinatorial vector optimization problems. Among those designed for the former class, descent-type algorithms provide theoretical convergence guarantees. However, in most applications, the ordering cone is restricted to the nonnegative orthant. In the first part of this talk, we introduce the basic ideas of descent-type methods and present examples that go beyond this simple cone. In contrast, when dealing with discrete structures, even for linear objective functions, simple approaches such as the weighting method may fail to generate Pareto-optimal points. In the second part, we analyze vector optimization problems with discrete convex functions and show that, for such problems, the entire Pareto frontier can be obtained in polynomial time.
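For context, the cone-induced order behind these problems can be stated as follows (standard definition; K is a closed, convex, pointed cone with nonempty interior):

```latex
% Minimize F : \mathbb{R}^n \to \mathbb{R}^m with respect to the order
% induced by K:  y \preceq_K z \iff z - y \in K.
% A feasible x^* is weakly K-efficient if there is no feasible x with
F(x^*) - F(x) \;\in\; \operatorname{int} K.
% With K = \mathbb{R}^m_+ this reduces to the usual weak Pareto optimality.
```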
Chair: Paulo J. S. Silva.
Tue 24/02, 14:00–15:00
Orizon Ferreira
On Constraint Qualifications and Algorithms of Nonlinear Programming on Riemannian Manifolds
The extension of nonlinear programming from Euclidean spaces to Riemannian manifolds has recently gained significant attention in the optimization community, motivated by applications whose geometry is inherently curved. This talk presents perspectives on constrained optimization in this setting, focusing on classical concepts such as constraint qualifications and sequential optimality conditions, adapted to the Riemannian framework and connected to algorithms. We show how these tools ensure global and local convergence for state-of-the-art methods, and how relaxed qualifications still provide strong theoretical guarantees. Beyond algorithmic performance, the results contribute to a systematic foundation for constrained nonlinear optimization on manifolds, bridging modern developments in nonlinear programming with the geometry of non-Euclidean spaces.
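The constraint qualifications discussed in the talk support optimality systems of the following standard form, where grad denotes the Riemannian gradient on the manifold M:

```latex
% KKT conditions for  min f(x)  s.t.  h(x) = 0,\ g(x) \le 0  on M:
\operatorname{grad} f(x^*) + \sum_i \lambda_i \operatorname{grad} h_i(x^*)
  + \sum_j \mu_j \operatorname{grad} g_j(x^*) = 0,
\qquad \mu_j \ge 0, \quad \mu_j\, g_j(x^*) = 0.
```

Formally this mirrors the Euclidean KKT system, with gradients taken in the tangent space at \(x^*\).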
Chair: Max Leandro Nobre Gonçalves.
Wed 25/02, 9:00–10:00
Andreas Fischer
Advances in Newton-type Methods
Starting from the classical Newton method for the solution of a square system of smooth equations, we will look at cases where standard conditions for the local convergence of this method are violated. This includes systems of equations with solutions that can be nonisolated, nondifferentiable, or singular, possibly arising from reformulations of necessary optimality systems. With respect to such situations, modifications of Newton’s method and corresponding local convergence results will be presented and discussed.
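As a minimal sketch (our own, not the speaker's methods), here is Newton's method for a scalar equation with a small Levenberg-Marquardt-style shift, one standard way to keep the step defined when the derivative is (near-)singular:

```python
def newton_lm(F, J, x0, mu=1e-8, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0 in one variable, with a small
    Levenberg-Marquardt-style shift mu so the step stays well defined
    when the derivative J(x) is (near-)singular -- an illustrative
    instance of the kind of modification discussed in the talk."""
    x = x0
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        d = J(x)
        # Regularized step: solve (d^2 + mu) * dx = d * fx, i.e. the
        # scalar Gauss-Newton/Levenberg-Marquardt form of the update.
        x -= d * fx / (d * d + mu)
    return x

# Regular root: x^2 - 2 = 0 has a well-conditioned root at sqrt(2).
root = newton_lm(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
# Singular case: x^2 = 0 has derivative zero at the root; the
# regularized iteration still approaches 0, just linearly.
singular_root = newton_lm(lambda x: x * x, lambda x: 2.0 * x, 1.0)
```

In the singular case plain Newton still happens to be defined away from the root, but convergence degrades from quadratic to linear, which is the phenomenon the modified methods are designed to repair.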
Chair: Roger Behling.
Wed 25/02, 10:30–11:30
George Lan
Policy Optimization for Reinforcement Learning
This talk presents an optimization-based perspective on reinforcement learning, focusing on policy mirror descent (PMD) and policy dual averaging (PDA). While classical RL methods rely on greedy dynamic programming updates, we show that many of their limitations, such as poor sample efficiency and unreliable exploration, can be mitigated by adopting more conservative stochastic optimization principles. I will highlight how PDA extends PMD by aggregating advantage information over time, making it naturally compatible with function approximation and continuous action spaces. Finally, I will present recent results on actor-accelerated PDA, which combines theoretical guarantees with practical efficiency and demonstrates significantly improved accuracy and robustness over proximal policy optimization (PPO), a highly popular RL method, on standard continuous-control benchmarks.
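The basic policy mirror descent update can be sketched as follows (cost-minimization convention, Bregman divergence D; schematic, not the speaker's exact formulation):

```latex
% PMD: for each state s, with stepsize \eta_k and Bregman distance D,
\pi_{k+1}(\cdot \mid s) \;=\; \arg\min_{p \in \Delta(\mathcal{A})}
  \Bigl\{ \eta_k \,\bigl\langle Q^{\pi_k}(s,\cdot),\, p \bigr\rangle
          + D\bigl(p,\ \pi_k(\cdot \mid s)\bigr) \Bigr\}.
% PDA replaces the single Q^{\pi_k} by a weighted aggregate of past
% Q/advantage estimates, in the spirit of dual averaging.
```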
Chair: Marcia Fampa.
Thu 26/02, 9:00–10:00
Ernesto Birgin
Flexible Job Shop Scheduling with Nonlinear Routes
In this talk, we will consider the Flexible Job Shop (FJS) scheduling problem with non-linear routes. Job Shop (JS) scheduling involves a set of jobs, each composed of ordered operations that must be processed on specific machines. The problem consists of deciding when each operation will be executed, while respecting the order of operations within each job and the limited capacity of the machines. While in the JS each operation must be processed on a specific machine, in the FJS each operation can be executed on one of several alternative machines, increasing the flexibility of the problem. In the FJS with non-linear routes, each job’s operations do not necessarily follow a linear order, with precedence relationships represented by a directed acyclic graph (DAG). Our interest in the FJS with non-linear routes stems from the fact that it naturally models a real and contemporary problem: Online Print Shop (OPS) scheduling. This problem involves the planning of production in digital print shops, where orders arrive at different times and can follow alternative processing routes. In this presentation, we will highlight specific characteristics of the OPS and discuss different alternatives for the cost function to be minimized. Finally, we will mention some variants that incorporate non-linear aspects in the context of scheduling problems.
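A minimal greedy list-scheduling sketch for a DAG-structured FJS instance (an illustrative heuristic of ours, not the authors' method; the instance format is an assumption):

```python
from collections import defaultdict

def greedy_fjs_schedule(ops, preds):
    """Greedy list scheduling for a flexible job shop with DAG routes.
    ops:   {operation: {machine: duration}}  (alternative machines)
    preds: {operation: [predecessor operations]}  (the DAG)
    Each operation is scheduled, in topological order, on the
    alternative machine on which it can finish earliest."""
    indeg = {o: len(preds.get(o, [])) for o in ops}
    succs = defaultdict(list)
    for o, ps in preds.items():
        for p in ps:
            succs[p].append(o)
    ready = [o for o in ops if indeg[o] == 0]
    machine_free = defaultdict(float)  # when each machine becomes idle
    finish = {}                        # completion time of each operation
    while ready:
        op = ready.pop(0)
        # Earliest start: all predecessors in the DAG must be finished.
        est = max((finish[p] for p in preds.get(op, [])), default=0.0)
        # Pick the alternative machine with the earliest finish time.
        m, d = min(ops[op].items(),
                   key=lambda md: max(est, machine_free[md[0]]) + md[1])
        start = max(est, machine_free[m])
        finish[op] = start + d
        machine_free[m] = finish[op]
        for s in succs[op]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return finish, max(finish.values())
```

On a three-operation instance where "c" must wait for both "a" and "b", the heuristic produces a feasible schedule respecting the DAG and the machine capacities; of course, such greedy rules carry no optimality guarantee.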
Chair: María Laura Schuverdt.
Thu 26/02, 14:00–15:00
Samir Adly
The Variational Convexity Modulus: A Spectral Curvature Invariant and Its Exact Transport by Moreau Envelopes
In this talk, we briefly review the notion of the variational convexity modulus at a proximal base pair as a quantitative second-order curvature invariant for possibly nonsmooth and nonconvex functions. This modulus captures the best local quadratic convexity (or hypoconvexity) compatible with the variational geometry of the subdifferential, and it plays a central role in numerical optimization: it acts as a local conditioning parameter, underpins stability of critical points, and governs the well-posedness and sensitivity of proximal/resolvent-based mappings, thereby shaping the local behavior (and, in favorable regimes, the rates) of proximal splitting and related first-order schemes beyond the smooth setting.
We then highlight a recent breakthrough due to Rockafellar [1]: the modulus admits an exact spectral characterization in terms of the strict second subderivative, namely it coincides with the minimal “eigenvalue-like” value obtained by evaluating the strict second-order subderivative along unit directions. Moreover, prox-regularity of the function at the base pair is characterized in terms of this modulus.
Building on this viewpoint, we explain how one can establish a precise transfer principle under proximal smoothing. In particular, we show a commutation mechanism between strict second subdifferentiation and the Moreau envelope in a localized resolvent chart, yielding an explicit attenuation law that transforms the variational convexity modulus under Moreau regularization. As a consequence, we obtain both forward stability results for the modulus of the envelope and an explicit inversion formula that allows spectral recovery of the base-pair curvature of f from its smooth surrogate. We conclude with illustrative examples spanning smooth, nonsmooth, and truly nonconvex regimes, emphasizing when the spectral picture is sharp and when it collapses.
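For reference, the Moreau envelope and proximal mapping that drive the smoothing are (standard definitions, parameter λ > 0):

```latex
e_\lambda f(x) \;=\; \inf_{w}\Bigl\{ f(w) + \tfrac{1}{2\lambda}\|w - x\|^2 \Bigr\},
\qquad
\operatorname{prox}_{\lambda f}(x) \;=\; \arg\min_{w}\Bigl\{ f(w) + \tfrac{1}{2\lambda}\|w - x\|^2 \Bigr\}.
```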
Chair: Gabriel Haeser.
Fri 27/02, 9:00–10:00
Claudia Sagastizábal
Approximate subdifferentials as a bridge between variational analysis and numerical optimization
The fundamental Brøndsted-Rockafellar theorem shows that epsilon-subgradients of proper convex functions are close to exact subgradients at nearby points. Thanks to this result, the epsilon-subdifferential can be viewed as a continuous multivalued function which constitutes a localized approximation to the subdifferential.
In a reverse approach, motivated by numerical considerations, Lemaréchal’s transportation formula expresses subgradients at a given point as epsilon-subgradients at another point.
The continuity of the epsilon-subdifferential, combined with the constructive approach of epsilon-subgradients using the transportation formula, underlies many nonsmooth optimization algorithms, particularly those based on inexact variants of the proximal point method.
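The two ingredients above can be written out explicitly (standard convex-analysis definitions):

```latex
% epsilon-subdifferential of a proper convex function f:
\partial_\varepsilon f(y) \;=\; \bigl\{\, g :
  f(z) \ge f(y) + \langle g,\, z - y\rangle - \varepsilon
  \ \ \forall z \,\bigr\}.
% Lemarechal's transportation formula: if g \in \partial f(x), then
g \in \partial_\varepsilon f(y)
\quad \text{with} \quad
\varepsilon \;=\; f(y) - f(x) - \langle g,\, y - x\rangle \;\ge\; 0.
```

The transported error ε is exactly the gap in the subgradient inequality between the two points, which is what makes the formula constructive for bundle-type methods.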
We explain how to tailor continuous extensions of the subdifferential for structured nonsmooth functions, not necessarily convex, arising in problems with composite or difference-of-convex objective functions.
The value of this new approach is illustrated with theoretical and numerical applications.
Credit to co-authors will be given during the talk.
Chair: Juan Pablo Luna.
Fri 27/02, 13:30–14:30


