Contributed Talks

Mon, 23rd – 14:00-15:30

Room B101
14:00 — Ademir Alves Ribeiro: A new convergence rate of the steepest descent regarding the Euclidean norm
14:30 — Douglas Soares Gonçalves: Convergence and regularization properties of the Levenberg–Marquardt method with Singular Scaling
15:00 — Paulo J. S. Silva: Basis pursuit by inconsistent alternating projections

Room B103
14:00 — Andreas H Hamel: Set-valued utility maximization
14:30 — Bernardo Freitas Paulo da Costa: Random Gradient-free Optimization in Infinite Dimensional Spaces
15:00 — Tiago Lino Bello: Novel Heuristic and Exact algorithms for Redundant Benders Cut Elimination in Stochastic Dual Dynamic Programming
15:30 — Esnil Josue Guevara Mendoza: Beyond Traditional Risk Measures: Cluster-Driven Robustness in Stochastic Energy Models

Room B105
14:00 — Hilal Ahmad Bhat: Mond-Weir Duality and Saddle Point Analysis on Riemannian Manifolds
14:30 — Mahwash Aftab: Saddle Point Criteria and KKT Conditions for Interval-Valued Optimization Problems on Riemannian Manifolds
15:00 — Mariane Cassenote: From Exactness to Efficiency: A Spectrum of Hybrid Strategies for Numerical Constrained Global Optimization

Tue, 24th – 10:30-12:30

Room B101
10:30 — Manoel Jardim: Decomposition Methods for Quasi-Variational Inequalities
11:00 — Hugo José Lara Urdaneta: On the Generalized DWGM for convex minimization
11:30 — Ray Victor Guimarães Serra: A Proximal Gradient Method with an Explicit Line search for Multiobjective Optimization
12:00 — Luciano Santana Begot: General descent methods for unconstrained multiobjective optimization via scalarization

Room B103
10:30 — Leandro da Fonseca Prudente: An active set method for box-constrained multiobjective optimization problems
11:00 — María Laura Schuverdt: The Relaxed Quasinormality Constraint Qualification: a new condition ensuring boundedness of dual augmented Lagrangian sequences
11:30 — Mariana da Rosa: On the boundedness of multipliers in augmented Lagrangian methods for MPCC
12:00 — Daiana Oliveira dos Santos: Augmented Lagrangian methods for nonlinear semidefinite programming with complementarity constraints: stationarity analysis and convergence

Room B105
10:30 — Adriana Washington Henarejos: Quadratic Cuts for Strongly Convex Deterministic and Stochastic Optimization
11:00 — Vincent Guigues: A single cut proximal bundle method for stochastic convex composite optimization
11:30 — Renata Pedrini: Two-Period Decomposition in Stochastic Dual Dynamic Programming
12:00 — Lilian Chaves Brandão: Efficient Generation of Lagrangian Cuts through Boundary Dual Maximization in SDDiP

Tue, 24th – 15:30-17:30

Room B101
15:30 — Alfredo Iusem: Asymptotic convergence of high-order proximal-point methods
16:00 — Benar Svaiter: A family of projection splitting methods for the sum of maximal monotone operators
16:30 — Di Liu: On circumcentered direct methods for monotone variational inequality problems
17:00 — Jefferson Divino Gonçalves de Melo: A Cubic Regularization Method for Multiobjective Optimization

Room B103
15:30 — Jon Lee: ADMM for 0/1 D-Opt and MESP relaxations
16:00 — Marcia Fampa: On the relationship between MESP and 0/1 D-Opt and their upper bounds
16:30 — Emerson Vitor Castelani: A Conformal Geometric Algebra Framework for Fitting and Classification of Geometric Entities
17:00 — Wagner da Rocha: A hybrid combinatorial-continuous strategy for solving molecular distance geometry problems

Room B105
15:30 — Paula Monserratte Castro Castro: Analysis of four-dimensional variational data assimilation problems in low regularity spaces
16:00 — Thiago Silveira: Relationship between Constraint Qualifications and Strong Second-Order Optimality Condition
16:30 — HOUNKPE Nounassou Vincent: Modeling carbon monoxide dispersion by a three-dimensional analytical model in the Gulf of Guinea using the 4D-Var assimilation method
17:00 — Edilaine dos Santos Duran: Sofia: Tests for Local and Global Optimality for Continuous Optimization Models

Thu, 26th – 10:30-12:30

Room B101
10:30 — Leonardo Delarmelina Secchin: Parallel Newton methods for the continuous quadratic knapsack problem: A Jacobi and Gauss-Seidel tale
11:00 — Luis Felipe Bueno: An Inexact General Descent Method with Applications in Differential Equation-Constrained Optimization
11:30 — Max Leandro Nobre Gonçalves: Subsampled cubic regularization method with distinct sample sizes for function, gradient and Hessian
12:00 — Vilmar Gehlen Filho: Inexact Cubic Regularization Method with Adaptive Reuse of Hessian Approximations

Room B103
10:30 — Flávia Chorobura: Coordinate Descent Methods for Nonseparable Composite Optimization
11:00 — Amanda Ferreira de Azevedo: Hybrid Time-Granularity Strategies for Day-Ahead Hydrothermal Scheduling in the Brazilian Power System (DESSEM Model)
11:30 — Juan Pablo Luna: Modeling the New Brazilian Gas Market: Equilibrium Structure and Optimization Reformulation
12:00 — Ricardo Turano Figueiredo: Energy Pricing in Unit Commitment Using Lagrangean Relaxation

Room B105
10:30 — Glaydston de Carvalho Bento: Subdifferential theory and the Fenchel Conjugate via Busemann Functions on Hadamard Manifolds
11:00 — Maurício Silva Louzeiro: An Adaptive Cubic Regularization Inexact-Newton Method on Riemannian Manifolds
11:30 — Volker Schulz: Riemannian Optimization in Statistical Modeling: From Mixed Models to Mixture Models
12:00 — Igor Fontenele do Amaral: On boosted proximal point method for DC functions in Hadamard manifolds

Thu, 26th – 15:30-17:00

Room B101
15:30 — Geovani Nunes Grapiglia: On the Complexity of Lower-Order Implementations of Higher-Order Methods
16:00 — Francisco Sobral: Well poised interpolation sets for linearly constrained derivative-free optimization problems
16:30 — Gabriel Haeser: On constraint qualifications for lower-level sets

Room B103
15:30 — Mateus Braga Oliveira: Linear Programming Applied to Strategic Planning in Cosmetics Resale Operations
16:00 — Eduardo Zorzo Sartoretto: BESOT: An Evolutionary Algorithm for Size Optimization of 2D Truss Structures
16:30 — Vitaliano de Sousa Amaral: A Partially Derivative-Free Proximal Method for Composite Multiobjective Optimization in the Hölder Setting

Room B105
15:30 — Milena Almeida Leite Brandão: Optimal Control of Infectious Diseases: Comparative Analysis of Deterministic and Heuristic Approaches Applied to the SELIRVHD Model
16:00 — Hamidreza Anbarlooei: Why Particle Swarm Optimization is Inherently Suited for Massively Parallel Computation
16:30 — Raul Tintaya Marcavillaca: A Unified Inertial Framework for Strongly Convergent Proximal-Point Methods

Fri, 27th – 10:30-12:00

Room B101
10:30 — Roberto Andreani: A new constant-rank-type condition related to MFCQ and local error bounds
11:00 — Felipe Lara: Star Quasiconvexity: a Unified Approach for Linear Convergence of First-Order Methods Beyond Convexity
11:30 — Mauricio Romero Sicre: Alternating Inertial Regularized HPE-Type Methods for Solving Monotone Inclusion Problems

Room B103
10:30 — Evelin Heringer Manoel Krulikovski: Continuous Augmented Lagrangian Approach for Cardinality-Constrained Multi-objective Optimization
11:00 — Gilson do Nascimento Silva: A Regularized Hessian-Free Newton-Type Method with Global $\mathcal{O}(k^{-2})$ Convergence
11:30 — José Luis Romero: On the solution of the subproblem in the smoothed sharp augmented Lagrangian method

Room B105
10:30 — Philip Thompson: Robust Mean Estimation in High Dimensions: A Statistical Optimization Landscape
11:00 — Vladmir Pestov: On the consistency of k nearest neighbour classifiers optimized over variable metrics
11:30 — Maria Eduarda Pinheiro: An Optimization-Based Algorithm for Fair and Calibrated Synthetic Data Generation

Session: Mon 23rd – 14:00-15:30

Room: B101

Chair: Ademir Alves Ribeiro

14:00 — Ademir Alves Ribeiro/UFPR (Faculty / Researcher)

A new convergence rate of the steepest descent regarding the Euclidean norm

We analyze the convergence rate, measured in the Euclidean norm, of the steepest descent method with exact line search applied to the unconstrained minimization of strongly convex quadratic functions. We present a bound that improves upon the convergence rates available in the literature, providing a highly accurate estimate of the exact Q-linear convergence rate of the method. Moreover, we study the behavior of the exact rate, establishing some expected and natural properties, such as its dependence solely on the condition number of the Hessian and its monotonicity. We prove that the general analysis can be carried out by treating the two-dimensional case.
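
For context, the classical bound that this line of work refines is the Kantorovich-type estimate (stated here in notation not taken from the abstract): for $f(x) = \frac{1}{2}x^T A x - b^T x$ with $\mu I \preceq A \preceq L I$ and condition number $\kappa = L/\mu$, exact-line-search steepest descent satisfies

$$ f(x_{k+1}) - f^* \le \left( \frac{\kappa - 1}{\kappa + 1} \right)^{2} \left( f(x_k) - f^* \right). $$

The talk concerns sharper estimates of the corresponding Q-linear rate when the error is measured in the Euclidean norm instead.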

14:30 — Douglas Soares Gonçalves/UFSC (Faculty / Researcher)

Convergence and regularization properties of the Levenberg–Marquardt method with Singular Scaling

In the Levenberg–Marquardt method with Singular Scaling, the regularization term added to the Gauss-Newton model is a semi-norm defined by a positive semidefinite matrix with certain properties. In this talk we will discuss the local convergence analysis of such a method under error bound assumptions as well as its regularization properties under a stronger tangential cone condition. Some applications in nonlinear least squares problems arising from parameter reconstruction in heat conduction problems will be presented to illustrate the advantages of the method.
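
In notation chosen here (not taken from the abstract), the step of such a method can be sketched as follows: with residual $F(x_k)$, Jacobian $J_k$, and a fixed positive semidefinite matrix $L$ inducing the semi-norm $\|s\|_L^2 = s^T L s$, the trial step solves

$$ \min_{s} \; \frac{1}{2}\|F(x_k) + J_k s\|^2 + \frac{\lambda_k}{2}\, s^T L s, \qquad \text{i.e.} \quad (J_k^T J_k + \lambda_k L)\, s_k = -J_k^T F(x_k), $$

and taking $L = I$ recovers the classical Levenberg–Marquardt step.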

15:00 — Paulo J. S. Silva/Universidade Estadual de Campinas (Faculty / Researcher)

Basis pursuit by inconsistent alternating projections

In this talk, we present a new method for solving the basis pursuit problem, which seeks a vector with smallest $\ell_1$-norm among all solutions of a given linear system of equations. Basis pursuit is a well-known convex relaxation of the sparse affine feasibility problem, where one seeks sparse solutions to underdetermined systems. While basis pursuit can be trivially reformulated as a linear program and solved with standard linear programming tools, we propose an alternative approach that tackles the problem in its original formulation.

We will present a scheme based on alternating projections applied to carefully designed subproblems. A key feature of our method is that these subproblems are constructed to be inconsistent, involving two non-intersecting sets. Recent theoretical advances have shown that such inconsistency, when arising from infeasibility, can actually accelerate the convergence of alternating projection methods. In our work, we exploit this phenomenon systematically to develop a numerically competitive algorithm for the basis pursuit problem.

We will discuss the theoretical foundations of the method, demonstrate its numerical performance, and compare it with existing approaches for solving basis pursuit.
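
As a toy illustration of the classical alternating-projection template that this approach builds on (this is not the authors’ algorithm: the sets below are the naive choices, and the $\ell_1$-ball radius tau is a guess supplied by the caller; choosing tau below the optimal value makes the pair of sets intentionally inconsistent):

import numpy as np

def proj_l1_ball(v, tau):
    # Euclidean projection onto {x : ||x||_1 <= tau} (Duchi et al., 2008)
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def proj_affine(x, A, b):
    # Euclidean projection onto {x : Ax = b} (A assumed full row rank)
    return x - A.T @ np.linalg.solve(A @ A.T, A @ x - b)

def alternating_projections(A, b, tau, iters=500):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = proj_affine(proj_l1_ball(x, tau), A, b)
    return x

When the two sets do not intersect, the iterates approach a pair of nearest points between the sets; the talk exploits precisely this inconsistent regime, with subproblems designed far more carefully than in this sketch.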

Room: B103

Chair: Bernardo Freitas Paulo da Costa

14:00 — Andreas H Hamel/Free University of Bozen-Bolzano (Faculty / Researcher)

Set-valued utility maximization

How should one do utility maximization with respect to an incomplete preference which cannot be represented by a single real-valued utility function? Such a preference relation only has a multi-utility representation with, in general, infinitely many functions. Particular cases include stochastic dominance orders as well as Bewley preferences and vector orders in linear spaces. It is shown that a multi-utility representation generates a complete lattice of sets which turns utility maximization into a set optimization problem, and this also sheds new light on traditional multi-criteria optimization. Solution concepts are given which include the attainment of the supremum on the one side and maximality on the other, since, unlike in the real-valued case, these two concepts no longer coincide. Duality concepts are introduced and corresponding results are given. The link to preferences for flexibility known in economics is established, and applications to economic and financial decision making are discussed, e.g., a set-valued certainty equivalent for multi-dimensional random positions. Finally, new stochastic dominance orders based on set-valued quantiles are shown to be special cases of order relations with a multi-utility representation.

14:30 — Bernardo Freitas Paulo da Costa/FGV (Faculty / Researcher)

Random Gradient-free Optimization in Infinite Dimensional Spaces

Functional optimization problems are core to many applications. Though often solved through finite-dimensional gradient descent over a parametrization of the functions such as neural networks, an enticing alternative is to instead do gradient descent directly in function space, seen as an infinite-dimensional Hilbert space; this generally enables provable guarantees and fast convergence. However, infinite-dimensional gradients are often hard to compute in practice, hindering the applicability of such methods. On the other hand, individual directional derivatives can be easily computed using forward-mode scalar automatic differentiation.
We propose a random gradient-free method for optimization in infinite dimensional Hilbert spaces that leverages only directional derivatives, thus bypassing the tractability issue. Our method only requires, in closed form, knowledge of a pre-basis for the Hilbert space domain, i.e., a linearly-independent set whose span is dense in the Hilbert space. This is much easier to obtain than full orthonormal bases or reproducing kernels, which may not even exist, and is independent of the loss itself.
We showcase the use of our method to solve partial differential equations à la PINNs, where it effectively enables provable convergence.

15:00 — Tiago Lino Bello/CEPEL (Faculty / Researcher)

Novel Heuristic and Exact algorithms for Redundant Benders Cut Elimination in Stochastic Dual Dynamic Programming

The iterative process of Stochastic Dual Dynamic Programming (SDDP) often leads to an excessive growth in the number of Benders cuts (or cutting planes), many of which are redundant, i.e., can be removed without affecting the optimal value of the optimization problem. These redundant cuts do not contribute to improving the solution but may significantly increase the computational cost. To address this issue, two methods were implemented and evaluated for identifying and eliminating redundant cuts during SDDP execution: the Modified Shapiro Algorithm and the Pairwise Redundancy Analysis (PRA). In addition, two distinct parallelization strategies were developed for the PRA algorithm. The algorithms have been implemented in the official long-term planning software used in the Brazilian system (NEWAVE), and the results obtained from a case study based on the Monthly Operation Program carried out by the Independent System Operator indicate that: (i) the inclusion of the PRA reduced NEWAVE’s execution time by 7%, decreasing the backward and forward phases by up to 10% and eliminating over 15% of redundant cuts; (ii) the Modified Shapiro Algorithm reduced the backward and forward phase times by up to 20% and removed over 50% of redundant cuts, although its high computational cost outweighs its benefits; (iii) cut selection in the SDDP process plays a key role in NEWAVE’s performance, potentially reducing total execution time by up to 50%; and (iv) the period-based parallelization strategy applied to the PRA reduced its execution time by up to 60% compared to the cut-based parallelization approach.

15:30 — Esnil Josue Guevara Mendoza/Universidad UNIACC (Faculty / Researcher)

Beyond Traditional Risk Measures: Cluster-Driven Robustness in Stochastic Energy Models

Strategic energy planning under uncertainty faces critical challenges when addressing high-dimensional parameter spaces through Two-Stage Stochastic Programming. This research investigates the structural patterns emerging from applying different risk measures (risk-neutral, CVaR, and others) within high-dimensional stochastic energy systems. Using unsupervised learning techniques (PCA, UMAP, t-SNE), we first map and characterize the solution space geometry and uncertainty structures inherent to these complex systems. Building on these insights, we propose a novel methodology that leverages hierarchical and K-means clustering to estimate scenario weights directly from data, moving beyond traditional probability assumptions. The clustering analysis enables the identification of critical uncertainty regimes and facilitates the design of innovative constraints that minimize cost dispersion and enhance solution robustness. By interpreting cluster characteristics, we develop targeted model enhancements that strengthen strategic investment decisions against distributional ambiguity. Our approach provides a systematic framework to transform high-dimensional uncertainty into actionable insights, yielding more resilient and interpretable energy planning solutions validated through computational experiments.

Room: B105

Chair: Hilal Ahmad Bhat

14:00 — Hilal Ahmad Bhat/Instituto de Matemática e Estatística, Universidade de São Paulo (IME-USP) (Postdoc)

Mond-Weir Duality and Saddle Point Analysis on Riemannian Manifolds

This study explores an optimization problem defined on Riemannian manifolds, where the
objective functions are interval-valued and the constraints consist of real-valued equalities and inequalities. Assuming geodesic convexity, the research establishes weak, strong, and strict converse duality theorems for the Mond-Weir dual, highlighting the connection between optimal solutions of the primal and dual problems. Additionally, the concept of a saddle point for the interval-valued Lagrangian function associated with the primal problem is introduced, and the relationship between these saddle points
and the primal optimal solutions is analyzed. Several examples are provided to illustrate and support the theoretical results on various manifolds.

14:30 — Mahwash Aftab/Aligarh Muslim University (Graduate student)

Saddle Point Criteria and KKT Conditions for Interval-Valued Optimization Problems on Riemannian Manifolds

Riemannian manifolds provide a powerful framework for optimization, overcoming limitations of Euclidean spaces and enabling more precise and efficient problem-solving. Their unique geometric structure allows for advanced approaches to complex problems. Interval optimization further enhances these capabilities, offering solutions to real-world challenges involving uncertainty. In this paper, we explore the relationship between two key optimality conditions – saddle point criteria and Karush-Kuhn-Tucker (KKT) conditions – in the context of Riemannian manifolds. Our study focuses on two types of optimization problems based on the nature of the functions involved:
Real-valued optimization: Both the objective and constraint functions are real-valued. Interval-valued optimization: Both the objective and constraint functions are interval-valued.
For the interval-valued optimization problem, we incorporate the concepts of weak differentiability and generalized Hukuhara (gH) directional differentiability of the involved functions. Additionally, we establish fundamental results on Riemannian manifolds using gH-directional differentiability of interval-valued functions (IVFs) to derive the main results. The theoretical findings are supported and clarified through illustrative examples.

15:00 — Mariane Cassenote/Universidade Federal do Paraná (Faculty / Researcher)

From Exactness to Efficiency: A Spectrum of Hybrid Strategies for Numerical Constrained Global Optimization

Numerical constrained global optimization presents significant challenges, especially in non-convex problems where traditional exact methods can be computationally infeasible and metaheuristics offer no optimality guarantees. Hybridization, combining the strengths of different approaches, emerges as a promising alternative. This work presents and draws a parallel between three optimizers, each exploring a unique synergy to address these challenges.

First, OGRe (Relaxed Global Optimization), an interval-based Branch and Bound (B&B) method that employs constraint propagation techniques and exploits the problem’s structural properties through an Epiphytic decomposition of the constraint hypergraph.

Second, DECP (Differential Evolution with Constraint Propagation), a structural metaheuristic based on Differential Evolution (DE). Unlike pure black-box approaches, DECP exploits the problem structure, enabling constraint propagation to effectively prune sub-optimal and inconsistent regions during the stochastic search.

Third, HIBB (Hybrid Interval Branch and Bound), a hybrid B&B algorithm that integrates constraint propagation with a DE-based metaheuristic for local search within each subspace. This local search assesses subspace quality to determine the next region to explore, effectively guiding the search process.

Finally, by analyzing this spectrum ranging from OGRe’s systematic exactness to DECP’s stochastic efficiency, with HIBB as a balanced hybrid, we demonstrate how these different degrees of hybridization create a trade-off between solution quality and computational cost, providing a flexible toolkit for solving complex constrained problems.

Session: Tue 24th – 10:30-12:30

Room: B101

Chair: Hugo José Lara Urdaneta

10:30 — Manoel Jardim/IMPA (Graduate student)

Decomposition Methods for Quasi-Variational Inequalities

We propose a novel algorithm for solving Quasi-Variational Inequalities inspired by decomposition methods from convex analysis. We introduce new constraint qualifications specifically designed for this class of problems and prove some convergence results for our algorithm. The method is efficient and aims to solve large-scale problems, as demonstrated through applications to Walrasian equilibrium computation formulated as Generalized Nash Equilibrium Problems. This is a joint work with Claudia Sagastizábal and Mikhail V. Solodov.

11:00 — Hugo José Lara Urdaneta/CAC/CTE/UFSC – Blumenau – Brasil (Faculty / Researcher)

On the Generalized DWGM for convex minimization

A family of weighted conjugate-type methods for solving convex quadratic optimization problems was recently introduced. This family is based on convex combinations of the objective function and the gradient norm, and includes the classical Conjugate Gradient method and the Delayed Weighted Gradient method as extreme cases. The intermediate methods offer a trade-off between function-value reduction and stationarity, making the framework promising for practical applications. In this work, we extend the framework to address more general convex minimization problems. We propose two algorithmic variants to deal with both theoretical and practical considerations. We analyze their convergence properties and present numerical experiments to demonstrate the effectiveness of the proposals.

11:30 — Ray Victor Guimarães Serra/Federal University of Piauí (Faculty / Researcher)

A Proximal Gradient Method with an Explicit Line search for Multiobjective Optimization

We present a proximal gradient method for solving convex multiobjective optimization problems, where each objective function is the sum of two convex functions, with one assumed to be continuously differentiable. The algorithm incorporates a backtracking line search procedure that requires solving only one proximal subproblem per iteration and is exclusively applied to the differentiable part of the objective functions. Under mild assumptions, we show that the sequence generated by the method converges to a weakly Pareto optimal point of the problem. Additionally, we establish an iteration-complexity bound on the number of iterations required to find an $\varepsilon$-approximate weakly Pareto point.
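
For orientation, a common form of the proximal subproblem in this setting (standard in the multiobjective proximal gradient literature; notation chosen here), for objectives $F_i = f_i + g_i$ with $f_i$ smooth and $g_i$ convex, is

$$ x_{k+1} = \arg\min_{x} \; \max_{i} \left\{ \langle \nabla f_i(x_k), x - x_k \rangle + g_i(x) - g_i(x_k) \right\} + \frac{1}{2\alpha_k} \|x - x_k\|^2, $$

where $\alpha_k$ is the step size adjusted by the backtracking line search on the smooth parts.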

12:00 — Luciano Santana Begot/UNICAMP – Universidade Estadual de Campinas (Graduate student)

General descent methods for unconstrained multiobjective optimization via scalarization

We present a general first-order algorithm for locating the Pareto region in multiobjective optimization problems. Instead of relying on multiple restarts, we adopt a “gathering” strategy that accumulates, over a single run, representative points of the Pareto front. The method is based on minimizing scalarizations of the objectives and non-monotone line search. An important aspect is the dynamic update of the weights, where we adopt diverse strategies to adjust the weights throughout the optimization process. We show theoretical convergence results for the proposed method. Numerical experiments indicate that the gathering strategy is efficient, generating a good representation of the Pareto front without the cost of multiple independent initializations.
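
The simplest instance of such a scalarization is the weighted sum (a standard construction, in notation chosen here): for objectives $f_1,\dots,f_m$ and weights in the simplex,

$$ \min_{x} \; \sum_{i=1}^{m} w_i f_i(x), \qquad w_i \ge 0, \quad \sum_{i=1}^{m} w_i = 1, $$

and dynamically varying $w$ across iterations is what allows a single run to gather distinct points of the Pareto front.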

Room: B103

Chair: Leandro da Fonseca Prudente

10:30 — Leandro da Fonseca Prudente/UFG (Faculty / Researcher)

An active set method for box-constrained multiobjective optimization problems

This talk discusses an active-set algorithm for smooth multiobjective optimization problems subject to box constraints. The algorithm works on one face of the feasible set at a time, treating it as a lower-dimensional region on which the problem can be simplified. At each iteration, it decides whether to remain on the current face or to move to a lower-dimensional one, thus characterizing two types of iterations: face-abandoning and face-exploring iterations. Backtracking and extrapolation strategies are combined to ensure global convergence while improving practical performance. Under standard regularity conditions, we establish dual nondegeneracy and prove finite identification of the active set.

11:00 — María Laura Schuverdt/Universidad Nacional de La Plata (Faculty / Researcher)

The Relaxed Quasinormality Constraint Qualification: a new condition ensuring boundedness of dual augmented Lagrangian sequences

In this work, we introduce a relaxed version of the well-known quasinormality constraint qualification introduced by Hestenes in the 1970s, called the relaxed quasinormality condition (RQN). This relaxed variant provides an effective way to handle equality constraints, similar to the approach used in the relaxed versions of CRCQ and CPLD. We analyze the relationship between RQN and other well-known constraint qualifications from the literature. Additionally, we demonstrate an important stability property: under RQN, subsequences of approximate Lagrange multipliers associated with accumulation points generated by the augmented Lagrangian method remain bounded.

11:30 — Mariana da Rosa/IMECC – UNICAMP (Graduate student)

On the boundedness of multipliers in augmented Lagrangian methods for MPCC

In this talk, we present a theoretical analysis of augmented Lagrangian (AL) methods for mathematical programs with complementarity constraints (MPCCs). Our focus is on a variant that reformulates the complementarity conditions via slack variables, enforcing them directly within the subproblems rather than through penalization. Within this framework, we introduce specialized constraint qualifications of quasi-normality type and establish a global convergence result under these assumptions. We also examine the behavior of the associated dual sequences and show that they remain bounded under the proposed CQs. Finally, we compare this approach with the standard AL framework, discussing both theoretical properties and numerical performance on instances from the MacMPEC collection. The results indicate that maintaining complementarity within the subproblems often leads to improved numerical stability.

12:00 — Daiana Oliveira dos Santos/UNIFESP (Faculty / Researcher)

Augmented Lagrangian methods for nonlinear semidefinite programming with complementarity constraints: stationarity analysis and convergence

We consider nonlinear semidefinite programming problems with semidefinite cone complementarity constraints, a class of highly degenerate problems where classical constraint qualifications typically fail. To analyze and address such problems, we introduce a tightened formulation that exploits the spectral decomposition of the complementarity structure, enabling a more tractable analysis of optimality conditions. Based on this formulation, we define a hierarchy of stationarity concepts—W, C, M, and S—that generalize standard notions from mathematical programs with complementarity constraints to the semidefinite setting; we also propose sequential optimality conditions. We then analyze an augmented Lagrangian method tailored to the tightened problem and prove that the accumulation points of the generated sequence are C-stationary.

Room: B105

Chair: Vincent Guigues

10:30 — Adriana Washington Henarejos/Fundação Getulio Vargas (Graduate student)

Quadratic Cuts for Strongly Convex Deterministic and Stochastic Optimization

Quadratic Cuts for Strongly Convex Optimization (QCSC) is introduced as a new deterministic method designed to solve problems with strongly convex functions through the construction of piecewise quadratic lower approximations. The approach extends Kelley’s classical cutting-plane method by replacing affine cuts with quadratic ones, exploiting strong convexity to build tighter and more accurate models of the objective function. This curvature-aware formulation eliminates the need for extra regularization while improving numerical stability and convergence speed.

A detailed theoretical analysis establishes the validity of the quadratic cuts and derives convergence and complexity guarantees, showing that the method shares similar theoretical properties with proximal bundle-type algorithms while preserving a simpler structure. An equivalent composite formulation is also presented, clarifying the contribution of the global quadratic term to the model. In addition, initial tests indicate good performance compared to competing methods.

Building upon this deterministic foundation, the same quadratic-cut modeling principles are extended to multistage stochastic optimization through the Stochastic Quadratic Dynamic Programming (SQDP) algorithm. SQDP generalizes the classical SDDP (Stochastic Dual Dynamic Programming) method by replacing its affine cuts with quadratic ones to approximate recourse functions, allowing more accurate representations of cost-to-go functions and improving the convergence behavior. Numerical results indicate that SQDP can significantly outperform SDDP in problems with strong convexity, particularly when curvature plays a central role. This highlights the broader relevance of the quadratic-cut framework, positioning QCSC and its stochastic extension as promising tools for both deterministic and stochastic strongly convex optimization.
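
The object underlying both QCSC and SQDP is easy to state (this is the standard strong-convexity inequality, in notation chosen here): if $f$ is $\mu$-strongly convex and $g_k \in \partial f(x_k)$, then every trial point $x_k$ yields the valid lower bound

$$ f(x) \;\ge\; f(x_k) + \langle g_k, x - x_k \rangle + \frac{\mu}{2}\|x - x_k\|^2 \qquad \text{for all } x, $$

namely Kelley’s affine cut tightened by the quadratic term; a model built as the pointwise maximum of such cuts is piecewise quadratic.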

11:00 — Vincent Guigues/FGV (Faculty / Researcher)

A single cut proximal bundle method for stochastic convex composite optimization

This paper considers optimization problems where the objective is the sum of a function given by an expectation and a closed convex composite function, and proposes stochastic composite proximal bundle (SCPB) methods for solving it. Complexity guarantees are established for them without requiring knowledge of parameters associated with the problem instance. Moreover, it is shown that they have optimal complexity when these problem parameters are known. To the best of our knowledge, this is the first proximal bundle method for stochastic programming able to deal with continuous distributions. Finally, we present computational results showing that SCPB substantially outperforms the robust stochastic approximation (RSA) method in all instances considered.

11:30 — Renata Pedrini/Federal University of Santa Catarina (UFSC) (Faculty / Researcher)

Two-Period Decomposition in Stochastic Dual Dynamic Programming

Stochastic dual dynamic programming (SDDP) is a well-established algorithm for solving large-scale multistage stochastic linear programs and has been extensively applied since the early 1990s, particularly in hydrodominant energy systems. The method represents uncertainty through a scenario tree and avoids the explicit enumeration of all nodes by sampling scenarios at each iteration. SDDP decomposes the original multistage problem into a sequence of linear subproblems, which are solved independently in forward and backward passes to construct approximations of future cost functions.
This work proposes an extension of the standard SDDP based on scenario tree node aggregation. Instead of solving only one-period subproblems, connected nodes spanning two periods are grouped and solved jointly, resulting in larger subproblems. While this aggregation increases the computational effort per subproblem, it improves the quality of the resulting future cost function approximations and leads to tighter optimality cuts. The trade-off between computational burden and solution quality is explicitly analyzed.
In addition to a single aggregation scheme, the paper investigates the combination of different node aggregations within the same algorithm using synchronous and asynchronous parallelization strategies. This framework allows subproblems of different sizes to coexist, balancing fast iterations with higher-quality cuts. The proposed methods are evaluated on a long-term hydrothermal scheduling problem based on data from the Brazilian power system. Numerical results show that SDDP with node aggregation achieves improved solution quality compared to the standard SDDP, while maintaining the same overall execution time.

12:00 — Lilian Chaves Brandão/UFJF (Graduate student)

Efficient Generation of Lagrangian Cuts through Boundary Dual Maximization in SDDiP

This work investigates the solution strategy to compute Lagrangian cuts, from the dual maximization problem, for multistage stochastic mixed-integer programming, focusing on situations where the set of optimal dual solutions is infinite and unbounded. Unlike traditional approaches that seek any optimal dual, our study emphasizes the identification of specific dual solutions located at the boundary of the optimality region. These boundary duals are essential for constructing Lagrangian cuts that significantly enhance the efficiency and convergence of decomposition algorithms, particularly Stochastic Dual Dynamic Integer Programming (SDDiP). To achieve this, we adapt unconstrained and non-differentiable optimization methods—including subgradient, cutting plane, and bundle techniques—to systematically search for these boundary dual optimal solutions. The proposed methodology demonstrates that such solutions yield tighter Lagrangian cuts, which are more suitable for the decomposition process and lead to improved convergence rates in SDDiP. Computational experiments validate the approach, comparing general-purpose maximization methods with specialized adaptations designed to locate boundary solutions. Results show that the new class of “Tighter Lagrangian Cuts” outperforms conventional strategies, reducing the number of iterations required for convergence and providing stronger approximations of the recourse function. This research contributes to a deeper understanding of dual solution selection in Lagrangian relaxation and offers practical guidance for enhancing decomposition algorithms in large-scale multi-stage stochastic optimization.

Session: Tue 24th – 15:30-17:30

Room: B101

Chair: Jefferson Divino Gonçalves de Melo

15:30 — Alfredo Iusem/EMAp/FGV (Brazil)

Asymptotic convergence of high-order proximal-point methods

This paper investigates the asymptotic convergence behavior of high-order proximal-point algorithms (HiPPA) toward global minimizers, extending the analysis beyond sublinear convergence rate results. Specifically, we consider the proximal operator of a lower semicontinuous function augmented with a $p$th-order regularization for $p>1$, and establish the convergence of HiPPA to a global minimizer with a particular focus on its convergence rate. To this end, we focus on minimizing the class of uniformly quasiconvex functions, including strongly convex, uniformly convex, and strongly quasiconvex functions as special cases. Our analysis reveals the following convergence behaviors of HiPPA when the uniform quasiconvexity modulus admits a power function of degree $q$ as a lower bound on an interval $I$:
(i) for $q \in (1,2)$ and $I=[0,1)$, HiPPA exhibits a local linear rate for $p \in (1,2)$;
(ii) for $q=2$ and $I$ the positive half-line, HiPPA converges linearly for $p=2$;
(iii) for $p=q>2$ and $I$ the positive half-line, HiPPA converges linearly;
(iv) for $q>2$ and $I$ the positive half-line, HiPPA achieves a superlinear rate for $p>q$.
Notably, to our knowledge, some of these results are novel, even in the context of strongly or uniformly convex functions.
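
In one standard form (notation chosen here), the operator in question replaces the usual quadratic proximal penalty by a $p$th power:

$$ \operatorname{prox}_{\lambda f}^{p}(x) \in \arg\min_{u} \left\{ f(u) + \frac{\lambda}{p} \|u - x\|^{p} \right\}, \qquad p > 1, $$

so that $p = 2$ recovers the classical proximal operator and HiPPA iterates this map.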

16:00 — Benar Svaiter/IMPA (Faculty / Researcher)

A family of projection splitting methods for the sum of maximal monotone operators

We present a projection splitting framework for finding zeros of sums of maximal monotone operators. These methods were proposed for two operators by Eckstein and Svaiter in 2004, and further extended to an arbitrary number of operators in 2007.
The main ideas are to define an extended solution set and then construct linear separators between the current iterate and the (extended) solution set. We also discuss subsequent developments and extensions of this framework.

16:30 — Di Liu/IMPA (Postdoc)

On circumcentered direct methods for monotone variational inequality problems

The variational inequality problem (VIP) plays a central role in the theory and applications of continuous optimization. In particular, minimization problems and KKT systems can be regarded as VIPs. In this work, we present the first methods using circumcenters for solving VIPs. The circumcentered-reflection method (CRM) is a projection-based tool developed with the aim of finding a point in the intersection of finitely many closed convex sets. CRM has gone through enhancements and adaptations over the last few years and was proven to be faster in many settings than competitors such as the method of alternating projections and the Douglas-Rachford method. One of CRM's features is that it is able to deal with approximate projections, as we do in this work. Here, we present circumcenter schemes that solve VIPs with two types of operators: paramonotone and monotone ones. For the former we employ exact projections, whereas for the latter we base our approach on approximate ones. We establish convergence results for both methods and perform numerical experiments, which show a remarkable performance when compared to other well-known methods, such as the extragradient algorithm.

17:00 — Jefferson Divino Gonçalves de Melo/Universidade Federal de Goiás (Faculty / Researcher)

A Cubic Regularization Method for Multiobjective Optimization

In this talk, we present a cubic regularization method for solving nonconvex multiobjective optimization problems. The method minimizes a regularized model of each objective, allowing approximate first- and second-order derivatives that adapt dynamically with the regularization parameter through a nonmonotone line search.
We also discuss finite-difference implementations that make the method practical when derivatives are unavailable. Under mild assumptions, the algorithm achieves a worst-case complexity of order $\mathcal{O}(\epsilon^{-3/2})$ to compute an $\epsilon$-approximate Pareto critical point and exhibits superlinear or even quadratic convergence when exact derivative information is available.
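
For reference, the cubic-regularized model minimized for each objective has the familiar shape (one standard convention; notation chosen here):

$$ m_k(s) = \langle g_k, s \rangle + \frac{1}{2} \langle B_k s, s \rangle + \frac{\sigma_k}{3} \|s\|^{3}, $$

where $g_k$ and $B_k$ are the possibly approximate gradient and Hessian and $\sigma_k$ is the regularization parameter that the nonmonotone line search adapts.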

Room: B103

Chair: Jon Lee

15:30 — Jon Lee/University of Michigan (Faculty / Researcher)

ADMM for 0/1 D-Opt and MESP relaxations

The Gaussian cases of the 0/1 D-optimality problem and the Maximum-Entropy Sampling (MESP) problem are two well-known NP-hard discrete maximization problems in experimental design. 0/1 D-optimality aims to select a subset of $s$ design points, from a universe of $n$ given design points in $\mathbb{R}^m$, with the goal of minimizing the “generalized variance” of the least-squares parameter estimates. For MESP, we have an input covariance matrix of order $n$, and we wish to select a principal submatrix of order $s$, so as to maximize the “differential entropy”.

Algorithms for exact optimization (of moderate-sized instances) are based on branch-and-bound. The best upper-bounding methods are based on convex relaxation. We present new ADMM (Alternating Direction Method of Multipliers) algorithms for solving these relaxations and experimentally demonstrate their practical value.
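
The generic iteration being specialized here, for $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$, is the textbook scaled-form ADMM (not the specific splitting used in the talk):

$$ x^{k+1} = \arg\min_{x} f(x) + \frac{\rho}{2}\|Ax + Bz^{k} - c + u^{k}\|^2, \qquad z^{k+1} = \arg\min_{z} g(z) + \frac{\rho}{2}\|Ax^{k+1} + Bz - c + u^{k}\|^2, \qquad u^{k+1} = u^{k} + Ax^{k+1} + Bz^{k+1} - c, $$

with penalty parameter $\rho > 0$ and scaled dual variable $u$.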

16:00 — Marcia Fampa/Universidade Federal do Rio de Janeiro (Faculty / Researcher)

On the relationship between MESP and 0/1 D-Opt and their upper bounds

We establish strong connections between two fundamental nonlinear 0/1 optimization problems coming from the area of experimental design, namely maximum entropy sampling and 0/1 D-Optimality. The connections are based on maps between instances, and we analyze the behavior of these maps. Using these maps, we transport basic upper-bounding methods between these two problems, and we are able to establish new domination results and other inequalities relating various basic upper bounds. Further, we establish results relating how different branch-and-bound schemes based on these maps compare. Additionally, we observe some surprising numerical results, where bounding methods that did not seem promising in their direct application to real-data MESP instances, are now useful for MESP instances that come from 0/1 D-Optimality. This is a joint work with Jon Lee and Gabriel Ponte.

16:30 — Emerson Vitor Castelani/State University of Maringá (Faculty / Researcher)

A Conformal Geometric Algebra Framework for Fitting and Classification of Geometric Entities

The proposed problem consists of determining, given a set of points Q, which among different geometric objects, such as lines, planes, circles, and spheres, best fits the set by minimizing an algebraic distance between the points and the object. Unlike the classical least-squares formulation, where the type of object is fixed and only its parameters are adjusted, here the goal is also to identify which object provides the best fit. Thus, the problem involves two main stages: fitting and classification.

Solving each case separately through independent optimization procedures may lead to a fragmented approach, as it neglects the geometric relationships among the different types of objects. In this context, Conformal Geometric Algebra (CGA) offers a unified framework to represent and manipulate points, lines, planes, circles, and spheres in an integrated manner. By introducing a conformal representation of Euclidean space, CGA enables transformations such as rotations, translations, dilations, and inversions to be expressed linearly and elegantly.

In this work, we propose a new CGA-based framework that simultaneously performs the fitting and classification of geometric objects. This approach leverages the shared relationships among the listed objects and provides a direct and efficient method to solve the problem. Finally, numerical examples demonstrate the feasibility and efficiency of the proposed methodology, highlighting the potential of CGA as a powerful tool for geometric modeling and analysis.

17:00 — Wagner da Rocha/University of Campinas (Postdoc)

A hybrid combinatorial-continuous strategy for solving molecular distance geometry problems

The Molecular Distance Geometry Problem (MDGP) consists of determining a three-dimensional molecular configuration from partial interatomic distances, and its discretizable subclass (DMDGP) admits an exact combinatorial formulation that enables a structured exploration of the search space. In practical scenarios such as Nuclear Magnetic Resonance spectroscopy, distances are provided as uncertainty intervals, leading to the interval variant (iDMDGP), a challenging nonconvex optimization problem. We propose a hybrid discrete-continuous optimization framework that integrates a combinatorial enumeration strategy, derived from the structural properties of the DMDGP, with a continuous refinement stage that minimizes a nonconvex stress function penalizing violations of admissible distance intervals. This coupling preserves the advantages of discrete branching while enabling local optimization steps capable of correcting geometric inconsistencies that enumeration alone cannot resolve. The formulation incorporates torsion-angle intervals and chirality constraints through an atom ordering handcrafted to maintain protein-backbone geometry. Numerical experiments on protein-like instances show that the method efficiently reconstructs geometrically valid conformations even when distance bounds are wide. These results demonstrate that combining combinatorial structure with smooth nonconvex optimization yields an effective and scalable strategy for solving the iDMDGP, providing a unified perspective that bridges discrete search and continuous refinement.

Room: B105

Chair: Thiago Silveira

15:30 — Paula Monserratte Castro Castro/UFRJ/MODEMAT (Postdoc)

Analysis of four-dimensional variational data assimilation problems in low regularity spaces

We carry out a rigorous analysis of four-dimensional variational data assimilation (4D-VAR) problems for linear and semilinear parabolic partial differential equations. Due to the nature of the data assimilation problem, continuity of the state with respect to the spatial variable is required since pointwise observations of the state variable appear in the cost functional. Using maximal parabolic regularity tools, we prove this regularity for initial conditions with $L^\beta$-regularity guaranteed by control constraints, rather than Sobolev regularity of the controls ensured by artificial cost terms. We obtain existence of optimal controls and first-order necessary optimality conditions for both the convex and nonconvex problem with spatial dimension d=2,3, as well as second-order sufficient optimality conditions for the nonconvex problem for d=2.
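
For orientation, the discrete-time 4D-VAR functional is usually written as (standard notation, not the talk’s continuous PDE-constrained setting):

$$ J(x_0) = \frac{1}{2}(x_0 - x_b)^T B^{-1} (x_0 - x_b) + \frac{1}{2} \sum_{i=0}^{N} \left( H_i(x_i) - y_i \right)^T R_i^{-1} \left( H_i(x_i) - y_i \right), \qquad x_{i+1} = M_i(x_i), $$

with background state $x_b$, observation operators $H_i$, model dynamics $M_i$, and covariances $B$ and $R_i$. In the talk, the dynamics are parabolic PDEs and the observations are pointwise in space, which is what forces the low-regularity analysis.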

16:00 — Thiago Silveira/University of Campinas (Unicamp) (Postdoc)

Relationship between Constraint Qualifications and Strong Second-Order Optimality Condition

Several constraint qualifications have been proposed in the context of nonlinear programming, among which the linear independence constraint qualification (LICQ), the Mangasarian-Fromovitz constraint qualification (MFCQ), and the constant rank constraint qualification (CRCQ) stand out. However, when considering second-order optimality conditions, the Mangasarian-Fromovitz condition is not sufficient to show that the Hessian of the Lagrangian is positive semidefinite on the critical cone. Consequently, all other qualification conditions implied by it also lack this property. In this presentation, we will show a constant-rank-type qualification condition related to the Subspace Component, which has properties related to the critical cone and, furthermore, guarantees that the Hessian of the Lagrangian is independent of the Lagrange multipliers when analyzed in directions of the critical cone.

16:30 — HOUNKPE Nounassou Vincent/International chair in mathematical Physics and Applications (Graduate student)

Modeling carbon monoxide dispersion by a three-dimensional analytical model in the Gulf of Guinea using the 4D-Var assimilation method

Modeling atmospheric dispersion in West Africa is essential for understanding, predicting, and mitigating the impacts of pollution on public health, ecosystems, and the regional climate in a context of rapid urbanization and growing industrialization. Air quality modeling is an attempt to predict or simulate ambient concentrations of contaminants in the atmosphere. In this paper, we present a three-dimensional (3D) mathematical model of dispersion governed by the transport equation of turbulent scalar flows. The theoretical study of this model has demonstrated the existence and uniqueness of a solution to the problem posed, as well as the optimality system. In practice, the model is based on a second-order Eulerian approach yielding an analytical formulation, derived from the theoretical model, that integrates diffusion coefficients, turbulence intensities dependent on atmospheric stability (clouds, radiation), and the horizontal (u,v) and vertical (w) wind components obtained from reanalysis data (ERA5). The model is calibrated using a data assimilation technique (4D-Var) to enable it to produce more reliable estimates consistent with satellite observations.
An application in the Gulf of Guinea, using reanalysis data and satellite images, demonstrates its effectiveness in producing more explanatory carbon monoxide (CO) dispersion maps with finer resolution than satellite observations (Copernicus). These maps make it possible to quantify, identify, and track carbon monoxide plumes in time and space.

17:00 — Edilaine dos Santos Duran/UEM (Graduate student)

Sofia: Tests for Local and Global Optimality for Continuous Optimization Models

In continuous non-convex optimization, distinguishing local from global optima is a significant challenge. We present two tests, named “Sofia”: a strong test for local optimality and a weak test for global optimality. The theoretical foundation lies in the necessary and sufficient conditions presented by Bagajewicz (2025), which consist of reformulating the original problem into a Difference of Convex Functions format and applying a duality principle developed by Tuy (1987). From this reformulation, an auxiliary problem denoted as PT is derived. The strong test verifies whether, for a candidate solution, the optimal value of PT equals zero, a condition which is necessary and sufficient to identify it as a local optimum that is not global. The weak test, in turn, assesses whether the optimal value of PT is strictly positive. The main practical advantage is the ability to integrate these tests into global optimization algorithms, which often identify the solution quickly but spend most of their time on final verification. Applying the Sofia tests allows for an early and intelligent interruption of this process, drastically reducing the total computational time. Results on test problems demonstrated the methods’ effectiveness.
[1] Bagajewicz, M. Global Optimality Necessary and Sufficient Conditions for Continuous Optimization Models. Submitted.
[2] Tuy, H. Convex Programs with an Additional Reverse Convex Constraint. JOTA 52, 463-486 (1987).

Session: Thu 26th – 10:30-12:30

Room: B101

Chair: Leonardo Delarmelina Secchin

10:30 — Leonardo Delarmelina Secchin/Federal University of Espírito Santo (Faculty / Researcher)

Parallel Newton methods for the continuous quadratic knapsack problem: A Jacobi and Gauss-Seidel tale

The continuous knapsack problem minimizes a diagonal convex quadratic function subject to box constraints and a single linear equality constraint. It has numerous applications in areas such as resource allocation, multicommodity flow problems, machine learning, and classical optimization applications like Lagrangian relaxation and quasi-Newton updates. The fastest algorithms for solving it combine Newtonian ideas with variable fixing (active constraint identification), enabling the solution of large instances in seconds. Among these methods, the most effective was introduced by Condat, which employs a Gauss-Seidel approach to the variable fixing strategy. Although very fast, this variant is difficult to parallelize properly, similar to the challenges faced when parallelizing the Gauss-Seidel algorithm compared to its Jacobi counterpart. In this talk, we will present our efforts to parallelize such methods. Our main tool uses a Newtonian approach to guide the method, ensuring a small number of iterations. Concurrently, we employ opportunistic variable fixing strategies within each worker. We will present both CPU and GPU implementations, discussing the limitations and advantages of each computational model.
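
To make the structure concrete, here is a toy serial solver in the spirit of these methods (bisection on the dual variable rather than the Newton/variable-fixing schemes of the talk; it assumes positive weights b, so the dual function is monotone, and a valid initial bracket):

import numpy as np

def cqk(d, a, b, r, lo, hi, lam_lo=-1e6, lam_hi=1e6, tol=1e-12):
    # minimize 0.5 * sum(d * x**2) - a @ x  s.t.  b @ x = r,  lo <= x <= hi
    # KKT: x_i(lam) = clip((a_i + lam*b_i)/d_i, lo_i, hi_i), where lam is the
    # multiplier of the equality constraint; find lam by bisection.
    x = lambda lam: np.clip((a + lam * b) / d, lo, hi)
    while lam_hi - lam_lo > tol * max(1.0, abs(lam_hi)):
        mid = 0.5 * (lam_lo + lam_hi)
        if b @ x(mid) < r:
            lam_lo = mid  # lam -> b @ x(lam) is nondecreasing for b > 0
        else:
            lam_hi = mid
    return x(0.5 * (lam_lo + lam_hi))

The monotone map from lam to b @ x(lam) is also what a semismooth Newton method exploits; parallelism enters naturally because each coordinate of x(lam) is computed independently.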

11:00 — Luis Felipe Bueno/UNIFESP (Faculty / Researcher)

An Inexact General Descent Method with Applications in Differential Equation-Constrained Optimization

In many applications, gradients are inherently approximate, which motivates the development of gradient-based methods that remain reliable with inexact first-order information. A common approach in this context is adaptive evaluation: using coarse gradients early and refining them near a minimizer. This is particularly natural in differential equation-constrained optimization (DECO), where discrete adjoint gradients depend on iterative solvers. Motivated by DECO applications, we propose an inexact general descent framework and establish its global convergence theory in two step-size regimes. With bounded step sizes, we conduct the analysis assuming that the tolerance in the computation of the inexact gradient is proportional to its norm, while with diminishing steps we require that the tolerance sequence is summable. Another main contribution is a careful analysis of error bounds when solving the state equations and of their implications for the inaccuracy of the gradient of the objective function. We implement the framework through inexact gradient descent and an inexact BFGS-like method, demonstrating its applicability on a second-order ODE inverse problem and a two-dimensional Laplace inverse problem using discrete adjoint gradients with adaptive accuracy. Across these examples, adaptive inexact gradients consistently shortened optimization time compared with fixed tight tolerances, while curvature information further improved efficiency.

11:30 — Max Leandro Nobre Gonçalves/Universidade Federal de Goiás (Faculty / Researcher)

Subsampled cubic regularization method with distinct sample sizes for function, gradient and Hessian

In this talk, we discuss a subsampled Cubic Regularization Method (CRM) for finite-sum composite optimization problems, in which the function, gradient, and Hessian are estimated using possibly different sample sizes. This flexible sampling strategy captures different levels of approximation accuracy while reducing computational cost. We establish iteration-complexity bounds for finding approximate first-order critical points and present numerical experiments illustrating the performance of the proposed method.

12:00 — Vilmar Gehlen Filho/Universidade Federal de Goiás (UFG) (Graduate student)

Inexact Cubic Regularization Method with Adaptive Reuse of Hessian Approximations

This work introduces an inexact cubic regularization method with adaptive reuse of Hessian approximations to solve general nonconvex optimization problems. In the proposed approach, the gradient is computed inexactly and updated at every iteration, whereas the Hessian approximation is updated at a specific iteration and then reused for $m$ subsequent iterations (a lazy strategy), where the value of $m$ may vary throughout the procedure. The method can be implemented either in a Hessian-free or a derivative-free manner. Implementations that approximate derivative information via finite-difference schemes are discussed. We establish first-order iteration-complexity results for the method. Numerical experiments are reported to illustrate the behavior of the proposed method and to compare its performance with existing lazy cubic algorithms.

Room: B103

Chair: Juan Pablo Luna

10:30 — Flávia Chorobura/Federal Institute of Education, Science and Technology of Paraná (Faculty / Researcher)

Coordinate Descent Methods for Nonseparable Composite Optimization

In this work we consider large-scale composite optimization problems having the objective function formed as a sum of two terms (possibly nonconvex); one has a (block) coordinatewise Lipschitz continuous gradient and the other is differentiable but nonseparable. Under these general settings we derive and analyze two new coordinate descent methods. The first algorithm, referred to as the coordinate proximal gradient method, considers the composite form of the objective function, while the other algorithm disregards the composite form and uses the partial gradient of the full objective, yielding a coordinate gradient descent scheme with novel adaptive stepsize rules. We prove that these new stepsize rules make the coordinate gradient scheme a descent method, provided that additional assumptions hold for the second term of the objective function. We present a worst-case complexity analysis for these two new methods in both convex and nonconvex settings, provided that the (block) coordinates are chosen randomly or cyclically.
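
A stripped-down sketch of the second scheme (random coordinate gradient descent on the full objective, with fixed coordinate-wise steps standing in for the adaptive stepsize rules of the work, which are not reproduced here):

import numpy as np

def coordinate_gradient_descent(partial_grad, x0, L, iters=10000, seed=0):
    # partial_grad(x, i): i-th partial derivative of the full objective
    # L[i]: estimate of the i-th coordinate Lipschitz constant (assumed given)
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        i = rng.integers(x.size)           # random coordinate choice
        x[i] -= partial_grad(x, i) / L[i]  # gradient step on coordinate i
        # a cyclic variant would instead sweep i = 0, 1, ..., n-1
    return x

The first algorithm of the abstract would instead take a coordinate proximal step that respects the composite form of the objective.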

11:00 — Amanda Ferreira de Azevedo/Cepel (Faculty / Researcher)

Hybrid Time-Granularity Strategies for Day-Ahead Hydrothermal Scheduling in the Brazilian Power System (DESSEM Model)

Since January 2020, the DESSEM model has been used by the Brazilian Independent System Operator (ONS) for day-ahead scheduling of the National Interconnected Power System (SIN), and since 2021 it has been adopted by the Chamber of Electric Energy Commercialization (CCEE) for Short-Term Price calculation. In its official application, DESSEM solves a large-scale mixed-integer linear programming problem with a deterministic horizon of up to seven days. The first day is discretized in half-hour intervals, while the remaining days use hourly aggregation. This structure generates a high computational burden under the strict two-hour limit imposed for day-ahead operation and price formation. When this limit is exceeded, current practice relies on contingency simplifications that may compromise result quality and representativeness.
This work proposes temporal discretization strategies that reduce computational effort while aiming to preserve solution quality. Two schemes are evaluated: using hourly intervals on the first day of the planning horizon, and a mixed granularity that maintains half-hourly intervals during peak-load periods and hourly intervals elsewhere. Simulations with official cases show that the first-day hourly discretization significantly reduces runtime while maintaining strong operational consistency with the reference solution. The mixed-granularity approach may increase computation time but can improve price smoothness. Both strategies are tested in cases that include the hydro unit commitment formulation, which remains deactivated in official DESSEM runs due to its computational burden. The results indicate that the proposed methods consistently reduce runtime and appear promising both as contingency options and as a pathway toward enabling new modeling features.
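
For illustration, the mixed-granularity grid can be built as below, with the peak-load window (18h to 21h here) chosen arbitrarily:

```python
# Mixed time grid for day 1: half-hourly steps during assumed peak-load
# hours, hourly steps elsewhere (window and data purely illustrative).
peak_hours = range(18, 21)
grid = []                # list of (start_hour, length_in_hours) intervals
for hour in range(24):
    if hour in peak_hours:
        grid += [(hour, 0.5), (hour + 0.5, 0.5)]   # keep half-hour resolution
    else:
        grid.append((hour, 1.0))                   # aggregate to one hour
print(len(grid), "intervals instead of 48")        # 27 with this window
```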

11:30 — Juan Pablo Luna/Universidade Federal do Rio de Janeiro (Faculty / Researcher)

Modeling the New Brazilian Gas Market: Equilibrium Structure and Optimization Reformulation

Recent regulatory reforms in Brazil have significantly reshaped the natural gas sector, aiming to increase competition. These initiatives are part of a broader effort to attract new market participants and promote a more dynamic and efficient market structure. In this context, the Brazilian National Agency for Petroleum, Natural Gas and Biofuels (ANP) has outlined a strategic framework for the development of a competitive natural gas market, inspired by international benchmark models. The agency’s long-term vision is to transition Brazil from an emerging market to a mature one, characterized by liquid trading hubs and efficient price formation.

In this work, we present an abstract mathematical equilibrium model that has the restructured Brazilian natural gas market as a particular case. While the latter reflects the complexity of the current logistical and distribution infrastructure, featuring key transportation operators, we abstract its mathematical core as a General Equilibrium Problem (GEP). Under suitable paradigm shifts and the introduction of additional mathematical structures, we show that the problem can be reformulated as a Generalized Nash Equilibrium Problem (GNEP). This equivalence offers new insights into the system’s behavior and facilitates the use of advanced solution techniques, including reformulations as quadratic optimization problems in specific cases.

12:00 — Ricardo Turano Figueiredo/University of Campinas (Graduate student)

Energy Pricing in Unit Commitment Using Lagrangean Relaxation

This work examines the methodology currently adopted by the Brazilian Chamber of Electric Energy Commercialization (CCEE) to estimate the Short-Term Market Clearing Price (PLD), which reflects the daily energy spot price. The analysis focuses on the Day-Ahead Dispatch Model (DADM), formulated as a Mixed-Integer Linear Programming (MILP) problem due to the representation of Thermal Unit Commitment (TUC) constraints.

In the current operational procedure, prices are derived from the marginal costs of each submarket, computed as the shadow prices of the power-balance constraints. However, these prices are evaluated on a relaxed version of the model in which the integer variables are fixed at their optimal values. Although widely used in MILP-based pricing schemes, this strategy may lead to distortions and inconsistencies between the reported multipliers and the true marginal costs of energy.

We propose and evaluate an alternative approach based on Lagrangean Relaxation (LR) to solve the DADM. Beyond being a robust technique for handling large-scale dispatch and commitment problems, the LR framework yields economically meaningful multipliers, providing marginal costs that arise naturally from the dual optimization process. We compare the prices obtained from the LR approach with those produced by the current methodology, demonstrating improvements in the accuracy and interpretability of the resulting spot prices.
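
A schematic of the LR price update on the power-balance constraints (the names and the plain subgradient step are illustrative; the talk's dual scheme may differ):

```python
import numpy as np

def lr_price_iteration(pi, solve_units, demand, step):
    """One subgradient step on the Lagrangian dual of the power-balance
    constraints: given prices pi (one per period), the unit subproblems
    are solved independently, and pi moves along the balance violation.
    solve_units(pi) -> total generation per period."""
    gen = solve_units(pi)            # decomposed unit-commitment subproblems
    violation = demand - gen         # subgradient of the dual function
    return pi + step * violation     # prices rise where demand is unmet
```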

Room: B105

Chair: Volker Schulz

10:30 — Glaydston de Carvalho Bento/Universidade Federal de Goiás (Faculty / Researcher)

Subdifferential theory and the Fenchel Conjugate via Busemann Functions on Hadamard Manifolds

In this paper, we propose a notion of subdifferential defined via Busemann functions and use it to identify a condition under which the Fenchel–Young inequality of Bento, Cruz Neto and Melo (Appl. Math. Optim. 88:83, 2023) holds with equality. This equality condition is particularly significant, as it captures a fundamental duality principle in convex analysis, linking a primal convex function to its conjugate and clarifying the sharpness of the associated inequality on Riemannian manifolds.
We also investigate the existence of non-trivial affine functions under Ricci curvature information. In particular, we extend the result of Bento, Cruz Neto and Melo, originally formulated for the case of negative Ricci curvature on an open set, to manifolds whose Ricci curvature may be non-zero. As a consequence, we prove new non-existence criteria for non-trivial affine functions and show that the assumption of non-zero Ricci curvature is, in general, necessary to ensure such a rigidity conclusion.

11:00 — Maurício Silva Louzeiro/Universidade Federal de Goiás (Faculty / Researcher)

An Adaptive Cubic Regularization Inexact-Newton Method on Riemannian Manifolds

An inexact-Newton method with cubic regularization is designed for solving Riemannian unconstrained nonconvex optimization problems. The proposed algorithm is fully adaptive, requiring at most ${\cal O}(\epsilon_g^{-3/2})$ iterations to drive the norm of the gradient below a given $\epsilon_g>0$, and at most $\mathcal{O}(\max\{ \epsilon_g^{-3/2}, \epsilon_H^{-3} \})$ iterations to reach an $(\epsilon_g,\epsilon_H)$-approximate second-order stationary point. Notably, the proposed algorithm remains applicable even when the gradient and Hessian of the objective function are unavailable. Numerical experiments, with the gradient and Hessian approximated by forward finite differences, illustrate the theoretical results and provide numerical comparisons.
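
For reference, a Euclidean sketch of the forward finite-difference gradient oracle (the actual method works with Riemannian objects, e.g. via retractions):

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward finite-difference gradient approximation from
    n + 1 function evaluations."""
    n, fx = x.size, f(x)
    g = np.empty(n)
    for j in range(n):
        e = np.zeros(n); e[j] = h
        g[j] = (f(x + e) - fx) / h
    return g
```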

11:30 — Volker Schulz/Trier University (Faculty / Researcher)

Riemannian Optimization in Statistical Modeling: From Mixed Models to Mixture Models

Riemannian optimization exploits the intrinsic geometry of the parameter space for statistical estimation problems whose parameters naturally lie on curved spaces, in particular on the manifold of symmetric positive definite matrices. This talk presents two recent advances that leverage this geometric viewpoint to improve the numerical performance of classical statistical models. The main focus is on a new geometric formulation of variance estimation in linear mixed models, using a representation that treats Euclidean parameters and covariance matrices according to their natural geometry. Explicit expressions for the Riemannian gradient and Hessian enable the construction of efficient second-order optimization schemes, such as a Riemannian Newton trust-region method. Numerical experiments show improvements in robustness, stability of covariance estimates, and convergence behaviour compared to established optimizers based on Fisher scoring, profiled REML, or Cholesky-type parameterizations.

Furthermore, we revisit Gaussian mixture models, where a Riemannian Newton trust-region method, built upon a closed-form expression for the Riemannian Hessian of the reformulated log-likelihood, achieves clear performance gains over the traditional EM algorithm, especially in scenarios with strong component overlap. These two case studies illustrate how geometric optimization provides a unifying methodology that respects the intrinsic structure of statistical models.

12:00 — Igor Fontenele do Amaral/Universidade Federal do Piauí (Graduate student)

On boosted proximal point method for DC functions in Hadamard manifolds

We study proximal-type algorithms for solving non-differentiable and non-convex optimization problems defined as the difference of convex (DC) functions on Hadamard manifolds. Building upon the classical proximal point framework and its extensions to Riemannian settings, we propose a class of boosted proximal algorithms designed to improve practical efficiency while preserving theoretical convergence guarantees. The boosting strategy incorporates descent directions generated from DC decompositions together with suitable line search procedures, allowing for larger and more effective steps compared to standard proximal schemes. Numerical experiments on Euclidean spaces and on manifolds with non-zero sectional curvature illustrate the advantages of the boosted strategy, showing improved robustness when compared to classical proximal and DC algorithms. These results highlight the potential of boosted proximal methods as efficient tools for large-scale, non-smooth, non-convex optimization problems on both linear and nonlinear geometric spaces.
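
A Euclidean sketch of one boosted step, assuming a user-supplied proximal/DCA step `dca_step` (the manifold version would perform the line search along a geodesic instead of the straight line below):

```python
def boosted_dc_step(x, dca_step, f, lam0=2.0, beta=0.5):
    """Compute the usual proximal/DCA point y, then backtrack along
    the direction d = y - x in search of extra decrease (the boost)."""
    y = dca_step(x)                  # standard proximal / DCA point
    d = y - x
    lam, fy = lam0, f(y)
    while lam > 1e-12 and f(y + lam * d) > fy:
        lam *= beta                  # backtracking on the boost step
    return y + lam * d if f(y + lam * d) <= fy else y
```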

Session: Thu 26th – 15:30-17:00

Room: B101

Chair: Francisco Sobral

15:30 — Geovani Nunes Grapiglia/Université catholique de Louvain (Faculty / Researcher)

On the Complexity of Lower-Order Implementations of Higher-Order Methods

In this work, we propose a method for minimizing non-convex functions with Lipschitz continuous $p$th-order derivatives, for any $p \geq 1$. The method, however, only requires derivative information up to order $(p-1)$, since the $p$th-order derivatives are approximated via finite differences. To ensure oracle efficiency, instead of computing finite-difference approximations at every iteration, we reuse each approximation for $m$ consecutive iterations before recomputing it, with $m \geq 1$ as a key parameter. As a result, we obtain an adaptive method of order $(p-1)$ that requires no more than $\mathcal{O}(\epsilon^{-\frac{p+1}{p}})$ iterations to find an $\epsilon$-approximate stationary point of the objective function and that, for the choice $m=(p-1)n + 1$, where $n$ is the problem dimension, takes no more than $\mathcal{O}(n^{1/p}\epsilon^{-\frac{p+1}{p}})$ oracle calls of order $(p-1)$. This improves previously known bounds for tensor methods with finite-difference approximations in terms of the problem dimension.
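
For the case $p = 2$, the order-$(p-1)$ oracle is the gradient, and the finite-difference approximation of the $p$th derivative can be sketched as below (illustrative, not the paper's implementation); reusing it for $m = n + 1$ iterations amortizes the $n$ extra gradient calls, in line with the choice $m = (p-1)n + 1$:

```python
import numpy as np

def fd_hessian_from_gradients(grad, x, h=1e-5):
    """Hessian approximation from n extra gradient calls; for p = 2 this
    is the finite-difference p-th derivative used by the method."""
    n, g0 = x.size, grad(x)
    H = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        H[:, j] = (grad(x + e) - g0) / h
    return 0.5 * (H + H.T)       # symmetrize the forward-difference estimate
```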

16:00 — Francisco Sobral/State University of Maringá (UEM) (Faculty / Researcher)

Well poised interpolation sets for linearly constrained derivative-free optimization problems

When solving constrained derivative-free optimization problems in which some constraints are of the black-box type, penalization methods, such as the augmented Lagrangian, are a common choice. In such cases, the subproblems are derivative-free optimization problems with simple constraints, usually bound constraints. In this work, we extend an unconstrained derivative-free trust-region method based on radial basis functions (ORBIT) to handle convex constraints. Theoretical and practical strategies are developed to ensure global convergence and to build interpolation models using only feasible points in the bound- or linearly-constrained case. Numerical experiments describing the behavior of the method are presented.

16:30 — Gabriel Haeser/University of São Paulo (Faculty / Researcher)

On constraint qualifications for lower-level sets

We consider an augmented Lagrangian method with general lower-level constraints, that is, where some of the constraints are penalized while others are kept as subproblem constraints. Motivated by some recent results on optimization problems on manifolds, we present a general theory of global convergence when a feasible approximate KKT point is found for the subproblems at each iteration. In particular, we formulate new constant rank constraint qualifications that do not require a constant rank assumption in a full dimensional neighborhood of the point of interest. We also formulate an appropriate quasinormality condition which guarantees boundedness of the dual sequences generated by the algorithm. Our results apply to the current Algencan implementation which keeps bound constraints for the subproblems.

Room: B103

Chair: Vitaliano de Sousa Amaral

15:30 — Mateus Braga Oliveira/Centro Federal de Educação Tecnológica de Minas Gerais – Campus Timóteo (Faculty / Researcher)

Linear Programming Applied to Strategic Planning in Cosmetics Resale Operations

Product mix management is a central challenge for cosmetics resellers, especially in direct-selling models where limited budgets, fluctuating demand, and a diverse portfolio significantly influence financial performance. This study investigates the application of Linear Programming (LP) as a quantitative tool to support purchasing and planning decisions in small-scale cosmetics reselling operations. A mathematical model is proposed to maximize expected profit by considering acquisition cost, selling price, contribution margins, and demand limits for different product categories. The model incorporates commercial constraints typical of the sector, including available budget, stock availability, minimum category requirements, and sales capacity limitations. Using simulated data representing a small multi-brand reseller, the formulation was implemented in Python using the PuLP library, enabling a comparison between traditional empirical decision-making and the optimal solution provided by the model. Results indicate that LP yields a more efficient allocation of resources, increasing profit margins and reducing the accumulation of low-turnover products. Sensitivity analyses also reveal how changes in budget and demand affect the optimal mix, highlighting the usefulness of the approach for continuous decision support. It is concluded that Linear Programming represents an effective and accessible methodology for optimizing cosmetics reselling strategies, with potential for extension to integer, stochastic, or multi-objective models in future studies.
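
A minimal PuLP sketch of the kind of product-mix model described (all data invented for illustration):

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

# Illustrative data for a small multi-brand reseller (numbers invented).
products = ["perfume", "makeup", "skincare"]
margin   = {"perfume": 30.0, "makeup": 12.0, "skincare": 18.0}  # unit contribution
cost     = {"perfume": 70.0, "makeup": 25.0, "skincare": 40.0}  # acquisition cost
demand   = {"perfume": 15,   "makeup": 60,   "skincare": 30}    # demand limits
budget   = 2500.0

model = LpProblem("product_mix", LpMaximize)
q = {p: LpVariable(f"q_{p}", lowBound=0, upBound=demand[p]) for p in products}
model += lpSum(margin[p] * q[p] for p in products)              # expected profit
model += lpSum(cost[p] * q[p] for p in products) <= budget      # budget constraint
model.solve()
print({p: q[p].value() for p in products}, value(model.objective))
```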

16:00 — Eduardo Zorzo Sartoretto/FGV EMAp (Faculty / Researcher)

BESOT: An Evolutionary Algorithm for Size Optimization of 2D Truss Structures

Structural optimization has become an essential tool in the development of efficient engineering solutions, reducing material usage without compromising performance. This work presents an evolutionary algorithm for the size optimization of two-dimensional trusses, based on an enhanced version of the bidirectional method BESO (Bidirectional Evolutionary Structural Optimization). The proposed algorithm, named BESOT, employs a continuous formulation of the bar cross-sectional areas as design variables and incorporates a penalization strategy inspired by the SIMP method to improve structural selectivity and the convergence of solutions. The methodology uses efficient numerical procedures for computing nodal displacements and evaluating the stress level in each bar, which guides the iterative update of the structural configuration. Numerical experiments were conducted on different truss configurations to observe the evolutionary behavior of topologies throughout the iterations and to assess the stability of the optimization process. The results indicate that BESOT can generate structures consistent with the main load paths, exhibiting coherent material distributions and physically interpretable geometries. In addition, a post-processing filtering procedure was introduced to remove redundant bars and ensure the constructive feasibility of the resulting geometries, highlighting the method’s applicability as a practical and robust tool for structural design and optimization.

16:30 — Vitaliano de Sousa Amaral/Universidade Federal do Piauí (Faculty / Researcher)

A Partially Derivative-Free Proximal Method for Composite Multiobjective Optimization in the Hölder Setting

This paper presents an algorithm for solving multiobjective optimization problems involving composite functions, where we minimize a quadratic model that approximates $F(x) - F(x^k)$ and that can be derivative-free. We establish theoretical assumptions about the component functions of the composition and provide a comprehensive convergence and complexity analysis. Specifically, we prove that the proposed method converges to a weakly $\epsilon$-approximate Pareto point in at most $\mathcal{O}(\epsilon^{-(\beta+1)/\beta})$ iterations, where $\beta$ denotes the Hölder exponent of the gradient. The algorithm incorporates gradient approximations and a scaling matrix $B_k$ to achieve an optimal balance between computational accuracy and efficiency. Numerical experiments on a collection of benchmark problems are presented, illustrating the practical behavior of the proposed approach and its competitiveness with existing composite algorithms.

Room: B105

Chair: Milena Almeida Leite Brandão

15:30 — Milena Almeida Leite Brandão/Federal University of Uberlândia (Faculty / Researcher)

Optimal Control of Infectious Diseases: Comparative Analysis of Deterministic and Heuristic Approaches Applied to the SELIRVHD Model

This work presents a comparative analysis between deterministic and heuristic approaches applied to the optimal control problem of infectious diseases, using the extended compartmental model SELIRVHD (Susceptible–Exposed–Mild–Infected–Recovered–Vaccinated–Hospitalized–Deceased). The objective is to simultaneously minimize the epidemiological impact and the cost of control interventions, considering vaccination rates and preventive measures as decision variables.

The simulations were conducted considering the contamination process of the SARS-CoV-2 (COVID-19) virus, aiming to assess the effectiveness of control strategies in a real pandemic context. The deterministic approach employs ordinary differential equations solved by the 4th-order Runge–Kutta method, while the heuristic approach combines Monte Carlo stochastic simulation with the Grey Wolf Optimizer (GWO) algorithm in a parallel environment. Results show that although the deterministic method offers higher computational efficiency and numerical stability, the heuristic approach outperforms it globally, achieving lower accumulated costs and more realistic responses under uncertainty.

The simulations indicate that the stochastic model captures natural epidemic fluctuations more accurately and produces smoother, more adaptive controls, especially in the variables related to vaccination and social distancing. The analysis reinforces the potential of metaheuristic techniques in optimizing public health policies under uncertainty, contributing to the development of robust and economically viable epidemic mitigation strategies.
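
For reference, the classical 4th-order Runge–Kutta step used for the deterministic dynamics (`f(t, y)` stands for the compartmental derivatives):

```python
def rk4_step(f, t, y, h):
    """One classical RK4 step of length h for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```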

16:00 — Hamidreza Anbarlooei/Applied Mathematics Department, UFRJ (Faculty / Researcher)

Why Particle Swarm Optimization is Inherently Suited for Massively Parallel Computation

This work presents a scalable optimization framework tailored for high-dimensional, nonlinear problems in the petroleum and chemical industries, where the thermodynamic behavior of hydrocarbon mixtures must be reconciled with experimental observations. While gradient-based and sequential strategies are often sufficient for simpler tasks, our topological analysis reveals a strongly nonconvex landscape characterized by extensive plateaus, saddle points, and ill-conditioned and Rosenbrock-type valleys. These geometric pathologies generate unstable gradients and narrow ridges that severely compromise the convergence of standard Gradient Descent and Nelder-Mead algorithms.

Motivated by these limitations, we develop and analyze a massively parallel implementation of the Particle Swarm Optimization (PSO) framework on GPU architectures. This approach exploits massive parallelism to execute thousands of simultaneous thermodynamic evaluations with negligible wall-clock overhead, effectively bypassing the sequential bottlenecks of traditional methods. In this way, the method not only converges to the global minimum of the domain but also explores the landscape effectively, avoiding entrapment in local minima, one of the major obstacles in the target application.

In a rigorous comparative analysis, the proposed framework demonstrates decisive superiority over gradient-based alternatives, achieving higher solution quality and orders-of-magnitude faster convergence. Beyond optimization performance, the spatial distribution of the converged swarm offers critical insights into the intrinsic geometry of the solution space, validating the Manifold Hypothesis.

For example, in a practical application involving 210 parameters, the PSO method identifies only about 30% of them as dominant, effectively revealing a much lower intrinsic dimension, a result that conventional sensitivity analysis methods fail to uncover reliably.
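
A NumPy sketch of the vectorized swarm update, standing in for the GPU/CUDA implementation (parameters are generic defaults, not the talk's settings; each array operation maps naturally onto one GPU kernel launch):

```python
import numpy as np

def pso(f, lo, hi, n_particles=4096, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Vectorized PSO: the whole swarm is updated at once, so thousands of
    objective evaluations are performed per iteration."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)   # thousands of evaluations at once
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest
```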

16:30 — Raul Tintaya Marcavillaca/Center for Mathematical Modeling, CMM-University of Chile (Postdoc)

A Unified Inertial Framework for Strongly Convergent Proximal-Point Methods

This talk presents a unified inertial algorithmic framework that guarantees strong convergence for monotone inclusion problems. By integrating inertial effects into the Solodov–Svaiter inexact proximal-point method, we establish strong convergence under weak and verifiable assumptions. This framework acts as a blueprint, generating the first inertial versions of Korpelevich’s extragradient, forward-backward, and Tseng’s methods with provable strong convergence. Finally, we demonstrate the versatility of our approach by extending it to composite problems with multiple operators, greatly expanding its potential use cases.
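
A minimal sketch of a single inertial proximal-point iteration (the resolvent `prox`, the fixed inertial parameter, and the exact step are all simplifying assumptions relative to the talk's inexact framework):

```python
def inertial_prox_step(x_k, x_prev, prox, theta=0.3):
    """Extrapolate along the previous displacement, then apply the
    resolvent of the monotone operator."""
    y = x_k + theta * (x_k - x_prev)   # inertial extrapolation
    return prox(y)                      # proximal / resolvent step
```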

Session: Fri 27th – 10:30-12:00

Room: B101

Chair: Felipe Lara

10:30 — Roberto Andreani/UNICAMP- Brasil (Faculty / Researcher)

A new constant-rank-type condition related to MFCQ and local error bounds

Constraint qualifications (CQs) are fundamental for understanding the geometry of feasible sets and for ensuring the validity of optimality conditions in nonlinear programming. A known idea is that constant-rank-type CQs allow one to modify the description of the feasible set, by eliminating redundant constraints, so that the Mangasarian-Fromovitz CQ (MFCQ) holds. Traditionally, such modifications, called \emph{reductions} here, have served primarily as auxiliary tools to connect existing CQs. In this work, we adopt a different viewpoint: we treat the very existence of such reductions as a CQ in itself. We study these “reduction-induced” CQs in a general framework, relating them not only to MFCQ, but also to arbitrary CQs. Moreover, we establish their connection with the local error bound (LEB) property. Building on this, we introduce a relaxed variant of the constant rank CQ known as \emph{constant rank of the subspace component} (CRSC). This new CQ preserves the main geometric features of CRSC, guarantees LEB, and ensures the existence of reductions to MFCQ. Finally, we partially prove a recent conjecture stated in (SIAM J. Optim., 34(2):1799-1825, 2024) by showing that CRSC implies LEB in the manifold setting.

11:00 — Felipe Lara/Instituto de Alta Investigación, Universidad de Tarapacá, Arica, Chile (Faculty / Researcher)

Star Quasiconvexity: a Unified Approach for Linear Convergence of First-Order Methods Beyond Convexity

We introduce a new class of generalized convex functions, termed star quasiconvexity, which provides a unifying framework for convex, star-convex, quasiconvex, and quasar-convex functions. We provide several characterizations of this class, in both the nonsmooth and differentiable cases, and derive key properties that facilitate the analysis of first-order methods. For gradient-type methods, we establish linear convergence. Furthermore, for proximal point methods we ensure linear convergence on closed, star-shaped sets, which are not necessarily convex.

11:30 — Mauricio Romero Sicre/Universidade Federal da Bahia (Faculty / Researcher)

Alternating Inertial Regularized HPE-Type Methods for Solving Monotone Inclusion Problems

In this work we propose alternating-inertial, regularized HPE-type methods for solving monotone inclusion problems (MIPs). HPE (Hybrid Proximal Extragradient) methods are inexact variants of the Proximal Point Method (PPM) that use a relative hybrid error criterion. Recent works have introduced static and dynamic regularization strategies into HPE methods that, by exploiting linear convergence under strong monotonicity, improve complexity estimates. Inertial variants of the PPM incorporate extrapolation (inertial) steps to improve practical performance. In contrast to the general inertial PPM, linear convergence under strong monotonicity has been established for the alternating-inertial PPM, which alternates inertial and non-inertial steps. In this work we prove an analogous result for the alternating-inertial HPE method. Building on this fact, we introduce static and dynamic regularized variants of the alternating-inertial HPE method and, under standard assumptions, establish global convergence and pointwise iteration-complexity bounds that match or improve classical estimates (up to logarithmic factors). Finally, we discuss implementable variants based on previous applications of the HPE framework.

Room: B103

Chair: Evelin Heringer Manoel Krulikovski

10:30 — Evelin Heringer Manoel Krulikovski/UFPR (Faculty / Researcher)

Continuous Augmented Lagrangian Approach for Cardinality-Constrained Multi-objective Optimization

In this work, we propose an extension of the continuous reformulation of the cardinality constraint to multiobjective optimization problems. We present an algorithm combining recent ideas from the literature, such as the multiobjective augmented Lagrangian and the vector projected gradient method, aiming to handle the non-convex Hadamard constraint and obtain Pareto optimal solutions. We establish feasibility and optimality results for the algorithm. Furthermore, we present two penalty approaches, one quadratic and one exponential. In our experiments, we apply the method to portfolio problems, using real data from financial market indices, with the implementation written in Python.
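
For concreteness, a standard continuous reformulation from the single-objective literature (due to Burdakov, Kanzow and Schwartz), which we assume is the one being extended here, replaces the cardinality constraint by continuous variables and a Hadamard-product constraint:

```latex
\|x\|_0 \le s
\quad\Longleftrightarrow\quad
\exists\, y \in [0,1]^n \ \text{such that}\quad
e^{\top} y \ge n - s
\quad\text{and}\quad
x \circ y = 0 ,
```

where $x \circ y$ denotes the componentwise (Hadamard) product; the bilinear constraint $x \circ y = 0$ is the non-convex Hadamard constraint that the augmented Lagrangian must handle.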

11:00 — Gilson do Nascimento Silva/Federal University of Piauí (Faculty / Researcher)

A Regularized Hessian-Free Newton-Type Method with Global $\mathcal{O}(k^{-2})$ Convergence

In this talk, we present a regularized Hessian-free Newton-type method for minimizing smooth convex functions with Lipschitz continuous Hessians. The algorithm constructs an approximate Hessian by finite differences and selects the regularization parameter through an adaptive criterion that ensures sufficient decrease and gradient control. We prove that the method achieves a global $\mathcal{O}(k^{-2})$ convergence rate, matching the best known bound for second-order methods. A modified variant incorporating the exact Hessian, when available, enjoys local quadratic convergence under standard assumptions. Despite its simplicity, this variant is computationally faster than the \emph{Regularized Newton Method} of Mishchenko (2024) across several convex benchmark problems. Our analysis also provides explicit bounds on the regularization sequence and a worst-case iteration complexity of order $\mathcal{O}(\varepsilon^{-2})$. The proposed framework thus unifies regularized and Hessian-free Newton-type schemes, offering a theoretically sound and practically efficient alternative for smooth convex optimization.
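
A sketch of one step under the simplifying assumption that the regularization parameter follows the closed formula $\lambda_k \sim \sqrt{M\|g_k\|}$ used by gradient-regularized Newton methods, rather than the talk's adaptive criterion:

```python
import numpy as np

def regularized_fd_newton_step(grad, x, H_fd, M=1.0):
    """One regularized Newton step with a finite-difference Hessian
    approximation H_fd; the regularization lambda ~ sqrt(M*||g||) is a
    stand-in for the adaptive rule described in the talk."""
    g = grad(x)
    lam = np.sqrt(M * np.linalg.norm(g))
    return x - np.linalg.solve(H_fd + lam * np.eye(x.size), g)
```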

11:30 — José Luis Romero/IMIT-CONICET; FACENA-UNNE (Postdoc)

On the solution of the subproblem in the smoothed sharp augmented Lagrangian method

In this work we analyze in detail how to solve the unconstrained subproblem arising at each iteration of the smoothed sharp augmented Lagrangian method, recently proposed for nonlinear programming problems with equality constraints. This method avoids the global minimization of a nonsmooth function by replacing it with the computation of stationary points of a smoothed function depending on two variables. Within this framework, we propose two algorithmic variants: one in which the smoothing parameter is kept fixed, and another one in which the smoothing parameter is treated as a variable and updated jointly with the primal variables.

For each algorithmic variant, we study a reformulation of the problem. The first reformulation is based on a classical transformation that prevents numerical ill-conditioning as the penalty parameter grows, while the second one makes the smoothing parameter explicit as an auxiliary variable. In both cases, the structure of the smoothed function is exploited. We focus on the latter to construct a specialized quasi-Newton method with descent properties and convergence guarantees under reasonable assumptions.

Numerical experiments show that the algorithm with a variable smoothing parameter exhibits comparable performance when the subproblem is solved either by the proposed specialized quasi-Newton method or by GENCAN, the core solver of the ALGENCAN package, a well-established and widely used augmented Lagrangian software in nonlinear optimization. These results support the practical relevance of the proposed approach for solving the subproblem, which plays a key role in the implementation of the overall method.

Room: B105

Chair: Maria Eduarda Pinheiro

10:30 — Philip Thompson/FGV EMAp (Faculty / Researcher)

Robust Mean Estimation in High Dimensions: A Statistical Optimization Landscape

Modern data sets are vulnerable to outliers and rare events, making classical statistical methods, like mean estimation under IID assumptions, unreliable. This issue spurred the development of Robust Statistics, which introduced resilient estimators such as medians and trimmed means. However, many robust methods, like the Tukey median, are computationally infeasible in high dimensions. Between 2010 and 2015, progress was made with the development of new statistical guarantees and computational techniques, notably by Diakonikolas et al., which led to the field of Algorithmic Robust Statistics. Despite this, practical challenges remain: most methods require knowledge of the population covariance, rely on unknown distributional properties, and cannot robustly estimate covariance matrices in spectral norm—possibly an inherently hard problem. We revisit this problem with a novel lens based on notions from optimization, showing some advancement on these limitations via a precise “optimization landscape” for the robust mean and covariance estimation problem—specifically, a relative inexact Polyak–Łojasiewicz property.

11:00 — Vladmir Pestov/Universidade Federal de Santa Catarina (Faculty / Researcher)

On the consistency of k nearest neighbour classifiers optimized over variable metrics

In data science there is ample empirical evidence that the accuracy of the k nearest neighbour learning rules for supervised classification depends on a good choice of a metric fitting the geometry of the data; see the survey [1]. The choice of such an optimal metric is based on minimizing either the empirical misclassification error of the rule or a more complicated target function, as in the popular LMNN classifier [3]. At the same time, we are unaware of many theoretical results establishing the universal consistency of such variable-metric learning rules, that is, the asymptotic convergence of the misclassification error to the smallest error possible (the Bayes error) for every data distribution as the sample size goes to infinity. In this talk, we propose to address the consistency problem using algorithmic stability considerations.

11:30 — Maria Eduarda Pinheiro/Federal University of Santa Catarina (UFSC) (Postdoc)

An Optimization-Based Algorithm for Fair and Calibrated Synthetic Data Generation

Agent-based microsimulations, such as those used in epidemiological modeling during COVID-19, need a realistic base population. In addition to demographics, health-related variables are important. In certain regions, health surveys are small, which leads to imbalanced classes and limited data for sensitive groups. To address this, we propose a mixed-integer linear optimization model that creates health variables from class probabilities while ensuring fairness across groups, such as older individuals. The model is unimodular and includes a preprocessing step. Our tests show better classification than a random forest, especially across age groups. This allows us to generate data for large populations, such as Germany’s population of over 80 million.
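
A toy PuLP sketch of the flavor of model described: binary labels chosen to follow class probabilities while a target prevalence is met within each sensitive group (all data, groups, and tolerances invented; the actual model is considerably richer):

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

n = 10
groups = {"young": range(0, 6), "old": range(6, 10)}   # sensitive groups
p = [0.2, 0.8, 0.5, 0.1, 0.9, 0.4, 0.7, 0.6, 0.3, 0.8]  # class probabilities
target, tol = 0.5, 0.1                                   # prevalence band

m = LpProblem("fair_labels", LpMaximize)
y = [LpVariable(f"y_{i}", cat="Binary") for i in range(n)]
m += lpSum(p[i] * y[i] for i in range(n))        # agree with class probabilities
for idx in groups.values():                      # per-group calibration band
    m += lpSum(y[i] for i in idx) >= (target - tol) * len(idx)
    m += lpSum(y[i] for i in idx) <= (target + tol) * len(idx)
m.solve()
print([int(v.value()) for v in y])
```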