Polynomial optimization: Tightening RLT-based branch-and-bound schemes with conic constraints

This paper explores the potential of (nonlinear) conic constraints to tighten the relaxations used in spatial branch-and-bound algorithms. More precisely, we contribute to the literature on the use of conic optimization for the efficient solution, to global optimality, of nonconvex polynomial optimization problems. Taking as baseline an RLT-based algorithm, we present different families of well-known conic-driven constraints: linear SDP cuts, second-order cone constraints, and SDP constraints. We integrate these constraints into the baseline algorithm and present a thorough computational study to assess their performance, both with respect to each other and with respect to the standard RLT relaxations for polynomial optimization problems. Our main finding is that the different variants of nonlinear constraints (second-order cone and semidefinite) are the best-performing ones in around 50% of the instances in widely used test sets. Additionally, we discuss how machine learning can be used to decide which constraints are most suitable to add for a given instance. The computational results show that the machine learning approach significantly outperforms each of the individual approaches.
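To give a flavor of the ingredients combined in the paper, the sketch below illustrates, for a single bilinear term w = x·y over box bounds, the classical RLT (McCormick) linearization and a linear SDP cut of the form vᵀM v ≥ 0 derived from the 3×3 moment matrix M = [[1, x, y], [x, X, w], [y, w, Y]], where X and Y relax x² and y². The function names and the concrete cut vector are illustrative choices for this note, not code from the paper.

```python
# Illustrative sketch only (not the paper's implementation).

def rlt_envelope(x, y, lx, ux, ly, uy):
    """RLT/McCormick lower and upper bounds on the product x*y
    for x in [lx, ux] and y in [ly, uy]; any relaxation value w
    of the product must satisfy lower <= w <= upper."""
    lower = max(lx * y + ly * x - lx * ly,
                ux * y + uy * x - ux * uy)
    upper = min(lx * y + uy * x - lx * uy,
                ux * y + ly * x - ux * ly)
    return lower, upper

def linear_sdp_cut(v, x, y, X, Y, w):
    """Value of v^T M v for the moment matrix
    M = [[1, x, y], [x, X, w], [y, w, Y]] and a fixed vector v.
    Positive semidefiniteness of M requires this to be >= 0 for
    every v, so each fixed v yields one linear cut in (x, y, X, Y, w)."""
    a, b, c = v
    return (a * a
            + b * b * X + c * c * Y
            + 2 * a * b * x + 2 * a * c * y + 2 * b * c * w)

# Example: x = y = 0.5 on the unit box; the true product 0.25 lies
# inside the RLT envelope, and with exact moments (X = x^2, Y = y^2,
# w = x*y) the cut value equals (a + b*x + c*y)^2 >= 0.
lo, up = rlt_envelope(0.5, 0.5, 0.0, 1.0, 0.0, 1.0)
cut = linear_sdp_cut((1.0, 1.0, 1.0), 0.5, 0.5, 0.25, 0.25, 0.25)
```

In a branch-and-bound scheme such cuts are added to the linear RLT relaxation at each node; the paper's contribution is a systematic comparison of these cut families (and full SOC/SDP constraints) and a learned rule for selecting among them.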

Citation

González-Rodríguez, Ghaddar et al. (2025). Polynomial optimization: Tightening RLT-based branch-and-bound schemes with conic constraints.

Authors from IE Research Datalab