Optimization methods rely not only on careful parameter tuning, but also on efficient heuristics. For example, branch-and-bound (B&B) involves several heuristic decisions about the branching behavior that are hand-tuned inside the solvers. However, tuning these heuristics can be very complex because it concerns several aspects of the problem that are not known a priori. Khalil et al. (2016) propose to learn the branching rules, showing performance improvements over commercial hand-tuned algorithms. Similarly, Alvarez et al. (2017) approximate strong branching rules with learning methods. Machine learning has also been useful for selecting reformulations and decompositions in mixed-integer optimization (MIO). Bonami et al. (2018) learn in which cases it is more efficient to solve a mixed-integer quadratic optimization problem (MIQO) by linearizing the cost function or not. They model this choice as a classification problem, showing advantages compared to how it is made heuristically inside state-of-the-art solvers. Kruber et al. (2017) propose a similar method applied to decomposition selection for MIO.

Constraint programming is a paradigm for modeling and solving combinatorial optimization problems that is very popular in the computer science community. The first works on applying machine learning to automatically configure constraint programs (CPs) date back to the 1990s (Minton 1996). Later, Clarke et al. (2002) used decision trees to replace computationally hard parts of counterexample-guided SAT solving algorithms. More recently, Xu et al. (2008) describe SATzilla, an automated approach for learning which candidate solvers perform best on a given instance. SATzilla won several SAT solver competitions because of its ability to adapt and pick the best algorithm for a problem instance. Given a SAT instance, instead of solving it with different algorithms, SATzilla relies on an empirical hardness model to predict how long each algorithm should take. This model consists of a ridge regressor (Hastie et al. 2009) applied after a nonlinear transformation of the problem features. Selsam et al. (2019) applied recent advances in neural network architectures to CPs by directly predicting the solution or infeasibility. Even though this approach did not match state-of-the-art methods, it introduces a new research direction for solving CPs. The constraint programming community is therefore also working on data-driven methods to improve the performance and understanding of solution algorithms.

How changes in the problem parameters affect the optimal solution has long been studied in sensitivity analysis; see Bertsimas and Tsitsiklis (1997, Chapter 5) and Boyd and Vandenberghe (2004, Sect. 5.6) for introductions to the topic. While sensitivity analysis is related to our work in that it analyzes the effects of changes in problem parameters, it is fundamentally different both in philosophy and in applicability. In sensitivity analysis the problem parameters are uncertain, and the goal is to understand how their perturbations affect the optimal solution. This aspect is important when, for example, the problem parameters are not known with high accuracy and we would like to understand how the solution would change under perturbations. In this work, instead, we consider problems without uncertainty and use previous data to learn how the problem parameters affect the optimal solution. Our problems are therefore deterministic, and we do not consider perturbations around any nominal value; in fact, the data we use for training are not restricted to lie close to any nominal point. In addition, sensitivity analysis usually studies continuous optimization problems, since it relies on the dual variables at the optimal solution to determine the effect of parameter perturbations. This is why there has been only limited work on sensitivity analysis for MIO. In contrast, we show that our method can be directly applied to problems with integer variables.
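To illustrate the classical sensitivity-analysis viewpoint described above, the following sketch (a toy LP of our own, not a problem from this work) uses the dual variables returned by SciPy's HiGHS interface to predict, to first order, how the optimal value changes under a small right-hand-side perturbation:

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP (illustrative data): minimize c'x  s.t.  A x <= b, x >= 0.
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])

res = linprog(c, A_ub=A, b_ub=b, method="highs")  # default bounds x >= 0

# Dual variables (shadow prices) of the inequality constraints:
# the partial derivatives of the optimal value w.r.t. b.
duals = res.ineqlin.marginals  # nonpositive for <= constraints here

# First-order sensitivity: perturbing b by delta changes the optimal
# value by approximately duals' delta (valid only for small,
# non-degenerate perturbations).
delta = np.array([0.1, 0.0])
predicted = res.fun + duals @ delta
actual = linprog(c, A_ub=A, b_ub=b + delta, method="highs").fun
```

For an LP the prediction is exact as long as the perturbation keeps the same optimal basis, which is exactly the local character of sensitivity analysis that the paragraph above contrasts with the data-driven approach.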

For non-degenerate problems, the tight constraints correspond to the support constraints, i.e., the constraints that, if removed, would allow a decrease in \(f(\theta , x^\star (\theta ))\) (Calafiore 2010, Definition 2.1). In the case of linear optimization problems (LOs), the support constraints are the linearly independent constraints defining a basic feasible solution (Bertsimas and Tsitsiklis 1997, Definition 2.9). An important property of support constraints is that there can be no more of them than the dimension n of the decision variable (Hoffman 1979, Proposition 1), (Calafiore 2010, Lemma 2.2). This fact plays a key role in our method by reducing the complexity of predicting the solution of parametric optimization problems. The benefits are most evident when the number of constraints is much larger than the number of variables, i.e., \(n \ll m\).
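A minimal sketch of this idea on a hypothetical LP of our own with more constraints than variables: after solving once, the tight constraints can be read off from the active inequalities, and in the non-degenerate case the optimum is recovered from the resulting square linear system alone, with no further optimization:

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP (illustrative data): minimize c'x  s.t.  A x <= b, x >= 0,
# with n = 2 variables and m = 4 inequality constraints.
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [2.0, 1.0]])
b = np.array([3.0, 3.0, 4.0, 6.0])

res = linprog(c, A_ub=A, b_ub=b, method="highs")  # default bounds x >= 0

# Tight constraints: inequalities satisfied with equality at x*.
tight = np.flatnonzero(np.abs(A @ res.x - b) <= 1e-6)

# Non-degenerate case: at most n constraints are tight, and solving the
# corresponding square system recovers x* without any optimization solver.
x_recovered = np.linalg.solve(A[tight], b[tight])
```

Here only 2 of the 4 constraints are tight at the optimum, matching the bound by the dimension n of the decision variable; knowing those indices in advance reduces the online problem to a 2-by-2 linear solve.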

Similarly to Sect. 2.1, in the case of mixed-integer linear optimization (MILO) and MIQO, when the cost f is linear or quadratic and the constraints g are all linear, solving (3) reduces to solving a linear system of equations defined by the KKT conditions (Boyd and Vandenberghe 2004, Sect. 10.2). This means that we can solve these problems online without needing any optimization solver (Bertsimas and Stellato 2019).
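Concretely, once the integer variables are fixed and the tight constraints are known, what remains is an equality-constrained quadratic problem whose KKT conditions form one linear system. The sketch below uses illustrative data (not the paper's problem (3)) to show the online step:

```python
import numpy as np

# Equality-constrained QO (illustrative data):
#   minimize 1/2 x'Px + q'x  s.t.  A x = b,
# where A, b collect the tight constraints identified offline.
P = np.array([[4.0, 1.0],
              [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

n, p = P.shape[0], A.shape[0]

# KKT conditions:  [P  A'] [x ]   [-q]
#                  [A  0 ] [nu] = [ b ]
K = np.block([[P, A.T],
              [A, np.zeros((p, p))]])
rhs = np.concatenate([-q, b])
sol = np.linalg.solve(K, rhs)
x, nu = sol[:n], sol[n:]  # primal solution and dual variables
```

A single call to a dense (or sparse) linear solver thus replaces the optimization solver in the online phase, which is what makes the approach fast at evaluation time.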

We used the Gurobi Optimizer (Gurobi Optimization 2020) to find the tight constraints because it provides a good tradeoff between solution accuracy and computation time. Note that, from Sect. 2, in the case of LO, MILO, QO and MIQO, when the cost f is linear or quadratic and the constraints g are all linear, the online solution corresponds to solving a linear system of equations defined by the KKT conditions (Boyd and Vandenberghe 2004, Sect. 10.2) on the reduced subproblem. This means that we can solve those parametric optimization problems without applying any optimization solver.