2022 Optimization Days
HEC Montréal, Québec, Canada, 16–18 May 2022
MB7  Machine learning for combinatorial optimization
May 16, 2022 03:30 PM – 05:10 PM
Location: Quebecor (yellow)
Chaired by Quentin Cappart
4 Presentations

03:30 PM – 03:55 PM
Combining Neural Networks and Marginal Probabilities Derived from Constraint Programming Models to Perform Structured Prediction Tasks
Despite their many successes on various challenging tasks, Deep Neural Networks still struggle on structured prediction problems, which feature multiple discrete outputs that are interdependent, i.e. between which constraints should hold. Constraint Programming (CP) is all about structure: it enjoys a long and successful history of identifying often-recurring combinatorial structures and of designing sophisticated algorithms to extract information from such structures. Of particular interest here is the relative frequency of a given variable-value assignment in a combinatorial structure over a set of variables. We revisit two recent papers from the CP community that propose ways to combine Machine Learning (ML) and Constraint Programming during training or inference. We propose instead to use a CP-BP solver, which generalizes CP by propagating the above relative frequencies in order to approximate the marginal probabilities over each of the variables. We show that it offers more natural opportunities for combinations with ML and provide empirical evidence that it also boosts performance.
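
One natural way to combine the two kinds of per-variable distributions mentioned in the abstract can be sketched as follows. This is an illustration, not the authors' exact method: the hypothetical `combine_marginals` helper reweights a network's output distribution for each variable by the CP-derived marginals (the CP-BP propagation itself is not shown), so values ruled out by the constraints receive zero probability.

```python
# Illustrative sketch (hypothetical helper, not the paper's implementation):
# combine a neural network's per-variable class probabilities with marginal
# probabilities estimated by a CP-BP solver, via an elementwise product
# renormalized per variable.

def combine_marginals(nn_probs, cp_marginals):
    """For each variable, multiply the two distributions and renormalize."""
    combined = []
    for p_nn, p_cp in zip(nn_probs, cp_marginals):
        prod = [a * b for a, b in zip(p_nn, p_cp)]
        total = sum(prod)
        # Fall back to the CP marginals if the product vanishes everywhere.
        combined.append([x / total for x in prod] if total > 0 else list(p_cp))
    return combined

# Two variables with three possible values each; the CP marginals assign
# probability 0 to value 2 of the first variable, which the combination keeps.
nn_probs = [[0.7, 0.2, 0.1], [0.4, 0.4, 0.2]]
cp_marginals = [[0.5, 0.5, 0.0], [0.1, 0.8, 0.1]]
print(combine_marginals(nn_probs, cp_marginals))
```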

03:55 PM – 04:20 PM
Dynamic routing and wavelength assignment using graph neural networks and deep reinforcement learning
We propose a novel method based on Deep Reinforcement Learning for the routing and wavelength assignment (RWA) problem in all-optical wavelength-division multiplexing networks. We consider dynamic incoming requests, whose arrival and holding times are not known in advance. The objective is to devise a strategy that minimizes the long-term number of connection requests rejected due to a lack of resources. We employ Graph Neural Networks to capture promising information from a graph-structured input. A deep reinforcement learning agent is then trained to find the optimal strategy. The proposed algorithm selects a route and a wavelength simultaneously for each incoming traffic connection as it arrives. We show that this algorithm outperforms the baselines on the dynamic RWA problem.
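
The joint (route, wavelength) action space mentioned in the abstract can be sketched as follows, under the standard wavelength-continuity constraint of all-optical networks: a pair is feasible only if the wavelength is free on every link of the route. Names and data layout here are illustrative assumptions; an RL agent would then select among these feasible actions.

```python
# Hypothetical sketch of the joint RWA action space (not the paper's code):
# enumerate (route, wavelength) pairs that respect wavelength continuity.

def feasible_actions(routes, free_wavelengths):
    """routes: list of candidate routes, each a list of link ids.
    free_wavelengths: dict mapping link id -> set of free wavelength indices.
    Returns all feasible (route_index, wavelength) actions."""
    actions = []
    for r, route in enumerate(routes):
        # A wavelength is usable on a route only if free on every link.
        common = set.intersection(*(free_wavelengths[link] for link in route))
        actions.extend((r, w) for w in sorted(common))
    return actions

# Two candidate routes from a to c; wavelength 1 survives on the first route,
# wavelength 0 on the second.
routes = [["a-b", "b-c"], ["a-d", "d-c"]]
free = {"a-b": {0, 1}, "b-c": {1, 2}, "a-d": {0}, "d-c": {0, 1}}
print(feasible_actions(routes, free))  # [(0, 1), (1, 0)]
```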

04:20 PM – 04:45 PM
A learning approach for constraint customization in optimization models
This study focuses on embedding linear side constraints in some well-known classical mixed-integer linear programs (MILPs). Industrial customers use optimization software packages to optimize their planning and decision making. However, in real applications they typically customize the optimal solutions returned by the software based on implicit internal operational rules. To provide a one-step decision support system and to avoid relying on experts' knowledge of each particular domain, we propose a framework to learn such rules from the historical data of solutions implemented by customers. We use machine learning methods such as clustering and decision trees to infer the rules and embed them as constraints in the optimization models.
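
The simplest instance of the rule-inference step described above can be sketched as follows: derive bound rules from historically implemented plans and express them as linear side constraints. This is an assumed, minimal illustration (the hypothetical `infer_bound_rules` helper stands in for the clustering/decision-tree machinery of the actual framework).

```python
# Illustrative sketch (assumed data representation, not the study's code):
# infer simple bound rules x_j <= u_j from historical, customer-adjusted
# plans; such rules could then be added to the MILP as linear side
# constraints.

def infer_bound_rules(historical_plans):
    """historical_plans: list of dicts mapping variable name -> implemented
    value. Returns, per variable, the tightest upper bound consistent with
    every historical plan."""
    bounds = {}
    for plan in historical_plans:
        for var, value in plan.items():
            bounds[var] = max(bounds.get(var, value), value)
    return bounds

plans = [{"x1": 4, "x2": 7}, {"x1": 6, "x2": 5}]
rules = infer_bound_rules(plans)
print(rules)  # {'x1': 6, 'x2': 7}
```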

04:45 PM – 05:10 PM
Reinforcement Learning for Ride-Hailing: Dynamic Vehicle Fleet Management
We study the use of reinforcement learning for ride-hailing with centralized control. The problem consists in managing a fleet of vehicles that must be assigned to arising demands and relocated dynamically in order to maximize overall profit. Traditionally tackled by heuristics and mathematical optimization, this problem can be addressed by using machine learning to forecast demand; reinforcement learning makes it possible to learn from previous assignment and relocation decisions to improve the fleet's behavior. Results are presented for the pyhailing ride-hailing simulator.
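
A minimal baseline for the assignment step mentioned in the abstract can be sketched as follows: greedily dispatch each arriving request to the nearest idle vehicle. This is a hypothetical illustration of the decision being learned, not the authors' policy; an RL agent would replace this hand-coded rule with learned assignment and relocation decisions.

```python
# Hypothetical greedy-assignment baseline (not the paper's method): match
# each arriving request to the nearest idle vehicle by squared Euclidean
# distance; requests left unmatched would be rejected.

def greedy_assign(vehicles, requests):
    """vehicles: dict of idle vehicle id -> (x, y) position.
    requests: list of (x, y) pickup positions, in arrival order.
    Returns a list of (request_index, vehicle_id) matches."""
    idle = dict(vehicles)
    matches = []
    for i, (rx, ry) in enumerate(requests):
        if not idle:
            break  # no idle vehicle left: remaining requests are rejected
        vid = min(idle,
                  key=lambda v: (idle[v][0] - rx) ** 2 + (idle[v][1] - ry) ** 2)
        matches.append((i, vid))
        del idle[vid]  # the vehicle is now busy
    return matches

vehicles = {"v1": (0, 0), "v2": (5, 5)}
requests = [(1, 1), (6, 4)]
print(greedy_assign(vehicles, requests))  # [(0, 'v1'), (1, 'v2')]
```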