Optimization Days 2024

HEC Montréal, Québec, Canada, 6 — 8 May 2024


TA10 - Machine Learning

May 7, 2024 10:30 AM – 12:10 PM

Location: PWC (green)

Chaired by Julien Pallage

4 Presentations

  • 10:30 AM - 10:55 AM

    Solving Combinatorial Optimization Problems with Machine Learning-driven Local Search

    • Maryam Abazari, presenter
    • Quentin Cappart, Polytechnique Montréal

    This study presents a methodology for integrating machine learning techniques into an iterated local search for solving combinatorial optimization problems. Our primary goal is to enable the search to navigate solution spaces more strategically and efficiently. To do so, we formulate the local search decisions as a classification task and use Graph Neural Networks (GNNs) as the representation model to extract feature representations of the combinatorial optimization problem. We validate the efficacy of our method on the Permutation Flowshop Scheduling Problem (PFSP).
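
    As a rough illustration of the idea (and not the authors' implementation), the sketch below runs a toy local search on a small synthetic PFSP instance in which a hypothetical learned classifier scores candidate swap moves; the hand-crafted move features and random weights stand in for the GNN and its training.

    ```python
    # Hedged sketch: a local-search loop guided by a learned move classifier.
    # The feature map and weights below are illustrative stand-ins for a GNN.
    import numpy as np

    rng = np.random.default_rng(0)
    n_jobs, n_machines = 8, 4
    proc = rng.uniform(1, 10, size=(n_jobs, n_machines))  # processing times

    def makespan(perm):
        """PFSP objective: completion time of the last job on the last machine."""
        c = np.zeros((len(perm), n_machines))
        for i, job in enumerate(perm):
            for m in range(n_machines):
                c[i, m] = max(c[i - 1, m] if i else 0.0,
                              c[i, m - 1] if m else 0.0) + proc[job, m]
        return c[-1, -1]

    def move_features(perm, i, j):
        """Hand-crafted swap features; a GNN over the instance graph would replace this."""
        return np.array([abs(i - j), proc[perm[i]].sum(), proc[perm[j]].sum(), 1.0])

    w = rng.normal(size=4)  # stand-in for trained classifier parameters

    def p_improving(perm, i, j):
        return 1.0 / (1.0 + np.exp(-move_features(perm, i, j) @ w))

    perm = list(rng.permutation(n_jobs))
    best = makespan(perm)
    for _ in range(200):
        i, j = rng.choice(n_jobs, size=2, replace=False)
        if p_improving(perm, i, j) < 0.5:
            continue  # classifier predicts the swap is unlikely to improve: skip it
        cand = perm.copy()
        cand[i], cand[j] = cand[j], cand[i]
        if (m := makespan(cand)) < best:
            perm, best = cand, m
    print("best makespan found:", round(best, 2))
    ```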

  • 10:55 AM - 11:20 AM

    Transmission Neural Networks: Naive Mean Field Approximation and Immune States

    • Shuang Gao, presenter, Polytechnique Montréal

    Transmission Neural Networks (TransNNs) were recently proposed to model virus spread dynamics over large networks and have been shown to be naturally connected with neural networks with tunable activation functions. Furthermore, this connection leads to the discovery of three new activation functions with tunable parameters. Moreover, TransNNs with a single hidden layer and a fixed non-zero bias term are universal function approximators.

    This paper presents in detail the approximation employed in TransNNs with respect to an associated stochastic agent-based Markovian SIS model with 2^n states, where n is the number of nodes in the network, and establishes the properties of such an approximation. In addition, an extended model of TransNNs with immune states is formulated, and the associated continuous-time epidemic spread model over networks is established.
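
    For context, the snippet below simulates one standard naive mean-field approximation of a networked SIS model, tracking a marginal infection probability per node instead of the exact 2^n-state chain; the update rule, rates, and random graph are textbook-style assumptions and not necessarily the exact approximation analyzed in this talk.

    ```python
    # Hedged sketch: naive mean-field approximation of a networked SIS model.
    # Instead of the exact 2^n-state Markov chain, track one infection
    # probability per node, assuming independence across neighbors.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 6
    A = (rng.random((n, n)) < 0.4).astype(float)  # random directed contact graph
    np.fill_diagonal(A, 0.0)
    beta, delta = 0.3, 0.2        # infection and recovery rates (assumed values)
    p = np.full(n, 0.1)           # initial marginal infection probabilities

    for t in range(50):
        # probability of escaping infection from all in-neighbors (independence assumption)
        escape = np.prod(1.0 - beta * A * p[None, :], axis=1)
        p = (1.0 - p) * (1.0 - escape) + (1.0 - delta) * p

    print("approximate steady-state infection probabilities:", np.round(p, 3))
    ```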

  • 11:20 AM - 11:45 AM

    Using Machine Learning to Predict the On-Time Performance of a Flight Schedule

    • Pascale Batchoun, Air Canada
    • Quan Anh Bach, Air Canada
    • Lamine Baghriche, Air Canada
    • Nitesh Baswal, Air Canada
    • Vadlamudi Bhargav, Air Canada
    • Philippe Branchini, Air Canada
    • Zohreh Hajabedi, Air Canada
    • Marc Levangie, Air Canada
    • Trang Minh Nguyen, Air Canada
    • Narges Sereshti, Air Canada
    • Navjot Singh, Air Canada
    • Jiachen Zhang, Air Canada
    • Haiyang Jiang, presenter, Air Canada
    • Gitimoni Saikia, Air Canada
    • Anurag Kaushik, Air Canada
    • Shailendra Pathak, Air Canada
    • Si Chen, Air Canada
    • Alexandre Vincart-Emard, Air Canada
    • Tristan Waldie, Air Canada

    Airlines prioritize optimizing on-time performance (OTP) to enhance customer satisfaction. Achieving a resilient flight schedule requires accurate OTP prediction and evaluation of scheduling and operational changes. Methodologies from the literature did not yield OTP improvements in practice. We propose a novel solution that employs advanced predictive modeling and simulation techniques, using machine learning to predict block and turn durations. Simulating future schedules with user-defined parameters and multi-replica scenarios enables us to capture diverse operational realities, from a blue-sky day to an irregular day of departure. This approach, implemented with over 95% accuracy at a major airline, enhances OTP predictability. Additionally, robust visualization techniques offer insights into schedule performance, including lines of flight (LOF) and delays. This multifaceted methodology empowers airlines to optimize schedules effectively, improving OTP and the overall passenger experience.
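
    To make the general pipeline concrete, the hedged sketch below fits a block-time regressor on synthetic data and Monte-Carlo simulates delay propagation along a line of flight to estimate OTP; all features, distributions, and thresholds are invented for illustration and do not reflect Air Canada's models or data.

    ```python
    # Hedged sketch: learn block durations, then simulate a line of flight (LOF)
    # to estimate on-time performance. Synthetic data and assumed parameters only.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(2)

    # Synthetic history: block time depends on distance and an hour-of-day effect.
    X = np.column_stack([rng.uniform(300, 3000, 5000), rng.integers(0, 24, 5000)])
    y = 30 + 0.12 * X[:, 0] + 5 * (X[:, 1] > 16) + rng.normal(0, 8, 5000)
    block_model = GradientBoostingRegressor().fit(X, y)

    def simulate_lof(legs, turn_slack=10.0, n_rep=1000, otp_threshold=15.0):
        """Propagate delays along consecutive legs; OTP = share of arrivals within threshold."""
        preds = [block_model.predict([[dist, hour]])[0] for dist, hour, _ in legs]
        on_time = total = 0
        for _ in range(n_rep):
            dep_delay = 0.0
            for (dist, hour, sched_block), pred in zip(legs, preds):
                arr_delay = dep_delay + (pred + rng.normal(0, 10.0)) - sched_block
                on_time += arr_delay <= otp_threshold
                total += 1
                dep_delay = max(0.0, arr_delay - turn_slack)  # turn buffer absorbs some delay
        return on_time / total

    legs = [(800, 7, 130), (800, 11, 130), (1200, 15, 175)]  # (distance, hour, scheduled block)
    print("simulated OTP:", round(simulate_lof(legs), 3))
    ```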

  • 11:45 AM - 12:10 PM

    Wasserstein Distributionally Robust Shallow Convex Neural Network

    • Julien Pallage, presenter, Polytechnique Montréal, Mila & GERAD
    • Antoine Lesage-Landry, Polytechnique Montréal

    Neural networks have traditionally been overlooked in critical sectors, e.g., energy, healthcare, and finance. Even though they tend to be successful nonlinear predictors, their lack of interpretability, limited performance guarantees, complex training procedures, and high susceptibility to data corruption and other adversarial attacks have labelled them as unreliable. Many solutions have been proposed in the past decade to guide and frame their training as much as possible, e.g., hyperparameter optimization, regularization, adversarial defense procedures, post-training verification frameworks, and tight MLOps lifecycle management. Still, these advances add to training complexity and do not offer clear theoretical guarantees. In this work, we leverage recent results from distributionally robust optimization and learning to propose a new Wasserstein Distributionally Robust Shallow Convex Neural Network (WaDiRo-SCNN) with provable out-of-sample performance guarantees and simple low-stochasticity training. The training is formulated as a convex optimization problem that can be solved efficiently with open-source solvers. We showcase our model with a numerical study in a controlled synthetic environment and a real energy system application.
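
    As a simplified illustration (not the authors' WaDiRo-SCNN formulation), the sketch below trains only the output layer of a shallow ReLU network with fixed random hidden weights, which keeps the training problem convex, and adds a Wasserstein-style norm penalty; in the distributionally robust optimization literature, type-1 Wasserstein robustness with Lipschitz losses is known to reduce to such dual-norm regularization, and the quadratic loss used here is only an approximation of that setting.

    ```python
    # Hedged sketch: convex training of a shallow network (fixed random ReLU
    # features, trainable output layer) with a Wasserstein-DRO-style penalty.
    # Radius eps and the architecture are assumptions for illustration only.
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(3)
    n, d, h = 200, 5, 50
    X = rng.normal(size=(n, d))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

    W = rng.normal(size=(d, h)) / np.sqrt(d)   # fixed random hidden layer (not optimized)
    Phi = np.maximum(X @ W, 0.0)               # ReLU features

    eps = 0.1                                  # assumed Wasserstein radius
    v = cp.Variable(h)                         # convex decision: output weights
    objective = cp.sum_squares(Phi @ v - y) / n + eps * cp.norm(v, 1)
    cp.Problem(cp.Minimize(objective)).solve() # any open-source conic/QP solver works
    print("training MSE:", float(np.mean((Phi @ v.value - y) ** 2)))
    ```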
