MA8 - CP and Machine Learning
May 11, 2026, 10:30 – 12:10
Location: Jean-Guertin (green)
Chaired by Gilles Pesant
4 Presentations
Data-Driven Cost-Based Filtering for Constraint Programming
Constraint Programming is a powerful paradigm for obtaining exact solutions to combinatorial optimization problems. Such problems arise widely in industry but remain challenging to solve because their complexity leads to long solving times. Recent data-driven methods have shown promising results. Most of these approaches focus on enhancing branching strategies or strengthening primal bounds to reduce solving time.
This work instead focuses on the propagation mechanism of certain global constraints. Some of these constraints require cost-based filtering, which relies on the computation of bounds via Lagrangian relaxation. The tightness of these bounds directly impacts the efficiency of the filtering and, consequently, the overall solving time. Since the quality of the bound depends on the Lagrangian multipliers used in the relaxation, computing appropriate multipliers is essential. However, they are typically obtained through an iterative process that solves the relaxation at each step, which makes the method computationally expensive.
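For illustration, a minimal sketch of such an iterative scheme, standard subgradient ascent on the multipliers, is shown below; the relaxation oracle solve_relaxation is a hypothetical placeholder, and the details are not taken from this work.

    import numpy as np

    def subgradient_multipliers(solve_relaxation, num_constraints,
                                steps=50, step_size=1.0):
        # `solve_relaxation(lambdas)` is a hypothetical oracle: it solves
        # the Lagrangian relaxation for fixed multipliers and returns the
        # dual bound together with the constraint violations (a subgradient).
        lambdas = np.zeros(num_constraints)
        best_bound = -np.inf
        for t in range(1, steps + 1):
            # Each iteration solves the relaxation, which is exactly the
            # computational cost the abstract refers to.
            bound, violations = solve_relaxation(lambdas)
            best_bound = max(best_bound, bound)
            # Subgradient ascent with a diminishing step size; projecting
            # onto lambda >= 0 handles relaxed inequality constraints.
            lambdas = np.maximum(0.0, lambdas + (step_size / t) * violations)
        return best_bound, lambdas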
To address this issue, we propose a data-driven method that predicts the multipliers directly. Graph Neural Networks are used to model the relationship between instances and appropriate multipliers. These networks are particularly well suited to the task, as most combinatorial problems can be modeled as graphs, allowing them to capture the structural complexity of the instances. In addition, their flexibility with respect to input size provides a generalizable framework for this task.
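As a rough illustration only (the abstract does not specify the architecture or the instance encoding), a minimal message-passing network that regresses one non-negative multiplier per node of the instance graph might look as follows.

    import torch
    import torch.nn as nn

    class MultiplierGNN(nn.Module):
        # Hypothetical sketch: embed node features, run a few rounds of
        # message passing over the instance graph, then read out one
        # non-negative multiplier per constraint node.
        def __init__(self, in_dim, hidden_dim, rounds=3):
            super().__init__()
            self.embed = nn.Linear(in_dim, hidden_dim)
            self.message = nn.Linear(2 * hidden_dim, hidden_dim)
            self.readout = nn.Linear(hidden_dim, 1)
            self.rounds = rounds

        def forward(self, x, edge_index):
            # x: (num_nodes, in_dim) features of variable/constraint nodes
            # edge_index: (2, num_edges) adjacency of the instance graph
            h = torch.relu(self.embed(x))
            src, dst = edge_index
            for _ in range(self.rounds):
                # Sum incoming messages from neighbours.
                msgs = torch.zeros_like(h)
                msgs.index_add_(0, dst, torch.relu(
                    self.message(torch.cat([h[src], h[dst]], dim=-1))))
                h = h + msgs  # residual update keeps training stable
            # Softplus keeps the predicted multipliers non-negative.
            return nn.functional.softplus(self.readout(h)).squeeze(-1)

At inference time, the predicted multipliers would seed the cost-based filtering directly instead of running the full iterative scheme.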
The proposed approach was integrated into a general constraint programming solver and evaluated on different problems. The experimental results showed consistent, albeit modest, improvements in solving time, indicating that learning-based multiplier prediction can improve the cost-based filtering of global constraints.
This approach is fully problem-independent and can be integrated into the propagators of certain global constraints in general constraint programming solvers. Although the current time reduction is modest, further research on the training strategy and the model architecture is expected to yield greater performance gains.
Iterative Neural Heuristics for Constraint Satisfaction: From Self-Supervised Transformers to Neural Large Neighbourhood Search
Constraint Satisfaction Problems (CSPs) arise in many real-world applications, motivating interest in accelerating their solution with machine learning. Existing learning-based methods, however, often rely on feasible solutions for supervision or expensive reinforcement learning pipelines. We present ConsFormer, a self-supervised, Transformer-based framework for solving CSPs iteratively. By devising differentiable approximations to the discrete constraints of a CSP to guide model training, ConsFormer learns to improve a solution in a single step. When deployed iteratively at test time, ConsFormer can tackle out-of-distribution instances never seen during training. Building on this foundation, we establish a connection between iterative neural heuristics and Large Neighbourhood Search (LNS). By decomposing the neural procedure into destroy and repair operators, we can incorporate both classical and novel prediction-guided neighbourhood selection strategies alongside greedy or sampling-based repair. Empirically, ConsFormer performs strongly on out-of-distribution instances, and the Neural LNS framework further improves competitiveness against classical and neural baselines. These findings highlight the promise of neural methods as heuristics for constraint satisfaction, as well as the value of integrating them with classical algorithms to exploit problem structure for improved performance.
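To illustrate the idea of a differentiable approximation to a discrete constraint, here is a minimal sketch for AllDifferent; it is a plausible stand-in for the kind of relaxation the abstract describes, not ConsFormer's exact formulation.

    import torch

    def soft_alldifferent_loss(logits):
        # logits: (n_vars, n_values) unnormalised scores the model assigns
        # to each variable's possible values.
        probs = torch.softmax(logits, dim=-1)   # per-variable distributions
        expected_counts = probs.sum(dim=0)      # expected uses of each value
        # AllDifferent holds when every value is used at most once, so we
        # penalise the expected excess usage of each value. The penalty is
        # differentiable, giving a training signal without feasible labels.
        excess = torch.clamp(expected_counts - 1.0, min=0.0)
        return excess.sum()

Because the loss is computed from the model's own outputs, training needs no supervised feasible solutions, which is the self-supervised aspect highlighted above.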
Computing a Differentiable Semantic Loss from CP Models to Train Deep Neural Networks
This work explores the integration of combinatorial logic into Deep Learning models. While neural networks excel at continuous pattern recognition, they often struggle with the strict rules of discrete optimization problems. We address this by introducing a differentiable Semantic Loss function that is automatically derived from a Constraint Programming model stating such rules. This loss function measures the probability that a Deep Learning model's output satisfies a set of logical constraints, by leveraging weighted counting algorithms within Constraint Programming. We report experimental results using the Transformer-based framework ConsFormer, which solves Constraint Satisfaction Problems iteratively.
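For reference, a standard formulation of the semantic loss (cf. Xu et al., 2018) for a constraint over Boolean variables is the negative log of a weighted model count; the CP-based variant described here presumably computes this count with CP weighted counting rather than knowledge compilation.

    % Semantic loss for a constraint \alpha over Boolean variables
    % X_1, ..., X_n, where p_i is the network's predicted marginal
    % probability that X_i is true. The sum is a weighted model count.
    L(\alpha, p) = -\log \sum_{x \models \alpha}
        \prod_{i \,:\, x \models X_i} p_i
        \prod_{i \,:\, x \models \lnot X_i} (1 - p_i)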
Neurosymbolic Large Neighbourhood Search
Recent advances in AI have spurred interest in neurosymbolic (NeSy) architectures that integrate neural and symbolic methods. In particular, combining a Constraint Programming (CP) model with a language model for constrained sequence generation tasks allows the neural component to capture domain knowledge while CP enforces structural constraints. In this paper we propose combining CP with a Masked Language Model (MLM) to perform Large Neighbourhood Search (LNS). Unlike conventional left-to-right Large Language Models, MLMs can complete sequences with gaps in arbitrary positions, making them well suited for this task. Meanwhile, LNS provides a CP-based iterative framework to explore constrained subspaces whenever searching the whole space would be intractable. We evaluate NeSyLNS on tasks in constrained text generation and molecule discovery. Our experiments show that it can quickly generate many high-quality sentences and molecules, even for highly constrained tasks.
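To make the destroy/repair interplay concrete, here is a minimal sketch of an MLM-driven LNS loop; mlm_fill and cp_check are hypothetical placeholders for the MLM proposal step and the CP feasibility check, and the acceptance rule is deliberately simplified.

    import random

    def nesy_lns(sequence, mlm_fill, cp_check, iters=100, destroy_frac=0.2):
        # `mlm_fill(seq, positions)` and `cp_check(seq)` are hypothetical:
        # the former asks a masked language model to propose tokens for the
        # masked positions, the latter verifies the CP model's constraints.
        best = sequence
        for _ in range(iters):
            # Destroy: pick a random subset of positions to re-generate.
            k = max(1, int(destroy_frac * len(best)))
            positions = random.sample(range(len(best)), k)
            # Repair: an MLM can fill gaps at arbitrary positions because
            # it conditions on both left and right context.
            candidate = mlm_fill(best, positions)
            # Keep only candidates that satisfy the constraints; a fuller
            # implementation would also score candidate quality.
            if cp_check(candidate):
                best = candidate
        return best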
