Detailed program

Click on the title to join the webinar

 


 
Monday 23 November 2020 - 10h30-11h00 CET

 
Pierre Schaus, UC Louvain
 
 
Monday 23 November 2020 - 11h00-13h00 CET
 
Content: The success of the MiniSAT solver has largely contributed to the dissemination of (CDCL) SAT solvers. The MiniSAT solver has a neat and minimalist architecture that is well documented. We believe the CP community is currently missing such a solver, one that would permit newcomers to demystify the internals of CP technology. We introduce Mini-CP, a white-box, bottom-up teaching framework for CP implemented in Java. Mini-CP is deliberately missing many features that you would find in a commercial or complete open-source solver. The implementation, although inspired by state-of-the-art solvers, is not focused on efficiency but rather on readability, to convey the concepts as clearly as possible. Mini-CP is small and well tested.
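To give a flavour of what such a white-box solver exposes, here is a minimal illustrative sketch of the propagation fixpoint at the heart of a CP solver. It is plain Python rather than Mini-CP's actual Java API, and every name in it is ours:

# Illustrative sketch (our own names, not Mini-CP's API) of the propagation
# fixpoint that a white-box CP solver makes visible.

def not_equal(domains, a, b):
    # Filter a != b: if one variable is fixed, remove its value from the other.
    changed = False
    for u, v in ((a, b), (b, a)):
        if len(domains[u]) == 1:
            val = next(iter(domains[u]))
            if val in domains[v]:
                domains[v] = domains[v] - {val}
                changed = True
    return changed

def fixpoint(domains, constraints):
    # Propagate every constraint until no domain changes; fail on an empty domain.
    changed = True
    while changed:
        changed = any(c(domains) for c in constraints)
        if any(len(d) == 0 for d in domains.values()):
            return False
    return True

domains = {"x": {1, 2}, "y": {2}, "z": {1, 2, 3}}
constraints = [lambda d: not_equal(d, "x", "y"), lambda d: not_equal(d, "x", "z")]
print(fixpoint(domains, constraints), domains)  # x is forced to 1, which is then removed from z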
 
 

 
Monday 23 November 2020 - 14h00-19h00 CET
 
 
Content: Will you take up the challenge of being the first to find a constraint programming model able to solve the mystery problem?
 
Organized by Valentin Antuori (LAAS-CNRS, Renault), Julien Ferry (LAAS-CNRS), Emmanuel Hébrard (LAAS-CNRS), Carla Juvin (LAAS-CNRS), Tom Portoleau (LAAS-CNRS, IRIT), Louis Rivière (LAAS-CNRS, IRIT, ANITI), Pierre Schaus (UC Louvain)
 
The Hackathon runs until Wednesday 25 November, 12:00 PM (GMT+1)
 
 

 
Tuesday 24 November 2020 - 10h30-11h00 CET
 
Christian Artigues (GDR RO), Nicholas Asher (ANITI), Emmanuel Hébrard (ACP Executive Committee), Marie-José Huguet (LAAS-CNRS), Sébastien Konieczny (GDR IA), Philippe Lacomme (GDR RO, GT2L), Caroline Prodhon (GDR RO, GT2L).
 
Download the GDR IA slides (Sébastien Konieczny)
Download the GT2L slides (Caroline Prodhon)


Daniele Vigo - University of Bologna 

Integrating Machine Learning into state-of-the-art Vehicle Routing Heuristics [CLICK TO WATCH THE VIDEO]

Tuesday 24 November 2020 - 11h00-13h00 CET
 
Content: The Vehicle Routing Problem (VRP) is one of the most studied combinatorial optimization problems, for which hundreds of innovative heuristic and exact algorithms have been developed. Recently, some attempts have been made to integrate Machine Learning into heuristics to enhance their performance and guide their design. We report on some initial attempts we made in this direction and highlight the difficulties as well as promising research directions that merit further investigation.
 

Simon de Givry and Thomas Schiex, INRAE Toulouse, ANITI chair DIL, Toulouse
 

Tuesday 24 November 2020 - 14h30-16h00 CET (Part 1) - 16h30-18h30 (Part 2)
 
 
Content: Cost Function Networks (CFN) generalize Constraint Networks by reasoning on numerical functions instead of constraints. In this tutorial, we will see how CP algorithms (tree search, local consistencies, global constraints, ...) can be extended to solve the corresponding "Weighted Constraint Satisfaction Problem" (WCSP). This leads to a generalized version of local consistency that has intriguingly tight connections with Linear Programming. This ability to handle numerical information also opens the door to new capabilities in terms of learning: it becomes possible to learn a CFN from historical solutions in a scalable manner. One can then integrate the information extracted from data into an existing model (if any).
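As a toy illustration of the idea (our own sketch, not the toulbar2 input format): a cost function network attaches a numerical cost to each assignment instead of merely allowing or forbidding it, a hard constraint being an infinite cost, and solving means minimising the total cost.

from itertools import product

# Toy cost function network over two variables with domain {0, 1, 2}:
# one unary cost function per variable plus one binary cost function.
INF = float("inf")
domain = [0, 1, 2]
unary_x = {0: 1, 1: 0, 2: 3}
unary_y = {0: 2, 1: 1, 2: 0}

def binary_xy(x, y):
    return INF if x == y else abs(x - y)   # x != y expressed as an infinite cost

def total_cost(x, y):
    return unary_x[x] + unary_y[y] + binary_xy(x, y)

# Brute-force minimisation; a WCSP solver such as toulbar2 replaces this
# enumeration with tree search and soft local consistencies.
best = min(product(domain, domain), key=lambda xy: total_cost(*xy))
print(best, total_cost(*best))  # (1, 2) with total cost 1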

The tutorial will be in two parts: 1h30 describing the algorithms and associated results, and 2h00 for a practical session applying the toulbar2 WCSP solver to various combinatorial optimization problems, seeing how we can model them or learn them from data, and comparing the results with other approaches, including constraint programming and integer programming.

Download the slides 

Instructions for practical sessions: Before the ACP school even starts, we advise you to

  1. install, on the computer you'll use to follow the school, the VirtualBox software (see https://www.virtualbox.org/wiki/Downloads if needed),
  2. download the 4GB VirtualBox disk available here (you can also use an Alternative link),
  3. uncompress it using 7z (https://www.7-zip.org/download.html); it will expand to 13GB,
  4. start VirtualBox,
  5. create a "New" virtual machine (VM) of type "Debian 64 bits" with a suitable amount of RAM/CPU (half of your computer's resources, for example),
  6. attach the "ACPAIOR.vdi" disk above to it (in Storage),
  7. start the VM (a command-line sketch of steps 5-7 is given just after this list).
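For command-line users, steps 5-7 can also be scripted with VirtualBox's VBoxManage tool. The sketch below is untested, and the VM name, RAM/CPU values and the path to the .vdi file are placeholders to adapt:

import subprocess

# Untested sketch of steps 5-7 using VBoxManage (adjust name, memory, CPUs, disk path).
def run(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

run("VBoxManage", "createvm", "--name", "ACPAIOR", "--ostype", "Debian_64", "--register")
run("VBoxManage", "modifyvm", "ACPAIOR", "--memory", "4096", "--cpus", "2")
run("VBoxManage", "storagectl", "ACPAIOR", "--name", "SATA", "--add", "sata")
run("VBoxManage", "storageattach", "ACPAIOR", "--storagectl", "SATA",
    "--port", "0", "--device", "0", "--type", "hdd", "--medium", "ACPAIOR.vdi")
run("VBoxManage", "startvm", "ACPAIOR")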

You won't need any password to log in, but the user (acpaior) has the password "none" (if you need to become superuser). The default keyboard is QWERTY, but an AZERTY keyboard is also available at the top right of the screen, and you can add your favorite layout in the Settings menu (top right).

NB: in some cases, VirtualBox will complain that virtualization is not possible because of a poorly configured BIOS. You'll have to dig and find which settings to change in your BIOS. Google or its colleagues can be nice friends here.

 


Tias Guns - Vrije Universiteit Brussel 
 
Wednesday 25 November 2020 - 11h00-13h00 CET
 
Wednesday 25 November 2020 - 14h00-15h30 CET
 
Content: Increasingly, the knowledge needed to formulate constraint solving problems is not explicitly written down. Instead, we have to infer part of the knowledge implicitly from the users (such as their preferences), through perception (image recognition), or from the environment (price predictions). In this talk I motivate this change in requirements and the challenges it poses with respect to modeling and (re)formulating such data-driven constraint solving problems, as well as different techniques to hybridize and integrate the learning, the predictions and the reasoning: #HybridAI.
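As a minimal sketch of this "predict, then reason" pattern (everything below, from the features to the learned weights, is made up for illustration): item values are not given but predicted from features, and the predictions then feed a small knapsack model solved by enumeration:

from itertools import combinations

weights = [3, 4, 2, 5]                               # item weights (given)
features = [[1.0, 0.2], [0.4, 1.0], [0.9, 0.1], [0.3, 0.8]]
learned_w = [2.0, 5.0]                               # pretend this came from training a predictor
values = [sum(f * w for f, w in zip(x, learned_w)) for x in features]  # predicted values

# Reasoning step: pick the best feasible subset under the predicted values.
capacity = 7
best, best_value = (), 0.0
for k in range(len(weights) + 1):
    for subset in combinations(range(len(weights)), k):
        if sum(weights[i] for i in subset) <= capacity:
            value = sum(values[i] for i in subset)
            if value > best_value:
                best, best_value = subset, value
print(best, best_value)  # items 0 and 1 under the predicted values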
 
Download the slides:
 
Instructions for the practical session
 
Material for tomorrow's practical session is at the following links:
 
part 0: installation information and checking
 
part 1: practical 1
 
part 2: practical 2
 
Or clone from the git repo: https://github.com/tias/nn_learn_solve
 
Installation instructions:
if (you do not have 'conda' or 'pip' on your machine):
    install miniconda as per https://docs.conda.io/en/latest/miniconda.html
if (you do not have 'classic Jupyter Notebook' on your machine):
    'conda install -c conda-forge notebook', see https://jupyter.org/install
download the p0 file to a folder, run 'jupyter notebook' in that folder, open p0, read it and test...
 
If this does not work, shout out on the hackathon Discord. A few of my PhD students and I are on there too, as are fellow participants who might have encountered the same hurdles (if any).

 
Wednesday 25 November 2020 - 15h30-16h00 CET

Joao Marques-Silva, IRIT and ANITI chair DeepLEVER, Toulouse

Machine Learning Meets Automated Reasoning: Explainability, Fairness, Robustness and Model Learning [CLICK TO WATCH THE VIDEO]

Wednesday 25 November 2020 - 16h30-18h30 CET
 
Content: The last decade witnessed impressive achievements in the use of Machine Learning (ML). These days, ML-based systems pervade our daily lives. Nevertheless, the continued success of ML hinges on systems that are robust in their operation and whose decisions are fair, can be understood, and can be trusted. This lecture overviews recent efforts on applying automated reasoning (AR) tools for analyzing and for learning ML models. Concretely, the lecture will detail existing rigorous approaches for explainability and for fairness. The lecture will also overview solutions for learning optimal ML models. Finally, the lecture will overview rigorous approaches for assessing the robustness of ML models.
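As a toy illustration of what a rigorous explanation is (our own brute-force sketch over a made-up classifier; a real approach would use a SAT/SMT or MILP encoding instead of enumeration): find the smallest subset of an instance's feature values that, on its own, entails the classifier's prediction.

from itertools import combinations, product

def classifier(f):                      # toy model: predicts 1 iff at least two features are 1
    return int(sum(f) >= 2)

instance = (1, 1, 0)
prediction = classifier(instance)

def sufficient(fixed):
    # Does fixing these features to their values in `instance` force the prediction?
    free = [i for i in range(3) if i not in fixed]
    for values in product((0, 1), repeat=len(free)):
        candidate = list(instance)
        for i, v in zip(free, values):
            candidate[i] = v
        if classifier(tuple(candidate)) != prediction:
            return False
    return True

explanation = next(set(s) for k in range(4) for s in combinations(range(3), k) if sufficient(s))
print(explanation)  # {0, 1}: features 0 and 1 being 1 already entails the prediction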
 

 

 
Axel Parmentier - CERMICS, École des Ponts ParisTech
 
Content: This lecture will briefly introduce the main notions of structured learning, the branch of supervised learning that aims at making predictions on combinatorially large sets. In particular, we will introduce several approaches to formulating the structured learning problem, including the maximum likelihood approach and the loss minimization approach. We will then focus on how structured learning can be used to derive mathematical programming algorithms for difficult operations research problems.
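A minimal sketch of the structured prediction viewpoint (the jobs, features and data below are all made up): the output is an object from a combinatorially large set, here orderings of four jobs, chosen as the argmax of a learned linear score, and a simple structured-perceptron step adjusts the score towards a reference ordering.

from itertools import permutations

def features(ordering):
    # One indicator feature per ordered pair (i, j): 1 if job i immediately precedes job j.
    return [1.0 if (i, j) in zip(ordering, ordering[1:]) else 0.0
            for i in range(4) for j in range(4) if i != j]

def predict(w):
    # Inference is itself a combinatorial problem: argmax of the linear score.
    return max(permutations(range(4)),
               key=lambda o: sum(a * b for a, b in zip(w, features(o))))

w = [0.0] * 12
target = (2, 0, 3, 1)                       # the ordering observed in the training data
for _ in range(5):                          # structured-perceptron updates
    pred = predict(w)
    if pred == target:
        break
    w = [wi + ft - fp for wi, ft, fp in zip(w, features(target), features(pred))]
print(predict(w))                           # (2, 0, 3, 1) after learning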


 
Nadjib Lazaar - LIRMM, Univ. Montpellier
Samir Loudni - LS2N-CNRS, TASC, IMT Atlantique, Nantes

Constraint acquisition and declarative data mining

Thursday 26 November 2020 - 14h30-16h00 CET (Part 1) - 16h30-18h30 (Part 2)

Content:
 
Declarative Data mining (S. Loudni) [CLICK TO WATCH THE VIDEO]: Despite the popularity of data mining today, it remains challenging to develop applications and software that incorporate data mining techniques. This is because data mining research has focused on developing specialized algorithms for solving particular tasks rather than on developing general principles and techniques. This lecture will show how the constraint programming methodology can be exploited to specify data mining problems as constraint satisfaction and optimization problems. I will give an overview of our recent work on applying constraint programming to some data mining tasks.
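As a toy illustration of the declarative view (our own sketch, solved by plain enumeration where a CP solver would search with propagation): a frequent itemset mining task is stated purely as constraints on the unknown pattern, here a minimum size and a minimum frequency.

from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c", "d"}]
items = sorted(set().union(*transactions))

def frequency(pattern):
    return sum(pattern <= t for t in transactions)   # number of transactions covering the pattern

# The mining task as constraints on the unknown pattern: size >= 2 and frequency >= 2.
solutions = [set(p)
             for k in range(2, len(items) + 1)
             for p in combinations(items, k)
             if frequency(set(p)) >= 2]
print(solutions)  # the four patterns of size >= 2 appearing in at least two transactions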
 
 
Constraint acquisition (N. Lazaar) [CLICK TO WATCH THE VIDEO]: Constraint programming is used to model and to solve complex combinatorial problems. The modeling task requires some expertise in constraint programming, and this requirement is a bottleneck to the broader uptake of constraint technology. Several approaches have been proposed to assist the non-expert user in the modeling task. In this talk, I will present the constraint acquisition problem. Then, I will show how to learn constraint networks by asking the user partial queries. Finally, I will show how to make constraint acquisition more efficient in practice with new kinds of queries, the use of background knowledge, etc.
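A heavily simplified sketch of the acquisition idea (the candidate constraints, the examples and the simulated user below are all made up; real systems such as QuAcq ask partial queries and converge much faster): start from a bias of candidate constraints and keep only those consistent with every example the user classifies as a solution.

from itertools import product

bias = {"x<y": lambda x, y, z: x < y,
        "y<z": lambda x, y, z: y < z,
        "x=z": lambda x, y, z: x == z,
        "x+y=z": lambda x, y, z: x + y == z}

def user_says_solution(x, y, z):
    return x < y < z            # simulated user whose hidden target network is {x<y, y<z}

learned = dict(bias)
for example in product(range(4), repeat=3):
    if user_says_solution(*example):            # a (complete) membership query
        learned = {name: c for name, c in learned.items() if c(*example)}
print(sorted(learned))          # ['x<y', 'y<z']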
 
 

Christine Solnon - CITI, INSA de Lyon

Experimental evaluation: Some good practices and pitfalls to avoid [CLICK TO WATCH THE VIDEO]

Friday 27 November 2020 - 11h00-13h00 CET 

 
Content: Experimental evaluation aims at providing insight into properties of algorithms by running them on benchmark instances and analysing the observed results. This lecture will describe the steps of an experimental process, and focus on some good practices and pitfalls to avoid when experimentally evaluating algorithms that solve NP-hard problems. We will more particularly address the following questions: How to choose benchmark instances and performance measures? How to analyse and present results? How to tune algorithm parameters?
 

Denis Trystram - LIG, Grenoble INP
 
Friday 27 November 2020 - 14h30-16h00 CET 
 
Content: The purpose of this talk is to report on several experiences of using Machine Learning (ML) for better scheduling decisions. We will mainly present two application contexts, namely batch scheduling in HPC clusters and job allocation in data centers and edge computing platforms. We will discuss both the positive and the negative sides, with a special emphasis on the compute-intensive execution times in regard to the gain (if any). Finally, some directions for developing well-dimensioned AI will be discussed.

The session will welcome several external contributors who wish to share their own experience related to the use of ML in the field of scheduling.
