The Association Française pour l’Intelligence Artificielle (AFIA) and the Modèles Hybrides d’IA (MHyIA) working group of the GDR RADIA research network (Raisonnement, Apprentissage, et Décision en Intelligence Artificielle) are jointly organizing a first joint half-day session on the theme of “NEURO-SYMBOLIC AI” at the European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU 2023).


Date: September 19, 2023

Online access: a Zoom link will be emailed to registered participants

Venue: Salle des Colloques, “La Maison de la Recherche” building, Université d’Artois, Rue Maurice Schuman, 62000 Arras


Program

  • 9:00: Opening by Fatiha Saïs (representing AFIA) and Zied Bouraoui (representing GDR RADIA).
  • 9:15: « Aligning embeddings with symbolic knowledge: towards a tight integration of learning and reasoning » by Steven Schockaert (Cardiff University).
    Abstract
    « Most approaches to neurosymbolic AI rely on a relatively loose coupling between learning and reasoning. To enable a tighter integration between these components, we need some kind of alignment between vector space representations and symbolic knowledge. In this talk, I will discuss two strategies that can be pursued towards this end. The first strategy builds on the idea that predicates can be represented as convex regions in some vector space. Symbolic knowledge, e.g. in the form of existential rules, can then be encoded in terms of constraints on the spatial arrangement of these regions. The second strategy corresponds to the idea that vectors represent epistemic states. Symbolic reasoning can then be carried out by manipulating these vectors in a well-defined way. I will provide an overview of theoretical results about the principles underpinning these strategies, as well as their limitations. »
  • 10:15: Break
  • 10:30: « Interpretable Neural-Symbolic Concept Reasoning » by Giuseppe Marra (KU Leuven).
    Abstract
    « Deep learning methods are highly accurate, yet their opaque decision process prevents them from earning full human trust. Concept-based models aim to address this issue by learning tasks based on a set of human-understandable concepts. However, state-of-the-art concept-based models rely on high-dimensional concept embedding representations which lack a clear semantic meaning, thus questioning the interpretability of their decision process. To overcome this limitation, we propose the Deep Concept Reasoner (DCR), the first interpretable concept-based model that builds upon concept embeddings. In DCR, neural networks do not make task predictions directly, but they build syntactic rule structures using concept embeddings. DCR then executes these rules on meaningful concept truth degrees to provide a final interpretable and semantically consistent prediction in a differentiable manner. Our experiments show that DCR: (i) improves up to +25% w.r.t. state-of-the-art interpretable concept-based models on challenging benchmarks, (ii) discovers meaningful logic rules matching known ground truths even in the absence of concept supervision during training, and (iii) facilitates the generation of counterfactual examples, providing the learnt rules as guidance. »
  • 11:30: « Integrating Combinatorial Solvers and Neural Models » by Pasquale Minervini (University of Edinburgh).
    Abstract
    « Neural models – including language models such as ChatGPT – can exhibit remarkable abilities; paradoxically, they also struggle with algorithmic tasks where much simpler models excel. To solve these issues, we propose Implicit Maximum Likelihood Estimation (IMLE), a framework for end-to-end learning of models combining algorithmic combinatorial solvers and differentiable neural components, which allows us to incorporate planning and reasoning algorithms in neural architectures by just adding a simple decorator [1, 2].
    [1] Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions, NeurIPS 2021;
    [2] Adaptive Perturbation-Based Gradient Estimation for Discrete Latent Variable Models, AAAI 2023. »
  • 12:30: Closing.
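As a toy illustration of the first strategy mentioned in Steven Schockaert’s abstract (predicates as convex regions in a vector space, with rules encoded as constraints on the spatial arrangement of those regions), the sketch below represents predicates as axis-aligned boxes and checks a rule p(x) → q(x) as region containment. All names, values, and the choice of boxes as the convex regions are illustrative assumptions, not details taken from the talk.

```python
import numpy as np

class Box:
    """A predicate represented as an axis-aligned convex region (a box)."""
    def __init__(self, low, high):
        self.low = np.asarray(low, dtype=float)
        self.high = np.asarray(high, dtype=float)

    def contains_point(self, x):
        # An entity embedding satisfies the predicate if it lies in the region.
        x = np.asarray(x, dtype=float)
        return bool(np.all(self.low <= x) and np.all(x <= self.high))

    def contains_box(self, other):
        # The rule "p(x) -> q(x)" holds iff the region for p lies
        # entirely inside the region for q.
        return bool(np.all(self.low <= other.low) and np.all(other.high <= self.high))

# Hypothetical predicates in a 2-dimensional embedding space.
bird = Box([0.0, 0.0], [4.0, 4.0])
sparrow = Box([1.0, 1.0], [2.0, 2.0])

# Rule: sparrow(x) -> bird(x), encoded as spatial containment.
print(bird.contains_box(sparrow))      # True: the rule is satisfied
print(sparrow.contains_box(bird))      # False: the converse rule is not

# Any point satisfying "sparrow" then automatically satisfies "bird".
tweety = [1.5, 1.2]
print(sparrow.contains_point(tweety))  # True
print(bird.contains_point(tweety))     # True
```

In a learned system, the box coordinates would be trainable parameters and the containment conditions would become soft constraints in the loss, so that symbolic rules shape the geometry of the embedding space.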

Organization

This event is organized by Zied Bouraoui (CRIL, Université d’Artois), Pierre Monnin (I3S, Université Côte d’Azur), and Fatiha Saïs (LISN, Université Paris-Saclay).


Registration

Participation is free, but registration (via this link) is mandatory in order to receive the Zoom link for the half-day session.