Mnemosyne
Inria
Brainnetome

Associate Team "Meng Po" (Memory ENGineering for Problem sOlving)

(Meng Po is the goddess of forgetfulness in Chinese mythology, who serves soup on the Bridge of Forgetfulness.
This soup wipes the memory of the person so they can reincarnate into the next life without the burdens of the previous life.)

Principal Investigators:

Frederic Alexandre

Director of Research, Inria
Mnemosyne Team
Inria Bordeaux Sud-Ouest, Bordeaux, France
Frederic.Alexandre@inria.fr

Shan Yu

Professor
Brainnetome Center and National Laboratory of Pattern Recognition
Chinese Academy of Sciences, Beijing, P. R. China
shan.yu@nlpr.ia.ac.cn

Scientific Positioning:

Artificial Intelligence (AI) has been built on an opposition between symbolic problem solving, to be addressed by explicit models of planning, and numerical learning, to be obtained with neural networks (Dreyfus & Dreyfus, 1991; Sun & Alexandre, 2013). But it is clear that, in ecological conditions, our cognition has to mix both capabilities, and this is nicely carried out by our brains. Our behavior is sometimes described as a simple dichotomy between Goal-Directed approaches (explicit deliberation and knowledge manipulation for planning) and habitual ones (automatic Stimulus-Response associations). Recent results rather report more general strategies, including hybrid combinations of both (Dolan & Dayan, 2013). Importantly, they highlight key mechanisms, which correspond to explicitly detecting contexts in which the strategy should be modified and to adapting simple Stimulus-Response associations to these contexts.

The research expertise of the Inria-CAS joint team will provide unique leverage to address this important issue. On the Chinese side, connectionist models such as deep neural networks are adapted to avoid so-called catastrophic forgetting and to facilitate context-based information processing (Zeng et al., 2019). This is done by a clever mechanism of weight modification that protects previously learned associations, and by a module learning to detect and reuse the corresponding contexts in order to flexibly alter the Stimulus-Response associations learned by the neural networks. On the French side, models in computational neuroscience explore the capacity of neuronal structures like the hippocampus to categorize contexts (Kassab & Alexandre, 2018) and investigate the role of the prefrontal cortex (Hinaut & Dominey, 2011), known to modulate behavioral activity depending on the context. We propose here to combine our expertise to develop a more general framework for adapting neural networks to problem solving, thus augmenting their usability in AI and improving the understanding of brain reasoning mechanisms.
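As an illustration of the kind of weight-protection mechanism at stake, the sketch below projects the updates of a single linear layer onto the subspace orthogonal to previously learned inputs, in the spirit of the orthogonal weights modification of Zeng et al. (2019). The single-layer delta rule, the dimensions and the learning rate are illustrative assumptions, not the published implementation.

    import numpy as np

    # Minimal sketch: protect previously learned associations by projecting weight
    # updates orthogonally to inputs already learned (in the spirit of Zeng et al., 2019).
    # All dimensions and hyper-parameters are illustrative assumptions.

    rng = np.random.default_rng(0)
    n_in, n_out, lr, alpha = 20, 5, 0.01, 1e-3

    W = rng.normal(scale=0.1, size=(n_out, n_in))   # a single linear layer
    P = np.eye(n_in)                                # projector orthogonal to past inputs

    def learn_task(X, Y, epochs=200):
        """Learn associations Y ~ W @ X with the delta rule, updates projected by P."""
        global W, P
        for _ in range(epochs):
            for x, y in zip(X, Y):
                err = y - W @ x
                dW = np.outer(err, x)               # plain delta-rule update
                W += lr * dW @ P                    # keep only the part orthogonal to old inputs
        for x in X:                                 # fold this task's inputs into the projector
            Px = P @ x
            P -= np.outer(Px, Px) / (alpha + x @ Px)

    # two "tasks": two sets of random stimulus-response associations
    X1, Y1 = rng.normal(size=(10, n_in)), rng.normal(size=(10, n_out))
    X2, Y2 = rng.normal(size=(10, n_in)), rng.normal(size=(10, n_out))

    learn_task(X1, Y1)
    err_before = np.mean((Y1 - X1 @ W.T) ** 2)
    learn_task(X2, Y2)                              # learning task 2 barely disturbs task 1
    err_after = np.mean((Y1 - X1 @ W.T) ** 2)
    print(f"task-1 error before/after task 2: {err_before:.4f} / {err_after:.4f}")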

Current and past activities:

The associate team was accepted at the beginning of 2020. Due to the pandemic, communication between the partners was initially carried out by email. We have organized video meetings for joint presentations.

A postdoctoral fellow, Jianyong Xue, was hired by Inria from December 2020 to September 2022 to foster our activities.

Video meetings are organized regularly on Wednesdays, 9-11am CET / 3-5pm CST, for joint presentations and joint work. As the team had limited activities (and no visits were possible), it was proposed to renew it for 2022-2025.

A French-Chinese workshop was organized on September 14th, 2021.

From the end of 2023, travel has become possible again. A first visit was organized on December 4-8, 2023, with F. Alexandre visiting CASIA in Beijing.

Objectives:

Objective 1: Specifying the general architecture.

General architectures proposing strategies to select, train and adapt elementary Stimulus-Response associations have been put forward in Reinforcement Learning and connectionist modelling (for example the Dyna algorithm; Sutton, 1990) and in Cognitive Science and computational neuroscience (for example the Task-Set model; Domenech and Koechlin, 2015). One important goal will be to analyze the characteristics, biological plausibility and performance of these existing solutions and to specify one architecture suitable for the problem-solving tasks considered here.
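To make the Dyna principle concrete, here is a minimal tabular Dyna-Q sketch on a toy 5-state chain: a model-free, "habitual" Q-update is interleaved with planning steps replayed from a learned world model. The environment and all hyper-parameters are illustrative assumptions, not the architecture to be specified in this objective.

    import random

    # Minimal tabular Dyna-Q sketch (in the spirit of Sutton, 1990).
    random.seed(0)
    N_STATES, GOAL, ACTIONS = 5, 4, (-1, +1)   # walk left/right on a chain; reward at the right end
    alpha, gamma, eps, n_planning = 0.5, 0.95, 0.1, 10

    Q = {(s, a): 1.0 for s in range(N_STATES) for a in ACTIONS}   # optimistic init encourages exploration
    model = {}                                                    # (s, a) -> (reward, next state)

    def step(s, a):
        s2 = min(max(s + a, 0), N_STATES - 1)
        return (1.0 if s2 == GOAL else 0.0), s2

    def greedy(s):
        return max(ACTIONS, key=lambda a: Q[(s, a)])

    def target(r, s2):
        return r if s2 == GOAL else r + gamma * Q[(s2, greedy(s2))]

    for episode in range(50):
        s = 0
        while s != GOAL:
            a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
            r, s2 = step(s, a)
            Q[(s, a)] += alpha * (target(r, s2) - Q[(s, a)])      # direct (model-free) learning
            model[(s, a)] = (r, s2)                               # learn the model from real experience
            for _ in range(n_planning):                           # planning: replay imagined transitions
                (ps, pa), (pr, ps2) = random.choice(list(model.items()))
                Q[(ps, pa)] += alpha * (target(pr, ps2) - Q[(ps, pa)])
            s = s2

    print({s: greedy(s) for s in range(GOAL)})    # expected: every state prefers +1 (move right)

Increasing n_planning trades real experience for simulated experience, which is exactly the kind of arbitration between strategies that the architecture specified in this objective will have to make explicit.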

Objective 2: Implementing computing mechanisms.

These general architectures share some computing mechanisms, whose principles can be studied independently to prepare their implementation. First, several Stimulus-Response associations must co-exist, to avoid catastrophic forgetting. This can be done by learning context-dependent associations (Zeng et al., 2019) or by defining attentional mechanisms for an adapted competition between several associations (O'Reilly et al., 2002). From a biological point of view, the ventral and dorsal parts of the lateral Prefrontal Cortex (Blumenfeld & Ranganath, 2007) have been reported to modify attentional activity in the sensory and motor cortex to that end. Second, the ability to efficiently extract pertinent contextual information from complex, noisy environments, to guide the choice of (or attention towards) the proper Stimulus-Response association, will be considered. In this regard, the joint team will examine the mechanisms used by the hippocampus and explore how to apply related mechanisms to artificial neural networks. Third, the analysis of performance must be exploited to learn and select the right strategies, as has also been investigated in connectionist and bio-inspired solutions, though with less robust solutions for the moment. For all these mechanisms, it will be important to share and compare experiences in both teams in order to propose more efficient solutions.
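The following sketch illustrates the first two mechanisms under simplifying assumptions: a context cue is categorized online by a nearest-prototype rule (a loose analogy with hippocampal context categorization), and each detected context gates its own linear Stimulus-Response module, so that learning a new context does not overwrite the others. Performance monitoring (the third mechanism) is not shown, and the toy cues, the categorizer and its radius are assumptions for illustration only.

    import numpy as np

    # Minimal sketch: online context categorization gating separate S-R modules.
    rng = np.random.default_rng(1)
    n_ctx_dim, n_in, n_out, lr, radius = 6, 8, 3, 0.05, 1.0

    prototypes, modules = [], []            # context prototype i gates S-R module i

    def categorize(c):
        """Return the index of the context prototype for cue c, creating one if none is close."""
        if prototypes:
            d = [np.linalg.norm(c - p) for p in prototypes]
            if min(d) < radius:
                return int(np.argmin(d))
        prototypes.append(c.copy())
        modules.append(np.zeros((n_out, n_in)))
        return len(prototypes) - 1

    def respond(c, x, y=None):
        """Select the module gated by the context cue; train it if a target is given."""
        k = categorize(c)
        if y is not None:
            err = y - modules[k] @ x
            modules[k] += lr * np.outer(err, x)   # only the gated association is updated
        return modules[k] @ x

    # two hidden contexts: same stimuli, different required responses
    ctx_cues = [rng.normal(size=n_ctx_dim) for _ in range(2)]
    W_true = [rng.normal(size=(n_out, n_in)) for _ in range(2)]
    for block in (0, 1, 0, 1):              # contexts alternate in blocks of trials
        for _ in range(1000):
            c = ctx_cues[block] + 0.1 * rng.normal(size=n_ctx_dim)
            x = rng.normal(size=n_in)
            respond(c, x, y=W_true[block] @ x)

    print("detected contexts:", len(prototypes))   # expected: 2
    for k in range(2):
        x = rng.normal(size=n_in)
        err = np.linalg.norm(W_true[k] @ x - respond(ctx_cues[k], x))
        print(f"context {k}: response error {err:.3f}")   # both remain small: no catastrophic forgetting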

Objective 3: Designing a task for evaluation.

In order not to remain at the conceptual level, it will be important to choose tasks that help specify and evaluate the developed models. Being related to problem solving, the tasks must require selecting ways to reach well-specified goals, by learning adapted procedures and ways to organize them toward the goals. This consequently goes beyond classical identification and control and must include such dimensions as conceptualisation and organisation of behavior, thus providing original contributions in the domain of neural networks.
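As a purely hypothetical example of such a task, the sketch below defines a corridor in which the goal (opening a door) can only be reached by organizing two procedures in the right order: first fetch a key, then go to the door. A flat Stimulus-Response mapping over positions alone cannot express the solution, since the same position requires different actions depending on whether the key is held. The layout and reward scheme are illustrative assumptions only.

    # Hypothetical evaluation task: a key-door corridor requiring organized behavior.
    class KeyDoorCorridor:
        """Corridor of `length` cells with a key at one end and a locked door at the other."""

        def __init__(self, length=7, key_pos=0, door_pos=6):
            self.length, self.key_pos, self.door_pos = length, key_pos, door_pos
            self.reset()

        def reset(self):
            self.pos, self.has_key, self.done = self.length // 2, False, False
            return self._obs()

        def _obs(self):
            # observation = (position, whether the key is held): the key defines the context
            return (self.pos, self.has_key)

        def step(self, action):          # action: -1 (left) or +1 (right)
            assert not self.done
            self.pos = min(max(self.pos + action, 0), self.length - 1)
            if self.pos == self.key_pos:
                self.has_key = True                      # sub-goal: pick up the key
            reward = 0.0
            if self.pos == self.door_pos and self.has_key:
                reward, self.done = 1.0, True            # final goal: open the door with the key
            return self._obs(), reward, self.done

    # optimal behavior chains two procedures: go left to the key, then right to the door
    env = KeyDoorCorridor()
    obs = env.reset()
    for a in [-1, -1, -1] + [+1] * 6:
        obs, r, done = env.step(a)
    print("solved:", done, "final reward:", r)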

Objective 4: Sharing and disseminating.

One important objective of the associate team will be to train young researchers in these kinds of neural networks and in their exploitation within an AI framework. Reconciling the problem-solving and learning sides of AI is a major aim of research today and, if successful, the associate team will gain high visibility, allowing for publications with high impact. We also plan to organize specialized workshops in Europe and in China.

Publications:

Jianyong Xue and Frédéric Alexandre. Developmental Modular Reinforcement Learning. In ESANN2022 - 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Bruges, Belgium, October 2022.

Jianyong Xue and Frédéric Alexandre. Multi-task learning with modular reinforcement learning. In SAB 2022 - 16th International Conference on the Simulation of Adaptive Behavior, Cergy-Pontoise, France, September 2022.

References:

Alexandre, F. (2017). A Behavioral Framework for Information Representation in the Brain. In Computational Models of Brain and Behavior (pp. 401–412), Wiley.

Baldassarre, G., Lord, W., Granato, G., & Santucci, V. G. (2019). An Embodied Agent Learning Affordances With Intrinsic Motivations and Solving Extrinsic Tasks With Attention and One-Step Planning. Frontiers in Neurorobotics, 13.

Blumenfeld, R. S., & Ranganath, C. (2007). Prefrontal Cortex and Long-Term Memory Encoding: An Integrative Review of Findings from Neuropsychology and Neuroimaging. Neuroscientist, 13(3), 280–291.

Dolan, R. J., & Dayan, P. (2013). Goals and Habits in the Brain. Neuron, 80(2), 312–325. https://doi.org/10.1016/j.neuron.2013.09.007

Domenech, P., & Koechlin, E. (2015). Executive control and decision-making in the prefrontal cortex. Current Opinion in Behavioral Sciences, 1, 101–106. https://doi.org/10.1016/j.cobeha.2014.10.007

Dreyfus, H. L., & Dreyfus, S. E. (1991). Making a Mind Versus Modelling the Brain: Artificial Intelligence Back at the Branchpoint. In M. Negrotti (Ed.), Understanding the Artificial: On the Future Shape of Artificial Intelligence. Artificial Intelligence and Society. Springer, London.

Drumond, T. F., Viéville, T., & Alexandre, F. (2019). Bio-inspired Analysis of Deep Learning on Not-So-Big Data Using Data-Prototypes. Frontiers in Computational Neuroscience, 12.

Hinaut, X., & Dominey, P. F. (2011). A Three-Layered Model of Primate Prefrontal Cortex Encodes Identity and Abstract Categorical Structure of Behavioral Sequences. Journal of Physiology-Paris.

Hu, G., Huang, X., Jiang, T., & Yu, S. (2019). Multi-Scale Expressions of One Optimal State Regulated by Dopamine in the Prefrontal Cortex. Frontiers in Physiology, 10, 113.

Hu, G., Cui, B., & Yu, S. (2019). Skeleton-based action recognition with synchronous local and non-local spatio-temporal learning and frequency attention. In IEEE International Conference on Multimedia and Expo (ICME), 1216-1221.

Kassab, R., & Alexandre, F. (2018). Pattern separation in the hippocampus: Distinct circuits under different conditions. Brain Structure and Function.

Nallapu, B. T., & Alexandre, F. (2018). Cognitive Architecture and Software Environment for the Design and Experimentation of Survival Behaviors in Artificial Agents. In IJCCI 2018 - 10th International Joint Conference on Computational Intelligence, Seville, Spain.

O’Reilly, R. C., Noelle, D. C., Braver, T. S., & Cohen, J. D. (2002). Prefrontal cortex and dynamic categorization tasks: Representational organization and neuromodulatory control. Cereb Cortex, 12(3), 246–257.

Strock, A., Hinaut, X., & Rougier, N. P. (2019). A Robust Model of Gated Working Memory. BioRxiv, 589564.

Sun, R., & Alexandre, F. (Eds.). (2013). Connectionist-Symbolic Integration: From Unified to Hybrid Approaches. Taylor and Francis.

Sutton, R. S. (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning, Austin, TX. Morgan Kaufmann.

Zeng, G., Chen, Y., Cui, B., & Yu, S. (2019). Continual learning of context-dependent processing in neural networks. Nature Machine Intelligence, 1(8), 364–372.

Zeng, G., Huang, X., Jiang, T., & Yu, S. (2019). Short-term synaptic plasticity expands the operational range of long-term synaptic changes in neural networks. Neural Networks, 118, 140-147.