
I’m a researcher in computational cognitive neuroscience at Inria and the Institute of Neurodegenerative Diseases (Bordeaux, France). I’m investigating decision making, learning and cognition using computational models of the brain and distributed, numerical and adaptive computing, a.k.a. artificial neural networks and machine learning. My research aims to inform philosophy with regard to the mind-body problem, to help medicine account for the normal and pathological functioning of the brain, and to offer the digital sciences alternative computing paradigms.

Besides neuroscience and philosophy, I’m also interested in open and reproducible science (I’ve co-founded ReScience C with Konrad Hinsen and wrote the article Transforming Code into Scientific Contribution), scientific visualization (I’ve created glumpy, co-created VisPy and authored the popular Ten Simple Rules for Better Figures article), science outreach (e.g. The Conversation) and computer graphics (especially digital typography).

Keywords: Neuroscience, Cognition, Enaction, Behavior, Embodiment, Learning, Model, Decision Making, Distributed Computing, Neural Networks, Machine Learning, Artificial Intelligence, Scientific Computation, Scientific Visualization, Scientific Modelling, Open Science, Reproducible Science.

Selected articles

A Natural History of Skills (Progress in Neurobiology, 2018) In this review, we propose to re-evaluate the function of the basal ganglia-cortical network in light of the current experimental evidence concerning the anatomy and physiology of the basal ganglia-cortical circuits in vertebrates. We briefly review the current theories and show that they could be encompassed in a broader framework of skill learning and performance.

A Computational Model of Dual Competition (eNeuro, 2018) We propose a model that includes interactions between the cortex, the basal ganglia and the thalamus based on a dual competition that endows the model with two regimes. One is driven by reinforcement learning and the other by Hebbian learning. The final decision is made according to a combination of these two mechanisms with a gradual transfer from the former to the latter.
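As a purely illustrative sketch (not the published model), the dual competition can be pictured as two value estimates per option, one updated by reinforcement learning and one by Hebbian learning, combined by a weight that gradually transfers the decision from the former to the latter. The learning rates, reward probabilities and transfer schedule below are assumptions:

    import numpy as np

    # Illustrative sketch only: two value estimates per option, one driven by
    # reinforcement learning (reward prediction error), the other by Hebbian
    # co-activation, mixed by a weight that slowly shifts the decision
    # from the former to the latter.
    n_options = 2
    Q = np.zeros(n_options)             # reinforcement-driven values
    H = np.zeros(n_options)             # Hebbian-driven values
    alpha_rl, alpha_hebb = 0.10, 0.02   # assumed learning rates
    rng = np.random.default_rng(0)

    for trial in range(200):
        w = min(1.0, trial / 200)        # assumed gradual transfer schedule
        value = (1 - w) * Q + w * H      # dual competition: combine both mechanisms
        choice = int(np.argmax(value + rng.normal(0, 0.1, n_options)))
        reward = float(rng.random() < (0.75 if choice == 0 else 0.25))
        Q[choice] += alpha_rl * (reward - Q[choice])   # reinforcement update
        H[choice] += alpha_hebb * value[choice]        # Hebbian-like update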

Sustainable Computational Science (PeerJ, 2017) Computer science offers a large set of tools for prototyping, writing, running, testing, validating, sharing and reproducing results; however, computational science lags behind. In the best case, authors may provide their source code as a compressed archive and they may feel confident their research is reproducible. But this is not exactly true.

A Neural Field Model of the Somatosensory Cortex (PLOS One, 2012) We investigated the formation and maintenance of ordered topographic maps in the primary somatosensory cortex as well as the reorganization of representations after sensory deprivation or cortical lesion. We hypothesized that feed-forward thalamocortical connections are an adequate site of plasticity, while cortico-cortical connections drive a competitive mechanism that is critical for learning.

Rules Without Symbols (PNAS, 2005) Human cognitive control is uniquely flexible and has been shown to depend on prefrontal cortex (PFC). But exactly how the biological mechanisms of the PFC support flexible cognitive control remains a profound mystery. We show how this can occur when a set of PFC-specific neural mechanisms interact with breadth of experience to self-organize abstract rule-like PFC representations that support flexible generalization in novel tasks.

→ See my résumé for a full bibliography.

Selected books

Scientific Visualization — Python & Matplotlib (2020) An open-access book on scientific visualization using Python and Matplotlib, to be released during summer 2020 (hopefully). Sources will be available on GitHub, the PDF will be open access and the printed book will cost $50. If you want to support the book, you can tip a few euros. If you want access to the private repository during the writing, you can tip a few more euros (and let me know your GitHub handle). If you’re a company, you can also sponsor me. Note that in any case, the GitHub repository will be made public at the end of the writing and the PDF will be available for free.

Towards Reproducible Research (2019) This book takes a current perspective on a number of potentially dangerous situations and practices, to exemplify and highlight the symptoms of non-reproducibility in research. Each time, it provides efficient solutions ranging from good practices that are easy to implement immediately to more technical tools, all of which are free and have been put to the test by the authors themselves.

From Python to Numpy (2017) There are a lot of techniques that you don’t find in books; such techniques are mostly learned through experience. The goal of this book is precisely to explain some of these techniques and to provide an opportunity to gain this experience in the process.
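A classic example in the spirit of that book (my own minimal sketch, not an excerpt from it) is the random walk, written first as an explicit Python loop and then as a vectorized NumPy expression:

    import random
    import numpy as np

    def walk_python(n):
        # Random walk with an explicit Python loop.
        position, path = 0, []
        for _ in range(n):
            position += 2 * random.randint(0, 1) - 1   # step of +1 or -1
            path.append(position)
        return path

    def walk_numpy(n):
        # Same walk, vectorized: draw all steps at once, then cumulative sum.
        steps = 2 * np.random.randint(0, 2, size=n) - 1
        return np.cumsum(steps)

    print(walk_python(10))
    print(walk_numpy(10))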

→ See other books on my résumé.

Selected outreach

Why your cat is lousy at chess yet way smarter than even the most advanced AI If you share your home with a dog or a cat, look at it carefully and you will get a good overview of everything we don’t know how to do in artificial intelligence. “But my cat does nothing all day except sleep, eat and wash herself,” you may think. And yet your cat knows how to walk, run, jump (and land on her feet), hear, see, watch, learn, play, hide, be happy, be sad, be afraid, dream, hunt, eat, fight, flee, reproduce, educate her kittens – and the list is still very long…

Silicon soul: The vain dream of electronic immortality If we want to “upload our brain” without going insane, it’s imperative for the uploaded brain to be connected to an artificial body that can perceive the outside world and act on it. But what kind of artificial body do we have today? Robotic bodies where retinas are replaced by cameras and muscles by motors?

Why you’ll never be able to upload your brain If we consider the whole central nervous system, we are facing an average of 86 billion neurons, and each of these neurons contacts an average of 10,000 other neurons, representing a grand total of approximately 860 trillion connections. This is really huge…
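For the record, the back-of-the-envelope arithmetic behind these figures is simply:

    # 86 billion neurons, each contacting ~10,000 others on average
    neurons = 86_000_000_000
    synapses_per_neuron = 10_000
    total = neurons * synapses_per_neuron
    print(f"{total:.2e}")   # 8.60e+14, i.e. roughly 860 trillion connections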

News
