
Saraswati (Hindu goddess of knowledge, music, art, wisdom, and learning) is an associated team between Nicolas P. Rougier (PI) (Inria Bordeaux Sud-Ouest, Talence, France), Raju Surampudi Bapi (PI) (Cognitive Science Lab, IIIT Hyderabad, India), Thomas Boraud (Institute of Neurodegenerative Diseases, Bordeaux, France) and Srinivasa Chakravarthy V. (Computational Neuroscience Lab, IIT Madras, Chennai, India). The objective over the three years is to design a generic machine learning architecture that mixes Hebbian learning and reinforcement learning, and to compare this new architecture with more classical approaches (supervised learning). Building on experimental evidence from the French side (newts, rodents and non-human primates) as well as behavioral investigations in humans (IIIT-H), the two teams will explore computational models that can account for behavior and will also confront their respective hypotheses.
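
One common way to read "mixing Hebbian learning and reinforcement learning" is a three-factor (reward-modulated) Hebbian rule, where the usual pre-times-post correlation term is gated by a reward prediction error. The sketch below is only a minimal illustration of that idea under toy assumptions (the task, network size and parameter values are hypothetical placeholders); it is not the architecture developed by the team.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_outputs = 10, 4
W = rng.normal(scale=0.1, size=(n_outputs, n_inputs))  # synaptic weights
eta = 0.05             # learning rate (hypothetical value)
reward_baseline = 0.0  # running estimate of expected reward

for trial in range(1000):
    x = rng.random(n_inputs)                        # presynaptic activity (toy stimulus)
    noise = rng.normal(scale=0.1, size=n_outputs)   # exploration noise
    y = W @ x + noise                               # postsynaptic activity
    action = int(np.argmax(y))                      # pick the most active output unit

    # Toy task: the correct action is the index of the largest of the first four inputs.
    reward = 1.0 if action == int(np.argmax(x[:n_outputs])) else 0.0

    # Three-factor rule: Hebbian term (post x pre) gated by a reward prediction error.
    rpe = reward - reward_baseline
    W += eta * rpe * np.outer(y, x)
    reward_baseline += 0.01 * rpe                   # slowly track expected reward
```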

Context

Skill learning, or skill acquisition, is the learning of a sequence of actions. A skill is learned or improved when it is executed multiple times; examples include practising the long jump or riding a bicycle. The stages of motor learning and the exact neural substrates of skill acquisition remain hotly debated topics, with several rival theories to choose from. From the early theories [Fitts and Posner 1967] to more recent proposals [Graybiel 2008; Verwey, Shea, and Wright 2014], two main stages are emphasized – a slow, deliberate, initial cognitive stage and a fast, automatic, late motor stage. One theory proposes that the basal ganglia are active during the early stages of motor learning, whereas the motor cortex takes over as the subject acquires the skill [Jueptner, Stephan, Frith, Brooks, Frackowiak, and Passingham 1997; Hikosaka, Nakahara, Rand, Sakai, Lu, Nakamura, Miyachi, and Doya 1999; Bapi, Miyapuram, Graydon, and Doya 2006]. In the experimental paradigms emphasizing goal-directed action selection and habit/routine acquisition, it is suggested that there is a slow and incremental transfer from the action-outcome (A-O) to the stimulus-response (S-R) system, such that after extensive training the S-R system takes control of behavior [Packard and Knowlton 2002; Seger and Spiering 2011]. Across all these paradigms, however, very little is known about the exact mechanism underlying such a transfer between stages, and there are many competing hypotheses about it. One difficult question that immediately arises is when and how the brain switches from a flexible action selection system to a more static (habitual) one; a schematic of this arbitration idea is sketched below.
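
A minimal way to express the A-O to S-R handover computationally is to let a goal-directed controller and a habitual controller both propose action values and blend them with a weight that drifts toward the habitual system as its predictions become reliable. The sketch below is only a generic illustration of this arbitration idiom with made-up quantities; it is not a model proposed or endorsed by the project.

```python
import numpy as np

def arbitrate(q_goal_directed, q_habitual, habit_strength):
    """Blend action values from the goal-directed (A-O) and habitual (S-R) controllers.

    habit_strength in [0, 1]: 0 = fully goal-directed, 1 = fully habitual.
    """
    return (1.0 - habit_strength) * q_goal_directed + habit_strength * q_habitual

# Toy dynamics of the handover: the habitual system gains control as its
# prediction error shrinks with practice (all values are illustrative).
habit_strength = 0.0
for trial in range(200):
    sr_prediction_error = np.exp(-trial / 50.0)         # stand-in for the S-R system's (un)reliability
    target = 1.0 - sr_prediction_error                   # a reliable habit attracts more control
    habit_strength += 0.05 * (target - habit_strength)   # slow, incremental transfer

    q_ao = np.array([0.2, 0.8, 0.1])                      # dummy goal-directed action values
    q_sr = np.array([0.1, 0.9, 0.0])                      # dummy habitual action values
    action = int(np.argmax(arbitrate(q_ao, q_sr, habit_strength)))
```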

In the domain of Artificial Intelligence, agents can be made to learn complex sequential behaviours autonomously using the framework of Reinforcement Learning (RL). In recent times, RL combined with deep neural networks has been applied to game playing, autonomous vehicle navigation and other robotic applications (for example, [Mnih, Kavukcuoglu, Silver, Rusu, Veness, Bellemare, Graves, Riedmiller, Fidjeland, Ostrovski, Petersen, Beattie, Sadik, Antonoglou, King, Kumaran, Wierstra, Legg, and Hassabis 2015; Silver, Huang, Maddison, Guez, Sifre, van den Driessche, Schrittwieser, Antonoglou, Panneershelvam, Lanctot, Dieleman, Grewe, Nham, Kalchbrenner, Sutskever, Lillicrap, Leach, Kavukcuoglu, Graepel, and Hassabis 2016]). One serious drawback of these RL algorithms is that they need an extensive amount of self-play experience to reach reasonable performance, which may not be feasible when training examples must be derived from physical systems. The crux of the problem is to devise schemes that combine pure reward-based learning with learning based on a model of the environment, and that can switch between the two depending on the learning context [Pong, Gu, Dalal, and Levine 2018]. Here again we face the same problem of flexibly combining two systems, just as outlined above for biological agents. There are thus rich possibilities for transferring solutions between cognitive neuroscience and machine learning, in both directions, in the context of sequential skill learning.
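
One classical way to combine pure reward-based (model-free) updates with a learned model of the environment is Dyna-style planning, where each real transition also trains a model that is then replayed for extra value updates. The sketch below is a generic tabular Dyna-Q loop shown only to make the trade-off concrete; the environment interface (reset, step, actions) and the parameter values are assumptions, and this is not the scheme proposed in [Pong, Gu, Dalal, and Levine 2018].

```python
import random
from collections import defaultdict

def dyna_q(env, episodes=200, planning_steps=10,
           alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Dyna-Q: model-free Q-learning plus replay from a learned model.

    `env` is assumed to expose reset() -> state, step(action) -> (next_state,
    reward, done), and a list of discrete actions `env.actions`.
    """
    Q = defaultdict(float)   # state-action values
    model = {}               # (s, a) -> (reward, next_state, done), learned from experience

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection (the pure reward-based part).
            if random.random() < epsilon:
                a = random.choice(env.actions)
            else:
                a = max(env.actions, key=lambda a_: Q[(s, a_)])

            s2, r, done = env.step(a)
            target = r + (0.0 if done else gamma * max(Q[(s2, a_)] for a_ in env.actions))
            Q[(s, a)] += alpha * (target - Q[(s, a)])

            # Model learning: remember the observed transition.
            model[(s, a)] = (r, s2, done)

            # Planning: extra updates replayed from the learned model,
            # which is what reduces the amount of real experience needed.
            for _ in range(planning_steps):
                (ps, pa), (pr, ps2, pdone) = random.choice(list(model.items()))
                ptarget = pr + (0.0 if pdone else gamma * max(Q[(ps2, a_)] for a_ in env.actions))
                Q[(ps, pa)] += alpha * (ptarget - Q[(ps, pa)])

            s = s2
    return Q
```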

News

References

  1. Fitts, P.M. and Posner, M.I. 1967. Human Performance. Belmont, Calif: Brooks/Cole Pub. Co.
  2. Graybiel, A.M. 2008. Habits, Rituals, and the Evaluative Brain. Annual Review of Neuroscience 31, 1, 359–387. DOI URL
  3. Verwey, W.B., Shea, C.H., and Wright, D.L. 2014. A cognitive framework for explaining serial processing and sequence execution strategies. Psychonomic Bulletin & Review 22, 1, 54–77. DOI URL
  4. Jueptner, M., Stephan, K.M., Frith, C.D., Brooks, D.J., Frackowiak, R.S.J., and Passingham, R.E. 1997. Anatomy of Motor Learning. I. Frontal Cortex and Attention to Action. Journal of Neurophysiology 77, 3, 1313–1324. DOI URL
  5. Hikosaka, O., Nakahara, H., Rand, M.K., Sakai, K., Lu, X., Nakamura, K., Miyachi, S., and Doya, K. 1999. Parallel neural networks for learning sequential procedures. Trends in Neurosciences 22, 10, 464–471. DOI URL
  6. Bapi, R.S., Miyapuram, K.P., Graydon, F.X., and Doya, K. 2006. fMRI investigation of cortical and subcortical networks in the learning of abstract and effector-specific representations of motor sequences. NeuroImage 32, 2, 714–727. DOI URL
  7. Packard, M.G. and Knowlton, B.J. 2002. Learning and Memory Functions of the Basal Ganglia. Annual Review of Neuroscience 25, 1, 563–593. DOI URL
  8. Seger, C.A. and Spiering, B.J. 2011. A Critical Review of Habit Learning and the Basal Ganglia. Frontiers in Systems Neuroscience 5. DOI URL
  9. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. 2015. Human-level control through deep reinforcement learning. Nature 518, 7540, 529–533. DOI URL
  10. Silver, D., Huang, A., Maddison, C.J., Guez, A., Sifre, L., Driessche, G. van den, Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., and Hassabis, D. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529, 7587, 484–489. DOI URL
  11. Pong, V., Gu, S., Dalal, M., and Levine, S. 2018. Temporal Difference Models: Model-Free Deep RL for Model-Based Control. CoRR abs/1802.09081. URL

Last updated on 25 February 2024 - Made with Jekyll and Jekyll-Scholar