Virtualization and allostatic control for aware virtual agents

OPEN CALL
INTERDISCIPLINARY WORKSHOP

 


14:00 CET, 23 April 2024 | Duration: 2 hours - Online

 

DESCRIPTION

In this workshop, participants will be introduced to the role of internal needs, such as hunger, thirst, and thermoregulation, in shaping behaviour, cognition, and consciousness. Through interactive coding exercises, participants will develop a biologically constrained model of the core behaviour system of the brainstem and use it to control an artificial agent that orchestrates self-regulatory behaviours. A second coding session will then introduce deep learning methods showing how compressed representations of the environment can be built to aid decision-making and self-regulation.
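To give a flavour of the reactive-control session, the sketch below illustrates the basic idea of drive competition: each internal need has a setpoint, its drive grows with the deviation from that setpoint, and a winner-take-all competition decides which need the agent acts on. All names and numbers here are our own illustrative choices, not the workshop's code.

```python
import numpy as np

# Illustrative setpoints for three internal needs (values are arbitrary).
SETPOINTS = {"energy": 1.0, "hydration": 1.0, "temperature": 0.5}

def drives(state):
    """Drive = absolute deviation of each need from its setpoint."""
    return {need: abs(state[need] - sp) for need, sp in SETPOINTS.items()}

def select_behaviour(state):
    """Winner-take-all: pursue the need with the largest drive."""
    d = drives(state)
    return max(d, key=d.get)

state = {"energy": 0.2, "hydration": 0.9, "temperature": 0.5}
print(select_behaviour(state))  # energy: it is furthest from its setpoint
```

In the workshop, this kind of competition is embedded in a biologically constrained brainstem model; the sketch only shows the orchestration principle.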

 

SCHEDULE

  1. Introductory talk: Neural basis of self and self-regulation
  2. Reactive control: Allostatic orchestration of motivated behaviour
  3. Introductory talk: Compressed representations in the mammalian hippocampus and neural networks
  4. Adaptive control: Convolutional autoencoders for data compression
  5. Putting it all together in Awareness Architectures
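The adaptive-control session uses convolutional autoencoders; as a self-contained stand-in, the sketch below builds the optimal linear autoencoder in closed form via SVD (linear autoencoders recover PCA), compressing 4-D toy observations into a 1-D latent code. The data and sizes are our own toy choices, not workshop material.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 4-D observations that actually live on a 1-D manifold
scale = rng.normal(size=(200, 1))
X = scale @ np.array([[1.0, 2.0, -1.0, 0.5]])

# The optimal linear autoencoder coincides with PCA, so the encoder/decoder
# weights can be read off the SVD instead of trained by gradient descent.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
W_enc = Vt[:1].T   # encoder: 4-D observation -> 1-D latent code
W_dec = Vt[:1]     # decoder: 1-D latent code -> 4-D reconstruction

Z = X @ W_enc          # compressed representations (200 x 1)
X_hat = Z @ W_dec      # reconstructions (200 x 4)
mse = np.mean((X_hat - X) ** 2)
print(f"reconstruction MSE: {mse:.2e}")  # near zero: one latent unit suffices
```

The convolutional case in the session adds nonlinearity and weight sharing, but the goal is the same: a low-dimensional code that preserves what matters for decision-making.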

 

SCIENTIFIC COORDINATION

Scientific team of CAVAA (EIC Awareness Inside project: Counterfactual Assessment and Valuation for Awareness Architecture) coordinated by Prof. Paul Verschure, Donders Centre for Neuroscience – Neurophysics, Radboud University.

Co-funded by the European Commission. Further info on the CORDIS EU research platform.

 

LECTURERS

Prof. Paul Verschure (Donders Centre for Neuroscience – Neurophysics, Radboud University)
Oscar Guerrero Rosado (Ph.D. student in Cognitive Robotics, Donders Centre for Neuroscience – Neurophysics, Radboud University)
Adrian F. Amil (Ph.D. student in NeuroAI, Donders Centre for Neuroscience – Neurophysics, Radboud University)

 

PARTICIPANTS

Researchers (postgraduate); undergraduate students (3rd year or higher) with a background relevant to the topics: Cognitive Sciences, Computational Neurosciences, Cognitive Robotics, Neuro AI, or similar.

 

REQUIREMENTS

  • University degree (or enrollment) in an affiliated field
  • Basic level of Python programming
  • Access to Google Colab
  • Intermediate proficiency in the English language
  • Enthusiasm and interest in the topic and the desire to actively engage during the session

 

REGISTRATION
Fill out the registration form and upload your CV and a short motivation letter HERE before 30.03.2024. Participation is free of charge.

Based on their CVs and motivation letters, selected participants will be notified by email by 07.04.2024.

 

RECOMMENDED LITERATURE

  • Verschure, P. F. M. J. (2016). Synthetic consciousness: the distributed adaptive control perspective. Philosophical Transactions of the Royal Society B: Biological Sciences, 371(1701), 20150448.
  • Guerrero Rosado, O., Amil, A. F., Freire, I. T., & Verschure, P. F. M. J. (2022). Drive competition underlies effective allostatic orchestration. Frontiers in Robotics and AI, 9, 1052998.
  • Santos-Pata, D., Amil, A. F., Raikov, I. G., Rennó-Costa, C., Mura, A., Soltesz, I., & Verschure, P. F. M. J. (2021). Epistemic autonomy: self-supervised learning in the mammalian hippocampus. Trends in Cognitive Sciences, 25(7), 582-595.

 

For any inquiries, please contact Oscar Guerrero Rosado at oscar.guerrerorosado@donders.ru.nl

Efficient Episodic Control and Virtualization in Reinforcement Learning Agents

OPEN CALL
INTERDISCIPLINARY WORKSHOP

 


16:15 CET, 23 April 2024 | Duration: 2 hours - Online

 

DESCRIPTION

In this course, participants will delve into the challenge of sample efficiency in Reinforcement Learning (RL). We begin by introducing the problem and then explore solutions grounded in the organisation and function of the mammalian hippocampus and the overall architecture in which it is embedded, looking specifically at state-of-the-art methods such as Sequential Episodic Control and prioritized replay. The curriculum merges theoretical insights with hands-on coding exercises, emphasising the importance of long-term memory and virtualisation mechanisms in decision-making. Learners will create RL agents that incorporate advanced concepts such as epistemic reward valuation and episodic control. This course provides a distinctive blend of brain and cognitive science and artificial intelligence, equipping participants with the tools to address the sample efficiency problem in RL effectively.
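As a minimal sketch of the episodic-control idea (our own illustration, not the course codebase): store, for each state–action pair, the best return observed so far, and act greedily on those stored values. Because a single good episode is immediately reusable, this is one way episodic memory improves sample efficiency.

```python
from collections import defaultdict

class EpisodicControl:
    """Minimal tabular episodic control: remember, for each (state, action),
    the highest discounted return seen so far, and act greedily on it."""

    def __init__(self, actions, gamma=0.99):
        self.actions = actions
        self.gamma = gamma
        self.table = defaultdict(float)  # (state, action) -> best return seen

    def update(self, episode):
        """episode: list of (state, action, reward); propagate returns backwards."""
        G = 0.0
        for state, action, reward in reversed(episode):
            G = reward + self.gamma * G
            key = (state, action)
            self.table[key] = max(self.table[key], G)  # keep the best outcome

    def act(self, state):
        """Greedy action under the stored episodic values."""
        return max(self.actions, key=lambda a: self.table[(state, a)])

ec = EpisodicControl(actions=["left", "right"], gamma=1.0)
ec.update([("s0", "right", 0.0), ("s1", "right", 1.0)])
ec.update([("s0", "left", 0.0), ("s1", "left", 0.0)])
print(ec.act("s0"))  # right: it led to the higher return
```

Sequential Episodic Control, covered in the course, extends this kind of memory with the sequential structure of episodes; the sketch shows only the tabular core.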

 

SCHEDULE

  1. Introductory talk: The sample efficiency problem in Reinforcement Learning
  2. Contextual control: Sequential Episodic Control for sample-efficient decision-making
  3. Introductory talk: Hippocampal replay methods in Reinforcement Learning
  4. Virtual control: Model-based Reinforcement Learning with hippocampal replay and epistemic reward valuation
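The replay sessions build on prioritized replay; as an illustrative sketch (our own names and numbers), the buffer below replays transitions with probability proportional to their absolute TD error, so surprising experiences are revisited more often.

```python
import random

class PrioritizedReplay:
    """Minimal proportional prioritized replay: transitions with larger
    absolute TD error are sampled for replay more often."""

    def __init__(self, eps=1e-3):
        self.buffer = []      # stored transitions
        self.priorities = []  # one priority per transition
        self.eps = eps        # keeps every priority strictly positive

    def add(self, transition, td_error):
        self.buffer.append(transition)
        self.priorities.append(abs(td_error) + self.eps)

    def sample(self, k):
        """Sample k transitions with probability proportional to priority."""
        return random.choices(self.buffer, weights=self.priorities, k=k)

random.seed(0)
buf = PrioritizedReplay()
buf.add(("s0", "a", 0.0, "s1"), td_error=0.01)  # nearly learned
buf.add(("s1", "b", 1.0, "s2"), td_error=2.0)   # surprising: replay often
batch = buf.sample(1000)
frac = sum(t[1] == "b" for t in batch) / len(batch)
print(f"fraction drawn from the surprising transition: {frac:.2f}")
```

The hippocampally inspired methods in the course go further (e.g. model-based replay of imagined trajectories), but proportional prioritization is the common starting point.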

 

SCIENTIFIC COORDINATION

Scientific team of CAVAA (EIC Awareness Inside project: Counterfactual Assessment and Valuation for Awareness Architecture) coordinated by Prof. Paul Verschure, Donders Centre for Neuroscience – Neurophysics, Radboud University.

Co-funded by the European Commission. Further info on the CORDIS EU research platform.

 

LECTURERS

Prof. Paul Verschure (Donders Centre for Neuroscience – Neurophysics, Radboud University)
Ismael T. Freire (Ph.D. candidate in NeuroAI, Donders Centre for Neuroscience – Neurophysics, Radboud University)
Erik Németh (Ph.D. student in NeuroAI, Institut des Systèmes Intelligents et de Robotique – Sorbonne Université)

 

PARTICIPANT PROFILE

Researchers (postgraduate); undergraduate students (3rd year or higher) with a background relevant to the topics: Cognitive Sciences, Computational Neurosciences, Cognitive Robotics, Neuro AI, or similar.

 

REQUIREMENTS

  • University degree (or enrollment) in an affiliated field
  • Basic level of Python programming
  • Access to Google Colab
  • Intermediate proficiency in the English language
  • Interest in the topic and the desire to actively engage during the session

 

REGISTRATION
Fill out the registration form and upload your CV and a short motivation letter HERE before 30.03.2024. Participation is free of charge.

Based on their CVs and motivation letters, selected participants will be notified by email by 07.04.2024.

 

RECOMMENDED LITERATURE

  • Verschure, P. F. M. J. (2016). Synthetic consciousness: the distributed adaptive control perspective. Philosophical Transactions of the Royal Society B: Biological Sciences, 371(1701), 20150448.
  • Freire, I. T., Amil, A. F., & Verschure, P. F. M. J. (2021). Sequential episodic control. arXiv preprint arXiv:2112.14734.
  • Massi, E., Barthélemy, J., Mailly, J., Dromnelle, R., Canitrot, J., Poniatowski, E., ... & Khamassi, M. (2022). Model-Based and Model-Free Replay Mechanisms for Reinforcement Learning in Neurorobotics. Frontiers in Neurorobotics, 16, 864380.

 

For any inquiries, please contact Ismael T. Freire at ismael.freire@donders.ru.nl