Multiple T Task with Intra Hippocampal Connectivity Learning and Replay

Short Description

This is a model of Blodgett's concept of latent learning. A multiple T maze is laid out, with the animat starting at one end and food placed at the other. The hypothesis of this experiment is that non-rewarded preexposure to the environment helps the animat learn faster in the subsequent rewarded trials than a non-exposed individual.

XML and Parameters

The XML file is in multiscalemodel/src/edu/usf/ratsim/experiment/xml/multipleTexperiment.xml.

Some important parameters (their roles are illustrated in the sketch after this list):

  • discountFactor: the reinforcement learning discount factor
  • learningRate: the reinforcement learning learning rate
  • wTransitionLR: the connectivity matrix learning rate
  • cantReplay: number of simulated replay events after the animat reaches the reward
  • replayThres: the activity threshold used to decide when a replay event has finished
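
To make the roles of these parameters concrete, here is a minimal, purely illustrative Java sketch of how such quantities conventionally enter a model of this kind (tabular TD learning, incremental transition learning, and reward-triggered replay). The class, field and method names and the numeric values are assumptions made for illustration; they do not come from the actual model code.

```java
// Illustrative sketch only: shows the conventional role of learningRate,
// discountFactor, wTransitionLR, cantReplay and replayThres.
// All names and values here are hypothetical, not the model's actual API.
public class ParameterSketch {

    // Hypothetical maze discretization.
    static final int N_STATES = 64;
    static final int N_ACTIONS = 4;

    double learningRate = 0.1;    // RL step size (placeholder value)
    double discountFactor = 0.9;  // RL discount factor (placeholder value)
    double wTransitionLR = 0.1;   // step size for the connectivity (transition) matrix
    int cantReplay = 10;          // replay events triggered when the reward is reached
    double replayThres = 0.05;    // activity level below which a replay event is taken as finished

    double[][] q = new double[N_STATES][N_ACTIONS];          // state-action values
    double[][] wTransition = new double[N_STATES][N_STATES]; // state-to-state connectivity

    // Standard tabular TD update; learningRate and discountFactor play their usual roles.
    void tdUpdate(int s, int a, double reward, int sNext) {
        double bestNext = Double.NEGATIVE_INFINITY;
        for (double v : q[sNext]) bestNext = Math.max(bestNext, v);
        double tdError = reward + discountFactor * bestNext - q[s][a];
        q[s][a] += learningRate * tdError;
    }

    // Incremental update of the intra-hippocampal connectivity matrix from observed transitions.
    void transitionUpdate(int s, int sNext) {
        wTransition[s][sNext] += wTransitionLR * (1.0 - wTransition[s][sNext]);
    }

    // When the animat reaches the reward, run cantReplay simulated replay events;
    // each event ends once the propagated activity falls below replayThres.
    void replayAtReward() {
        for (int i = 0; i < cantReplay; i++) {
            double activity = 1.0;   // hypothetical initial replay activity
            while (activity >= replayThres) {
                activity *= 0.8;     // placeholder for activity propagation
            }
        }
    }
}
```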

Experiment Groups and Trials

The trials are the following:

  • Habituation: no food is placed in the maze; the animat is allowed to explore for 2000 simulation cycles.
  • Learning: food is placed in the maze. 40 episodes are executed, each ending on timeout (2000 cycles) or when the animat reaches the reward.

The groups are the following (the overall protocol is sketched after this list):

  • Control: executes Habituation and Learning
  • NoHab: executes only Learning
  • NoReplay: same as NoHab, but replay is disabled (cantReplay = 0).
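
For orientation, a compact sketch of the protocol described above is shown below. The structure, constant names and helper methods are illustrative assumptions, not the actual experiment framework code.

```java
// Illustrative protocol sketch; not the actual experiment framework code.
public class ProtocolSketch {

    static final int TIMEOUT_CYCLES = 2000;   // habituation length and learning-episode timeout
    static final int LEARNING_EPISODES = 40;  // rewarded episodes per subject

    enum Group { CONTROL, NO_HAB, NO_REPLAY }

    static void runSubject(Group group) {
        // Only the Control group gets the non-rewarded habituation trial.
        if (group == Group.CONTROL) {
            runHabituation(TIMEOUT_CYCLES);
        }
        // All groups run the rewarded learning trials; NoReplay disables replay (cantReplay = 0).
        boolean replayEnabled = (group != Group.NO_REPLAY);
        for (int episode = 0; episode < LEARNING_EPISODES; episode++) {
            runLearningEpisode(TIMEOUT_CYCLES, replayEnabled);
        }
    }

    // Hypothetical placeholders for the actual simulation calls.
    static void runHabituation(int cycles) { /* free exploration, no food */ }
    static void runLearningEpisode(int timeoutCycles, boolean replayEnabled) { /* run until reward or timeout */ }
}
```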