version 0.5.0 to main #100

Merged · 19 commits · Apr 14, 2024
2 changes: 1 addition & 1 deletion .github/workflows/CI_full.yml
@@ -33,7 +33,7 @@ jobs:
- x64
steps:
- uses: actions/checkout@v4
- uses: julia-actions/setup-julia@v1
- uses: julia-actions/setup-julia@v2
with:
version: ${{ matrix.version }}
arch: ${{ matrix.arch }}
2 changes: 1 addition & 1 deletion .github/workflows/CI_small.yml
@@ -29,7 +29,7 @@ jobs:
- x64
steps:
- uses: actions/checkout@v4
- uses: julia-actions/setup-julia@v1
- uses: julia-actions/setup-julia@v2
with:
version: ${{ matrix.version }}
arch: ${{ matrix.arch }}
2 changes: 1 addition & 1 deletion .github/workflows/documentation.yml
@@ -20,7 +20,7 @@ jobs:
runs-on: macOS-latest
steps:
- uses: actions/checkout@v4
- uses: julia-actions/setup-julia@v1
- uses: julia-actions/setup-julia@v2
with:
version: '1'
- uses: julia-actions/julia-buildpkg@v1
7 changes: 5 additions & 2 deletions Project.toml
@@ -1,7 +1,10 @@
name = "ActionModels"
uuid = "320cf53b-cc3b-4b34-9a10-0ecb113566a3"
authors = ["Peter Thestrup Waade <peter@waade.net>"]
version = "0.4.3"
authors = ["Peter Thestrup Waade ptw@cas.au.dk",
"Anna Hedvig Møller hedvig.2808@gmail.com",
"Jacopo Comoglio jacopo.comoglio@gmail.com",
"Christoph Mathys chmathys@cas.au.dk"]
version = "0.5.0"

[deps]
DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
2 changes: 1 addition & 1 deletion README.md
@@ -29,7 +29,7 @@ Find premade agent, and define agent with default parameters
````@example Introduction
premade_agent("help")

agent = premade_agent("premade_binary_rw_softmax")
agent = premade_agent("premade_binary_rescorla_wagner_softmax")
````

Set inputs and give inputs to agent
1 change: 1 addition & 0 deletions docs/Project.toml
@@ -4,6 +4,7 @@ CSV = "336ed68f-0bac-5ca0-87d4-7b16caf5d00b"
DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
Glob = "c27321d9-0574-5035-807b-f59d2c89b15c"
JuliaFormatter = "98e50ef6-434e-11e9-1051-2b60c6c9e899"
Literate = "98b081ad-f1c9-55d3-8b20-4c87d4299306"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
47 changes: 47 additions & 0 deletions docs/src/Conceptual_introduction/agent_and_actionmodel.md
@@ -0,0 +1,47 @@

# Agents, states and action models

As a very general structure, we can illustrate the relation between input and action/response as below.

![Image1](../images/action_input.png)
*We can generate actions based on inputs. How the input is used to generate actions is determined by the action model.*

An action model is a *way* of assuming how an agent's actions are generated from inputs. An action model can be adapted to a specific experimental use or to how we assume a specific agent operates.

The mapping between inputs and actions can be extended. We will add the following elements, which all action models operate with: parameters and states. The expanded version is shown below.

![Image2](../images/structure_with_action_model.png)
*We can extend the action model arrow with parameters and states. Parameters are stable and contribute as constants to the system. States change and evolve according to inputs and parameters (how states change is determined by the structure of the action model).*

When defining an agent, you have to define an action model for the agent to use. You also define its states, parameters, and an optional substruct (see the advanced usage of the agent).

We will introduce a standard reinforcement learning action model, the binary Rescorla-Wagner softmax. The parameters in this model are the learning rate and the action precision. The states of an agent that produces actions according to this action model are "value", "transformed value" and "action probability".

----SOMETHING MORE ON RW ----- ?

When initializing an agent, we may in some cases need a starting value for certain states. These initial values are categorized as 'initial state parameters'. In the premade Rescorla-Wagner agent the "value" state is initialized at 0 by default. The learning rate parameter and the action precision are both set to 1.
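
As a minimal sketch of this convention (using the tuple-keyed parameter style shown later in the simulation section; the premade agent's exact parameter names may differ), the default configuration could be written as:

```julia
using ActionModels

# Default-style parameters for a binary Rescorla-Wagner softmax agent.
# The ("initial", "value") entry is the initial state parameter for the "value" state.
parameters = Dict(
    "learning_rate" => 1,
    "action_precision" => 1,
    ("initial", "value") => 0,
)
```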

The transformed value is calculated from the previous value (on the first trial, the initial state parameter for the "value" state) as seen in the equation below.


$$ \hat{q}_{n-1} = \frac{1}{1+\exp(-q_{n-1})} $$

From this we can compute the new value, from which an action probability can be calculated.

$$ q_n = q_{n-1} + \alpha \cdot (u_n - \hat{q}_{n-1}) $$

$$ p = \frac{1}{1+\exp(-\beta \cdot q_n)} $$


The last state "action probability" is the mean of an Bernoulli distribution from which an action can be sampled.

An agent is defined with a set of states and parameters. The action model defines, through equations, how the states change according to inputs; it updates the agent's history of states and returns an action distribution.

The process described above is what we define as simulating actions: we know the inputs, parameters and states of the agent, and the model returns actions. We can reverse this process and infer the parameters of an agent based on its actions in response to inputs.

This is further elaborated in [fitting vs. simulating](./fitting_vs_simulating.md).





13 changes: 13 additions & 0 deletions docs/src/Conceptual_introduction/fitting_vs_simulating.md
@@ -0,0 +1,13 @@
# Fitting and simulating

The ActionModels.jl package can, among other features, fit models and simulate actions.

As seen in the image below, the difference between fitting and simulating is essentially whether we want to infer one or more parameters, or to generate actions.

![Image1](../images/fitting_vs_simulation.png)

Fitting is the obvious path if you have action data from a participant and want to infer the parameters from which the specific actions could arise. See the using-the-package section for how to fit models.

Both simulating and fitting with the ActionModels.jl package are straightforward, and are elaborated further in the fitting-a-model and simulating-with-an-agent sections.
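
As a minimal sketch of the two directions (reusing the premade agent and the fitting calls shown in the introduction; keyword arguments are illustrative):

```julia
using ActionModels, Distributions

agent = premade_agent("premade_binary_rescorla_wagner_softmax")

# Simulating: known parameters and inputs -> actions
inputs = [1, 0, 0, 1, 1, 0, 1, 1]
actions = give_inputs!(agent, inputs)

# Fitting: observed inputs and actions -> posterior over parameters
priors = Dict("learning_rate" => Normal(0.5, 0.5))
chains = fit_model(agent, priors, inputs, actions, n_chains = 1, n_iterations = 10)
get_posteriors(chains)
```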


60 changes: 60 additions & 0 deletions docs/src/Using_the_package/Introduction.jl
@@ -0,0 +1,60 @@


# # Welcome to the ActionModels.jl package!

# ActionModels.jl is a powerful and novel package for computational modelling of behavior and cognition. The package is developed with the intention of making computational modelling intuitive, fast, and easily adaptable to your experimental and simulation needs.

# With ActionModels.jl you can define a fully customizable behavioral model, easily fit it to experimental data, and use it for computational modelling.

# We provide a concise introduction to this framework for computational modelling of behavior and cognition, and to its accompanying terminology.

# After this introduction, you will be presented with a detailed step-by-step guide on how to use ActionModels.jl to make your computational model runway-ready.


# ## Getting started

# Defining a premade agent
using ActionModels

# Find premade agent, and define agent with default parameters
premade_agent("help")

agent = premade_agent("premade_binary_rescorla_wagner_softmax")

# Set inputs and give inputs to agent

inputs = [1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1]
actions = give_inputs!(agent, inputs)

using StatsPlots
plot_trajectory(agent, "action_probability")

# Fit learning rate. Start by setting prior

using Distributions
priors = Dict("learning_rate" => Normal(0.5, 0.5))

# Run model
chains = fit_model(agent, priors, inputs, actions, n_chains = 1, n_iterations = 10)

# Plot prior and posterior
plot_parameter_distribution(chains, priors)

# Get posteriors from chains

get_posteriors(chains)


# ## Computational modelling of behavior

# In computational behavioral modelling the aim is to produce mechanistic models of the processes underlying observed behavior. Importantly, to validate and test models, or to use them to distinguish groups of individuals, models must be fitted to empirical data, an often complicated process that this package is designed to make as simple as possible.

# You can imagine an experimental setting where a participant has to press one of two buttons. One of the buttons elicits a reward and the other elicits a small electric shock. The experimenter has set up the system so that the probability of which button is rewarding shifts over time. You can imagine this as an 80/20 chance of the left button being the rewarding one in the first 10 trials, shifting to 20/80 in the last 10 trials (see the sketch below).
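
# As a minimal sketch (plain Julia, hypothetical helper code rather than part of ActionModels.jl), such a shifting reward schedule could be generated like this:

using Distributions

## Probability that the left button is rewarding: 0.8 for the first 10 trials, 0.2 for the last 10
reward_left_probabilities = vcat(fill(0.8, 10), fill(0.2, 10))

## Sample which button is rewarded on each trial (1 = left, 0 = right)
rewarded_button = [rand(Bernoulli(p)) ? 1 : 0 for p in reward_left_probabilities]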

# The participant wishes to press the rewarding button and avoid the electric shock. During the first 10 trials the participant would, through feedback, learn to select the left button most often due to the reward distribution between buttons. In the last 10 trials the preferred button press would switch.

# After the experiment, the hypothetical data could be the participant's button choices. We can model the behavior of the participant in the experiment with an agent and an action model. With the action model we approximate how information is processed by the participant (in modelling, the participant becomes the agent) and how the agent produces actions. We try to create an artificial mapping between input and action that can explain a person's behavior.

# The agent contains a set of "states" and "parameters". The parameters of an agent are analogous to some preconceived belief of the agent that can't be changed during the experiment. A caricatured example could be a participant with a strong color preference for one of the buttons, which influences their decisions at a constant level. When we model behavior and set these parameters, they relate to theoretically grounded elements of the action model. We will later build an action model from scratch where real parameters will show up.

# The states in an agent change over time, and the way they change depends on the action model. This structure is elaborated on further in the next section, where we go into depth with agents and action models.
205 changes: 205 additions & 0 deletions docs/src/Using_the_package/Simulation_with_an_agent.jl
@@ -0,0 +1,205 @@
# # Simulating actions with an agent

# We will in this section introduce action simulation with the ActionModels.jl package.

# ### Contents of this section
# - [The give_inputs() function](#Giving-Inputs-To-Agent)
# - [Giving a single input to the agent and retrieving history](#Give-a-single-input)
# - [Resetting the agent to initial state values](#Reset-Agent)
# - [Give_inputs() with a vector of inputs](#Multiple-Inputs)
# - [Plot State Trajectories](#Plotting-Trajectories-of-states)

# ## Giving Inputs To Agent

# With the ActionModels package you can, once you have defined your agent, use the function give_inputs! to simulate actions.

#give_inputs!(agent::Agent, inputs::Real)

# When you give inputs to the agent, it produces actions according to the action model it is defined with.

# As can be seen in the figure below, when we know all parameter values, states, and inputs, we can simulate actions.

# ![Image1](../images/fitting_vs_simulation.png)

# The type of inputs you can give to your agent depends on the agent, and the actions it generates depend on the corresponding action model.

# Let us define our agent and use the default parameter configuration
using ActionModels

agent = premade_agent("premade_binary_rescorla_wagner_softmax")

# ## Give a single input
# We can now give the agent a single input with the give_inputs!() function. The inputs for the Rescorla-Wagner agent are binary, so we input the value 1.
give_inputs!(agent, 1)

# The agent returns either "false" or "true", which translates to an action of either 0 or 1. Given that we have evolved the agent with one input, the states of the agent are updated. Let's see how we can recover the history and states of the agent after one input. We can do this with the get_history() function. With get_history() we can get the history of one or more target states, or of all states.

# Let us have a look at the history from all states:

get_history(agent)

# You can see that the "value" state contains two numbers. The first number is the initial state parameter, which is set in the agent's configuration (see "Creating your agent" for more on parameters and states). The second number is the value updated by the input.
# The three other states are initialized with "missing" and evolve as we give the agent inputs. The states in the agent are updated according to the computations the action model performs with the input. For more information on the Rescorla-Wagner action model, check out the [LINK TO CHAPTER]

# ## Reset Agent
# We can reset the agent to its initial state values with the reset!() function:

reset!(agent)

# As you can see below, we have cleared the history of the agent.

get_history(agent)

# ## Multiple Inputs
# We will now define a sequence of inputs to the agent.

inputs = [1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0]

# Let's give the inputs to the agent and see which actions it generates based on them

actions = give_inputs!(agent, inputs)

# We can in the same manner get the history of the agent's states. Let us have a look at the action probability state:
get_history(agent, "action_probability")

# ## Plotting Trajectories of states

# We can visualize the trajectories of different states using the function:

#plot_trajectory(agent::Agent, target_state::Union{String,Tuple}; kwargs...)

# The default title when using plot_trajectory() is "state trajectory". This can be changed by adding a title keyword as below. We can plot the actions and the action probability of the agent in two separate plots:
using Plots
using StatsPlots

plot_trajectory(agent, "action", title = "actions")

# We can change the state to plot to the action_probability:

plot_trajectory(agent, "action_probability", title = "action probability")

# We can add a new linetype for the plot:

plot_trajectory(agent, "action_probability", title = "acton probability", linetype = :path)

# We can layer plots by adding "!" to the function call. Here we add the action plot on top of the previous action probability plot:

plot_trajectory!(agent, "action", title = "action probability and action")


# ## If your agent computes multiple actions

# If you wish to set up an agent that produces multiple actions, e.g. a reaction time and a choice,
# you can use the "multiple_actions()" function. When setting up this type of agent, you define a separate action model for each of the desired actions.
# Currently, the ActionModels.jl package does not include predefined action models for multiple actions, so you should define your own (see the advanced usage for how to do this).

# We define our two action models: a continuous and a binary Rescorla-Wagner.

using ActionModels
using Distributions
# Binary Rescorla Wagner
function custom_rescorla_wagner_softmax(agent, input)

## Read in parameters from the agent
learning_rate = agent.parameters["learning_rate"]
action_precision = agent.parameters["action_precision"]

## Read in states with an initial value
old_value = agent.states["value_binary"]

## We don't have any settings in this model. If we had, we would read them in as well.
##-----This is where the update step starts -------

## Sigmoid transform the value
old_value_probability = 1 / (1 + exp(-old_value))

##Get new value state
new_value = old_value + learning_rate * (input - old_value_probability)

##Pass through softmax to get action probability
action_probability = 1 / (1 + exp(-action_precision * new_value))

##-----This is where the update step ends -------
## Create a Bernoulli distribution from the action probability calculated in the update step
action_distributions = Distributions.Bernoulli(action_probability)

##Update the states and save them to agent's history

agent.states["value_binary"] = new_value
agent.states["transformed_value"] = 1 / (1 + exp(-new_value))
agent.states["action_probability"] = action_probability

push!(agent.history["value_binary"], new_value)
push!(agent.history["transformed_value"], 1 / (1 + exp(-new_value)))
push!(agent.history["action_probability"], action_probability)

## return the action distribution to sample actions from
return action_distributions
end


# Continuous Rescorla Wagner
function continuous_rescorla_wagner_softmax(agent, input)

## Read in parameters from the agent
learning_rate = agent.parameters["learning_rate"]

## Read in states with an initial value
old_value = agent.states["value_cont"]

## We don't have any settings in this model. If we had, we would read them in as well.
##-----This is where the update step starts -------

##Get new value state
new_value = old_value + learning_rate * (input - old_value)

##-----This is where the update step ends -------
## Create a Normal action distribution centered on the new value calculated in the update step
action_distributions = Distributions.Normal(new_value, 0.3)

##Update the states and save them to agent's history

agent.states["value_cont"] = new_value
agent.states["input"] = input

push!(agent.history["value_cont"], new_value)
push!(agent.history["input"], input)

## return the action distribution to sample actions from
return action_distributions
end


# Define an agent

parameters = Dict(
"learning_rate" => 1,
"action_precision" => 1,
("initial", "value_cont") => 0,
("initial", "value_binary") => 0,
)

# We set the initial state parameters for the "value" states because we need starting values in the update step.

# Let us define the states in the agent:
states = Dict(
"value_cont" => missing,
"value_binary" => missing,
"input" => missing,
"transformed_value" => missing,
"action_probability" => missing,
)


agent = init_agent(
[continuous_rescorla_wagner_softmax, custom_rescorla_wagner_softmax],
parameters = parameters,
states = states,
)



inputs = [1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0]


#multiple_actions(agent, inputs)
7 changes: 7 additions & 0 deletions docs/src/Using_the_package/complicated_custom_agents.jl
@@ -0,0 +1,7 @@
# # Complicated custom agents: using substructs and sub modules

# As an addition to an agent, you can implement a so-called substruct.

# A substruct is a way to extend the computational power of an agent. Instead of all update steps being contained in the action model, we can add a level of updates through the substruct.

# In a substruct you can work with more states, define more parameters, and in general fit more complex phenomena.
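
# As a minimal, heavily hedged sketch of the idea (assuming that init_agent accepts a substruct keyword and that the substruct is reachable inside the action model as agent.substruct — check the package documentation for the exact API):

using ActionModels, Distributions

## A hypothetical substruct holding extra states and parameters
mutable struct MyValueTracker
    noise::Float64
    expected_value::Float64
end

## An action model that delegates part of its update to the substruct
function substruct_rescorla_wagner(agent, input)
    tracker = agent.substruct                          ## assumed field name
    learning_rate = agent.parameters["learning_rate"]

    ## Substruct-level update: the extra layer of computation
    tracker.expected_value += learning_rate * (input - tracker.expected_value)

    return Distributions.Normal(tracker.expected_value, tracker.noise)
end

agent = init_agent(
    substruct_rescorla_wagner,
    substruct = MyValueTracker(0.3, 0.0),              ## assumed keyword argument
    parameters = Dict("learning_rate" => 1),
)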