Misc polish #101

Merged · 3 commits · Apr 14, 2024

1 change: 0 additions & 1 deletion docs/Project.toml
@@ -5,7 +5,6 @@ DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
 Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 Glob = "c27321d9-0574-5035-807b-f59d2c89b15c"
-HierarchicalGaussianFiltering = "63d42c3e-681c-42be-892f-a47f35336a79"
 JuliaFormatter = "98e50ef6-434e-11e9-1051-2b60c6c9e899"
 Literate = "98b081ad-f1c9-55d3-8b20-4c87d4299306"
 Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
4 changes: 2 additions & 2 deletions docs/src/Using_the_package/Simulation_with_an_agent.jl
@@ -102,7 +102,7 @@ function custom_rescorla_wagner_softmax(agent, input)
 
     ## Read in parameters from the agent
     learning_rate = agent.parameters["learning_rate"]
-    action_precision = agent.parameters["softmax_action_precision"]
+    action_precision = agent.parameters["action_precision"]
 
     ## Read in states with an initial value
     old_value = agent.states["value_binary"]
@@ -174,7 +174,7 @@ end
 
 parameters = Dict(
     "learning_rate" => 1,
-    "softmax_action_precision" => 1,
+    "action_precision" => 1,
     ("initial", "value_cont") => 0,
     ("initial", "value_binary") => 0,
 )
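Across the docs files below, the change is the same one-line rename: the parameter key `"softmax_action_precision"` becomes `"action_precision"`. A minimal sketch of the renamed key in use, built only from names that appear elsewhere in this diff (the premade-agent string and `give_inputs!`); treat it as illustration, not the package's full API:

```julia
using ActionModels

## Premade binary Rescorla-Wagner agent, with the renamed precision key
agent = premade_agent(
    "premade_binary_rescorla_wagner_softmax",
    Dict(
        "learning_rate" => 0.7,
        "action_precision" => 0.8, ## was "softmax_action_precision"
        ("initial", "value") => 0,
    ),
)

## Simulate actions on a short binary input sequence
inputs = [0, 1, 1, 0, 1]
actions = give_inputs!(agent, inputs)
```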
4 changes: 2 additions & 2 deletions docs/src/Using_the_package/creating_own_action_model.jl
@@ -111,7 +111,7 @@ function custom_rescorla_wagner_softmax(agent, input)
 
     ## Read in parameters from the agent
     learning_rate = agent.parameters["learning_rate"]
-    action_precision = agent.parameters["softmax_action_precision"]
+    action_precision = agent.parameters["action_precision"]
 
     ## Read in states with an initial value
     old_value = agent.states["value"]
@@ -156,7 +156,7 @@ end
 #Set the parameters:
 
 parameters =
-    Dict("learning_rate" => 1, "softmax_action_precision" => 1, ("initial", "value") => 0)
+    Dict("learning_rate" => 1, "action_precision" => 1, ("initial", "value") => 0)
 
 # We set the initial state parameter for "value" state because we need a starting value in the update step.
2 changes: 1 addition & 1 deletion docs/src/Using_the_package/fitting_an_agent_model_to_data.jl
@@ -104,7 +104,7 @@ get_posteriors(fitted_model)
 
 # Add an extra prior in the Dict
 multiple_priors =
-    Dict("learning_rate" => Normal(1, 0.5), "softmax_action_precision" => Normal(0.8, 0.2))
+    Dict("learning_rate" => Normal(1, 0.5), "action_precision" => Normal(0.8, 0.2))
 
 multiple_fit = fit_model(agent, multiple_priors, inputs, actions)
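For context, a compact sketch of how these priors feed the fitting step. `fit_model` and `get_posteriors` are used exactly as in this file; `agent`, `inputs`, and `actions` are assumed to be defined earlier in the tutorial:

```julia
using ActionModels, Distributions

## Priors over the parameters to fit, with the renamed key
priors = Dict(
    "learning_rate" => Normal(1, 0.5),
    "action_precision" => Normal(0.8, 0.2), ## was "softmax_action_precision"
)

## Fit the agent model to observed input/action pairs
fitted_model = fit_model(agent, priors, inputs, actions)

## Inspect the posterior over the fitted parameters
posteriors = get_posteriors(fitted_model)
```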
4 changes: 2 additions & 2 deletions docs/src/Using_the_package/premade_agents_and_models.jl
@@ -62,7 +62,7 @@ set_parameters!(
     agent,
     Dict(
         "learning_rate" => 0.79,
-        "softmax_action_precision" => 0.60,
+        "action_precision" => 0.60,
         ("initial", "value") => 1,
     ),
 )
@@ -74,7 +74,7 @@ agent_custom_parameters = premade_agent(
     "premade_binary_rescorla_wagner_softmax",
     Dict(
         "learning_rate" => 0.7,
-        "softmax_action_precision" => 0.8,
+        "action_precision" => 0.8,
         ("initial", "value") => 1,
     ),
 )
4 changes: 2 additions & 2 deletions docs/src/Using_the_package/variations_of_util.jl
@@ -48,7 +48,7 @@ get_parameters(agent, [("initial", "value"), "learning_rate"])
 set_parameters!(agent, ("initial", "value"), 1)
 
 # Setting multiple parameters in an agent
-set_parameters!(agent, Dict("learning_rate" => 3, "softmax_action_precision" => 0.5))
+set_parameters!(agent, Dict("learning_rate" => 3, "action_precision" => 0.5))
 
 # See the parameters we have set using the get_parameters function
 get_parameters(agent)
@@ -89,7 +89,7 @@ actions = give_inputs!(agent, inputs)
 # Set a prior for the parameter we wish to fit
 using Distributions
 priors =
-    Dict("softmax_action_precision" => Normal(1, 0.5), "learning_rate" => Normal(1, 0.1))
+    Dict("action_precision" => Normal(1, 0.5), "learning_rate" => Normal(1, 0.1))
 
 # Fit the model
 fitted_model = fit_model(agent, priors, inputs, actions)
96 changes: 43 additions & 53 deletions docs/src/julia_src_files/creating_own_action_model.jl
@@ -45,32 +45,43 @@ using Plots, StatsPlots
 
 # Rescorla Wagner continuous
 
-function continuous_rescorla_wagner_gaussian(agent, input)
-    #-- Read in parameters and states --#
+function continuous_rescorla_wagner_gaussian(agent::Agent, input::Real)
+
+    ## Read in parameters from the agent
     learning_rate = agent.parameters["learning_rate"]
     action_noise = agent.parameters["action_noise"]
+
+    ## Read in states with an initial value
     old_value = agent.states["value"]
 
-    #-- Update the Rescolar Wagner's expected value --#
+    ##We don't have any settings in this model. If we had, we would read them in as well.
+    ##-----This is where the update step starts -------
+
+    ##Get new value state
    new_value = old_value + learning_rate * (input - old_value)
 
-    #-- Sample an action with Gaussian noise --#
-    action_distribution = Distributions.Normal(new_value, action_noise)
-
-    #-- Update states for next round --#
-    update_states!(agent, "value", new_value)
-    update_states!(agent, "input", new_value)
+    ##-----This is where the update step ends -------
+    ##Create a Gaussian distribution with the expected value we calculated in the update step
+    action_distribution = Distributions.Normal(new_value, action_noise)
+
+    ##Update the states and save them to agent's history
+    update_states!(agent, Dict(
+        "value" => new_value,
+        "input" => input,
+    ))
 
-    #-- return the distribution to sample actions from
+    ## return the action distribution to sample actions from
     return action_distribution
 end
 
 
 #-- define parameters and states --#
 parameters = Dict(
     "learning_rate" => 0.8,
     "action_noise" => 1,
-    ("initial", "value") => 0)
+    InitialStateParameter("value") => 0)
 
 states = Dict(
     "value" => missing,
@@ -102,60 +113,36 @@ plot!(actions, linetype = :scatter, label = "action")
 
 plot_trajectory(agent, "action")
 plot_trajectory!(agent, "input")
 
 reset!(agent)
 
 # With binary inputs
 inputs = [0, 1, 0, 0, 1, 1, 1, 0, 0, 1]
 give_inputs!(agent, inputs)
 #-
 plot_trajectory(agent, "action")
 plot(inputs)
 
-function custom_rescorla_wagner_softmax(agent, input)
-
-    ## Read in parameters from the agent
+# ## A Binary Rescorla-Wagner
+function binary_rescorla_wagner_softmax(agent::Agent, input::Union{Bool,Integer})
+
+    #Read in parameters
     learning_rate = agent.parameters["learning_rate"]
-    action_precision = agent.parameters["softmax_action_precision"]
+    action_precision = agent.parameters["action_precision"]
 
-    ## Read in states with an initial value
+    #Read in states
     old_value = agent.states["value"]
 
-    ##We dont have any settings in this model. If we had, we would read them in as well.
-    ##-----This is where the update step starts -------
-
-    ## Sigmoid transform the value
+    #Sigmoid transform the value
     old_value_probability = 1 / (1 + exp(-old_value))
 
-    ##Get new value state
+    #Get new value state
     new_value = old_value + learning_rate * (input - old_value_probability)
 
-    ##Pass through softmax to get action probability
+    #Pass through softmax to get action probability
     action_probability = 1 / (1 + exp(-action_precision * new_value))
 
-    ##-----This is where the update step ends -------
-    ##Create Bernoulli normal distribution our action probability which we calculated in the update step
+    #Create a Bernoulli distribution from the action probability we calculated in the update step
    action_distribution = Distributions.Bernoulli(action_probability)
 
-    ##Update the states and save them to agent's history
-    agent.states["value"] = new_value
-    agent.states["transformed_value"] = 1 / (1 + exp(-new_value))
-    agent.states["action_probability"] = action_probability
-
-    push!(agent.history["value"], new_value)
-    push!(agent.history["transformed_value"], 1 / (1 + exp(-new_value)))
-    push!(agent.history["action_probability"], action_probability)
+    #Update states
+    update_states!(agent, Dict(
+        "value" => new_value,
+        "value_probability" => 1 / (1 + exp(-new_value)),
+        "action_probability" => action_probability,
+        "input" => input,
+    ))
 
     ## return the action distribution to sample actions from
     return action_distribution
 end

@@ -169,19 +156,22 @@ end
 #Set the parameters:
 
 parameters =
-    Dict("learning_rate" => 1, "softmax_action_precision" => 1, ("initial", "value") => 0)
+    Dict("learning_rate" => 1,
+        "action_precision" => 1,
+        InitialStateParameter("value") => 0)
 
 # We set the initial state parameter for "value" state because we need a starting value in the update step.
 
 # Let us define the states in the agent:
 states = Dict(
     "value" => missing,
-    "transformed_value" => missing,
+    "value_probability" => missing,
     "action_probability" => missing,
+    "input" => missing
 )
 
 # And lastly the action model:
-action_model = custom_rescorla_wagner_softmax
+action_model = binary_rescorla_wagner_softmax
 
 # We can now initialize our agent with the action model, parameters and states.
 agent = init_agent(action_model, parameters = parameters, states = states)
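Putting the rewritten tutorial pieces together, a minimal end-to-end sketch. Every name is taken from the added lines above; it assumes `binary_rescorla_wagner_softmax` as defined in this file and the plotting packages the file already loads:

```julia
using ActionModels, Distributions
using Plots, StatsPlots

## Parameters, using the renamed key and the new InitialStateParameter form
parameters = Dict(
    "learning_rate" => 1,
    "action_precision" => 1,
    InitialStateParameter("value") => 0,
)

## States tracked by the rewritten binary action model
states = Dict(
    "value" => missing,
    "value_probability" => missing,
    "action_probability" => missing,
    "input" => missing,
)

agent = init_agent(binary_rescorla_wagner_softmax, parameters = parameters, states = states)

## Simulate on a binary input sequence and plot the action trajectory
inputs = [0, 1, 0, 0, 1, 1, 1, 0, 0, 1]
actions = give_inputs!(agent, inputs)
plot_trajectory(agent, "action")
```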
2 changes: 1 addition & 1 deletion docs/src/julia_src_files/fitting_an_agent_model_to_data.jl
@@ -104,7 +104,7 @@ get_posteriors(fitted_model)
 
 # Add an extra prior in the Dict
 multiple_priors =
-    Dict("learning_rate" => Normal(1, 0.5), "softmax_action_precision" => Normal(0.8, 0.2))
+    Dict("learning_rate" => Normal(1, 0.5), "action_precision" => Normal(0.8, 0.2))
 
 multiple_fit = fit_model(agent, multiple_priors, inputs, actions)
4 changes: 2 additions & 2 deletions docs/src/julia_src_files/premade_agents_and_models.jl
@@ -62,7 +62,7 @@ set_parameters!(
     agent,
     Dict(
         "learning_rate" => 0.79,
-        "softmax_action_precision" => 0.60,
+        "action_precision" => 0.60,
         ("initial", "value") => 1,
     ),
 )
@@ -74,7 +74,7 @@ agent_custom_parameters = premade_agent(
     "binary_rescorla_wagner_softmax",
     Dict(
         "learning_rate" => 0.7,
-        "softmax_action_precision" => 0.8,
+        "action_precision" => 0.8,
         ("initial", "value") => 1,
     ),
 )
4 changes: 2 additions & 2 deletions docs/src/julia_src_files/simulation_with_an_agent.jl
@@ -102,7 +102,7 @@ function custom_rescorla_wagner_softmax(agent, input)
 
     ## Read in parameters from the agent
     learning_rate = agent.parameters["learning_rate"]
-    action_precision = agent.parameters["softmax_action_precision"]
+    action_precision = agent.parameters["action_precision"]
 
     ## Read in states with an initial value
     old_value = agent.states["value_binary"]
@@ -174,7 +174,7 @@ end
 
 parameters = Dict(
     "learning_rate" => 1,
-    "softmax_action_precision" => 1,
+    "action_precision" => 1,
     ("initial", "value_cont") => 0,
     ("initial", "value_binary") => 0,
 )
4 changes: 2 additions & 2 deletions docs/src/julia_src_files/variations_of_util.jl
@@ -48,7 +48,7 @@ get_parameters(agent, [("initial", "value"), "learning_rate"])
 set_parameters!(agent, ("initial", "value"), 1)
 
 # Setting multiple parameters in an agent
-set_parameters!(agent, Dict("learning_rate" => 3, "softmax_action_precision" => 0.5))
+set_parameters!(agent, Dict("learning_rate" => 3, "action_precision" => 0.5))
 
 # See the parameters we have set using the get_parameters function
 get_parameters(agent)
@@ -89,7 +89,7 @@ actions = give_inputs!(agent, inputs)
 # Set a prior for the parameter we wish to fit
 using Distributions
 priors =
-    Dict("softmax_action_precision" => Normal(1, 0.5), "learning_rate" => Normal(1, 0.1))
+    Dict("action_precision" => Normal(1, 0.5), "learning_rate" => Normal(1, 0.1))
 
 # Fit the model
 fitted_model = fit_model(agent, priors, inputs, actions)
4 changes: 2 additions & 2 deletions docs/src/markdowns/creating_own_action_model.md
@@ -45,7 +45,7 @@ function custom_rescorla_wagner_softmax(agent::Agent, input)
 
     # Read in parameters from the agent
     learning_rate = agent.parameters["learning_rate"]
-    action_precision = agent.parameters["softmax_action_precision"]
+    action_precision = agent.parameters["action_precision"]
 
     # Read in states with an initial value
     old_value = agent.states["value"]
@@ -90,7 +90,7 @@ We can define the agent now. Let us do it with the init_agent() function. We nee
 
 parameters = Dict(
     "learning_rate" => 1,
-    "softmax_action_precision" => 1,
+    "action_precision" => 1,
     ("initial", "value") => 0,)
 ````
 
2 changes: 1 addition & 1 deletion docs/src/markdowns/fitting_an_agent_model_to_data.md
@@ -118,7 +118,7 @@ By adding multiple parameter priors you can automatically fit them with fit\_mod
 Add an extra prior in the Dict
 
 ````@example fitting_an_agent_model_to_data
-multiple_priors = Dict("learning_rate" => Normal(1, 0.5),"softmax_action_precision"=> Normal(0.8,0.2))
+multiple_priors = Dict("learning_rate" => Normal(1, 0.5),"action_precision"=> Normal(0.8,0.2))
 
 multiple_fit = fit_model(agent, multiple_priors, inputs, actions)
 ````
4 changes: 2 additions & 2 deletions docs/src/markdowns/premade_agents_and_models.md
@@ -79,7 +79,7 @@ If you wish to change multiple parameters in the agent, you can define a dict of
 
 ````@example premade_agents_and_models
 set_parameters!(agent, Dict("learning_rate" => 0.79,
-    "softmax_action_precision" => 0.60,
+    "action_precision" => 0.60,
     ("initial", "value") => 1))
 ````
@@ -89,7 +89,7 @@ If you know which parameter values you wish to use when defining your agent, you
 
 ````@example premade_agents_and_models
 agent_custom_parameters = premade_agent("premade_binary_rescorla_wagner_softmax", Dict("learning_rate" => 0.7,
-    "softmax_action_precision" => 0.8,
+    "action_precision" => 0.8,
     ("initial", "value") => 1)
 )
 
4 changes: 2 additions & 2 deletions docs/src/markdowns/variations_of_util.md
@@ -75,7 +75,7 @@ set_parameters!(agent,("initial", "value"), 1 )
 Setting multiple parameters in an agent
 
 ````@example variations_of_util
-set_parameters!(agent, Dict("learning_rate" => 3, "softmax_action_precision"=>0.5))
+set_parameters!(agent, Dict("learning_rate" => 3, "action_precision"=>0.5))
 ````
 
 See the parameters we have set using the get_parameters function
@@ -136,7 +136,7 @@ Set a prior for the parameter we wish to fit
 
 ````@example variations_of_util
 using Distributions
-priors = Dict("softmax_action_precision" => Normal(1, 0.5), "learning_rate"=> Normal(1, 0.1))
+priors = Dict("action_precision" => Normal(1, 0.5), "learning_rate"=> Normal(1, 0.1))
 ````
 
 Fit the model
3 changes: 2 additions & 1 deletion src/ActionModels.jl
@@ -4,13 +4,14 @@ module ActionModels
 using ReverseDiff, Turing, Distributions, DataFrames, RecipesBase, Logging, Distributed
 using Turing: DynamicPPL, AutoReverseDiff
 #Export functions
-export Agent, RejectParameters, SharedParameter, Multilevel
+export Agent, RejectParameters, GroupedParameters, Multilevel
 export init_agent, premade_agent, warn_premade_defaults, multiple_actions, check_agent
 export create_agent_model, fit_model
 export plot_parameter_distribution,
     plot_predictive_simulation, plot_trajectory, plot_trajectory!
 export get_history, get_states, get_parameters, set_parameters!, reset!, give_inputs!, single_input!
 export get_posteriors, update_states!, set_save_history!
+export InitialStateParameter, ParameterGroup
 
 #Load premade agents
 function __init__()
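Note the breaking export rename (`SharedParameter` becomes `GroupedParameters`) and the two new exports. `ParameterGroup`'s constructor is not shown anywhere in this diff, so only the `InitialStateParameter` migration can be sketched; the tuple form still appears in several docs files in this PR, so both forms presumably remain accepted:

```julia
using ActionModels

## Old-style initial-state key, still used in parts of the docs
old_style = Dict(("initial", "value") => 0)

## New-style key via the newly exported InitialStateParameter
new_style = Dict(InitialStateParameter("value") => 0)
```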