Unity ML-Agents (Gym) wrapper using PyCall.
- Unity ML-Agents - Release 2
- Unity ML-Agents Python Interface (Envs) - v0.16.1
- Unity ML-Agents Gym Wrapper - v0.16.1 (same limitations)
- Environment Executables
- Create/close environment:
Environment(path; nographics=true, usevisual=false, uint8visual=false, multipleobs=false, kwargs...)
# pass a logFile path to redirect Unity's terminal output to a file
env = Environment("envs/Basic", logFile=pwd()*"/envs/logs/Basic.log")
close!(env)
- Environment interaction
s = reset!(env) # returns the initial state; must be called after a step returns done == true
s, r, done, info = step!(env) # steps with a random action
s, r, done, info = step!(env, action) # eltype(action) == eltype(env.actionspace)
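The calls above compose into a standard episode loop. A minimal sketch using this wrapper's API (the executable and log paths are illustrative and require a built Unity environment):

```julia
# Sketch: run one episode with random actions (assumes a built Basic executable).
env = Environment("envs/Basic", logFile=pwd()*"/envs/logs/Basic.log")
s = reset!(env)                          # initial state
done, total = false, 0.0
while !done
    a = sample(env.actionspace)          # random action with the correct eltype
    s, r, done, info = step!(env, a)
    total += r
end
close!(env)                              # always release the Unity process
```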
- Environment data (check examples & tests)
# Required action array size & eltype (model output)
size(env.actionspace)
length(env.actionspace)
eltype(env.actionspace)
sample(env.actionspace) # random action
# Discrete/MultiDiscrete actionspace
# Maximum range for each dimension
actiondim1 = env.actionspace.actions[1]
actiondim2 = env.actionspace.actions[2]
...
# Box actionspace
# (low, high) range tuple for each dimension
l, h = env.actionspace.actions[1]
...
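Whatever the space type, an action passed to `step!` must match the size and eltype reported by `env.actionspace`. A hedged sketch, assuming `env` is an open `Environment`:

```julia
# Sketch: build an action of the required size and eltype, then step.
T = eltype(env.actionspace)              # e.g. an integer type for Discrete, a float type for Box
a = T.(sample(env.actionspace))          # or fill your own array of size(env.actionspace)
@assert length(a) == length(env.actionspace)
s, r, done, info = step!(env, a)
```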
# State/observation size & eltype (model input, single or multiple observations)
env.observationspace::Union{ObservationSpace, Vector{ObservationSpace}}
size(observationspace)
length(observationspace)
eltype(observationspace)
# Observation range tuples for each dim
l, h = observationspace.vals[1]
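Since `env.observationspace` may be a single `ObservationSpace` or a vector of them (presumably when `multipleobs=true`), inspection code can normalize to a vector first; a sketch:

```julia
# Sketch: treat single and multiple observation spaces uniformly.
spaces = env.observationspace isa Vector ? env.observationspace :
                                           [env.observationspace]
for (i, os) in enumerate(spaces)
    println("obs $i: size=$(size(os)) eltype=$(eltype(os))")
end
```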