
omniverse-gym

Examples of how to use NVIDIA Omniverse Isaac Sim to solve reinforcement learning tasks with the rl_games library

Installation

Follow the Isaac Sim documentation to install the latest Isaac Sim release (2023.1.1).

To install omniisaacgymenvs, first clone this repository:

git clone https://github.com/KhaledSharif/omniverse-gym.git

Once cloned, locate the Python executable in your Isaac Sim installation. By default, this is python.sh in the Isaac Sim installation directory. We will refer to this path as PYTHON_PATH.

To set a PYTHON_PATH variable in the terminal that links to the Python executable, run a command like the following, making sure to update the path to match your local installation. For Linux:

alias PYTHON_PATH=~/.local/share/ov/pkg/isaac_sim-2023.1.1/python.sh
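
As an optional sanity check, the alias can be used to print the bundled interpreter's version; note that the path above assumes the default Omniverse Launcher install location, so adjust it if Isaac Sim lives elsewhere:

PYTHON_PATH --version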

Install the repository and its dependencies (run this from the root of the cloned repository):

PYTHON_PATH -m pip install -e .
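
To confirm the editable install succeeded, you can try importing the package from the Isaac Sim Python environment; the package name omniisaacgymenvs below is an assumption based on the name used above, so adjust it if the installed package is named differently:

PYTHON_PATH -c "import omniisaacgymenvs"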

To run a simple form of PPO from rl_games, use the single-threaded training script:

PYTHON_PATH run.py task=Cartpole
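
Additional Hydra-style overrides can be appended to the same command; the flags below (headless mode and a custom environment count) are assumptions based on the OmniIsaacGymEnvs conventions this project follows, so check the task configuration files in this repository for the exact names:

PYTHON_PATH run.py task=Cartpole headless=True num_envs=512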

The result is saved to the current working directory in a new directory called runs.
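
Training progress can typically be monitored with TensorBoard, since rl_games writes event files under the runs directory; this assumes TensorBoard is available in the Isaac Sim Python environment (it can be installed with pip otherwise):

PYTHON_PATH -m tensorboard.main --logdir runs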

You can now evaluate your model by running the same environment in test (inference) mode using the saved model checkpoint:

PYTHON_PATH run.py task=Cartpole test=True checkpoint=runs/Cartpole/nn/Cartpole.pth
