A Gymnasium env for training reinforcement learning agents to navigate mazes.
Random moves are used for this demo.
```python
import gymnasium as gym
import mazegym

env_9x9_random = gym.make('Maze9x9Random-v0')
env_35x15_random = gym.make('Maze35x15Random-v0')
env_5x5_fixed = gym.make('Maze5x5Fixed-v0')
env_3x7_fixed = gym.make('Maze3x7Fixed-v0')
```

```python
import numpy as np

from mazegym import MazeEnvironment

# Random env
env_random = MazeEnvironment(width=10, height=5)

# Fixed env
fixed_grid = np.ones((3, 7), dtype=np.int8)
fixed_grid[1, :] = 0   # carve a horizontal corridor
fixed_grid[1, 0] = 2   # agent start
fixed_grid[1, 6] = 3   # goal
env_fixed = MazeEnvironment(grid=fixed_grid)
```

```python
import random

import gymnasium as gym

env_35x15_random = gym.make('Maze35x15Random-v0')

# Reset the environment
observation, info = env_35x15_random.reset()

# Make a random valid move
valid_moves = info.get("valid_moves")
move = random.choice(valid_moves)
observation, reward, done, truncated, info = env_35x15_random.step(move)

# Render the environment; 'human' is the only render mode
env_35x15_random.render()

# Close the environment
env_35x15_random.close()
```

- width: Width of the maze.
- height: Height of the maze.
- grid: Used for custom mazes; a 2D array of cell values.
- vision_range: Number of tiles the agent can see ahead. The agent only sees forward and remembers previously visited tiles. If vision_range is not specified, the whole map is visible.
- wall_path_swap: Tuple of two elements. Adds randomness to the environment by letting a wall become a path and a path become a wall. The first value is the transformation chance; the second is the frequency of transformations. Has no effect if None.
- max_steps: Maximum steps until the episode truncates. Defaults to 3 × width × height.

Either width and height or grid is required: width and height are used for random mazes, while grid is used for custom mazes.
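To make the wall_path_swap parameter concrete, here is a minimal sketch of one way such a swap could work. The helper name and the exact mechanics (checking every `frequency` steps, swapping one random wall with one random path) are illustrative assumptions, not mazegym's actual implementation:

```python
import random

def maybe_swap(grid, step, chance, frequency, rng=random):
    """Illustrative sketch, not mazegym's API: every `frequency` steps,
    with probability `chance`, turn one wall into a path and one path
    into a wall.  Cells: 0 = path, 1 = wall."""
    if frequency and step % frequency == 0 and rng.random() < chance:
        walls = [(r, c) for r, row in enumerate(grid) for c, v in enumerate(row) if v == 1]
        paths = [(r, c) for r, row in enumerate(grid) for c, v in enumerate(row) if v == 0]
        if walls and paths:
            wr, wc = rng.choice(walls)
            pr, pc = rng.choice(paths)
            grid[wr][wc], grid[pr][pc] = 0, 1
    return grid

# 3x3 demo: chance=1.0 guarantees a swap on step 0
demo = [[1, 1, 1],
        [0, 0, 0],
        [1, 1, 1]]
maybe_swap(demo, step=0, chance=1.0, frequency=2)
print(sum(v for row in demo for v in row))  # total wall count is preserved: prints 6
```

Because each swap pairs a wall-to-path change with a path-to-wall change, the total number of walls stays constant while the layout shifts.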
- Action Space: Discrete(4). Four possible actions: 0 (up), 1 (right), 2 (down), 3 (left). Invalid moves (moving into walls) result in an error.
- Observation Space: Box(0, 3, (height, width), int8). Contains values: 0 for empty paths, 1 for walls, 2 for the agent, 3 for the goal.
- Reward: 100 if the goal is reached, -1 for each step taken, -2 for an illegal move.
- Done: True if the agent reaches the goal, False otherwise.
- Truncated: True if maximum steps are exceeded, False otherwise.
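Under this reward scheme, an episode's return can be tallied by hand. The sketch below is illustrative (the function name is not part of mazegym), and it assumes the -2 illegal-move penalty is charged in addition to the per-step -1:

```python
STEP, ILLEGAL, GOAL = -1, -2, 100

def episode_return(steps, illegal_moves, reached_goal):
    # Per the reward spec: -1 per step, -2 per illegal move, +100 on success.
    return steps * STEP + illegal_moves * ILLEGAL + (GOAL if reached_goal else 0)

print(episode_return(steps=12, illegal_moves=1, reached_goal=True))  # 12*(-1) + 1*(-2) + 100 = 86
```

The -1 per step pushes agents toward shorter paths, so a higher return corresponds to a more efficient solution.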


