
NotImplementedError: abstract #775

Closed
Ljferrer opened this issue Nov 18, 2017 · 21 comments

Comments

@Ljferrer Ljferrer commented Nov 18, 2017

Having trouble with gym.make().render()

I'm running Windows 10. This issue did not exist when I was working with Python 3.6.3, but now that I have downgraded to 3.5.2 for MuJoCo, this code (taken from another comment):

import gym
import random
import numpy as np
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression
from statistics import median, mean
from collections import Counter

LR = 1e-3
env = gym.make("CartPole-v0")
env.reset()
goal_steps = 500
score_requirement = 50
initial_games = 10000

def some_random_games_first():
    for episode in range(20):
        env.reset()
        for t in range(200):
            env.render()                        # if I comment out this line, everything runs fine
            action = env.action_space.sample()
            observation, reward, done, info = env.step(action)
            if done:
                break

some_random_games_first()

Produces this error:

Traceback (most recent call last):
  File "C:\Users\LF\Documents\OpenAI Gym\CartPole\test.py", line 39, in <module>
    some_random_games_first()
  File "C:\Users\LF\Documents\OpenAI Gym\CartPole\test.py", line 26, in some_random_games_first
    env.render()
  File "C:\Users\LF\Python35\lib\site-packages\gym\core.py", line 150, in render
    return self._render(mode=mode, close=close)
  File "C:\Users\LF\Python35\lib\site-packages\gym\core.py", line 286, in _render
    return self.env.render(mode, close)
  File "C:\Users\LF\Python35\lib\site-packages\gym\core.py", line 150, in render
    return self._render(mode=mode, close=close)
  File "C:\Users\LF\Python35\lib\site-packages\gym\envs\classic_control\cartpole.py", line 116, in _render
    self.viewer = rendering.Viewer(screen_width, screen_height)
  File "C:\Users\LF\Python35\lib\site-packages\gym\envs\classic_control\rendering.py", line 51, in __init__
    self.window = pyglet.window.Window(width=width, height=height, display=display)
  File "C:\Users\LF\Python35\lib\site-packages\pyglet\window\__init__.py", line 504, in __init__
    screen = display.get_default_screen()
  File "C:\Users\LF\Python35\lib\site-packages\pyglet\canvas\base.py", line 73, in get_default_screen
    return self.get_screens()[0]
  File "C:\Users\LF\Python35\lib\site-packages\pyglet\canvas\base.py", line 65, in get_screens
    raise NotImplementedError('abstract')
NotImplementedError: abstract

Also, pyglet is up to date (1.3.0)

I apologize if a solution can be found elsewhere, but I have not found another issue like this on gym.

@FirefoxMetzger FirefoxMetzger commented Nov 18, 2017

I am by no means an expert with pyglet, so perhaps someone with more experience can help you better.

It looks like it cannot open a window because it fails to find a display to render to.
Have you made sure you have an OpenGL installation your Python 3.5 can access? I know that you can use python-opengl on Ubuntu; I am not sure whether that is platform independent, but it could be a place to start.

Another lead could be a bug in pyglet 1.3rc1. Some folks report running into the same error in a different context here and say it worked with pyglet 1.2.4.
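
As a quick way to separate a pyglet/OpenGL display problem from anything else in a script, a render-only check can help (a minimal sketch using the same gym API as the code above):

import gym

# minimal smoke test: if this opens a CartPole window and closes it cleanly,
# pyglet and OpenGL are able to find a display on this machine
env = gym.make("CartPole-v0")
env.reset()
for _ in range(50):
    env.render()
    _, _, done, _ = env.step(env.action_space.sample())
    if done:
        env.reset()
env.close()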

@Ljferrer Ljferrer commented Nov 18, 2017

Thank you for the response! Downgrading to pyglet 1.2.4 has solved the issue.

@FirefoxMetzger FirefoxMetzger commented Nov 18, 2017

@Ljferrer Happy to help! If the problem is solved, you can consider closing the issue to keep things nice and tidy =)

@Ljferrer Ljferrer closed this Nov 18, 2017
@akuchotrani akuchotrani commented Dec 3, 2017

@Ljferrer I was struggling for hours to fix this issue. Your comment really helped me fix the error. Thanks a lot!

@MikeDoho MikeDoho commented Dec 5, 2017

How do you downgrade? Sorry, I am new to Ubuntu... and just to this in general, haha.

@Ljferrer Ljferrer commented Dec 5, 2017

I believe the command is
pip install pyglet==1.2.4
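
To confirm that the downgrade took effect, a quick check is to print the version string pyglet exposes (a minimal sketch):

import pyglet
print(pyglet.version)  # expect '1.2.4' after the downgrade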

@akuchotrani akuchotrani commented Dec 5, 2017

@MikeDoho Hey Mike, even I am not an expert, but I followed these steps on Windows (in the Anaconda command prompt). I hope it helps:

  1. Open the command line (Anaconda cmd) and uninstall the current version of pyglet:
    pip uninstall pyglet
  2. Download the .whl file of pyglet 1.2.4. If you are on Python 3, select the py3 version.
    Link: https://pypi.python.org/pypi/pyglet/1.2.4
  3. Copy this .whl file into the location where Python is installed (I believe python/scripts).
  4. Then copy the file name, open the command prompt, and type
    pip install filename
    (see the example command just below).
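
For example, if the downloaded wheel were named pyglet-1.2.4-py3-none-any.whl (an illustrative filename; use whatever file was actually downloaded), the command would be:

    pip install pyglet-1.2.4-py3-none-any.whl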

Also, if you are using an IDE like Spyder, close the IDE and restart it.
I hope it works.

Also, make sure the following are installed (I had to install them):
1) Swig (if you want to run any environment other than CartPole)
2) Wheel

@MikeDoho MikeDoho commented Dec 5, 2017

Thank you everyone so much. I can see cartpole now.

@ylyhlh ylyhlh commented May 26, 2018

After some checking: in python3.6/site-packages/pyglet/__init__.py, change

if 'sphinx' in sys.modules:
    setattr(sys, 'is_epydoc', True)

to

if 'sphinx' in sys.modules:
    setattr(sys, 'is_epydoc', False)

pyglet has a problem with Jupyter: Jupyter imports sphinx by default, and with sphinx imported pyglet seems to think it is generating documentation, so it cannot find the display correctly.
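
A quick way to check whether this is what is happening in a given session is to inspect the two flags mentioned above (a minimal sketch; it only reads the values):

import sys
import pyglet  # imported first so pyglet's __init__ has already run

print('sphinx' in sys.modules)           # True when Jupyter has pulled in sphinx
print(getattr(sys, 'is_epydoc', False))  # pyglet sets this to True when it sees sphinx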

@brindha87 brindha87 commented Jun 14, 2018

Downgrading to pyglet 1.2.4 worked for me. Thanks!

@kongwenjia kongwenjia commented Jun 15, 2018

pyglet 1.2.4 worked for me! Thanks a lot!

@navneetjuneja26 navneetjuneja26 commented Jul 10, 2018

It worked for me too, thanks a lot.

@foobarbecue foobarbecue commented Aug 16, 2018

I had this problem on Windows 10 with pyglet 1.3.2. I used the fix that @ylyhlh proposed, and it works now!

@Vyachez Vyachez commented Aug 16, 2018

I had no pyglet installed; after installing it, everything works like a charm.

@MadhanVibeeshanan MadhanVibeeshanan commented Oct 31, 2018

Downgrade pyglet to 1.2.4 by running the following command:
pip install pyglet==1.2.4
Then restart your IDE. In my case I'm using Spyder, and restarting it resolved the error.

Thanks

@mgkumar138 mgkumar138 commented Nov 14, 2018

Hi guys, I did the pyglet downgrade and included env.close(), but the NotImplementedError continues to come up in my main script. However, if I run the same render/close sequence on its own, it works (see the second script below). Is there something wrong with the way I am calling env.render() in my script?

Main script:

import gym
import numpy as np
from keras.layers import Dense, Input
from keras.models import Model
import matplotlib.pyplot as plt
from matplotlib import style

#Plots #
style.use('fivethirtyeight')
fig = plt.figure()
ax1 = fig.add_subplot(1,2,1)
ax2 = fig.add_subplot(1,2,2)

# Environment#
env = gym.make('NChain-v0')
num_a = env.action_space.n
if env.observation_space.shape == ():
    num_s = env.observation_space.n
else:
    num_s = env.observation_space.shape[0]

#Model #

data_input = Input(shape=(num_s,), name='data_input') # Keras functional API codes

h1 = Dense(10, activation='relu')(data_input)
prediction_output = Dense(2, activation='linear', name='prediction_output')(h1)

model = Model(inputs=data_input, outputs=prediction_output)
model.compile(optimizer='adam',
              loss='mse', # loss function is mean square error between target and current Q value
              metrics=['mae'])

# DQN #
num_iteration = 10
num_episodes = 10
iterations = []
reward_ite = []

for k in range(num_iteration):
    y = 0.95
    lr = 0.9
    eps = 0.5
    decay_factor = 0.99
    r_avg_list = []
    episodes = []
    reward_epi = []
    loss = []
    loss_log = []

    for i in range(num_episodes):
        s = env.reset()
        eps *= decay_factor
        done = False
        r_sum = 0
        hist = 0
        done_count = 0
        while not done:
            env.render()
            done_count +=1
            if np.random.random() < eps:
                a = np.random.randint(0, num_a) # explore more with increasing episodes
            else:
                target_vec = model.predict(np.identity(num_s)[s:s + 1])[0] # [0] to choose 1st object
                a = np.argmax(target_vec)

            new_s, r, done, info = env.step(a)
            # sum total reward gained from experienced state-action reward
            r_sum += r

            new_s_rewards = model.predict(np.identity(num_s)[new_s:new_s + 1])
            target = r + y*np.max(new_s_rewards)

            target_vec = model.predict(np.identity(num_s)[s:s + 1])[0]  # [0] to choose 1st object
            target_vec[a] = target

            history = model.fit(np.identity(num_s)[s:s + 1], target_vec.reshape(-1, 2), epochs=1, verbose=0)
            # update current state with new state for next cycle of training
            s = new_s
            hist += history.history["loss"][0]

        r_avg_list.append(r_sum/1000) # find reward per game, normalise to 1000 while loop
        loss_log.append(hist/1000)
        print("Avg Reward = {} for Episode {} of Iteration {}".format(r_avg_list[-1], i + 1, k + 1))

        episodes.append(i+1)
        reward_epi.append(r_avg_list[-1])
        loss.append(loss_log[-1])

        ax1.plot(episodes,reward_epi)
        ax1.set_title('Average Rewards every episode')
        ax1.set_xlabel('Episodes')
        ax1.set_ylabel('Reward')
        plt.pause(0.001)
        env.viewer = None
        env.close()

    iterations.append(k+1)
    reward_ite.append(r_avg_list[-1])

    ax2.plot(iterations,reward_ite)
    ax2.set_title('Average Rewards in game iteration')
    ax2.set_xlabel('Iteration')
    ax2.set_ylabel('Reward')
    plt.pause(0.001)

plt.savefig('KerasRLit'+str(num_iteration)+'ep'+str(num_episodes)+'.png')
plt.show()

env script that works:

import gym

env = gym.make('CartPole-v0')
env.reset()
# Show the window
env.render()
# Close it
env.viewer = None
env.close()
# Show the window again
env.render()

Any help will be greatly appreciated!
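
One way to narrow this down might be to isolate the render call for the same environment the main script uses (a minimal sketch; note that NChain-v0 is a toy environment and may not implement rendering at all, unlike CartPole-v0):

import gym

env = gym.make('NChain-v0')   # same environment as the main script above
env.reset()
try:
    env.render()              # the call that raises in the main script
except NotImplementedError as e:
    # the traceback shows whether this comes from gym's base Env.render
    # (no renderer for this environment) or from pyglet (display problem)
    print('render failed:', e)
env.close()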

@anuj1560 anuj1560 commented Dec 26, 2018

import gym

env = gym.make('CartPole-v0')
for _ in range(20):
    observation = env.reset()
    for t in range(100):
        env.render()
        print(observation)
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        if done:
            print("Episode finished after {} timesteps".format(t+1))
            break
env.close()

On running python3 cart.py, it gives:
Segmentation fault (core dumped)

Thanks

@sprakasdash sprakasdash commented Dec 28, 2018

@anuj1560 This link right here explains why the Segmentation fault (core dumped) error appears. In most cases it's because your code is using a lot of RAM (which is, by the way, the second answer given in the link). You did not post the code in the correct format, so here it is as I have rewritten it:

import gym
import time

env = gym.make('CartPole-v0')
for _ in range(20):
    observation = env.reset()
    for t in range(100):
        time.sleep(0.02)   # slow the loop down so the window stays visible
        env.render()
        print(observation)
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        if done:
            print("Episode finished after {} timesteps".format(t+1))
            break
env.close()

It gives me output normally and the video plays just fine.
Could you post your full code so I can figure it out?


@anuj1560 anuj1560 commented Jan 2, 2019

I also tried your code and the example given in gym, but all of them give Segmentation fault (core dumped).

@anuj1560 anuj1560 commented Jan 2, 2019

$ vi cart.py

import gym
import faulthandler
faulthandler.enable()
env = gym.make('CartPole-v0')
for i_episode in range(20):
    observation = env.reset()
    for t in range(100):
        env.render()
        print(observation)
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        if done:
            print("Episode finished after {} timesteps".format(t+1))
            break

env.close()

Output:

$ python3 cart.py
Fatal Python error: Segmentation fault

Current thread 0x0000007fa6799010 (most recent call first):
  File "/home/nvidia/.local/lib/python3.6/site-packages/pyglet/gl/lib_glx.py", line 74 in link_GL
  File "/home/nvidia/.local/lib/python3.6/site-packages/pyglet/gl/glx.py", line 440 in <module>
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap_external>", line 678 in exec_module
  File "<frozen importlib._bootstrap>", line 665 in _load_unlocked
  File "<frozen importlib._bootstrap>", line 955 in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 971 in _find_and_load
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1023 in _handle_fromlist
  File "/home/nvidia/.local/lib/python3.6/site-packages/pyglet/gl/xlib.py", line 16 in <module>
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap_external>", line 678 in exec_module
  File "<frozen importlib._bootstrap>", line 665 in _load_unlocked
  File "<frozen importlib._bootstrap>", line 955 in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 971 in _find_and_load
  File "/home/nvidia/.local/lib/python3.6/site-packages/pyglet/gl/__init__.py", line 221 in <module>
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap_external>", line 678 in exec_module
  File "<frozen importlib._bootstrap>", line 665 in _load_unlocked
  File "<frozen importlib._bootstrap>", line 955 in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 971 in _find_and_load
  File "/home/nvidia/packages/openai/gym/gym/envs/classic_control/rendering.py", line 23 in <module>
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap_external>", line 678 in exec_module
  File "<frozen importlib._bootstrap>", line 665 in _load_unlocked
  File "<frozen importlib._bootstrap>", line 955 in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 971 in _find_and_load
  File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1023 in _handle_fromlist
  File "/home/nvidia/packages/openai/gym/gym/envs/classic_control/cartpole.py", line 150 in render
  File "/home/nvidia/packages/openai/gym/gym/core.py", line 275 in render
  File "cart.py", line 8 in <module>
Segmentation fault (core dumped)
