OpenGL failed to open X11 server connection #890

Closed
hackenjoe opened this issue Jan 14, 2021 · 18 comments
@hackenjoe

Unfortunately I don't get any video input from my USB camera since OpenGL fails:

[screenshot]

I have tested the my-detection.py script and wanted to forward the camera stream to another computer via RTP. What did I miss?

[screenshot]

@dusty-nv (Owner)

Hi @hackenjoe , OpenGL failing to open the window is expected if you don't have a display attached, and will not end the program.

Can you try RTP with video-viewer or detectnet first? You would run it like this:

$ video-viewer /dev/video0 rtp://<remote-ip>:1234 
$ detectnet /dev/video0 rtp://<remote-ip>:1234 

https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-streaming.md#transmitting-rtp

To add this to my-detection.py, you would change this line:

display = jetson.utils.videoOutput("rtp://192.168.1.100:1234")    # set your PC's IP here

@hackenjoe (Author)

hackenjoe commented Jan 14, 2021

It works with the normal video-viewer:
video-viewer /dev/video0 rtp://<remote-ip>:1234
The frames are transmitted to my remote computer and I can view them in real time.

However, when I execute my-detection.py with your suggestion, I still get the two OpenGL errors and the program exits. You are right: I have not plugged in a monitor and run the Jetson Nano in headless mode.

Current code with your change:

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.4)
camera = jetson.utils.videoSource("/dev/video0")      
display = jetson.utils.videoOutput("rtp://192.168.1.85:1234") 

while display.IsStreaming():
	img = camera.Capture()
	detections = net.Detect(img)
	display.Render(img)
	display.SetStatus("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))

And I run it with python my-detection.py.

Is there a workaround or trick so that it works also in headless mode?

@dusty-nv (Owner)

I would try it with detectnet.py, since it is similar to my-detection.py, just with better command-line parsing:

detectnet.py /dev/video0 rtp://<remote-ip>:1234

I'm not sure why it wouldn't have taken effect in your edited my-detection.py; can you post the log?

@hackenjoe (Author)

It's working with detectnet.py, but strangely it is not working with my-detection.py. Do you mean the execution output, or where can I find the log?

@dusty-nv (Owner)

Hmm - are you sure the my-detection.py you are running is the copy that you updated with the rtp string?

Regarding the output, I meant the console output when you run it. You can capture it by running python3 my-detection.py | tee log.txt
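One caveat with a plain pipe: if some of the log lines are written to stderr rather than stdout, tee will miss them. Merging stderr into stdout first captures everything. This is a general shell pattern, shown here with a stand-in Python one-liner that writes to both streams (my-detection.py itself is not needed to demonstrate it):

```shell
# Merge stderr into stdout (2>&1) so tee captures all output;
# the stand-in command prints one line to each stream.
python3 -c 'import sys; print("stdout line"); print("stderr line", file=sys.stderr)' 2>&1 | tee log.txt
```

The same pattern applies to the real run: `python3 my-detection.py 2>&1 | tee log.txt`.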

@hackenjoe (Author)

Sorry for the late answer, but here we go:

jetson.inference -- detectNet loading build-in network 'ssd-mobilenet-v2'

detectNet -- loading detection network model from:
          -- model        networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
          -- input_blob   'Input'
          -- output_blob  'NMS'
          -- output_count 'NMS_1'
          -- class_labels networks/SSD-Mobilenet-v2/ssd_coco_labels.txt
          -- threshold    0.400000
          -- batch_size   1

[TRT]    TensorRT version 7.1.3
[TRT]    loading NVIDIA plugins...
[TRT]    Registered plugin creator - ::GridAnchor_TRT version 1
[TRT]    Registered plugin creator - ::NMS_TRT version 1
[TRT]    Registered plugin creator - ::Reorg_TRT version 1
[TRT]    Registered plugin creator - ::Region_TRT version 1
[TRT]    Registered plugin creator - ::Clip_TRT version 1
[TRT]    Registered plugin creator - ::LReLU_TRT version 1
[TRT]    Registered plugin creator - ::PriorBox_TRT version 1
[TRT]    Registered plugin creator - ::Normalize_TRT version 1
[TRT]    Registered plugin creator - ::RPROI_TRT version 1
[TRT]    Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT]    Could not register plugin creator -  ::FlattenConcat_TRT version 1
[TRT]    Registered plugin creator - ::CropAndResize version 1
[TRT]    Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT]    Registered plugin creator - ::Proposal version 1
[TRT]    Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT]    Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT]    Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT]    Registered plugin creator - ::Split version 1
[TRT]    Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT]    Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT]    detected model format - UFF  (extension '.uff')
[TRT]    desired precision specified for GPU: FASTEST
[TRT]    requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]    native precisions detected for GPU:  FP32, FP16
[TRT]    selecting fastest native precision for GPU:  FP16
[TRT]    attempting to open engine cache file networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.7103.GPU.FP16.engine
[TRT]    loading network plan from engine cache... networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.7103.GPU.FP16.engine
[TRT]    device GPU, loaded networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT]    Deserialize required 3128930 microseconds.
[TRT]
[TRT]    CUDA engine context initialized on device GPU:
[TRT]       -- layers       116
[TRT]       -- maxBatchSize 1
[TRT]       -- workspace    0
[TRT]       -- deviceMemory 35449856
[TRT]       -- bindings     3
[TRT]       binding 0
                -- index   0
                -- name    'Input'
                -- type    FP32
                -- in/out  INPUT
                -- # dims  3
                -- dim #0  3 (SPATIAL)
                -- dim #1  300 (SPATIAL)
                -- dim #2  300 (SPATIAL)
[TRT]       binding 1
                -- index   1
                -- name    'NMS'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1 (SPATIAL)
                -- dim #1  100 (SPATIAL)
                -- dim #2  7 (SPATIAL)
[TRT]       binding 2
                -- index   2
                -- name    'NMS_1'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1 (SPATIAL)
                -- dim #1  1 (SPATIAL)
                -- dim #2  1 (SPATIAL)
[TRT]
[TRT]    binding to input 0 Input  binding index:  0
[TRT]    binding to input 0 Input  dims (b=1 c=3 h=300 w=300) size=1080000
[TRT]    binding to output 0 NMS  binding index:  1
[TRT]    binding to output 0 NMS  dims (b=1 c=1 h=100 w=7) size=2800
[TRT]    binding to output 1 NMS_1  binding index:  2
[TRT]    binding to output 1 NMS_1  dims (b=1 c=1 h=1 w=1) size=4
[TRT]
[TRT]    device GPU, networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff initialized.
[TRT]    W = 7  H = 100  C = 1
[TRT]    detectNet -- maximum bounding boxes:  100
[TRT]    detectNet -- loaded 91 class info entries
[TRT]    detectNet -- number of object classes:  91
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video0
[gstreamer] gstCamera -- found v4l2 device: USB 2.0 Camera
[gstreamer] v4l2-proplist, device.path=(string)/dev/video0, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"USB\ 2.0\ Camera", v4l2.device.bus_info=(string)usb-70090000.xusb-2.1, v4l2.device.version=(uint)264588, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found 11 caps for v4l2 device /dev/video0
[gstreamer] [0] video/x-raw, format=(string)YUY2, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)5/1;
[gstreamer] [1] video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 10/1, 5/1 };
[gstreamer] [2] video/x-raw, format=(string)YUY2, width=(int)800, height=(int)600, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [3] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [4] video/x-raw, format=(string)YUY2, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [5] image/jpeg, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [6] image/jpeg, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [7] image/jpeg, width=(int)800, height=(int)600, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [8] image/jpeg, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [9] image/jpeg, width=(int)640, height=(int)360, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [10] image/jpeg, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] gstCamera -- selected device profile:  codec=mjpeg format=unknown width=1280 height=720
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video0 ! image/jpeg, width=(int)1280, height=(int)720 ! jpegdec ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully created device v4l2:///dev/video0
[video]  created gstCamera from v4l2:///dev/video0
------------------------------------------------
gstCamera video options:
------------------------------------------------
  -- URI: v4l2:///dev/video0
     - protocol:  v4l2
     - location:  /dev/video0
  -- deviceType: v4l2
  -- ioType:     input
  -- codec:      mjpeg
  -- width:      1280
  -- height:     720
  -- frameRate:  30.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
------------------------------------------------
[gstreamer] gstEncoder -- codec not specified, defaulting to H.264
[gstreamer] gstEncoder -- pipeline launch string:
[gstreamer] appsrc name=mysource is-live=true do-timestamp=true format=3 ! omxh264enc bitrate=4000000 ! video/x-h264 !  rtph264pay config-interval=1 ! udpsink host=192.168.178.93 port=1234 auto-multicast=true
[video]  created gstEncoder from rtp://192.168.178.93:1234
------------------------------------------------
gstEncoder video options:
------------------------------------------------
  -- URI: rtp://192.168.178.93:1234
     - protocol:  rtp
     - location:  192.168.178.93
     - port:      1234
  -- deviceType: ip
  -- ioType:     output
  -- codec:      h264
  -- width:      0
  -- height:     0
  -- frameRate:  30.000000
  -- bitRate:    4000000
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
------------------------------------------------
[OpenGL] failed to open X11 server connection.
[OpenGL] failed to create X11 Window.
nvidia@nvidia-desktop:~/jetson-inference/build/aarch64/bin$


@javadan

javadan commented Jul 2, 2021

+1 bump

@javadan

javadan commented Jul 2, 2021

You ever work this one out? Thanks

@dusty-nv (Owner)

dusty-nv commented Jul 2, 2021

Try creating the videoOutput interface before the videoSource interface.

@javadan

javadan commented Jul 2, 2021

Same error, it just happens earlier.

[OpenGL] failed to open X11 server connection.
[OpenGL] failed to create X11 Window.

and then

[video] created gstCamera from v4l2:///dev/video0

@dusty-nv (Owner)

dusty-nv commented Jul 2, 2021

Do you have a display attached to your Jetson? Are you able to run glxgears?

@javadan

javadan commented Jul 2, 2021

No. I have a TV I could plug in if I have to, but I would prefer to run headless via SSH, streaming with something like gstreamer, if possible.

$ glxgears
Error: couldn't open display (null)

I tried ssh -X tunnelling first, using the $DISPLAY param, but ran into driver issues because my local machine would apparently need the same nvidia drivers as the jetson. It seems like it's relying on X being there.

@dusty-nv (Owner)

dusty-nv commented Jul 2, 2021 via email

@javadan

javadan commented Jul 3, 2021

I understand that. The problem is that we're using an RTP syntax, expecting it to stream over a gstreamer pipeline, but instead it's trying to display to X11.

camera = jetson.utils.videoSource("/dev/video0")      
display = jetson.utils.videoOutput("rtp://192.168.1.85:1234") 

@opsec-infosec

opsec-infosec commented Aug 10, 2021

+1 for this issue. I even tried --headless as shown in detectnet.py.

So what are we missing here? If it works in detectnet.py, it should work in my-detection.py.

@opsec-infosec
opsec-infosec commented Aug 10, 2021

Fixed. The issue in the original code is that the while condition checks display.IsStreaming() before the first frame has been rendered, so it fails immediately. Enter the loop first, then break afterwards if not camera.IsStreaming() or not display.IsStreaming():

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("/dev/video0")      # '/dev/video0' for V4L2
display = jetson.utils.videoOutput("rtp://10.10.0.186:1234","--headless") # 'my_video.mp4' for file

while True:
    img = camera.Capture()
    detections = net.Detect(img)
    display.Render(img)
    display.SetStatus("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))
    if not camera.IsStreaming() or not display.IsStreaming():
        break
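The ordering matters because, per the behavior described above, the output stream only reports that it is streaming once the first frame has been pushed through the pipeline. A plain-Python mock illustrates this without any Jetson hardware (MockOutput is a hypothetical stand-in for jetson.utils.videoOutput, not the real API):

```python
class MockOutput:
    """Stand-in for videoOutput: the stream only starts
    once the first frame has been rendered."""
    def __init__(self):
        self.started = False
        self.frames = 0

    def Render(self, img):
        self.started = True          # pipeline opens on the first frame
        self.frames += 1

    def IsStreaming(self):
        return self.started

def run_precheck(output, num_frames=5):
    # Original my-detection.py pattern: check before the first Render
    while output.IsStreaming():      # False on the first pass, so the loop never runs
        output.Render("frame")
    return output.frames

def run_postcheck(output, num_frames=5):
    # Fixed pattern: render first, then check inside the loop
    while True:
        output.Render("frame")
        if not output.IsStreaming() or output.frames >= num_frames:
            break
    return output.frames

if __name__ == "__main__":
    print(run_precheck(MockOutput()))   # 0: exits before rendering anything
    print(run_postcheck(MockOutput()))  # 5: streams all frames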

@lilhoser

I believe this is still broken for detection/ssd/detectnet.py. The C++ version of the program works fine, but the same command in the Python version throws the X11 window error.

Fails:

detectnet.py --model=models/delivery/ssd-mobilenet.onnx --labels=models/delivery/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes rtsp://192.168.1.2:7447/wPXqmXadtJmll8FK display://0
[gstreamer] initialized gstreamer, version 1.16.3.0
[gstreamer] gstDecoder -- creating decoder for 192.168.1.2
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
[gstreamer] gstDecoder -- discovered video resolution: 1280x720  (framerate 24.000000 Hz)
[gstreamer] gstDecoder -- discovered video caps:  video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, level=(string)3.1, profile=(string)main, pixel-aspect-ratio=(fraction)1/1, width=(int)1280, height=(int)720, framerate=(fraction)24/1, interlace-mode=(string)progressive, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] rtspsrc location=rtsp://192.168.1.2:7447/wPXqmXadtJmll8FK latency=10 ! queue ! rtph264depay ! nvv4l2decoder name=decoder enable-max-performance=1 ! video/x-raw(memory:NVMM) ! nvvidconv name=vidconv ! video/x-raw ! appsink name=mysink sync=false
[video]  created gstDecoder from rtsp://192.168.1.2:7447/wPXqmXadtJmll8FK
------------------------------------------------
gstDecoder video options:
------------------------------------------------
  -- URI: rtsp://192.168.1.2:7447/wPXqmXadtJmll8FK
     - protocol:  rtsp
     - location:  192.168.1.2
     - port:      7447
  -- deviceType: ip
  -- ioType:     input
  -- codec:      H264
  -- codecType:  v4l2
  -- width:      1280
  -- height:     720
  -- frameRate:  24
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- latency     10
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution:  3840x2160
[OpenGL] glDisplay -- X window resolution:    3840x2160
[OpenGL] failed to create X11 Window.
Traceback (most recent call last):
  File "/usr/local/bin/detectnet.py", line 50, in <module>
    output = videoOutput(args.output, argv=sys.argv)
Exception: jetson.utils -- failed to create videoOutput device

Works:

detectnet --model=models/delivery/ssd-mobilenet.onnx --labels=models/delivery/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes rtsp://192.168.1.2:7447/wPXqmXadtJmll8FK display://0

@dusty-nv (Owner)

detectnet.py --model=models/delivery/ssd-mobilenet.onnx --labels=models/delivery/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes rtsp://192.168.1.2:7447/wPXqmXadtJmll8FK display://0

@lilhoser try running detectnet.py without display://0; then it will only attempt to create the X11/OpenGL window, and if that fails it will gracefully continue. On the other hand, if you explicitly specify display://0, it will not proceed when it can't create the window. It will always try to create the window unless you use the --headless flag.
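The policy described above can be summarized in a short sketch. This is a hypothetical illustration of the decision logic, not the actual jetson-utils implementation; window_behavior and its parameters are made up for this example:

```python
def window_behavior(outputs, headless=False, window_ok=True):
    """Sketch of the videoOutput window policy described above
    (hypothetical helper, not the real jetson.utils API)."""
    if headless:
        return "no window"                 # --headless: never create a display
    explicit = "display://0" in outputs
    if window_ok:
        return "window created"
    # Window creation failed (e.g. headless SSH session, no X server):
    if explicit:
        # An explicitly requested display is treated as fatal
        raise RuntimeError("failed to create videoOutput device")
    return "continue without window"       # implicit window: warn and carry on

# e.g. RTP output over SSH with no X server available:
print(window_behavior(["rtp://192.168.1.85:1234"], window_ok=False))
```

This matches the thread: detectnet.py failed hard only because display://0 was passed explicitly, while the implicit window attempt in the headless my-detection.py run just logs the two OpenGL errors and continues.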
