
Error occurs when I run it on macOS - NSInternalInconsistencyException #17

Closed
mesutpiskin opened this issue Mar 27, 2019 · 2 comments

I run `python3 elg_demo.py`, but I get the following error. How can I resolve it?

```
27/03 12:51 WARNING From /usr/local/lib/python3.7/site-packages/tensorflow/python/profiler/internal/flops_registry.py:243: tensor_shape_from_node_def_name (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.remove_training_nodes
Parsing Inputs...
27/03 12:51 INFO ------------------------------
27/03 12:51 INFO  Approximate Model Statistics
27/03 12:51 INFO ------------------------------
27/03 12:51 INFO FLOPS per input: 205,323,382.0
27/03 12:51 INFO Trainable Parameters: 150,813
27/03 12:51 INFO ------------------------------
2019-03-27 12:51:20.286 Python[8185:261347] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'NSWindow drag regions should only be invalidated on the Main Thread!'
*** First throw call stack:
(
	0   CoreFoundation                      0x00007fff483aeecd __exceptionPreprocess + 256
	1   libobjc.A.dylib                     0x00007fff7446a720 objc_exception_throw + 48
	2   CoreFoundation                      0x00007fff483c895d -[NSException raise] + 9
	3   AppKit                              0x00007fff458c9c8e -[NSWindow(NSWindow_Theme) _postWindowNeedsToResetDragMarginsUnlessPostingDisabled] + 324
	4   AppKit                              0x00007fff458c707c -[NSWindow _initContent:styleMask:backing:defer:contentView:] + 1488
	5   AppKit                              0x00007fff4598795b -[NSPanel _initContent:styleMask:backing:defer:contentView:] + 50
	6   AppKit                              0x00007fff458c6aa6 -[NSWindow initWithContentRect:styleMask:backing:defer:] + 45
	7   AppKit                              0x00007fff45987910 -[NSPanel initWithContentRect:styleMask:backing:defer:] + 64
	8   QtGui                               0x000000010c107028 -[QCocoaPanel initWithContentRect:styleMask:backing:defer:] + 74
	9   QtGui                               0x000000010c10dbbf -[NSWindow(QWidgetIntegration) qt_initWithQWidget:contentRect:styleMask:] + 81
	10  QtGui                               0x000000010c0fee1c _ZL20qt_mac_create_windowP7QWidgetjmRK5QRect + 427
	11  QtGui                               0x000000010c0feaa1 _ZN14QWidgetPrivate18qt_create_root_winEv + 65
	12  QtGui                               0x000000010c1008d3 _ZN14QWidgetPrivate10create_sysElbb + 1079
	13  QtGui                               0x000000010c196d25 _ZN7QWidget6createElbb + 587
	14  QtGui                               0x000000010c1959f6 _ZN14QWidgetPrivate4initEP7QWidget6QFlagsIN2Qt10WindowTypeEE + 362
	15  QtGui                               0x000000010c1957e0 _ZN7QWidgetC2EPS_6QFlagsIN2Qt10WindowTypeEE + 122
	16  QtGui                               0x000000010c118cca _ZN14QDesktopWidgetC2Ev + 32
	17  QtGui                               0x000000010c15c183 _ZN12QApplication7desktopEv + 53
	18  QtGui                               0x000000010c114a05 _Z9flipPointRK7CGPoint + 32
	19  QtGui                               0x000000010c0f4961 _ZN7QCursor3posEv + 47
	20  QtGui                               0x000000010c16c0c5 _ZN11QMouseEventC2EN6QEvent4TypeERK6QPointN2Qt11MouseButtonE6QFlagsIS6_ES7_INS5_16KeyboardModifierEE + 79
	21  QtGui                               0x000000010c681426 _ZN20QGraphicsViewPrivateC2Ev + 340
	22  QtGui                               0x000000010c6849d5 _ZN13QGraphicsViewC2EP7QWidget + 37
	23  cv2.cpython-37m-darwin.so           0x000000010715ed3f _ZN15DefaultViewPortC2EP8CvWindowi + 31
	24  cv2.cpython-37m-darwin.so           0x000000010715a3ad _ZN8CvWindowC2E7QStringi + 397
	25  cv2.cpython-37m-darwin.so           0x00000001071523d3 _ZN11GuiReceiver12createWindowE7QStringi + 227
	26  cv2.cpython-37m-darwin.so           0x000000010715222c cvNamedWindow + 540
	27  cv2.cpython-37m-darwin.so           0x0000000107154961 _ZN11GuiReceiver9showImageE7QStringPv + 161
	28  cv2.cpython-37m-darwin.so           0x000000010715480c cvShowImage + 572
	29  cv2.cpython-37m-darwin.so           0x000000010714cf4b _ZN2cv6imshowERKNS_6StringERKNS_11_InputArrayE + 475
	30  cv2.cpython-37m-darwin.so           0x000000010674d6a4 _ZL18pyopencv_cv_imshowP7_objectS0_S0_ + 404
	31  Python                              0x0000000105a9c344 _PyMethodDef_RawFastCallKeywords + 545
	32  Python                              0x0000000105a9b8af _PyCFunction_FastCallKeywords + 44
	33  Python                              0x0000000105b31b2b call_function + 636
	34  Python                              0x0000000105b2a771 _PyEval_EvalFrameDefault + 7016
	35  Python                              0x0000000105b32432 _PyEval_EvalCodeWithName + 1835
	36  Python                              0x0000000105a9b4dd _PyFunction_FastCallDict + 441
	37  Python                              0x0000000105b2aa88 _PyEval_EvalFrameDefault + 7807
	38  Python                              0x0000000105a9bc8a function_code_fastcall + 112
	39  Python                              0x0000000105b31ba0 call_function + 753
	40  Python                              0x0000000105b2a758 _PyEval_EvalFrameDefault + 6991
	41  Python                              0x0000000105a9bc8a function_code_fastcall + 112
	42  Python                              0x0000000105b31ba0 call_function + 753
	43  Python                              0x0000000105b2a758 _PyEval_EvalFrameDefault + 6991
	44  Python                              0x0000000105a9bc8a function_code_fastcall + 112
	45  Python                              0x0000000105a9c60d _PyObject_Call_Prepend + 150
	46  Python                              0x0000000105a9b9bd PyObject_Call + 136
	47  Python                              0x0000000105b982ca t_bootstrap + 71
	48  libsystem_pthread.dylib             0x00007fff7572c305 _pthread_body + 126
	49  libsystem_pthread.dylib             0x00007fff7572f26f _pthread_start + 70
	50  libsystem_pthread.dylib             0x00007fff7572b415 thread_start + 13
)
libc++abi.dylib: terminating with uncaught exception of type NSException
[1]    8185 abort      python3 elg_demo.py
```

@fbobital commented Mar 29, 2019

I had the same problem on my laptop (Python 3.7.3 + OpenCV 4.0.1 + macOS Mojave 10.14.3).
This error happens when the code tries to execute OpenCV's highgui commands (e.g. cv.imshow() and cv.waitKey()) from within threads.
On macOS you are ONLY allowed to work with user interfaces on the main thread.
You have to remove the highgui commands from the "visualize_thread".
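
For reference, here is a minimal, self-contained sketch of that pattern (hypothetical code, not from this repo; the worker just generates random NumPy frames): frames are produced on a background thread, and every cv.imshow()/cv.waitKey() call stays on the main thread.

```python
#!/usr/bin/env python3
"""Minimal sketch: keep all OpenCV highgui calls on the main thread."""
import queue
import threading

import cv2 as cv
import numpy as np

frame_queue = queue.Queue(maxsize=2)
stop_event = threading.Event()


def _worker():
    # Produce frames in a background thread; never call cv.imshow() here.
    while not stop_event.is_set():
        frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
        try:
            frame_queue.put(frame, timeout=0.1)
        except queue.Full:
            pass


if __name__ == '__main__':
    threading.Thread(target=_worker, daemon=True).start()
    while not stop_event.is_set():
        try:
            frame = frame_queue.get(timeout=1.0)
        except queue.Empty:
            continue
        # Display on the main thread only, as macOS requires.
        cv.imshow('vis', frame)
        if cv.waitKey(1) & 0xFF == ord('q'):
            stop_event.set()
    cv.destroyAllWindows()
```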

Take a look at my modification of elg_demo.py:

```python
#!/usr/bin/env python3
"""Main script for gaze direction inference from webcam feed."""
import argparse
import os
import queue
import threading
import time

import coloredlogs
import cv2 as cv
import numpy as np
import tensorflow as tf

from datasources import Video, Webcam
from models import ELG
import util.gaze

# State shared between the visualization thread and the main thread:
# the visualization thread prepares an annotated frame in `image` and
# sets `is_shown`; only the main thread calls cv.imshow() on it.
finish_thread = False
is_shown = False
image = None

if __name__ == '__main__':

    # Set global log level
    parser = argparse.ArgumentParser(description='Demonstration of landmarks localization.')
    parser.add_argument('-v', type=str, help='logging level', default='info',
                        choices=['debug', 'info', 'warning', 'error', 'critical'])
    parser.add_argument('--from_video', type=str, help='Use this video path instead of webcam')
    parser.add_argument('--record_video', type=str, help='Output path of video of demonstration.')
    parser.add_argument('--fullscreen', action='store_true')
    parser.add_argument('--headless', action='store_true')

    parser.add_argument('--fps', type=int, default=60, help='Desired sampling rate of webcam')
    parser.add_argument('--camera_id', type=int, default=0, help='ID of webcam to use')

    args = parser.parse_args()
    coloredlogs.install(
        datefmt='%d/%m %H:%M',
        fmt='%(asctime)s %(levelname)s %(message)s',
        level=args.v.upper(),
    )

    # Check if GPU is available
    from tensorflow.python.client import device_lib
    session_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
    gpu_available = False
    try:
        gpus = [d for d in device_lib.list_local_devices(config=session_config)
                if d.device_type == 'GPU']
        gpu_available = len(gpus) > 0
    except Exception:
        pass

    # Initialize Tensorflow session
    tf.logging.set_verbosity(tf.logging.INFO)
    with tf.Session(config=session_config) as session:

        # Declare some parameters
        batch_size = 2

        # Define webcam stream data source
        # Change data_format='NHWC' if not using CUDA
        if args.from_video:
            assert os.path.isfile(args.from_video)
            data_source = Video(args.from_video,
                                tensorflow_session=session, batch_size=batch_size,
                                data_format='NCHW' if gpu_available else 'NHWC',
                                eye_image_shape=(108, 180))
        else:
            data_source = Webcam(tensorflow_session=session, batch_size=batch_size,
                                 camera_id=args.camera_id, fps=args.fps,
                                 data_format='NCHW' if gpu_available else 'NHWC',
                                 eye_image_shape=(36, 60))

        # Define model
        if args.from_video:
            model = ELG(
                session, train_data={'videostream': data_source},
                first_layer_stride=3,
                num_modules=3,
                num_feature_maps=64,
                learning_schedule=[
                    {
                        'loss_terms_to_optimize': {'dummy': ['hourglass', 'radius']},
                    },
                ],
            )
        else:
            model = ELG(
                session, train_data={'videostream': data_source},
                first_layer_stride=1,
                num_modules=2,
                num_feature_maps=32,
                learning_schedule=[
                    {
                        'loss_terms_to_optimize': {'dummy': ['hourglass', 'radius']},
                    },
                ],
            )

        # Record output frames to file if requested
        if args.record_video:
            video_out = None
            video_out_queue = queue.Queue()
            video_out_should_stop = False
            video_out_done = threading.Condition()

            def _record_frame():
                global video_out
                last_frame_time = None
                out_fps = 30
                out_frame_interval = 1.0 / out_fps
                while not video_out_should_stop:
                    frame_index = video_out_queue.get()
                    if frame_index is None:
                        break
                    assert frame_index in data_source._frames
                    frame = data_source._frames[frame_index]['bgr']
                    h, w, _ = frame.shape
                    if video_out is None:
                        video_out = cv.VideoWriter(
                            args.record_video, cv.VideoWriter_fourcc(*'H264'),
                            out_fps, (w, h),
                        )
                    now_time = time.time()
                    if last_frame_time is not None:
                        time_diff = now_time - last_frame_time
                        while time_diff > 0.0:
                            video_out.write(frame)
                            time_diff -= out_frame_interval
                    last_frame_time = now_time
                video_out.release()
                with video_out_done:
                    video_out_done.notify_all()
            record_thread = threading.Thread(target=_record_frame, name='record')
            record_thread.daemon = True
            record_thread.start()

        # Begin visualization thread
        inferred_stuff_queue = queue.Queue()

        def _visualize_output():
            global is_shown
            global image
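            # Fix for macOS: this thread never calls cv.imshow() itself; it
            # stores the frame in `image` and raises `is_shown`, and the main
            # thread does the actual display.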

            last_frame_index = 0
            last_frame_time = time.time()
            fps_history = []
            all_gaze_histories = []

            while True:
                # If no output to visualize, show unannotated frame
                if inferred_stuff_queue.empty():
                    next_frame_index = last_frame_index + 1
                    if next_frame_index in data_source._frames:
                        next_frame = data_source._frames[next_frame_index]
                        if 'faces' in next_frame and len(next_frame['faces']) == 0:
                            if not args.headless:
                                image = next_frame['bgr'].copy()
                                is_shown = True
                            if args.record_video:
                                video_out_queue.put_nowait(next_frame_index)
                            last_frame_index = next_frame_index

                    continue

                # Get output from neural network and visualize
                output = inferred_stuff_queue.get()
                bgr = None
                for j in range(batch_size):
                    frame_index = output['frame_index'][j]
                    if frame_index not in data_source._frames:
                        continue
                    frame = data_source._frames[frame_index]

                    # Decide which landmarks are usable
                    heatmaps_amax = np.amax(output['heatmaps'][j, :].reshape(-1, 18), axis=0)
                    can_use_eye = np.all(heatmaps_amax > 0.7)
                    can_use_eyelid = np.all(heatmaps_amax[0:8] > 0.75)
                    can_use_iris = np.all(heatmaps_amax[8:16] > 0.8)

                    start_time = time.time()
                    eye_index = output['eye_index'][j]
                    bgr = frame['bgr']
                    eye = frame['eyes'][eye_index]
                    eye_image = eye['image']
                    eye_side = eye['side']
                    eye_landmarks = output['landmarks'][j, :]
                    eye_radius = output['radius'][j][0]
                    if eye_side == 'left':
                        eye_landmarks[:, 0] = eye_image.shape[1] - eye_landmarks[:, 0]
                        eye_image = np.fliplr(eye_image)

                    # Embed eye image and annotate for picture-in-picture
                    eye_upscale = 2
                    eye_image_raw = cv.cvtColor(cv.equalizeHist(eye_image), cv.COLOR_GRAY2BGR)
                    eye_image_raw = cv.resize(eye_image_raw, (0, 0), fx=eye_upscale, fy=eye_upscale)
                    eye_image_annotated = np.copy(eye_image_raw)
                    if can_use_eyelid:
                        cv.polylines(
                            eye_image_annotated,
                            [np.round(eye_upscale*eye_landmarks[0:8]).astype(np.int32)
                                                                     .reshape(-1, 1, 2)],
                            isClosed=True, color=(255, 255, 0), thickness=1, lineType=cv.LINE_AA,
                        )
                    if can_use_iris:
                        cv.polylines(
                            eye_image_annotated,
                            [np.round(eye_upscale*eye_landmarks[8:16]).astype(np.int32)
                                                                      .reshape(-1, 1, 2)],
                            isClosed=True, color=(0, 255, 255), thickness=1, lineType=cv.LINE_AA,
                        )
                        cv.drawMarker(
                            eye_image_annotated,
                            tuple(np.round(eye_upscale*eye_landmarks[16, :]).astype(np.int32)),
                            color=(0, 255, 255), markerType=cv.MARKER_CROSS, markerSize=4,
                            thickness=1, line_type=cv.LINE_AA,
                        )
                    face_index = int(eye_index / 2)
                    eh, ew, _ = eye_image_raw.shape
                    v0 = face_index * 2 * eh
                    v1 = v0 + eh
                    v2 = v1 + eh
                    u0 = 0 if eye_side == 'left' else ew
                    u1 = u0 + ew
                    bgr[v0:v1, u0:u1] = eye_image_raw
                    bgr[v1:v2, u0:u1] = eye_image_annotated

                    # Visualize preprocessing results
                    frame_landmarks = (frame['smoothed_landmarks']
                                       if 'smoothed_landmarks' in frame
                                       else frame['landmarks'])
                    for f, face in enumerate(frame['faces']):
                        for landmark in frame_landmarks[f][:-1]:
                            cv.drawMarker(bgr, tuple(np.round(landmark).astype(np.int32)),
                                          color=(0, 0, 255), markerType=cv.MARKER_STAR,
                                          markerSize=2, thickness=1, line_type=cv.LINE_AA)
                        cv.rectangle(
                            bgr, tuple(np.round(face[:2]).astype(np.int32)),
                            tuple(np.round(np.add(face[:2], face[2:])).astype(np.int32)),
                            color=(0, 255, 255), thickness=1, lineType=cv.LINE_AA,
                        )

                    # Transform predictions
                    eye_landmarks = np.concatenate([eye_landmarks,
                                                    [[eye_landmarks[-1, 0] + eye_radius,
                                                      eye_landmarks[-1, 1]]]])
                    eye_landmarks = np.asmatrix(np.pad(eye_landmarks, ((0, 0), (0, 1)),
                                                       'constant', constant_values=1.0))
                    eye_landmarks = (eye_landmarks *
                                     eye['inv_landmarks_transform_mat'].T)[:, :2]
                    eye_landmarks = np.asarray(eye_landmarks)
                    eyelid_landmarks = eye_landmarks[0:8, :]
                    iris_landmarks = eye_landmarks[8:16, :]
                    iris_centre = eye_landmarks[16, :]
                    eyeball_centre = eye_landmarks[17, :]
                    eyeball_radius = np.linalg.norm(eye_landmarks[18, :] -
                                                    eye_landmarks[17, :])

                    # Smooth and visualize gaze direction
                    num_total_eyes_in_frame = len(frame['eyes'])
                    if len(all_gaze_histories) != num_total_eyes_in_frame:
                        all_gaze_histories = [list() for _ in range(num_total_eyes_in_frame)]
                    gaze_history = all_gaze_histories[eye_index]
                    if can_use_eye:
                        # Visualize landmarks
                        cv.drawMarker(  # Eyeball centre
                            bgr, tuple(np.round(eyeball_centre).astype(np.int32)),
                            color=(0, 255, 0), markerType=cv.MARKER_CROSS, markerSize=4,
                            thickness=1, line_type=cv.LINE_AA,
                        )
                        # cv.circle(  # Eyeball outline
                        #     bgr, tuple(np.round(eyeball_centre).astype(np.int32)),
                        #     int(np.round(eyeball_radius)), color=(0, 255, 0),
                        #     thickness=1, lineType=cv.LINE_AA,
                        # )

                        # Draw "gaze"
                        # from models.elg import estimate_gaze_from_landmarks
                        # current_gaze = estimate_gaze_from_landmarks(
                        #     iris_landmarks, iris_centre, eyeball_centre, eyeball_radius)
                        i_x0, i_y0 = iris_centre
                        e_x0, e_y0 = eyeball_centre
                        theta = -np.arcsin(np.clip((i_y0 - e_y0) / eyeball_radius, -1.0, 1.0))
                        phi = np.arcsin(np.clip((i_x0 - e_x0) / (eyeball_radius * -np.cos(theta)),
                                                -1.0, 1.0))
                        current_gaze = np.array([theta, phi])
                        gaze_history.append(current_gaze)
                        gaze_history_max_len = 10
                        if len(gaze_history) > gaze_history_max_len:
                            gaze_history = gaze_history[-gaze_history_max_len:]
                        util.gaze.draw_gaze(bgr, iris_centre, np.mean(gaze_history, axis=0),
                                            length=120.0, thickness=1)
                    else:
                        gaze_history.clear()

                    if can_use_eyelid:
                        cv.polylines(
                            bgr, [np.round(eyelid_landmarks).astype(np.int32).reshape(-1, 1, 2)],
                            isClosed=True, color=(255, 255, 0), thickness=1, lineType=cv.LINE_AA,
                        )

                    if can_use_iris:
                        cv.polylines(
                            bgr, [np.round(iris_landmarks).astype(np.int32).reshape(-1, 1, 2)],
                            isClosed=True, color=(0, 255, 255), thickness=1, lineType=cv.LINE_AA,
                        )
                        cv.drawMarker(
                            bgr, tuple(np.round(iris_centre).astype(np.int32)),
                            color=(0, 255, 255), markerType=cv.MARKER_CROSS, markerSize=4,
                            thickness=1, line_type=cv.LINE_AA,
                        )

                    dtime = 1e3*(time.time() - start_time)
                    if 'visualization' not in frame['time']:
                        frame['time']['visualization'] = dtime
                    else:
                        frame['time']['visualization'] += dtime

                    def _dtime(before_id, after_id):
                        return int(1e3 * (frame['time'][after_id] - frame['time'][before_id]))

                    def _dstr(title, before_id, after_id):
                        return '%s: %dms' % (title, _dtime(before_id, after_id))

                    if eye_index == len(frame['eyes']) - 1:
                        # Calculate timings
                        frame['time']['after_visualization'] = time.time()
                        fps = int(np.round(1.0 / (time.time() - last_frame_time)))
                        fps_history.append(fps)
                        if len(fps_history) > 60:
                            fps_history = fps_history[-60:]
                        fps_str = '%d FPS' % np.mean(fps_history)
                        last_frame_time = time.time()
                        fh, fw, _ = bgr.shape
                        cv.putText(bgr, fps_str, org=(fw - 110, fh - 20),
                                   fontFace=cv.FONT_HERSHEY_DUPLEX, fontScale=0.8,
                                   color=(0, 0, 0), thickness=1, lineType=cv.LINE_AA)
                        cv.putText(bgr, fps_str, org=(fw - 111, fh - 21),
                                   fontFace=cv.FONT_HERSHEY_DUPLEX, fontScale=0.79,
                                   color=(255, 255, 255), thickness=1, lineType=cv.LINE_AA)
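                        # Hand the frame off to the main thread for display
                        # rather than calling cv.imshow() from this thread.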
                        if not args.headless:
                            image = bgr.copy()
                            is_shown = True
                        last_frame_index = frame_index

                        # Record frame?
                        if args.record_video:
                            video_out_queue.put_nowait(frame_index)

                        if finish_thread:
                            return

                        # Print timings
                        if frame_index % 60 == 0:
                            latency = _dtime('before_frame_read', 'after_visualization')
                            processing = _dtime('after_frame_read', 'after_visualization')
                            timing_string = ', '.join([
                                _dstr('read', 'before_frame_read', 'after_frame_read'),
                                _dstr('preproc', 'after_frame_read', 'after_preprocessing'),
                                'infer: %dms' % int(frame['time']['inference']),
                                'vis: %dms' % int(frame['time']['visualization']),
                                'proc: %dms' % processing,
                                'latency: %dms' % latency,
                            ])
                            print('%08d [%s] %s' % (frame_index, fps_str, timing_string))

        visualize_thread = threading.Thread(target=_visualize_output, name='visualization')
        visualize_thread.daemon = True
        visualize_thread.start()

        if args.fullscreen:
            cv.namedWindow('vis', cv.WND_PROP_FULLSCREEN)
            cv.setWindowProperty('vis', cv.WND_PROP_FULLSCREEN, cv.WINDOW_FULLSCREEN)

        # Do inference forever
        infer = model.inference_generator()
        while True:

            output = next(infer)
            for frame_index in np.unique(output['frame_index']):
                if frame_index not in data_source._frames:
                    continue
                frame = data_source._frames[frame_index]
                if 'inference' in frame['time']:
                    frame['time']['inference'] += output['inference_time']
                else:
                    frame['time']['inference'] = output['inference_time']
            inferred_stuff_queue.put_nowait(output)

            # Display happens here, on the main thread: macOS raises
            # NSInternalInconsistencyException when highgui runs elsewhere.
            if is_shown:
                cv.imshow('vis', image)
                is_shown = False
                if cv.waitKey(1) & 0xFF == ord('q'):
                    finish_thread = True

            if not visualize_thread.is_alive():
                break

            if not data_source._open:
                break

        # Close video recording
        if args.record_video and video_out is not None:
            video_out_should_stop = True
            video_out_queue.put_nowait(None)
            with video_out_done:
                video_out_done.wait()


@mesutpiskin (Author)

@fbobital Thanks for that, it works 👍
