Multiple cameras support not working for mapping #477

Closed
psykokwak-com opened this issue Apr 19, 2022 · 8 comments
Labels
bug_report closed_for_stale Issue closed for inactivity Stale

Comments

@psykokwak-com

psykokwak-com commented Apr 19, 2022

Preliminary Checks

  • This issue is not a duplicate. Before opening a new issue, please search existing issues.
  • This issue is not a question, feature request, or anything other than a bug report directly related to this project.

Description

I have two ZED2i cameras and I am trying to use them together (mounted as a rigid body) in a real-time mapping process.

Steps to Reproduce

I updated the real-time mapping sample to handle two cameras:

```cpp
/**********************************************************************************
 ** This sample demonstrates how to capture a live 3D reconstruction of a scene  **
 ** as a fused point cloud and display the result in an OpenGL window.           **
 **********************************************************************************/

// ZED includes
#include <sl/Camera.hpp>

// Sample includes
#include "GLViewer.hpp"

#include <opencv2/opencv.hpp>

// Standard includes (needed for std::chrono and std::min)
#include <algorithm>
#include <chrono>

// Using std and sl namespaces
using namespace std;
using namespace sl;

int main(int argc, char **argv) {
    Camera zed1;
    Camera zed2;

    // Set configuration parameters for the ZED
    InitParameters init_parameters;
    init_parameters.depth_mode = DEPTH_MODE::ULTRA;    
    init_parameters.coordinate_system = COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP; // OpenGL's coordinate system is right_handed    

    // Open the camera
    init_parameters.input.setFromCameraID(0);
    zed1.open(init_parameters);

    init_parameters.input.setFromCameraID(1);
    zed2.open(init_parameters);

    auto camera_infos = zed1.getCameraInformation();

    // Point cloud viewer
    GLViewer viewer;

    // Initialize point cloud viewer
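    // Note: both cameras will fuse their chunks into this single shared map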
    FusedPointCloud map;
    GLenum errgl = viewer.init(argc, argv, camera_infos.camera_configuration.calibration_parameters.left_cam, &map, camera_infos.camera_model);

    // Setup and start positional tracking
    Pose pose;
    POSITIONAL_TRACKING_STATE tracking_state = POSITIONAL_TRACKING_STATE::OFF;
    PositionalTrackingParameters positional_tracking_parameters;
    positional_tracking_parameters.enable_area_memory = false;
    zed1.enablePositionalTracking(positional_tracking_parameters);
    zed2.enablePositionalTracking(positional_tracking_parameters);


    // Set spatial mapping parameters
    SpatialMappingParameters spatial_mapping_parameters;
    // Request a Point Cloud
    spatial_mapping_parameters.map_type = SpatialMappingParameters::SPATIAL_MAP_TYPE::FUSED_POINT_CLOUD;
    // Set the mapping range; the resolution is set accordingly (the higher the range, the lower the resolution)
    spatial_mapping_parameters.set(SpatialMappingParameters::MAPPING_RANGE::SHORT);
    spatial_mapping_parameters.set(SpatialMappingParameters::MAPPING_RESOLUTION::HIGH);
    // Request partial updates only (only the latest updated chunks need to be redrawn)
    spatial_mapping_parameters.use_chunk_only = true;
    // Start the spatial mapping
    zed1.enableSpatialMapping(spatial_mapping_parameters);
    zed2.enableSpatialMapping(spatial_mapping_parameters);
    
    // Timestamp of the last fused point cloud requested
    chrono::high_resolution_clock::time_point ts_last1;
    chrono::high_resolution_clock::time_point ts_last2;

    // Setup runtime parameters
    RuntimeParameters runtime_parameters;
    // Use a low depth confidence threshold to avoid introducing noise in the constructed model
    runtime_parameters.confidence_threshold = 50;

    auto resolution = camera_infos.camera_configuration.resolution;

    // Define the display resolution and make sure it does not exceed the image resolution
    Resolution display_resolution(min((int)resolution.width, 720), min((int)resolution.height, 404));

    // Create a Mat to contain the left image and its opencv ref
    Mat image_zed1(display_resolution, MAT_TYPE::U8_C4);
    Mat image_zed2(display_resolution, MAT_TYPE::U8_C4);
    cv::Mat image_zed_ocv1(image_zed1.getHeight(), image_zed1.getWidth(), CV_8UC4, image_zed1.getPtr<sl::uchar1>(MEM::CPU));
    cv::Mat image_zed_ocv2(image_zed2.getHeight(), image_zed2.getWidth(), CV_8UC4, image_zed2.getPtr<sl::uchar1>(MEM::CPU));

    // Start the main loop
    while (viewer.isAvailable()) {
        // Grab a new image
        if (zed1.grab(runtime_parameters) == ERROR_CODE::SUCCESS) {
            // Retrieve the left image
            zed1.retrieveImage(image_zed1, VIEW::LEFT, MEM::CPU, display_resolution);
            // Retrieve the camera pose data
            tracking_state = zed1.getPosition(pose);
            viewer.updatePose(pose, tracking_state);

            if (tracking_state == POSITIONAL_TRACKING_STATE::OK) {                
                auto duration = chrono::duration_cast<chrono::milliseconds>(chrono::high_resolution_clock::now() - ts_last1).count();
                
                // Ask for a fused point cloud update if 500ms have elapsed since last request
                if((duration > 500) && viewer.chunksUpdated()) {
                    // Ask for a point cloud refresh
                    zed1.requestSpatialMapAsync();
                    ts_last1 = chrono::high_resolution_clock::now();
                }
                
                // If the point cloud is ready to be retrieved
                if(zed1.getSpatialMapRequestStatusAsync() == ERROR_CODE::SUCCESS) {                    
                    zed1.retrieveSpatialMapAsync(map);
                    viewer.updateChunks();
                }
            }
            cv::imshow("ZED View 1", image_zed_ocv1);
        }
        if (zed2.grab(runtime_parameters) == ERROR_CODE::SUCCESS) {
          // Retrieve the left image
          zed2.retrieveImage(image_zed2, VIEW::LEFT, MEM::CPU, display_resolution);
          // Retrieve the camera pose data
          tracking_state = zed2.getPosition(pose);
          //viewer.updatePose(pose, tracking_state);

          if (tracking_state == POSITIONAL_TRACKING_STATE::OK) {
            auto duration = chrono::duration_cast<chrono::milliseconds>(chrono::high_resolution_clock::now() - ts_last2).count();

            // Ask for a fused point cloud update if 500ms have elapsed since last request
            if ((duration > 500) && viewer.chunksUpdated()) {
              // Ask for a point cloud refresh
              zed2.requestSpatialMapAsync();
              ts_last2 = chrono::high_resolution_clock::now();
            }

            // If the point cloud is ready to be retrieved
            if (zed2.getSpatialMapRequestStatusAsync() == ERROR_CODE::SUCCESS) {
              zed2.retrieveSpatialMapAsync(map);
              viewer.updateChunks();
            }
          }
          cv::imshow("ZED View 2", image_zed_ocv2);
        }
        cv::waitKey(15);
    }

    // Save generated point cloud
    //map.save("MyFusedPointCloud");

    // Free allocated memory before closing the camera
    image_zed1.free();
    image_zed2.free();

    // Close the ZED
    zed1.close();
    zed2.close();

    return 0;
}
```

Expected Result

The OpenGL window shows the real-time map built from both cameras, and two OpenCV windows show the left image of each camera.

Actual Result

Mapping happens with only the second camera.
Only the second camera's OpenCV window is shown.
The console prints the following errors in a loop:

CUDA error at C:\builds\sl\ZEDKit\lib\src\sl_zed\SpatialMappingHandler.cpp:539 code=400(cudaErrorInvalidResourceHandle) "cudaEventRecord(ev_cpu_data, strm)"
CUDA error at C:\builds\sl\ZEDKit\lib\src\sl_zed\SpatialMappingHandler.cpp:539 code=400(cudaErrorInvalidResourceHandle) "cudaEventRecord(ev_cpu_data, strm)"
CUDA error at C:\builds\sl\ZEDKit\lib\src\sl_zed\SpatialMappingHandler.cpp:539 code=400(cudaErrorInvalidResourceHandle) "cudaEventRecord(ev_cpu_data, strm)"
CUDA error at C:\builds\sl\ZEDKit\lib\src\sl_zed\SpatialMappingHandler.cpp:539 code=400(cudaErrorInvalidResourceHandle) "cudaEventRecord(ev_cpu_data, strm)"
CUDA error at C:\builds\sl\ZEDKit\lib\src\sl_zed\SpatialMappingHandler.cpp:539 code=400(cudaErrorInvalidResourceHandle) "cudaEventRecord(ev_cpu_data, strm)"
CUDA error at C:\builds\sl\ZEDKit\lib\src\sl_zed\SpatialMappingHandler.cpp:539 code=400(cudaErrorInvalidResourceHandle) "cudaEventRecord(ev_cpu_data, strm)"
CUDA error at C:\builds\sl\ZEDKit\lib\src\sl_zed\SpatialMappingHandler.cpp:539 code=400(cudaErrorInvalidResourceHandle) "cudaEventRecord(ev_cpu_data, strm)"
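
Editor's note: the repro above ignores the return codes of open() and the enable*() calls, so a per-camera failure is silently swallowed. A minimal sketch of checking them, using a hypothetical helper check() that is not part of the original report:

```cpp
#include <sl/Camera.hpp>
#include <iostream>

// Hypothetical helper: report any non-SUCCESS return code and signal failure.
static bool check(sl::ERROR_CODE err, const char *what) {
    if (err != sl::ERROR_CODE::SUCCESS) {
        std::cerr << what << " failed: " << err << std::endl;
        return false;
    }
    return true;
}

// Usage in the repro above (assuming the same variable names):
//   if (!check(zed2.open(init_parameters), "zed2.open")) return EXIT_FAILURE;
//   if (!check(zed2.enableSpatialMapping(spatial_mapping_parameters),
//              "zed2.enableSpatialMapping")) return EXIT_FAILURE;
```

This would show whether the second camera's mapping ever started, or whether one of the enable calls already fails before the CUDA errors appear.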

ZED Camera model

ZED2i

Environment

Config:
ZED SDK: 3.7.2
CUDA: 11.5.119

Processor: i9-11980HK
RAM: 32 GB
GPU: NVIDIA RTX 3080 16 GB

OS: Windows 10 Pro

Anything else?

No response

@psykokwak-com
Author

psykokwak-com commented Apr 19, 2022

OK. For an unknown reason, if I run each ZED camera in its own thread, parallel mapping works... a little.

Here is my code:

```cpp
///////////////////////////////////////////////////////////////////////////
//
// Copyright (c) 2022, STEREOLABS.
//
// All rights reserved.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
///////////////////////////////////////////////////////////////////////////

/**********************************************************************************
 ** This sample demonstrates how to capture a live 3D reconstruction of a scene  **
 ** as a fused point cloud and display the result in an OpenGL window.           **
 **********************************************************************************/

#include <sl/Camera.hpp>
#include <opencv2/opencv.hpp>

// Standard includes (needed for std::thread, std::mutex, std::chrono, std::cout)
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

#include "GLViewer.hpp"


bool _enableMapping = true;

bool _running = true;
void run(int index, std::mutex &mutex, GLViewer &viewer, sl::FusedPointCloud &map, sl::Transform tf) {
  sl::Camera zed;

  // Set configuration parameters for the ZED
  sl::InitParameters init_parameters;
  init_parameters.depth_mode = sl::DEPTH_MODE::PERFORMANCE;
  init_parameters.coordinate_system = sl::COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP; // OpenGL's coordinate system is right_handed
  init_parameters.coordinate_units = sl::UNIT::MILLIMETER;
  init_parameters.camera_resolution = sl::RESOLUTION::HD720;
  //init_parameters.camera_fps = 60;
  init_parameters.depth_minimum_distance = 200;
  init_parameters.sdk_verbose = true;


  // Open the camera
  init_parameters.input.setFromCameraID(index);
  zed.open(init_parameters);

  // Setup and start positional tracking
  sl::PositionalTrackingParameters positional_tracking_parameters;
  positional_tracking_parameters.enable_area_memory = true;
  positional_tracking_parameters.enable_imu_fusion = true;

  positional_tracking_parameters.initial_world_transform = tf;
  zed.enablePositionalTracking(positional_tracking_parameters);

  if (_enableMapping) {
    // Set spatial mapping parameters
    sl::SpatialMappingParameters spatial_mapping_parameters;
    // Request a Point Cloud
    spatial_mapping_parameters.map_type = sl::SpatialMappingParameters::SPATIAL_MAP_TYPE::FUSED_POINT_CLOUD;
    // Set the mapping range; the resolution is set accordingly (the higher the range, the lower the resolution)
    spatial_mapping_parameters.set(sl::SpatialMappingParameters::MAPPING_RANGE::LONG);
    spatial_mapping_parameters.set(sl::SpatialMappingParameters::MAPPING_RESOLUTION::HIGH);
    // Request partial updates only (only the latest updated chunks need to be redrawn)
    spatial_mapping_parameters.use_chunk_only = true;
    // Start the spatial mapping
    zed.enableSpatialMapping(spatial_mapping_parameters);
  }

  std::chrono::high_resolution_clock::time_point ts_last;

  // Setup runtime parameters
  sl::RuntimeParameters runtime_parameters;
  runtime_parameters.confidence_threshold = 50;

  auto camera_infos = zed.getCameraInformation();
  auto resolution = camera_infos.camera_configuration.resolution;

  // Define the display resolution and make sure it does not exceed the image resolution
  sl::Resolution display_resolution(std::min((int)resolution.width, 720), std::min((int)resolution.height, 404));

  // Create a Mat to contain the left image and its opencv ref
  sl::Mat image_zed(display_resolution, sl::MAT_TYPE::U8_C4);
  cv::Mat image_zed_ocv(image_zed.getHeight(), image_zed.getWidth(), CV_8UC4, image_zed.getPtr<sl::uchar1>(sl::MEM::CPU));

  std::cout << "Camera " << index << " Type:" << camera_infos.camera_model << " SN:" << camera_infos.serial_number << std::endl;

  while (_running) {
    if (zed.grab(runtime_parameters) == sl::ERROR_CODE::SUCCESS) {
      // Retrieve the left image
      zed.retrieveImage(image_zed, sl::VIEW::LEFT, sl::MEM::CPU, display_resolution);

      // Retrieve the camera pose data
      sl::Pose pose;
      auto tracking_state = zed.getPosition(pose);
      viewer.updatePose(index, pose, tracking_state);

      if (_enableMapping && tracking_state == sl::POSITIONAL_TRACKING_STATE::OK) {
        auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::high_resolution_clock::now() - ts_last).count();
        
        /*
        mutex.lock();
        zed.extractWholeSpatialMap(map);
        viewer.updateChunks();
        mutex.unlock();
        */

        // Ask for a fused point cloud update as soon as the viewer has consumed the previous chunks (duration > 0: effectively no throttling)
        if ((duration > 0) && viewer.chunksUpdated()) {
          // Ask for a point cloud refresh
          zed.requestSpatialMapAsync();
          ts_last = std::chrono::high_resolution_clock::now();
        }

        // If the point cloud is ready to be retrieved
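        // (try_lock keeps this grab loop from blocking while the other
        // camera's thread is updating the shared map)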
        if (zed.getSpatialMapRequestStatusAsync() == sl::ERROR_CODE::SUCCESS && viewer.chunksUpdated() && mutex.try_lock()) {
          zed.retrieveSpatialMapAsync(map);
          viewer.updateChunks();
          mutex.unlock();
        }
      }
      cv::imshow("ZED View : " + std::to_string(index) + " : " + std::to_string(camera_infos.serial_number), image_zed_ocv);
    }
    cv::waitKey(10);
  }

  image_zed.free();
  zed.close();
}

int main(int argc, char **argv)
{
  // Initialize point cloud viewer
  sl::FusedPointCloud map;

  // Point cloud viewer
  GLViewer viewer;
  viewer.init(argc, argv, &map, sl::MODEL::ZED2i);

  std::mutex zedmutex;
  std::thread zed1thread(run, 0, std::ref(zedmutex), std::ref(viewer), std::ref(map), sl::Transform(sl::Rotation(), sl::Translation(0, 0, -75)));
  std::thread zed2thread(run, 1, std::ref(zedmutex), std::ref(viewer), std::ref(map), sl::Transform(sl::Rotation(M_PI, sl::Translation(0, 1, 0)), sl::Translation(0, 0, 75)));

  while (viewer.isAvailable())
    sl::sleep_ms(1);

  _running = false;
  if (zed1thread.joinable()) zed1thread.join();
  if (zed2thread.joinable()) zed2thread.join();

  // Save generated point cloud
  //map.save("MyFusedPointCloud");

  return 0;
}
```

It seems sl::FusedPointCloud does not handle multiple retrieveSpatialMapAsync() (or extractWholeSpatialMap()) calls from different cameras very well: the updated chunks get corrupted, as you can see in the following video:

(Video link: Click Me)
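
Editor's note: one way to test that hypothesis, assuming the threaded sample above, is to hold the shared mutex across the entire request/retrieve/update sequence so the two threads can never touch the map concurrently. A sketch, not verified against the SDK's threading model:

```cpp
// Inside the per-camera loop: serialize all access to the shared map.
// Request, wait, retrieve and update under one lock, so chunks from the
// two cameras never interleave.
{
    std::lock_guard<std::mutex> lock(mutex);
    zed.requestSpatialMapAsync();
    // Busy-wait until the async request completes (illustrative only;
    // a real implementation should bound this wait)
    while (zed.getSpatialMapRequestStatusAsync() != sl::ERROR_CODE::SUCCESS)
        sl::sleep_ms(1);
    zed.retrieveSpatialMapAsync(map);
    viewer.updateChunks();
}
```

If the corruption disappears with this fully serialized version, that would support the theory that concurrent retrievals into one sl::FusedPointCloud are the problem.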

@P-yver
Member

P-yver commented Apr 21, 2022

Hi,
You should be able to use the first version of the sample if you first enable both Positional tracking and Spatial Mapping for the 1st camera, then enable both tracking and mapping for the 2nd camera.
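
A minimal sketch of that ordering, reusing the variable names from the first sample (illustrative only):

```cpp
// Fully initialize camera 1 (tracking, then mapping) before touching
// camera 2, instead of interleaving the enable calls between the cameras.
zed1.enablePositionalTracking(positional_tracking_parameters);
zed1.enableSpatialMapping(spatial_mapping_parameters);

zed2.enablePositionalTracking(positional_tracking_parameters);
zed2.enableSpatialMapping(spatial_mapping_parameters);
```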

Thank you for reporting this issue; we are working on a fix.

@psykokwak-com
Author

Hi all,
I just updated the SDK to the latest version (3.7.4) and I still have the issue, even though the changelog for this version says it has been fixed:

  • Fixed spatial mapping invalid behavior when using multiple camera simultaneously

I tested with both versions of the sample (non-threaded and threaded).

Is there something else wrong?

@github-actions

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment otherwise it will be automatically closed in 5 days

@github-actions github-actions bot added Stale and removed Stale labels Jun 20, 2022
@github-actions github-actions bot added Stale and removed Stale labels Jul 21, 2022
@github-actions github-actions bot added Stale closed_for_stale Issue closed for inactivity labels Aug 22, 2022
@celikemir

Hi! Did you try the multi-camera spatial mapping option that was uploaded 2 months ago? If you have already tried it, can you please share your configuration file?

@ttsesm

ttsesm commented Mar 27, 2024

@celikemir did you manage to run the multi-camera spatial mapping without issues?

I am also interested in this configuration.
