Merged · Changes from all commits · 26 commits
2 changes: 2 additions & 0 deletions .gitignore
@@ -37,3 +37,5 @@

#compiled python caches
*.pyc
.vscode/
build/
57 changes: 45 additions & 12 deletions README.md
@@ -1,17 +1,50 @@
# Human Pose Estimation Core Library
A library of functions for human pose estimation with event-driven cameras
_A library of functions for human pose estimation with event-driven cameras_

* Compile and link the [core-library](https://github.com/event-driven-robotics/hpe-core/tree/main/core) in your application to use the event-based human pose estimation functions including:
* joint detectors (neural network implementations including OpenPose)
* joint velocity estimation
* asynchronous pose fusion
* [Example applications](https://github.com/event-driven-robotics/hpe-core/tree/main/example) are available to show how to connect the HPE blocks
* [Evaluate](https://github.com/event-driven-robotics/hpe-core/tree/main/evaluation) your performance using the scripts in evaluation
* Scripts to convert datasets are also available. Please contribute to the ever-growing datasets for event-driven HPE! For curated datasets compatible with the example applications, see [hosted datasets](https://github.com/event-driven-robotics/hpe-core) TODO
Please contribute your event-driven HPE applications and datasets to enable comparisons!

```
@INPROCEEDINGS{9845526,
author={Carissimi, Nicolò and Goyal, Gaurvi and Di Pietro, Franco and Bartolozzi, Chiara and Glover, Arren},
booktitle={2022 8th International Conference on Event-Based Control, Communication, and Signal Processing (EBCCSP)},
title={Unlocking Static Images for Training Event-driven Neural Networks},
year={2022},
pages={1-4},
doi={10.1109/EBCCSP56922.2022.9845526}}
```

![demo](https://user-images.githubusercontent.com/9265237/216939617-703fc4ef-b4b9-4cbc-aab8-a87c04822be2.gif)

### [Core (C++)](https://github.com/event-driven-robotics/hpe-core/tree/main/core)

Compile and link the core C++ library in your application to use the event-based human pose estimation functions including:
* joint detectors: OpenPose applied to greyscale images formed from events
* joint velocity estimation at >500 Hz
* asynchronous pose fusion of joint velocity and detection
* event representation methods compatible with convolutional neural networks

### PyCore

Importable Python libraries for joint detection
* event-based MoveNet: MoveEnet, built on PyTorch

### [Examples](https://github.com/event-driven-robotics/hpe-core/tree/main/example)

Example applications are available that demonstrate how to connect and use the HPE-core libraries

### [Evaluation](https://github.com/event-driven-robotics/hpe-core/tree/main/evaluation)

Python scripts can be used to compare different combinations of joint detectors and velocity estimators

### Datasets and Conversion

Scripts to convert datasets into common formats, facilitating valid comparisons between methods

### Authors

@arrenglover
@nicolocarissimi
@gaurvigoyal
@francodipietro
> [@arrenglover](https://www.linkedin.com/in/arren-glover/)
> [@nicolocarissimi](https://www.linkedin.com/in/nicolocarissimi/)
> [@gaurvigoyal](https://www.linkedin.com/in/gaurvigoyal/)
> [@francodipietro](https://www.linkedin.com/in/francodipietrophd/)

[Event-driven Perception for Robotics](https://www.edpr.iit.it/research)
4 changes: 0 additions & 4 deletions core/CMakeLists.txt
@@ -52,18 +52,14 @@ if(OpenCV_FOUND)
endif()

set( folder_source
detection_wrappers/detection.cpp
motion_estimation/jointMotionEstimator.cpp
event_representations/representations.cpp
motion_estimation/motion_estimation.cpp
fusion/fusion.cpp
)

set( folder_header
utility/utility.h
detection_wrappers/detection.h
motion_estimation/motion_estimation.h
motion_estimation/jointMotionEstimator.h
event_representations/representations.h
fusion/fusion.h
)
7 changes: 5 additions & 2 deletions core/README.md
@@ -5,11 +5,13 @@ C++ src for compiling a library of functions useful for HPE:
* human pose estimation from events
* joint tracking
* skeleton tracking
* fusion
* position and velocity fusion
* on-line visualisation

# Build the library

The library uses CMake for configuration and building.

* Clone the repository: e.g. `git clone https://github.com/event-driven-robotics/hpe-core.git`
* `cd hpe-core/core`
* `mkdir build && cd build`
@@ -25,5 +27,6 @@ Using cmake, add the following to your `CMakeLists.txt`
* `find_package(hpe-core)`
* `target_link_libraries(${PROJECT_NAME} PRIVATE hpe-core::hpe-core)`

# Example


Examples of how to install dependencies, build the library, and link it in your own project can be found [here](https://github.com/event-driven-robotics/hpe-core/tree/main/example/op_detector_example_module). The Dockerfile can be used to automatically build the required dependencies in an isolated container, or, if you prefer, you can follow its instructions to install the dependencies natively on your machine. [Interested to learn more about Docker?](https://www.docker.com/)
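
For a flavour of the API, a minimal program linking against the library might look like the sketch below (the include path, namespace placement, model directory, and image file are assumptions; `OpenPoseDetector` and its `init`/`detect`/`stop` methods are from this repository):

```
// a minimal sketch, assuming this hypothetical include path and a local
// OpenPose model directory
#include <hpe-core/openpose_detector.h>
#include <opencv2/opencv.hpp>

int main()
{
    hpecore::OpenPoseDetector detector;
    if (!detector.init("/path/to/openpose/models", "BODY_25", "256"))
        return 1;

    // any image, e.g. a greyscale representation formed from events
    cv::Mat frame = cv::imread("frame.png");
    hpecore::skeleton13 joints = detector.detect(frame);

    detector.stop();
    return 0;
}
```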
40 changes: 0 additions & 40 deletions core/detection_wrappers/detection.cpp

This file was deleted.

40 changes: 0 additions & 40 deletions core/detection_wrappers/detection.h

This file was deleted.

90 changes: 85 additions & 5 deletions core/detection_wrappers/openpose_detector.cpp
@@ -38,10 +38,11 @@ bool OpenPoseDetector::init(std::string models_path, std::string pose_model, std
// description of detector's parameters can be found at
// https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/include/openpose/flags.hpp
const auto poseMode = op::flagsToPoseMode(1); // body keypoints detection
const auto poseModel = op::flagsToPoseModel(op::String(pose_model)); // 'BODY_25', 25 keypoints, fastest with CUDA, most accurate, includes foot keypoints
// 'COCO', 18 keypoints
// 'MPI', 15 keypoints, least accurate model but fastest on CPU
// 'MPI_4_layers', 15 keypoints, even faster but less accurate
const auto poseModel = op::flagsToPoseModel(op::String(pose_model));
// 'BODY_25', 25 keypoints, fastest with CUDA, most accurate, includes foot keypoints
// 'COCO', 18 keypoints
// 'MPI', 15 keypoints, least accurate model but fastest on CPU
// 'MPI_4_layers', 15 keypoints, even faster but less accurate
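// for illustration (an assumed example call, not part of this diff):
// init(models_path, "MPI", "256") would select the 15-keypoint model above;
// model_size is passed as a string and defaults to "256"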

// TODO: set poseJointsNum (get number of joints from openpose mapping)

@@ -148,4 +149,83 @@ skeleton13 OpenPoseDetector::detect(cv::Mat &input)

pose13 = hpecore::coco18_to_dhp19(pose);
return pose13;
}
}


void openposethread::run()
{
    while (true)
    {
        // block until update() supplies a new image and releases the mutex
        m.lock();
        if (stop)
            return;
        auto t0 = std::chrono::high_resolution_clock::now();
        pose.pose = detop.detect(image);
        auto t1 = std::chrono::high_resolution_clock::now();
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0);
        latency = std::chrono::duration<double>(us).count() * 1e3; // milliseconds
        data_ready = true;
    }
}


bool openposethread::init(std::string model_path, std::string model_name, std::string model_size)
{
    // initialise OpenPose
    if (!detop.init(model_path, model_name, model_size))
        return false;

    // make sure the thread won't start until an image is provided
    m.lock();

    // make sure that providing an image will start things for the first go
    data_ready = true;

    // start the thread
    th = std::thread([this] { this->run(); });

    return true;
}

void openposethread::close()
{
    stop = true;
    // release run() if it is blocked waiting for a new image
    m.try_lock();
    m.unlock();
    // join the worker so destruction of the std::thread member is safe
    if (th.joinable())
        th.join();
}

bool openposethread::update(cv::Mat next_image, double image_timestamp, hpecore::stampedPose &previous_result)
{
    // if no data is ready (still processing) do nothing
    if (!data_ready)
        return false;

    // else hand the completed detection to the caller; its timestamp was
    // set when the corresponding image was submitted
    previous_result = pose;

    // stamp the new image; this timestamp is returned together with the
    // image's detection on a later call
    pose.timestamp = image_timestamp;

    // prepare the image for the next detection: normalise symmetrically
    // around zero into [0, 1], convert to 8-bit, and expand greyscale to
    // the 3-channel BGR format expected by OpenPose
    static cv::Mat img_u8, img_float;
    next_image.copyTo(img_float);
    double min_val, max_val;
    cv::minMaxLoc(img_float, &min_val, &max_val);
    max_val = std::max(fabs(max_val), fabs(min_val));
    img_float /= (2 * max_val);
    img_float += 0.5;
    img_float.convertTo(img_u8, CV_8U, 255, 0);
    cv::cvtColor(img_u8, image, cv::COLOR_GRAY2BGR);

    // and unlock the processing thread to run on the new image
    m.try_lock();
    m.unlock();
    data_ready = false;
    return true;
}

double openposethread::getLatency()
{
    return latency;
}
24 changes: 24 additions & 0 deletions core/detection_wrappers/openpose_detector.h
@@ -46,4 +46,28 @@ class OpenPoseDetector {
void stop();
};

class openposethread
{
private:
    std::thread th;
    hpecore::OpenPoseDetector detop;
    hpecore::stampedPose pose{0.0, -1.0, 0.0};
    cv::Mat image;
    double latency = 0;

    bool stop{false};
    bool data_ready{true};
    std::mutex m;

    void run();

public:
    bool init(std::string model_path, std::string model_name,
              std::string model_size = "256");
    void close();
    bool update(cv::Mat next_image, double image_timestamp,
                hpecore::stampedPose &previous_result);
    double getLatency();
};

}
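
A possible usage pattern for the wrapper added above, as a sketch (the include path, namespace placement, model directory, and image source are assumptions; `init`, `update`, `getLatency`, and `close` are the methods declared in this header):

```
#include <iostream>
#include <opencv2/opencv.hpp>
#include <hpe-core/openpose_detector.h> // hypothetical include path

int main()
{
    hpecore::openposethread opt; // assumes the class sits in namespace hpecore
    if (!opt.init("/path/to/openpose/models", "BODY_25"))
        return 1;

    hpecore::stampedPose detection;
    for (double ts = 0.0; ts < 10.0; ts += 0.033)
    {
        // placeholder for a real event representation at timestamp ts
        cv::Mat eros(240, 320, CV_32F);
        cv::randu(eros, -1.0f, 1.0f);

        // non-blocking: returns true only when the previous detection has
        // finished, in which case it is copied into 'detection'
        if (opt.update(eros, ts, detection))
            std::cout << "pose @ " << detection.timestamp << " s, latency "
                      << opt.getLatency() << " ms" << std::endl;
    }
    opt.close();
    return 0;
}
```

Because update() hands the mutex back to the detection thread and returns immediately, the caller can keep processing events at full rate while OpenPose runs in the background.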