The sample code for a large-scale crowd counting dataset, NWPU-Crowd.
Updated Sep 24, 2020 - Python
A Collective Knowledge crowd-tuning extension that lets users crowdsource their experiments (using portable Collective Knowledge workflows), such as performance benchmarking, auto-tuning, and machine learning, across diverse Linux, Windows, macOS, and Android platforms provided by volunteers. Includes a demo of DNN crowd-benchmarking and crowd-tuning.
A cross-platform Python client for the CodeReef.ai portal to manage portable workflows, reusable automation actions, software-detection plugins, meta-packages, and dashboards for crowd-benchmarking.
News: this code has been moved to the CK framework.