As described in the docs for foggycam2, you need to specify three parameters to access feeds. Follow the steps in this guide to get access to the configuration for your cameras. You need these three fields: issueToken, cookies, and apiKey.
Clone the repo and install its dependencies (I hit some errors, so I installed a few packages manually):
git clone https://github.com/nextshell/foggycam2.git
cd foggycam2
pip install -r src/requirements.txt
sudo apt install ffmpeg imagemagick
pip install nvidia-ml-py3
pip install typed-ast
pip install astroid
Copy the template config file and add the three parameters you got previously:
cp _config.json config.json
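The three fields from above go into config.json. As a minimal sketch with placeholder values (the exact layout comes from the _config.json template, so defer to it if the structure differs):
{
  "issueToken": "https://accounts.google.com/o/oauth2/iframerpc?action=issueToken&...",
  "cookies": "OCAK=...; SID=...; HSID=...",
  "apiKey": "<your-api-key>"
}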
Amcrest cameras support streaming video and still images; today Cambanzo uses still images. To enable them, set the StillUrl, User, and Pass fields in the config file. Examples:
StillUrl = http://192.168.1.1/cgi-bin/snapshot.cgi?channel=1
User = user
Pass = pass
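Before wiring these into the config, it's worth confirming the snapshot URL works. Amcrest firmware typically uses HTTP digest auth (that detail is an assumption about your firmware):
curl --digest -u user:pass "http://192.168.1.1/cgi-bin/snapshot.cgi?channel=1" -o snapshot.jpg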
RTSP stream access can also be enabled for Amcrest cameras, though Cambanzo does not use it yet. For a camera with IP 192.168.1.1, the RTSP stream URL for the default channel is rtsp://192.168.1.1/cam/realmonitor?channel=1&subtype=0.
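To eyeball the stream, ffplay (installed alongside ffmpeg above) can open it directly; embedding credentials in the URL is an assumption about your camera's auth setup:
ffplay -rtsp_transport tcp "rtsp://user:pass@192.168.1.1/cam/realmonitor?channel=1&subtype=0"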
TODO: Experiment with openipc-firmware for Wyze cameras.
I'm using the darknet neural network library and its example code. I have a darknet fork updated to work with OpenCV 4 and configured to use a GPU and cuDNN.
Get it and build it:
git clone git@github.com:andypayne/darknet.git
cd darknet
make
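If you build from a checkout that isn't already configured, GPU, cuDNN, and OpenCV support are toggled by the flags at the top of darknet's Makefile; set these before running make:
GPU=1
CUDNN=1
OPENCV=1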
I modified the config file for use in testing. I tried training on my system with several config options, and it always consumed all available GPU memory. My config: yolov3.cfg
Download yolov3.weights.
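The pretrained weights are hosted on the YOLO site:
wget https://pjreddie.com/media/files/yolov3.weights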
Use the path to the modified config file and the downloaded weights file.
Running object detection:
./darknet detect cfg/yolov3.cfg yolov3.weights ../foggycam2/src/capture/<camera_id>/images/<image_name>.jpg -out ./out_image
The annotated output image will be a file named out_image.jpg.
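To annotate everything a camera has captured, a simple shell loop over the foggycam2 capture directory works; the out_ naming is my own convention here, not something darknet imposes. Note that darknet reloads the weights on every invocation, so this is slow for large batches:
for img in ../foggycam2/src/capture/<camera_id>/images/*.jpg; do
  ./darknet detect cfg/yolov3.cfg yolov3.weights "$img" -out "out_$(basename "$img" .jpg)"
done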
Finally, copy the example config for Cambanzo itself:
cp config_example.ini config.ini
Then edit config.ini and point it to the locations of the dependencies set up above.
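The key names below are hypothetical (check config_example.ini for the real ones); the point is that the config records where foggycam2, darknet, the cfg, and the weights live:
[paths]
foggycam2 = /home/me/foggycam2
darknet = /home/me/darknet
yolo_cfg = /home/me/darknet/cfg/yolov3.cfg
yolo_weights = /home/me/darknet/yolov3.weights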