Fix memory leak when using a frame access on Jetson device #25

Closed

denisvmedyantsev opened this issue Nov 8, 2022 · 1 comment

Labels: bug (Something isn't working)

@denisvmedyantsev (Contributor)
There is a memory leak in the drawbin element. The problem was found on a Xavier NX. Run any module with a source adapter and use the htop tool to watch the RES column of the running module.

Consider refactoring the drawbin element to make it more lightweight (convert it to a simple pyfunc element rather than a bin) and dropping support for the location property.
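
As a complement to watching htop manually, the RES value can be sampled programmatically while the module runs. A minimal sketch, assuming the third-party psutil package and a known PID of the module process (both are illustrative and not part of Savant):

```python
import time

import psutil  # third-party package, assumed to be installed for this sketch

MODULE_PID = 12345  # hypothetical: PID of the running module process


def watch_rss(pid: int, interval_s: float = 5.0) -> None:
    """Print the resident set size (htop's RES column) every few seconds."""
    proc = psutil.Process(pid)
    while True:
        rss_mib = proc.memory_info().rss / (1024 * 1024)
        print(f"RES: {rss_mib:.1f} MiB")
        time.sleep(interval_s)


if __name__ == "__main__":
    watch_rss(MODULE_PID)  # steady growth over time indicates the leak
```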

denisvmedyantsev added the bug label and self-assigned this issue on Nov 8, 2022

@denisvmedyantsev (Contributor, Author)

There is a memory leak on the Jetson Xavier NX when using pyds.get_nvds_buf_surface. I've posted the issue on the NVIDIA forum: https://forums.developer.nvidia.com/t/memory-leak-on-xavier-nx/234438
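
The leak comes from mapping the frame with pyds.get_nvds_buf_surface and never releasing the mapping. The commits below pair every map with an unmap ("#25 build pyds with unmap_nvds_buf_surface binding", "#25 use context manager to automatically unmap NvDsBufSurface"). A minimal sketch of that idea, assuming a pyds build that exposes the unmap_nvds_buf_surface binding; the helper below is illustrative and not Savant's actual code:

```python
from contextlib import contextmanager

import pyds  # DeepStream Python bindings, assumed built with unmap_nvds_buf_surface


@contextmanager
def nvds_buf_surface(gst_buffer, batch_id: int):
    """Map one frame of a batched NvDsBufSurface as a numpy array and
    guarantee it is unmapped afterwards, even if processing raises."""
    frame = pyds.get_nvds_buf_surface(hash(gst_buffer), batch_id)
    try:
        yield frame
    finally:
        pyds.unmap_nvds_buf_surface(hash(gst_buffer), batch_id)


# Leaking pattern on Jetson: map the surface and never release the mapping.
#   frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
#
# Safer pattern used by the fix:
#   with nvds_buf_surface(gst_buffer, frame_meta.batch_id) as frame:
#       frame[...] = 0  # draw on the RGBA frame in place
```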

denisvmedyantsev changed the title from "Fix memory leak in drawbin on Jetson devices" to "Fix memory leak when using a frame access on Jetson device" on Nov 16, 2022
denisvmedyantsev linked a pull request on Nov 18, 2022 that will close this issue
tomskikh added commits that referenced this issue on Dec 5, Dec 15, Dec 22, Dec 23, and Dec 26, 2022
bwsw added a commit that referenced this issue May 22, 2023
* #20 add video converter to scale frame to the original resolution

* SavantBoost library added as part of the entire framework

* Simplified docker for jetson deepstream 6.1

* #25 draw on frames after nvstreamdemux

* #25 don't retrieve frame image in NvDsPyFuncPlugin

* #25 build pyds with unmap_nvds_buf_surface binding

* #25 use context manager to automatically unmap NvDsBufSurface

* #25 remove drawbin gst element

* #25 update PyFunc documentation

* #37 use hardware jpeg encoder when it's available

* added element name for pyfunc and extended drawbin

* fix bug with division by zero

* fixed bug with adding rbbox to frame meta

* reformat

* extended comment for rendered_objects

* fixed bug after merge

* #43 move building pyds out of separate dockerfile

* #43 add opencv module "savant"

* #43 fix mapping to EGL in DSCudaMemory

* #43 add python bindings for DSCudaMemory in savantboost

* #43 add helpers for cv2.cuda.GpuMat

* Update README.md

* Update architecture.md

* added support for an MPEG stream demuxer (#54)

* added support for an MPEG stream demuxer

* Fixed a grammatical mistake

* #48 add FrameParameters class for processing frame config

* #48 respect rows alignment in CUDA memory

* #48 move drawing on frames before demuxer

* #48 add padding to frame parameters

* #48 fix scaling/shifting bboxes

* Add files via upload

* Update README.md

* Update README.md

* 53 move to ds 62 (#64)

Update DS to 6.2 and Savant to 0.2.0.

* Update README.md

* #43 implement benchmarks for drawing on frames

* implement draw element artist using gpumat opencv (#68)

* added gpumat based artist, added bbox drawing

* added full implementation for opencv artist

* fixed pyfunc config crash

* added cpu blur, fixed roi for gpu blur

* Removed Cairo artist code, added docs

* removed extra reference

* fixed import name

* file rename

* fixed alphacomp mode, fixing overlay+padding wip

* fixed corner cases in add_overlay()

* change back to ghcr registry

* removed outdated build arg

* Frame RoI change (#69)

* Refactor nvds utils.

* Support frame roi property.

* quality and bitrate configuration for savant output frame (#70)

* fixed typo in debug logs

* added docs for output_frame module parameter

* added encoder elements properties lists

* Removed mention of gstreamer encoder elements

* NvDsFrameMeta is extended and returns frame tags (#62)

The NvDsFrameMeta has been extended to include frame tags and other video frame metadata. The pipeline metadata now includes the source metadata, and the source video adapter reads and adds frame metadata to the frames it sends.

* Always-On Low Latency Streaming Sink (RTSP) (#74)

* Move draw func before output meta preparation. (#81)

* 57 add an option to avoid scaling the frames back at the end of the pipeline (#78)

* NvDsFrameMeta is extended and returns frame tags

* Taking savant_frame_meta only when accessing tags

* Refactoring: all incoming meta information is transferred to the DeepStream meta.

* Source metadata is added to the pipeline metadata, and the source video adapter reads and adds frame metadata to the frames it sends.

* fixed bugs with scaling

* fixed convert to srt

* Input and output metadata are given in absolute coordinates

* #59 filter zeromq messages by source ID

* several updates to README.md, architecture.md, and publications-samples.md

* Optimize the default selector with numba (#82)

* Fix numpy data types in model postprocessing.

* Add numba.

* Finalize Pub/Sub, Req/Rep, Dealer/Router configurations (#79)

* #51 add dealer/router zeromq sockets

* #51 set default zeromq sockets for source to dealer/router

* #51 add docstring for RoutingIdFilter

* #76 transfer multimedia object outside the avro message (#83)

* Fix numpy data types in rapid converter.

* Disable nvjpegdec on pipeline level.

* Support source EOS event callback in pyfunc.

* Fix EOS event propagation in pyfunc.

* Fixed JSON serialization in console sink. (#92)

* technical preview release demo pipeline (#94)

* Fix RTSP source adapter (#98)

* Filter caps on RTSP source adapters
* Filter out non-IDR frames at the start of the stream

* Drop EOS on nvstreammux when all sources finished (#97)

* Support rounded rect in artist (#101)

* Support rounded rect in artist.

* Review fix.

* Remove Jetson Nano support. (#102)

* Update README.md

* 107 describe provided adapters (#111)

* numerous incremental updates to adapters.md, README.md, and architecture.md

* Create README.md

* Fix adapters parameters.

* Add and replace preview images (peoplenet-blur-demo.webp)

* Update github workflows (#114)

* several README.md updates

* Updated demo pipeline configuration (#121)

Changed live demo to output RGBA frames, added demo_performance pipeline

* Add 0.2.0 environment compatibility test script (#124)

* several README.md updates

* Create docker compose demo run (#125)

* Add compose file with dGPU images

* Add compose file with jetson images

* Add start delay for rtsp source

---------

Co-authored-by: Bitworks LLC <bwsw@users.noreply.github.com>

* several README.md updates and a file upload

* Add test image removal in env check (#129)

* Create runtime-configuration.md

* Update runtime-configuration.md

* several README.md updates

* Update runtime-configuration.md

* Fix input objects missing parents (#134)

* Add docker compose config for nvidia_car_classification (#137)

* Rename deepstream_test2 to nvidia_car_classification

* Track jpegs in LFS

* Add peoplenet demo stub image to LFS

* Move stub img

* Add stub image for 720p

* Add docker compose configs for car classification

* Add draw func for car classification

* Change default frame output to raw-rgba

* Add README entry, preview file

* 130 demonstrate mog2 background removal with opencv cuda (#141)

* added background remover sample based on MOG2

---------

Co-authored-by: Bitworks LLC <bwsw@users.noreply.github.com>

* Mog2 publishing (#144)

Update docs

* Update person_face_matching.py (#148)

* Update person_face_matching.py

* Add parent assignment on pyds meta level (#140)

* Update nvidia-car-classification (#138)

* Add fullscale webp for nvidia-car-classification

* Update README

* #115 configure sending EOS at the end of each file (#152)

* #136 implement video loop source adapter (#143)

* #126 calculate DTS for encoded frames in RTSP source (#159)

* fixed restart argument to conform #156 (#163)

* ok

* Update README.md

* #128 encode ZeroMQ socket type and bind/connection to endpoint (#162)

* #151 embed mediamtx into always-on-rtsp sink (#167)

* changed d/r to p/s (#170)

* Add line crossing demo module (#157)

* Add line crossing module wip
* Add conditional inference skip, reformat
* Add Graphite + Grafana stats
* Remove track id from graphite metrics
* Update to count stats
* Update README, docs, stale tracks removing
* Remove dependency on savant-samples image
* Change ROUTER/DEALER to PUB/SUB
* Fix preview file link
* Update samples/line_crossing/README.md

Co-authored-by: Denis Medyantsev <44088010+denisvmedyantsev@users.noreply.github.com>

---------

Co-authored-by: Denis Medyantsev <44088010+denisvmedyantsev@users.noreply.github.com>

* Changed main cfg version to be Savant-flavor for car sample (#174)

* #178 skip non-keyframes after each generated EOS in avro_video_demux (#180)

* Update README.md

* Update README.md

* Literal fixes to demo (#183)

* Add yolov8 detector to line crossing demo (#186)

* Add model builder patch
* Add x86 yolov8 module
* Fix base savant image
* Add env file with detector choice
* Update Jetson dockerfile

* Fix line cross demo direction bug (#190)

* Fix direction bug

* Add obj class label to metric name scheme

* Make ExternalFrame.type a string rather than enum (#191)

* Deploy savant as package (#187)

* Fix workflow.

* Fix build-docker workflow.

* 0.2.1 release fixes (#192)

* Add validation for custom format model (#195)

* custom_lib_path should be a file

* engine_create_func_name should be set

* Fix config checker for inference element with engine file specified (#196)

* Add implicit setup of engine options

* Add skip of calib file check for built engine

* Update car classification sample config

* Download file before starting the stream in video-loop-source (#197)

* Update demos to 0.2.1 (#194)

* Fix traffic meter config error (#199)

* Fix nvinfer config bug.

* Change default detector to peoplenet

* WIP: 0.2.1 Documentation (#153)

Initial documentation

---------

Co-authored-by: Denis Medyantsev <denisvmedyantsev@gmail.com>
Co-authored-by: Oleg Abramov <abramov-oleg@users.noreply.github.com>

* Update README.md

* Fix build docs: lfs=true.

* Fix build docs: install git-lfs before checkout.

* Update README.md

* Add per-batch cuda stream completion wait (#205)

* Add per-batch cuda stream completion wait

* Make artist stream argument mandatory

* Add per-batch stream completion wait pyfunc

* Update Artist usage in bg_remover

* Add prepare release script

* Add opencv deb packages build file

* Add installing OpenCV package from savant-data

* Separate release and latest build workflows

* Change savant version to 0.2.2

---------

Co-authored-by: Pavel A. Tomskikh <tomskih_pa@bw-sw.com>
Co-authored-by: Pavel Tomskikh <tomskikh@users.noreply.github.com>
Co-authored-by: bogoslovskiy_nn <bogoslovskiy_nn@bw-sw.com>
Co-authored-by: Nikolay Bogoslovskiy <bogoslovskii@gmail.com>
Co-authored-by: Bitworks LLC <bwsw@users.noreply.github.com>
Co-authored-by: Denis Medyantsev <44088010+denisvmedyantsev@users.noreply.github.com>
Co-authored-by: Denis Medyantsev <denisvmedyantsev@gmail.com>
Co-authored-by: bwsw <bitworks@bw-sw.com>
bwsw added a commit that referenced this issue May 22, 2023
(The commit message repeats the changelog of the previous commit above, then continues:)
* Change savant version to 0.2.2 (#213)

Co-authored-by: Oleg V. Abramov <abramov.o.v@gmail.com>

* Prepare release 0.2.2

---------

Co-authored-by: bogoslovskiy_nn <bogoslovskiy_nn@bw-sw.com>
Co-authored-by: Pavel A. Tomskikh <tomskih_pa@bw-sw.com>
Co-authored-by: Pavel Tomskikh <tomskikh@users.noreply.github.com>
Co-authored-by: Nikolay Bogoslovskiy <bogoslovskii@gmail.com>
Co-authored-by: Bitworks LLC <bwsw@users.noreply.github.com>
Co-authored-by: Denis Medyantsev <44088010+denisvmedyantsev@users.noreply.github.com>
Co-authored-by: Denis Medyantsev <denisvmedyantsev@gmail.com>
Co-authored-by: bwsw <bitworks@bw-sw.com>