
The implementation of particle filtering tracker #2

Merged
merged 1 commit into from Dec 16, 2013

Conversation

nailbiter
Contributor

This contribution implements a tracker based on particle filtering within OpenCV's generic tracking API. The tracker still needs work, but it has potential. This work is supported by Google within the Google Summer of Code 2013 initiative. Mentor: Vadim Pisarevsky.

Besides the implementation, I've also made a few minor changes to the docs. First, with the previous author's agreement, I've applied some cosmetic fixes to the existing documentation. Second, I've changed the top-level README.md, since it did not correctly describe the sequence of steps needed to build everything.

The documentation still remains to be written; I will do so while the pull request is being reviewed.
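For readers unfamiliar with the approach: the tracker treats tracking as repeated stochastic optimization. The core predict/weight/resample loop of a bootstrap particle filter can be sketched as follows. This is a minimal self-contained Python illustration of the general technique, tracking a stationary 1-D position; the Gaussian models and all constants are illustrative assumptions, not the actual OpenCV implementation.

```python
import random
import math

def particle_filter_step(particles, weights, measurement, noise_std=1.0, meas_std=2.0):
    """One predict/weight/resample cycle of a bootstrap particle filter (1-D)."""
    # Predict: diffuse each particle with Gaussian motion noise.
    particles = [p + random.gauss(0.0, noise_std) for p in particles]
    # Weight: likelihood of the measurement under each particle.
    weights = [math.exp(-0.5 * ((p - measurement) / meas_std) ** 2) for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw particles with probability proportional to weight.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

random.seed(0)
n = 500
particles = [random.uniform(-10.0, 10.0) for _ in range(n)]
weights = [1.0 / n] * n
true_pos = 3.0
for _ in range(20):
    measurement = true_pos + random.gauss(0.0, 2.0)
    particles, weights = particle_filter_step(particles, weights, measurement)
estimate = sum(particles) / n
```

The resampling step is what keeps the cloud of particles from degenerating onto a few dominant weights; after a handful of iterations the particle mean settles near the true position.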

@kirill-korniakov

@lenlen, will you accept this pull request? You're the author of the module, thus you should decide if the change is OK.

@ghost ghost assigned vpisarev Sep 23, 2013
@nailbiter
Contributor Author

Kirill, good to hear from you! Let me use this opportunity to forward you an important concern I have regarding the whole opencv_contrib repo. I sent it to Vadim Pisarevsky and asked him to forward it to the person responsible for this repo. But that's you, isn't it?

Below is the message:

I have a serious technical question regarding the whole opencv_contrib repository. If possible, please forward it to someone responsible (I wanted to pass it to Kirill Kornyakov, but could not find his email). Let me state the problem.

Judging by the description in README.md, you can put the opencv repository code in one folder (say, opencv), the opencv_contrib code in another (say, opencv_contrib), and then simply build with the command
cmake -DOPENCV_EXTRA_MODULES_PATH=<opencv_contrib>/modules
<opencv_source_directory>

This is all fine. The problem starts when we compile the documentation. In opencv/modules a refman.rst file is generated, whose toc lists the top-level documentation files for each module. The trouble is that the modules that came from opencv_contrib have their documentation residing in opencv_contrib.
As far as I know, Sphinx requires all documentation to live in the same folder as refman.rst, that is, in opencv, so the .rst files in opencv_contrib will not be picked up, and the documentation for those modules will not be compiled.

std=(Mat_<double>(1,4)<<15.0,15.0,15.0,15.0);
}
TrackerPF::TrackerPF( const TrackerPF::Params &parameters){
params=parameters;
Contributor

You should add this line in the constructor of the specialized tracker:
isInit = false;
otherwise the base class Tracker returns an error.

@lenlen
Contributor

lenlen commented Sep 23, 2013

Formally, TrackerPF seems to fit the API. I commented on one issue that needs fixing, and maybe you should add the copyright notice to all the files that you have added?
I tried the algorithm with samples/tracker.cpp on some videos usually used to test online learning algorithms, and the tracker doesn't work very well. Can you give me the videos you used to test TrackerPF?
I also have problems with the generation of the documentation, and then I have another question about opencv_contrib.
I'm developing tests and perf tests for this module, but I need to add video files to opencv_extra.
So whoever runs the tests of the tracking module must have the videos in opencv_extra. Maybe a separate opencv_extra repository is needed for contrib, or can the developers of opencv_contrib use opencv_extra directly?

@nailbiter
Contributor Author

Dear Antonella!

First, I will change that line. By the way, I've compiled and run my code in your module, and it gave no errors. Did you try to run the code?

Second, indeed, the tracker is not very good; it's rather a conceptual model that shows how the tracking problem can be seen as an optimization problem. My mentor asked me to add it, and I believe it can be refined. I've attached the trivial video I used for tracking. I know it's not a real-life video, but again, this is a conceptual model rather than a production-level tracker. It still has to be worked on.

Third, there is indeed a problem with the documentation. I already described this error to Kirill in my previous message.

Best,
Alex


@lenlen
Contributor

lenlen commented Sep 23, 2013

@nailbiter where is the video you referred to?

@lenlen
Contributor

lenlen commented Sep 24, 2013

I read your code and I have some questions that I'd like to discuss with you.

  1. I saw that the solver component (particle filter) and the features (histogram) are coupled. I think it is possible to separate the features from the solver. If you split these components, you can make them available for other trackers.
  2. The PF solver should be in a specialized TrackerStateEstimator, not in the specialized model. The TrackerStateEstimator has two methods: update, which collects the input states, and estimate, which computes the output state; it should be an independent component from the tracker. The model is only a component (dependent on the specialized Tracker) that prepares the data for the TrackerStateEstimator and calls its methods. In fact, the model holds the confidence map (set of candidate states) and the trajectory (set of selected states).
  3. I saw that your tracker has no sampler. In the API this component is not required, but I found a paper that uses a particle filter together with a sampling strategy [1]. This paper is also cited in the paper that I implemented for the API [2].

So it could be interesting to create a TrackerStateEstimator with a PF solver and a TrackerFeature with a histogram, in order to create in the future a tracker (similar to [1]) that uses these two components and possibly one of the sampling strategies already implemented.
What do you think?

[1] J. Kwon and K. M. Lee, "Tracking of a non-rigid object via patch-based dynamic appearance modeling and adaptive Basin hopping Monte Carlo sampling," in Proc. Comput. Soc. Conf. Comput. Vis. Pattern Recognit.
[2] S. Salti, A. Cavallaro, L. Di Stefano, "Adaptive Appearance Modeling for Video Tracking: Survey and Evaluation," IEEE Transactions on Image Processing.
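The separation described in the comment above (an estimator with update/estimate methods, fed by a model that holds the confidence map and trajectory) can be sketched roughly as follows. This is a minimal Python sketch; the class and method names follow the comment, but the state representation and the argmax estimation rule are simplified assumptions, not the actual OpenCV tracking API.

```python
class ConfidenceMap:
    """Set of candidate states with confidences (here: (state, confidence) pairs)."""
    def __init__(self, candidates):
        self.candidates = candidates  # list of (state, confidence)

class TrackerStateEstimator:
    """Independent solver: collects candidate states, then estimates one output state."""
    def __init__(self):
        self._maps = []

    def update(self, confidence_map):
        # Collect the input states prepared by the model.
        self._maps.append(confidence_map)

    def estimate(self):
        # Pick the highest-confidence candidate from the latest map
        # (a real PF estimator would resample or average instead).
        latest = self._maps[-1]
        return max(latest.candidates, key=lambda c: c[1])[0]

class TrackerModel:
    """Prepares data for the estimator and records the trajectory of selected states."""
    def __init__(self, estimator):
        self.estimator = estimator
        self.trajectory = []

    def run(self, candidates):
        cmap = ConfidenceMap(candidates)
        self.estimator.update(cmap)
        state = self.estimator.estimate()
        self.trajectory.append(state)
        return state

model = TrackerModel(TrackerStateEstimator())
state = model.run([((10, 20, 50, 50), 0.3), ((12, 22, 50, 50), 0.9)])
```

The point of the split is that the estimator never sees how candidates were produced, so the same estimator could serve several trackers.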

@nailbiter
Contributor Author

I've got the point. Will you be ready to discuss it, say, on Thursday? I need some time to review your suggestions. Would Thursday be convenient for you?

I realize that your GSoC is over, so this is rather a burden for you, but I'm really interested in this, would like to do everything well, and need time for it.

By the way, about the documentation not generating: they are aware of this and are seeking a solution. As a temporary measure, you may simply copy the contents of opencv_contrib/modules into opencv/modules to build the docs.

Regards,
Alex


@lenlen
Contributor

lenlen commented Sep 24, 2013

Ok perfect!

@kirill-korniakov

Sorry for the late reply, I was too busy with other activities. Happy to see that you already know about the workaround for the documentation build. You can actually create a symbolic link; that way you will not need to copy the docs every time you update them.

And I want to add that the doc build procedure will be changed, so you won't need to copy docs manually. I don't know when it will be ready, but hopefully we'll have it early in October.

@nailbiter nailbiter mentioned this pull request Sep 30, 2013
@nailbiter
Contributor Author

Sorry for the long delay. Now I'm ready to finalize the issue. I will try to create a separate TrackerFeature and TrackerStateEstimator without making my own TrackerModel by tomorrow (that is, Wednesday). Please comment on the code once I push it.


@nailbiter
Contributor Author

@lenlen I've tried to put the solver component into a TrackerStateEstimator, and I don't quite see how to do so. The solver component requires the whole frame to be given to it, while I can only submit ConfidenceMaps via TrackerStateEstimator's default methods.
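One common workaround for this kind of interface mismatch is to inject the frame through a separate setter before the standard update/estimate calls. The sketch below illustrates the idea in Python; the class and method names (`PFStateEstimator`, `set_current_frame`) are hypothetical and not part of the OpenCV API.

```python
class PFStateEstimator:
    """State estimator whose solver needs the whole frame.

    The frame is injected through a setter before estimate() is called,
    so the update()/estimate() interface itself stays unchanged.
    """
    def __init__(self):
        self._maps = []
        self._frame = None

    def set_current_frame(self, frame):
        # Out-of-band channel: the tracker hands over the full frame
        # before running the estimator.
        self._frame = frame

    def update(self, confidence_map):
        self._maps.append(confidence_map)

    def estimate(self):
        assert self._frame is not None, "frame must be set before estimating"
        # A real PF solver would evaluate particles against self._frame here;
        # this sketch just returns the best candidate to stay self-contained.
        return max(self._maps[-1], key=lambda c: c[1])[0]

est = PFStateEstimator()
est.set_current_frame([[0] * 4] * 4)  # stand-in for an image
est.update([((0, 0, 8, 8), 0.2), ((1, 1, 8, 8), 0.7)])
state = est.estimate()
```

The downside is that callers must remember to set the frame first, which is why such designs usually assert on it, as above.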


@lenlen
Contributor

lenlen commented Oct 9, 2013

@vrabaud what do you think of the particle filter component in the API?

@lenlen
Contributor

lenlen commented Oct 14, 2013

@nailbiter in my opinion the TrackerPF doesn't fit the model of the authors of the paper, and consequently the implementation of the API. My idea is that a particle filter may be very useful in the tracking API, but I think it is better to generalize the class so it can be used in several roles: sampling, decision, or motion estimation (like Kalman), etc. So I suggest putting it in the video module, where the Kalman filter is, or in the ML module. Then anyone can use the component to develop a new tracker based on a particle filter.
Anyway, I'd like to know the points of view of our mentors @vrabaud and @vpisarev.

for(int j=0;j<img.cols;j++){
Vec3f pt=hsv.at<Vec3f>(i,j);

/*dprintf(("%d %d\n",i,j));
Contributor

Probably remove that, no?

@vrabaud
Contributor

vrabaud commented Oct 16, 2013

The current PF API does not seem generic enough to be included in ML or video. I'd like to be able to define my own particles somehow (probably a vector of any size). That could be interesting for an implementation of the Bramble tracker, which uses generalized cylinders.
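Vincent's request (particles as vectors of any size, with a user-defined observation model) can be sketched generically. In the following Python toy, the 5-D state, the Gaussian likelihood, and the noise levels are all made-up assumptions; the point is that nothing in the predict/weight/resample cycle depends on the particle being a rectangle.

```python
import random
import math

def pf_update(particles, likelihood, motion_std=0.5):
    """Generic bootstrap PF step: particles are vectors of ANY fixed size,
    and the observation model is an arbitrary user-supplied likelihood."""
    # Predict: diffuse every component of every particle.
    moved = [[x + random.gauss(0.0, motion_std) for x in p] for p in particles]
    # Weight with the user-supplied likelihood, then normalize.
    w = [likelihood(p) for p in moved]
    total = sum(w) or 1.0
    w = [v / total for v in w]
    # Resample proportionally to weight.
    return random.choices(moved, weights=w, k=len(moved))

random.seed(1)
# 5-dimensional particles, e.g. an ellipse state: cx, cy, axis a, axis b, angle.
target = [3.0, -1.0, 2.0, 1.0, 0.5]
parts = [[random.uniform(-5.0, 5.0) for _ in range(5)] for _ in range(400)]
gauss_lik = lambda p: math.exp(-0.5 * sum((a - b) ** 2 for a, b in zip(p, target)))
for _ in range(30):
    parts = pf_update(parts, gauss_lik)
est = [sum(p[i] for p in parts) / len(parts) for i in range(5)]
```

Swapping in a different `likelihood` (or a different state dimension) requires no change to the filter itself, which is roughly the flexibility being asked for.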

@nailbiter
Contributor Author

Dear Vincent (@vrabaud)!

Sorry for such a slow reply. I've fixed the issues you raised, I guess. Please note that my coding style is inspired by my previous GSoC project, so it might be a bit inconsistent with what you're used to getting from your students; I apologize in advance. In particular, Vadim discouraged me from using iterators in favour of explicit loops (as he said the latter are clearer), and I don't think removing the ALEX_DEBUG and dprintf facility altogether is a good idea -- after all, the code may still be developed further, so I think a debugging facility is a useful thing.

Now, about the API not being generic enough. I developed it using my optim package, so naturally this restricts how generic the tracker can be. Second, when I talked to Vadim, we agreed on developing exactly this tracker. Third, if you think generalizations are possible, please let me know some more details (I'm not very familiar with tracking, unfortunately, so I can't grasp some things easily) -- I'll see whether it is possible to do this.

After all, I think that almost any sort of generalization can be done later, but it might be reasonable to let this request in now: the opencv_contrib repo keeps moving forward, and while my code is "disconnected" from it, it rots to the point where I won't be able to work on it.

Best Regards,
Alex


void PFSolver::getOptParam(OutputArray params)const{
params.create(1,_std.rows,CV_64FC1);
Mat mat(1,_std.rows,CV_64FC1);
#ifdef WEIGHTED
Contributor

Shouldn't that be a runtime option instead of an #ifdef?
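The reviewer's suggestion, turning the compile-time WEIGHTED switch into a runtime parameter, might look roughly like this sketch (Python for brevity; the parameter name `weighted_estimate` and the two estimation modes are illustrative assumptions, not the actual PFSolver API):

```python
def get_opt_param(particles, weights, weighted_estimate=True):
    """Return the solver's current estimate of the optimal parameters.

    weighted_estimate=True  -> weighted average over all particles
    weighted_estimate=False -> single best (highest-weight) particle
    Selecting the behavior at runtime replaces the #ifdef WEIGHTED branch.
    """
    if weighted_estimate:
        total = sum(weights)
        dim = len(particles[0])
        return [sum(w * p[i] for p, w in zip(particles, weights)) / total
                for i in range(dim)]
    best = max(range(len(particles)), key=lambda i: weights[i])
    return list(particles[best])

particles = [[0.0, 0.0], [2.0, 2.0], [4.0, 4.0]]
weights = [0.2, 0.5, 0.3]
mean_est = get_opt_param(particles, weights, weighted_estimate=True)
best_est = get_opt_param(particles, weights, weighted_estimate=False)
```

A runtime flag lets users compare both behaviors from a single build, whereas the #ifdef bakes the choice in at compile time.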

@vrabaud
Contributor

vrabaud commented Oct 21, 2013

@nailbiter don't apologize, this is great work!
I guess we should figure out whether the tracking API is only for tracking/learning a rectangular region. @lenlen ?

Otherwise, generalizing your code to any kind of particle should be easy. I just need to look closer at the API to see if there could be something flexible enough. The cues should also be an input, right? You don't want to do the HSV conversion inside, and you might have several cues (in Bramble, you also use DoG).

Or we leave your code as is, but maybe it should just be renamed rectangularPF. I just want to make sure that we don't block ourselves from a more generic PF technique that we could implement (I coded Bramble many moons ago; I need time to see if it would fit with yours).
But thanks for your work anyway; it shows that Antonella's work was well designed and that contributors can easily help :)

@lenlen
Contributor

lenlen commented Oct 22, 2013

The tracking API currently works only with rectangular regions, because all the papers I read use a rectangular shape. But I think that adding a vector of masks in the sampling component could enable tracking of any shape.

About "rectangularPF": I think that the tracker is too monolithic; it doesn't use the TrackerSampler and TrackerFeature components, and the histograms are coupled to the solver.
The decomposition between the visual component and the statistical (mathematical) component in appearance modelling is the main issue in the papers that I read.

@nailbiter
Contributor Author

Dear Antonella!

Thank you for the critical review!

To begin with, tracking non-rectangular objects... In my defense, I might say that the API requires us to submit a rectangle as the bounding region, so working with rectangles is in a sense inherent in the API from the start. I mean, it doesn't make much sense to track a circle if the API initially gives you a rectangle to track; the API is not general enough in this respect. Nevertheless, from a purely mathematical standpoint, the PF algorithm as implemented now does not make any assumptions about the shape of the tracked patch. In principle, it can be any shape together with some space of allowed deformations (e.g. you may give an ellipse, and then we have 5 parameters to change it: the two axis sizes, the location, and the rotation angle). The question is how you submit these to the tracker if at the beginning the API gives it only a rectangle.

Next, the coupling in my code. We've already talked about this. First, my conclusion was that further decoupling is impossible. Afterwards you said that you would review the code and draw the conclusion yourself, but that was two weeks ago... and still nothing.

Finally, I'm not a huge fan of excessive decoupling. I mean, if one day we have two versions of PF, I'll be the first person to vote for decoupling to ensure modularity; but now, when all we have is just a pretty bad tracker, encapsulating it in 5 classes is sort of... silly? I mean, good code is not the code with the larger number of classes, in my view, but the code where the amount of useful code is not less than the amount of boilerplate (like constructing all these classes and making 3 layers of one-line methods just to pass the data from the outermost class to the innermost).

Summarizing, I don't think further decoupling is possible. If you see a way, please let me know how. And I don't think further decoupling is necessary either, for the aforementioned reason.

Sorry for perhaps being a bit rude. I really respect your skills as a software engineer, and this system you've built is just... awesome. But I think that a good API should help programmers, not chain them and make them implement a class hierarchy when they see no need for it; i.e. it should have a place both for complicated algorithms with multiple parts and for simple, monolithic ones that are yet to be generalized.

Best Regards,
Alex


@lenlen
Contributor

lenlen commented Oct 25, 2013

@nailbiter I have a question: can your algorithm return N rectangles rather than one rectangle? I mean the N rectangles that matched well with the histogram of the region of the bounding box given in input.

@nailbiter
Contributor Author

@lenlen Yes, in principle. However, I don't see how the API can handle this, as it communicates with the tracker by passing ONE rectangle. Anyway, the answer is yes (with slight modifications to the code).

@lenlen
Contributor

lenlen commented Oct 25, 2013

I think that your algorithm could become a good sampling component, instead of a tracker with poor performance.
The TrackerSamplerAlgorithm starts from a bounding box and the whole frame and returns a vector of Mat as small portions of the frame; we can see these portions as the "N rectangles that matched well with the histogram of the region of the bounding box given in input".
So I think it is possible to transform your algorithm into a single class that computes the sampling of the frame. What do you think?
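The sampler being proposed, which starts from a bounding box and a frame, scores candidate windows by histogram similarity to the reference region, and returns the N best, can be sketched as follows. This is a toy Python illustration on a tiny grayscale "frame"; the grid search, the coarse histogram, and the histogram-intersection score are simplified assumptions, not the TrackerSamplerAlgorithm implementation.

```python
def histogram(patch, bins=4):
    """Normalized intensity histogram of a 2-D patch (values in 0..255)."""
    counts = [0] * bins
    for row in patch:
        for v in row:
            counts[min(v * bins // 256, bins - 1)] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

def crop(frame, x, y, w, h):
    return [row[x:x + w] for row in frame[y:y + h]]

def sample_top_n(frame, bbox, n=3, step=2):
    """Return the n windows whose histograms best match the reference bbox."""
    x0, y0, w, h = bbox
    ref = histogram(crop(frame, x0, y0, w, h))
    rows, cols = len(frame), len(frame[0])
    scored = []
    for y in range(0, rows - h + 1, step):
        for x in range(0, cols - w + 1, step):
            cand = histogram(crop(frame, x, y, w, h))
            # Histogram intersection: higher means more similar.
            score = sum(min(a, b) for a, b in zip(ref, cand))
            scored.append((score, (x, y, w, h)))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [rect for _, rect in scored[:n]]

# Toy frame: a bright 4x4 block at (6, 6) on a dark background.
frame = [[0] * 16 for _ in range(16)]
for y in range(6, 10):
    for x in range(6, 10):
        frame[y][x] = 200
patches = sample_top_n(frame, (6, 6, 4, 4), n=3)
```

The returned windows play the role of the "N rectangles" above: a downstream tracker would extract features from each and let its state estimator pick among them.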

vpisarev pushed a commit that referenced this pull request Jul 8, 2015
tucna added a commit to tucna/opencv_contrib that referenced this pull request Aug 20, 2015
fixes

fixed compile error and warning

Remove AGAST (for merger to upstream opencv)

wrapping remaining xfeature2d classes to scripting

added test for http://code.opencv.org/issues/3943;
replaced "const InputArray" with "InputArray" to avoid warnings about "const const _InputArray&"

added test for http://code.opencv.org/issues/3943;
replaced "const InputArray" with "InputArray" to avoid warnings about "const const _InputArray&"

adding LATCH

fixed warnings in LATCH

fixed errors

fixed warnings

fixed warnings2

fixed warnings3

fixed warnings4

fixed warnings5

added description of LATCH and fixed indentation

cleaned the code a bit

added tests and renamed LATCH

Code to grab the red lined polygon from Google Maps

added python support and completed documentation

figure update

added stdout welcome message and fixed warning

fixed warning

text enhancement

trying to fix python wrapper warning on win64

trying to solve pyhton warnings

bugfix4269 included remarks in http://answers.opencv.org/question/59293/problem-with-example-motemplcpp/

Replace tab with four space

Add rotation invariance option for BRIEF descriptor.

Fix docs and repush for buildbot.

Bug fix for feature extraction

According to CartToPolar() function documentation, result angles could be in range (0..360). To prevent index overflow this check is important.

Adding edge-aware disparity filtering

Added basic interface and demo for disparity filtering, added unoptimized fast weighted least
squares filter implementation. Current demo tests domain transform, guided and
weighted least squares filters on a dataset, measures speed and quality.

Fix for Bug 4074. This seems to be just a typo-error, because the Tesseract API can handle correctly with RGB images (double-checked and it works).

Fix for Bug opencv#3633: do away with "quads [2][3] = 255;" The four lowest bits in each quads[i][j] correspond to the 2x2 binary patterns Q_1, Q_2, Q_3 in the NM paper [1] (see in page 4 at the end of first column). Q_1 and Q_2 have four patterns, while Q_3 has only two.

added INRIA pedestrian dataset

autowbGrayworld: include+src+test+testdata+sample

Add saturation based thresholding to grayworld WB

Add basic perf tests for grayworld

Add more doxygen comments

Suppress uchar conv related warning on Windows

Apply fixes suggested by Vadim

Be more correct with int types

Remove dangling N_good++

Use cvRound to suppress Windows warnings

remove floor call

vs2010 does not know, ceil, floor, round and friends.

also, those are plain integer divisions, that do not need floor at all.

New stereo module created and added some relevant files for this module

Update README.md

made some extra changes to the modules so I receive no warnings

moved the opencv2/core/private.hpp from stereo_binary_sgbm.cpp to precomp.hpp

fix for issue opencv#195

avoid overflow in histogram access

SurfaceMatching: OpenMP indices

Fixes compiler error: "index variable in OpenMP 'for' statement must have signed integral type"

Adding confidence support and optimizing disparity filtering

DisparityWLSFilter demonstrated the best results, so I removed all the other
filters. Quality was significantly improved by adding confidence support
to the filter (left-right consistency + penalty for areas near depth discontinuities).
Filter was optimized using parallel_for_ and HAL intrinsics. Demo application was
rewritten for better compliance with OpenCV standards. Added accuracy and
performance tests. Documentation was added, as well as references to the
original papers.

added PASCAL VOC dataset

+ add KCF Tracker, initial commit, added: tutorial, trackerKCF.cpp, modified: tracker.cpp, tracker.hpp

adding the resize feature

References for KCF tracker and KCF-CN  tracker

Unified the formatting

Fixed: ROI extraction when the given ROI is out of image; made the max_patch_size to be adjustable; add the CN feature extraction method

Removing all shadowing variables, make functions to const, make the table of color-names become static

change the color-names table to const

Add a framework for choosing the descriptor

Added error message for descriptor other than GRAY

Added new line at end of file

Fixing the ColorNames table initialization

Fixed warning: conversion from double to int

Updated the support for color-names features and fixing some typos

Fixing the tabulation

Split the training coefficient into numerator and denumerator

Added the feature compression method

Fixing some indentations

Fixing some indentations

Fixing alignments

Fixing some alignments

Use Doxigen format

Remove whitespaces

Removing whitespaces in featureColorName.cpp

Add an example code for the KCF tracker

update the header in example/kcf.cpp

Updating the rectangle drawing, avoid warning from variable conversion

Added doxygen documentations

Fixing warnings

remove warnings

Fixing some warnings

TLD Fixes & Optimizations

1. TLD now have module structure
2. Made some small code optimizations
3. Fixed Ensemble Classifier according to the original paper - 10
randomized ferns
4. Added comments to most of the functions and methods

Added test on TLD Dataset

Added BSD-compatible license

Added BSD-compatible license to some files

Fixed header

Fixed build error

Fix

Fix opencv#2

Fix opencv#3

Fix opencv#4

Fixed Warnings opencv#1

Fixed Warnings opencv#2

Fixed Warnings opencv#3

Shadow Fix

Fixing whitespaces

Fixing whitespaces opencv#2

Fixing whitespaces opencv#3

Adds a first implementation of the OCRBeamSearchDecoder class using the Single Layer CNN character classifier described in Coates, Adam, et al. paper: Text detection and character recognition in scene images with unsupervised feature learning, ICDAR 2011

Add a demo program for the OCRBeamSearchDecoder class and needed data files

trailing whitespaces

fix compilation warnings

fix win64 compilation error: arrays must be defined with compile-time fixed size :)

fix doxygen warnings

Fix for opencv#278 - core dump in the case of no match results.

Modified reported poses by constraining to the number of poses found.

ulong -> size_t

fixed warnings in the tracking module

Added OCL versions of Sr and Sc functions

2-nd level of parallelization + detector remake

1. Added a 2nd level of parallelization of the NN on OpenCL
2. Restructured the detector: all filters now work independently
(Variance Filter -> Ensemble -> NN), connected through "buffers"

Warnings Fix opencv#1

Fixing Warnings opencv#2

Fixing Warnings opencv#3

Fixing Warnings opencv#4

Fixing Warnings opencv#5

Fixing Warnings opencv#3

Fixing Warnings opencv#4

Fixing Warnings opencv#5

Added OCL version of "integrateAdditional" function

Whitespace Fix

Transparent API Support

Fixing Warnings

Fixed a bug in LSDDetector where the mask didn't remove all undesired lines

Fixing GCC 4.9 warning

Fix memory leak bug #4420

Fix bug #4373: Error (Assertion failed in resize) when passing very elongated contours to the recognition module

make sources compile again on MSVC 2012 (VC 11) by adding round()

fix suffix that was incompatible with MSVC 2012 (VC 11)

eliminate some warnings

use better condition for checking if compiler supports round()

fixing facerecognizer tutorials and interface

Adds a createOCRHMMTransitionsTable() utility function to create a tailored language-model transitions table from a given list of words (lexicon)

update to use the new createOCRHMMTransitionsTable() function, and fix the program description in the header comments

fix Winx64 warnings

Better CNN model for character recognition. Trained with an augmented dataset by adding translation/scale variations. Updated the cropped word recognition with the new class numbering (compatible with the previous NM classifier).

Overload the run() method in the BaseOCR class in order to adapt to different classifier callbacks. The original run() method accepts only one Mat input image; this is expected to be a binarized image with black and white text, and works both with the OCRTesseract class and with the OCRHMMDecoder class when the character classifier callback works with binary images (e.g. NM). The new run() method accepts two Mat input parameters: one for the grayscale (or color) source image and the other for a binary mask where each connected component corresponds to a pre-segmented character in the input image. This way the OCRHMMDecoder is able to work with character classifiers that operate on grayscale (or color) images (e.g. a CNN).

Adds an example on segmented word recognition. Shows the use of the OCRHMMDecoder with the default NM and CNN classifiers.

Minor bugfix: removes an unwanted space character at the beginning of recognition output strings.

Fix w64 warnings

Fix w64 warnings

Improving DisparityWLSFilter interface and adding a tutorial

Now the filter natively supports StereoBM and StereoSGBM with no
parameter tuning required. Also, the user no longer needs to set the ROI
and the right matcher parameters manually; it is all done in the
respective convenience factory method based on the left matcher instance.
A tutorial was added to clarify the provided example of use.

doc update