connectedcomponents submission #135

Merged
merged 17 commits into Itseez:master

6 participants

@nevion

Hi,

This is a git pull request for the following issue (which has gone on way too long!). I've fixed up Dustin's complaints... let's get this code out there!

http://code.opencv.org/issues/1236

There are COUNTLESS threads on how to do connected components; there are almost monthly submissions to the ML, and Stack Overflow alone must have 20+ threads on this. Most end up using cvblobslib, which is often broken and too slow to boot. There are some other implementations too, and it's possible to do it in OpenCV with findContours, but it's really a bad situation atm. People are still rolling their own implementations (naively).

Anyway, this presents an easy-to-use single function call which gives the labeled image and, optionally, the statistics one often wants with connected components (bounding box, centroid, area), and does so with better worst-case and average-case performance than all the other implementations.

I hope I'll have better luck at submission than through the issue tracker.
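For context, the single call being proposed is classically implemented as two-pass labeling over a union-find structure. Below is a minimal standalone sketch of that technique, assuming plain row-major 8-bit input and 4-connectivity; the function names are illustrative only, not OpenCV's internals:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Union-find "find" with path halving.
static int findRoot(std::vector<int>& parent, int x)
{
    while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
    return x;
}

// Returns N, the total number of labels [0, N-1]; label 0 is the background.
int labelImage4(const unsigned char* img, int rows, int cols, std::vector<int>& labels)
{
    labels.assign((size_t)rows * cols, 0);
    std::vector<int> parent(1, 0); // index 0 reserved for background

    // First pass: assign provisional labels, union regions that touch.
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
        {
            int idx = r * cols + c;
            if (!img[idx]) continue;
            int up   = (r > 0) ? labels[idx - cols] : 0;
            int left = (c > 0) ? labels[idx - 1]    : 0;
            int a = up   ? findRoot(parent, up)   : 0;
            int b = left ? findRoot(parent, left) : 0;
            if (!a && !b)                       // new provisional label
            {
                labels[idx] = (int)parent.size();
                parent.push_back(labels[idx]);
            }
            else if (a && b && a != b)          // merge two provisional labels
            {
                labels[idx] = std::min(a, b);
                parent[std::max(a, b)] = std::min(a, b);
            }
            else
                labels[idx] = a ? a : b;
        }

    // Second pass: flatten the forest and renumber labels contiguously.
    std::vector<int> remap(parent.size(), 0);
    int next = 1;
    for (size_t i = 1; i < parent.size(); i++)
    {
        int root = findRoot(parent, (int)i);
        if (!remap[root]) remap[root] = next++;
        remap[i] = remap[root];
    }
    for (size_t i = 0; i < labels.size(); i++)
        labels[i] = remap[labels[i]];
    return next;
}
```

Statistics (bounding box, centroid, area) fall out of a third sweep over `labels`, accumulating per-label min/max coordinates, coordinate sums, and counts.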

Jason Newton added some commits
Jason Newton connectedComponents: warning free version 45b4f4f
Jason Newton connectedComponents: peep-hole optimizations, mostly surrounding the fact that cv::Mat::at is expensive in a tight loop; also added a "blobstats" version 4c0cb25
Jason Newton connectedcomponents: use opencv integral types, add to docs, fix up things for a python export 8588039
@vpisarev vpisarev was assigned
@vpisarev
Owner

ok, let's finally put it in. However, I would like you to modify the interface significantly and partly modify the implementation. Since the current trend in OpenCV is to introduce a very stable API and not change it for a long time, I want the API to be future-proof, not only at the source code level but also at the binary level.

Why did we not integrate the patch earlier in the first place? Because connected components are quite a generic thing. They can be extracted from binary (bi-level) images, and they can be extracted from grayscale or even multi-channel images if we define the predicate close(p, q) for neighbor pixels. On the output side, the representation of connected components can be very different too: they can be represented as contours, as sets of horizontal/vertical line segments, etc. The statistics computed in your implementation may be sufficient for some users but insufficient for others; e.g. J. Matas et al. in "Real-time scene text localization and recognition" consider different, in my opinion very useful, characteristics like the perimeter, number of holes, number of zero-crossings, number of inflections, etc.

So, we cannot just "occupy" the quite generic name "connected components" for something rather specific that does not cover this whole area.

But I admit that something is better than nothing, and designing an ideal connected component extraction function may take a very long time.

So, let's put the current code in, but make the API more generic and more wrapper-friendly:

vvvvvvvvvvvvvvvvvvvvvvvv

CV_EXPORTS_W int connectedComponents(InputArray image, OutputArray labels, int connectivity = 8, int flags=0);

enum { CC_STAT_LEFT=0, CC_STAT_TOP=1, CC_STAT_WIDTH=2, CC_STAT_HEIGHT=3, CC_STAT_CX=4, CC_STAT_CY=5, CC_STAT_AREA=6, CC_STAT_INTEGRAL_X=7, CC_STAT_INTEGRAL_Y=8, ... };

CV_EXPORTS_W int connectedComponentsWithStats(InputArray image, OutputArray labels, OutputArray stats, int connectivity = 8, int flags=0);

^^^^^^^^^^^^^^^^^^^^^^^^^^^^

First of all, the use of uint64 should be eliminated, since it's a weird type and its use here is not justified for any reasonable image size (2 billion pixels is quite a big number). Then, the input image should be made the first parameter, by our common convention, and should be made an InputArray, which was introduced in 2.4.0; "const Mat&" is implicitly converted to InputArray. The output label image should be made an OutputArray; "Mat&" is implicitly converted to OutputArray. The output vector of cc statistics should also be wrapped into an OutputArray. The structure ConnectedComponentStats should be removed as too specific and not wrapper-friendly. You provided wrappers for Python, but what about Java? Also, for example, quite a few people want a Matlab & Octave interface for OpenCV, and each extra structure means that we will have to provide such a conversion function. Instead, I suggest packing all the statistics into an array of integers. The centroid can be represented in fixed-point format:

Mat labels, stats;
connectedComponents(my_image, labels, stats);
Point2f center_i(stats.at<int>(i, CC_STAT_CX)/256.f, stats.at<int>(i, CC_STAT_CY)/256.f);

I would also add the label of the component in the "default" statistics, because if we remove (filter off) some of the connected components, e.g. too small ones, the ordering will be lost.

The flags parameter will regulate what exactly needs to be computed: 0 means your statistics; e.g. CC_STAT_MATAS_MODE could mean adding some more stuff, etc.
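A minimal sketch of the fixed-point packing implied by the `/256.f` decode in the snippet above, assuming a 24.8 format (integer value pre-multiplied by 256); the helper names are hypothetical, not part of the proposed API:

```cpp
#include <cmath>

// Encode a centroid coordinate as a 24.8 fixed-point int (round to nearest).
int packCentroidCoord(double c)
{
    return (int)std::floor(c * 256.0 + 0.5);
}

// Decode back to floating point, exactly as stats.at<int>(...)/256.f would.
float unpackCentroidCoord(int v)
{
    return v / 256.f;
}
```

The round trip loses at most 1/512 of a pixel, which matches the "1/256 should be enough" accuracy discussed later in the thread.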


Now, on the implementation. There is a lot of code in OpenCV already, and we try to do our best to keep the functionality/footprint ratio reasonable (when possible, we try to increase it).

The proposed patch implements connected components extraction for a binary image, which is an 8-bit single-channel image (CV_8UC1) in 99% of cases. If not, it can be converted to this format with a single line of code:

connectedComponents((my_32f_image > eps), labels, stats);

So, it's enough to support CV_8UC1 input. Similarly, it's enough to constrain the output image of labels to 32-bit format (CV_32S). We support only this format in our labeled distance transform and watershed transform functions and have not heard a single complaint. So, let's bring the code size down and make the function non-template 8u->32s code. I'm sure we will find a better use for the space (especially on mobile devices).


One final point. For the code that can be tested (i.e. except for camera capturing, GUI stuff) we now demand some tests, at least smoke tests. The sample you prepared is great, but we can not run it nightly in batch mode. So we will need some tests for these connected component extraction functions. You can look at https://github.com/Itseez/opencv/blob/master/modules/imgproc/test/test_watershed.cpp for a relevant test. It just reads some image, runs the function and compares the output with the previously stored one.

What do you think? If you are fine with the proposed changes and willing to do that and update the pull request, I will merge it in.

@nevion
@nevion
Jason Newton adjust output type to return int32... it should at least be unsigned but this breaks python bindings;

remove non-8bit input type support, not worth the binary size
d5aa679
@vpisarev
Owner

Jason,
no need to implement the Matas etc. props, we just need to leave space for them in the API. Actually, I am now thinking of going even further and making it an "Algorithm":

class CComp : public Algorithm
{
public:
    ...
    int label(InputArray img, OutputArray labels) const;
    int labelAndGetStat(InputArray img, OutputArray labels, OutputArray stats) const;
};

and everything else can be made properties, so it's super-extendable:

Ptr<CComp> ccomp = Algorithm::create<CComp>("connected-component-extractor" /* or use some better name? */ );
// change parameters if the default values are not good enough
ccomp->setInt("connectivity", 4);
ccomp->setString("mode", "Matas");
ccomp->setInt("outputFormat", CV_64F);
// run the algorithm
ccomp->labelAndGetStat(img, labels, stats);

I understand that:
1) you may not have enough time to modify the code, and those InputArray/OutputArray/Algorithm concepts are something extra that you did not plan to spend time on.
2) whatever changes we propose, the end result should work for you; otherwise it would be a very silly situation - contribute something, spend time on it, and get an unusable result.

so I suggest we do it together. If you make a branch in your repository and give me access to it, I can help adjust the code. Or I can submit a pull request to your repository at github. Is that fine?

On the various data type support: uint32 is actually needed very rarely, as we have learnt from our experience and users' feedback. If int32 is not enough (we prefer to call it int, since on any 32-bit or 64-bit OS known to us sizeof(int)==4), we found double to be the best alternative. It's supported in hardware almost anywhere nowadays, from most low-power ARMs to the modern Intel chips with AVX instructions (which can process 4 doubles at once). double can represent any integer <= 2**53 exactly, it takes the same space as [u]int64, it's usually faster to process than 64-bit integers on 32-bit CPUs, and it's about as fast on 64-bit CPUs (actually, it's faster there too if we take into account SSE2/AVX, which have very limited support for 64-bit integers). So uint32 and int64/uint64 are useless types, in my opinion. (BTW, if "int" as the return type is not enough for the connectedComponents functions, I would suggest using "double".) And if we had some spare resources to add another datatype to OpenCV's Mat, I would immediately choose float16 (half).

@vpisarev
Owner

oh, I forgot to add that another reason to make it an Algorithm is to add [later, if not now] some other useful properties, e.g.
ccomp->setInt("minArea", 10);
If a connected component is too small, the algorithm could wipe it out (or just not store the statistics for it, but for that we need the "label" component in each "stat" row). Filtering out tiny connected components is a very useful feature; sometimes it's the only purpose of using a connected components function (like in our stereo correspondence algorithms).

Jason Newton A few changes to comply with upstream requirements for merge.
-Change input/output order from (out Labeled, in Image) to (in Image, out Labeled) and convert to Input/OutputArrays in the process.

-Adopt OutputArray for statistics export so that the algorithm is "wrapper friendly" and does not require a new struct in language bindings, at the expense of using doubles for everything and slowing statistics computation down.
00bdca7
@nevion
@vpisarev
Owner

Hi Jason!
good progress, it's converging.
Some comments:

1) First of all, the output arrays are not created in your code; instead, you use the existing type to dispatch to the right function. This does not conform to the current OpenCV guidelines. The output arrays can be empty or have the wrong size or type, and the function should work correctly nevertheless. In fact, it should ignore the current size and type of the output image. If the output image type can be different, it should be explicitly passed to the function. Also, do we really need the 8u output type for labels? It's 1/3 of the code size and seems quite impractical. Still, if you want to support 8u, 16u and 32u/s, you need to add another parameter:

int connectedComponents(InputArray _img, OutputArray _labels, int connectivity=8, int ltype=CV_32S)
{
    Mat img = _img.getMat();
    _labels.create(img.size(), CV_MAT_TYPE(ltype));
    Mat labels = _labels.getMat();
    // ... call internal functions.
}

2) a related note: the internal functions should probably take [const] Mat& instead of InputArrays; it's more efficient and safer.

3) even though you have already refactored your code, I would suggest declaring the ConnectedCompStat structure (or however you call it) in the .cpp and making the template functions return vector<ConnectedCompStat>. The transformation vector<ConnectedCompStat> => OutputArray stats can be moved to a separate function, which can be non-template (it does not depend on the type of the labels image, right?). Also, this transformation will help keep the code more elegant and make it easier to handle various issues with accuracy etc. - they are postponed till the very end.

4) on CC_STAT_CX, CC_STAT_CY - I did not quite understand what you mean; the formatting in your comment is broken.

5) on CC_STAT_INTEGRAL_* - I did not quite understand this either, but for a different reason. I thought the characteristics depend on the area and shape of the connected component, not on its position, and therefore a 32-bit value is enough. Can you provide some information on what CC_STAT_INTEGRAL_X/Y is exactly?

@nevion
@vpisarev
Owner

ok, so if the ConnectedComp structure is used for intermediate representation (where integrals can be encoded as doubles or 64-bit integers), then the output array of statistics can omit them, because they do not make much sense for the users.

On the 8-bit output label array - let's throw it away and keep just the CV_16U and CV_32S options.

CX and CY representation - well, I need to think. Personally, I do not think they should be stored with very high accuracy: any noise pixel added to or removed from the component may affect them. I would say 1/256 should be enough, but I admit that I do not know all the possible use cases. Splitting the output array into 2 - I would say this is the least convenient option.

Hey, what about yet another option: encode each of CX and CY as a pair of integers?
stat[i][CC_STAT_CX] would store the rounded-to-nearest x-coordinate of the centroid, and stat[i][CC_STAT_FX] would store the signed fractional part, multiplied by 2**31. We can add a constant:

#define CV_CC_STAT_CENTROID_SCALE 4.656612873077393e-10 /* 1/(2**31) */

so that the actual centroid can be computed as stat[i][CC_STAT_CX]+stat[i][CC_STAT_FX]*CV_CC_STAT_CENTROID_SCALE.

It will give significantly better accuracy than a single-precision floating-point number. We can add an inline function

inline Point2f getCCompCentroid(const int* stat) { return Point2f(stat[CC_STAT_CX]+..., stat[CC_STAT_CY]+...); }

which can be used as

vector<vector<int> > ccomp;
Mat img, labels;
getConnectedComponentsAndStats(img, labels, ccomp, 8);
for( size_t i = 0; i < ccomp.size(); i++ )
{
    Point2f c = getCCompCentroid(&ccomp[i][0]);
}

Those who only need a very rough position of the centroid can just use Point(stat[CC_STAT_CX], stat[CC_STAT_CY]).
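A sketch of this two-integer encoding, under the assumption that the fractional part is truncated toward zero when scaled by 2**31 (the helper names are hypothetical, this API was never merged):

```cpp
#include <cmath>

#define CV_CC_STAT_CENTROID_SCALE 4.656612873077393e-10 /* 1/(2**31) */

// Split a centroid coordinate into rounded whole part and scaled fraction.
void encodeCentroidCoord(double c, int& whole, int& frac)
{
    whole = (int)std::floor(c + 0.5);                    // round to nearest
    frac  = (int)((c - whole) / CV_CC_STAT_CENTROID_SCALE); // in [-2**30, 2**30]
}

// Reassemble the coordinate: whole + frac / 2**31.
double decodeCentroidCoord(int whole, int frac)
{
    return whole + frac * CV_CC_STAT_CENTROID_SCALE;
}
```

Since the fraction after rounding lies in [-0.5, 0.5], the scaled value fits comfortably in an int, and the round-trip error is below 2**-31 of a pixel.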

What do you think?

@nevion
samples/cpp/connected_components.cpp
((9 lines not shown))
-
- Mat dst = Mat::zeros(img.size(), CV_8UC3);
-
- if( !contours.empty() && !hierarchy.empty() )
- {
- // iterate through all the top-level contours,
- // draw each connected component with its own random color
- int idx = 0;
- for( ; idx >= 0; idx = hierarchy[idx][0] )
- {
- Scalar color( (rand()&255), (rand()&255), (rand()&255) );
- drawContours( dst, contours, idx, color, CV_FILLED, 8, hierarchy );
- }
+ Mat labelImage(img.size(), CV_32S);
+ int nLabels = connectedComponents(bw, labelImage, 8);
+ Vec3b colors[nLabels];
@joshdoe
joshdoe added a note

This is an error with MSVC9 (VS2008) since nLabels is not a constant; use new and delete instead.

@nevion
nevion added a note
@vpisarev Owner
vpisarev added a note

"const int" will not help. Using run-time, not a compile-time value to specify the array size is GCC extension to C/C++ standards. Use

vector colors(nLabels);

instead
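A sketch of that fix, with a stand-in Vec3b struct so the snippet has no OpenCV dependency (`makeColorTable` is an illustrative name, not from the sample):

```cpp
#include <cstdlib>
#include <vector>

// Stand-in for cv::Vec3b: three 8-bit channel values.
struct Vec3b { unsigned char v[3]; };

// std::vector sized at run time replaces the GCC-only VLA `Vec3b colors[nLabels]`.
std::vector<Vec3b> makeColorTable(int nLabels)
{
    std::vector<Vec3b> colors(nLabels);
    colors[0].v[0] = colors[0].v[1] = colors[0].v[2] = 0; // background stays black
    for (int i = 1; i < nLabels; i++)
        for (int k = 0; k < 3; k++)
            colors[i].v[k] = (unsigned char)(rand() & 255); // random label color
    return colors;
}
```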

@vpisarev
Owner

ok, let's output the connected component centroid in a separate output array of 2-channel floating-point type (CV_32FC2 or CV_64FC2 - at your choice).

@nevion

Vadim,

I believe I've got everything you mentioned taken care of, including the test case which I have also forked from your branch and updated with my input/expected output for regression testing.

Let me know if there's anything missing still.

-Jason

@vpisarev
Owner

looks good now.
One thing to fix, though, is the compile errors on Windows: http://pullrequest.opencv.org (I have to apologize for the scrambled error messages; we are aware of this problem and working on it). Then we can probably integrate it and do incremental improvements.

modules/imgproc/src/connectedcomponents.cpp
((38 lines not shown))
+// the use of this software, even if advised of the possibility of such damage.
+//
+// 2011 Jason Newton <nevion@gmail.com>
+//M*/
+//
+#include "precomp.hpp"
+#include <vector>
+
+//It's 2012 and we still let compilers get by without defining standard integer types...
+typedef schar int8_t;
+typedef uchar uint8_t;
+typedef short int16_t;
+typedef unsigned short uint16_t;
+typedef int int32_t;
+typedef unsigned int uint32_t;
+
@joshdoe
joshdoe added a note

To fix the Windows errors, at least with VC10, add typedef unsigned __int64 uint64_t;. However shouldn't this really be in core? Look at core\types_c.h, as uint64 is already defined here. Consider using uint64 etc. in your code, moving the necessary bits of this set of typedefs there.

@alekcac
Owner

Vadim, any changes?

@nevion
@nevion

Hm, regarding these types: I've added them to types_c.h... common programming lore states that char is not guaranteed to be 8 bits, though, and neither is int guaranteed to be 32 (I've used one such platform: a DSP with 16-bit ints, still using gcc 2.95 or some such). I'm not sure what to propose for older code, but C++11/C99 guarantees stdint, which provides integer types with guaranteed bit-depths.

With that said, I made the change reusing the existing typedefs, so int8 - uint32 are now also defined rather than just int64, and updated connectedcomponents afterwards.

@vpisarev
Owner

Hi Jason,
such a change in types_c.h can potentially cause conflicts. I'm afraid we cannot accept such a change, sorry. If you need such definitions in a .cpp, please put the declarations there.

Otherwise, the patch looks fine. One little note: please take a look at http://pullrequest.opencv.org; your code produces some warnings for Android, and the code can only be merged if it builds without warnings. You can fix it, or I can submit a pull request to your repository to fix that, as you wish.

@nevion

Vadim - yeah, I thought it was pretty controversial, but somebody (joshdoe?) suggested adding such typedefs there. If I may ask, where is the conflict you are concerned about, and is this a reasonable judgment? I have received no compilation errors in regards to it, and it just replicates the naming convention of u/int64, which is a sane and useful thing IMO. Or is this why they were originally missing? I can move them back, but I do consider this a small step backwards.

I have been monitoring the builds since you originally provided me the URL; I figured a no-merge-unless-it-builds condition was in play. Re the Android failure, I wasn't able to discern why, and I don't have an Android environment set up... if it's not too much trouble, could you submit a patch to fix the error there?

@joshdoe

It was my suggestion. I was a bit surprised to see OpenCV didn't have fixed-width standard typedefs. However, Vadim does have somewhat of a point, since your simple typedefs aren't robust enough to work across all platforms. Leaving each module, or even each cpp file, to implement these on its own is asking for trouble. Perhaps a better way is to use stdint.h, though it would mean including some compiler-specific files (e.g. msinttypes).

@nevion

joshdoe - sorry, I didn't realize github attached the comment to the diff and hid it altogether.

I think this is the best approach if OpenCV is going to stay stuck on C++98: use the compiler-specific options or just manually coded statements. I think the latter is a little easier; see lines 74-90 here http://code.google.com/p/msinttypes/source/browse/trunk/stdint.h for all the Windows support you will need. Looks like that covers Borland too.

For glibc using platforms, stdint.h has been there since circa 2005.

Small amendment: boost has a single file solution for this:
http://www.boost.org/doc/libs/1_52_0/boost/cstdint.hpp

Second amendment: it is almost single-file... it relies on boost/config.hpp and in turn the 23 files under boost/config/compiler to deal with int64.
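A sketch of what such a file-local fallback could look like for pre-C99 MSVC, in the spirit of msinttypes; the version cutoff is an assumption based on VS2010 (_MSC_VER 1600) being the first MSVC to ship <stdint.h>:

```cpp
// Hypothetical file-local fallback, kept out of the public headers.
#if defined(_MSC_VER) && _MSC_VER < 1600   // pre-VS2010: no <stdint.h>
typedef signed char        int8_t;
typedef unsigned char      uint8_t;
typedef short              int16_t;
typedef unsigned short     uint16_t;
typedef int                int32_t;
typedef unsigned int       uint32_t;
typedef __int64            int64_t;
typedef unsigned __int64   uint64_t;
#else
#include <stdint.h>                        // C99 / C++11 guaranteed widths
#endif
```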

@vpisarev
Owner

Hi Jason,

if you want to contribute some little (yet useful) piece of code to OpenCV, please try to make as few modifications to the external headers as possible and follow our conventions. I'm 100% sure every big project has a similar policy.

In OpenCV the conventions are the following:

uchar == unsigned char == uint8_t
schar == signed char == int8_t
ushort == unsigned short == uint16_t
short == short == int16_t
int == int32_t
unsigned == uint32_t
int64 == long long == int64_t
uint64 == unsigned long long == uint64_t
float == float32_t
double == float64_t

That's it. No arguing about that. Please do not mention C++ standards; this is a constant pain for us. These are just our rules, and they have worked fine for us and our users during the 12 years of OpenCV's existence.

If you want more freedom, that's perfectly fine, and starting from 2.5 we will have an alternative option for such contributions: make a 3rd-party module. Here is a short guide, a copy of my e-mail to another contributor. You should replace "color_tracker" below with "connected_component" or something like that. As soon as we finalize some more details, we will properly format the guide and publish it somewhere.

[start of e-mail]

In order to make OpenCV more solid and at the same time to make the OpenCV community more open, where code can be shared not only via the very small core team, we decided to change our policy with respect to contributed code. While bug fixes and very minor additions can be included right into OpenCV, new algorithms should be shaped as modules. For example, if you download OpenCV 2.4.3 and look at the modules subdirectory, you will see ~20 modules, from quite large basic modules like core and imgproc to higher-level specific modules like photo and videostab.

We are in the process of designing all the necessary web infrastructure, but if you wish, you can already do all the necessary work on your side.

  1. (optional) register at github, fork opencv (master branch). If you do not want to register at github, just use git's capabilities to clone the repository.
  2. make a branch with a sensible name, like "color_tracker".
  3. within the modules directory create a subdirectory (e.g. color_tracker). The easiest way would be to copy a small module, like photo, and rename all the headers.
  4. in modules/color_tracker/include/opencv2/color_tracker/color_tracker.hpp define a separate namespace and your classes; put the internal headers and .cpp files in the src directory.
  5. modify modules/color_tracker/CMakeLists.txt - specify the name of your module (color_tracker) and its dependencies.
  6. optionally, add some tests, documentation, performance tests etc. (test, doc, perf subdirectories).

Run CMake and make everything compile.

That's it, the module is ready. Now everyone can download a .zip file of the color_tracker subdirectory, place it in modules, and get your functionality.
For now we can just publish the list of 3rd-party modules on the code.opencv.org wiki, but later we plan to have something like Perl's CPAN or Python's pip system.

Then, if your module appears to be useful, after some staging period (like 1+ year) we can put it into core OpenCV, if you or someone else agrees to support it further for several years.

[end of e-mail]

please consider both options, and if you choose the second one, let me know; I'll close this pull request and will gladly add a link to your module on the code.opencv.org wiki.

@nevion

Hi Vadim,

I will get this to work in OpenCV without the types_c.h modification - I wanted this module in mainline so that cvblobslib et al. don't happen again and all those email/stackoverflow posts stop.

I did consider it a long shot to get the type decls in... however, given joshdoe's comment, I figured it would perhaps come to fruition, and I guess I wanted to at least get it brought up. There have been a lot of changes in OpenCV in the last 2 years, and some have been fairly radical: the new project layout, a lot more C++ than C, and the switch to Qt are the things off the top of my mind, but I guess also the use of CUDA and OpenCL... so from my POV I figured it was possible you guys would introduce something like those types. But it's rejected, and that's OK. For the record, I do think it is less error-prone (in terms of range considerations) to use bit-width-encoded types, and there is an inconsistency in using the OpenCV conventions alongside the CV_(float|S|U) types... I apologize again if you've heard this all before, but it is weird in a library like OpenCV to do this, IMO. And that's about all I can say on it, so I'm done on that topic.

My one question to you regarding Android: can you still fling me a patch to fix that build error? I wasn't able to figure it out from the error message. This is very close to being ready.

@vpisarev
Owner

Jason,
thanks for understanding and for the prompt reaction. I will try to fix the warnings.

@vpisarev
Owner

Jason, here is the patch that fixes the build problems with VS and probably Android; let's see if it works. For some reason, I could not submit a pull request to your repository, so you need to apply the patch manually:

vpisarev@1eae455

basically, you can merge https://github.com/vpisarev/opencv.git, branch cc, to your master.

@nevion

Vadim - I added your patch. Still build errors on Android... something the compiler is having trouble with on Input/OutputArrays. On Mac the regression test also fails with invalid test data... this only happens when the input matrix (loaded from a png file) is empty... perhaps the dataset repo isn't up to date in the Mac test environment? This is just a warning, however.

Btw, can you take references to Input/OutputArrays? I noted you changed the CCStatsOp fields to be pointers, but thought it was funny you didn't use references instead...

@vpisarev
Owner

Hi Jason,

1) regarding the tests - can you please send me the two files you use for the regression tests? I will put them in opencv_extra. Without the test data the tests will always pass (unless the test crashes somewhere inside the function).
2) on the Android build - I'm still trying to figure out why it fails.
3) on Input/OutputArray: using references as members of C++ classes is bad, because it triggers the (correct) warning that the assignment operator and copy constructor cannot be generated. Instead of suppressing or ignoring the warning, I always choose to fix it properly.

@vpisarev
Owner

ok; I fixed the Android build, please check the cc branch. Basically, you can just copy the implementations of the external cv::connectedComponentsWithStats functions into your .cpp file.

The test data is still needed; with failing tests we cannot accept the code.

@nevion

Vadim - the test data is here (pushed a week ago): https://github.com/nevion/opencv_extra

I pushed your patch too.

@vpisarev
Owner

ok, I added the test data to our copy of opencv_extra; hopefully the tests will pass now

@nevion

Vadim - not sure if it's been rebuilt since you added the test data. Can you force a rebuild or should I make a dummy commit?

@vpisarev
Owner

I can, and I did. Looks like this is a problem with our buildbot. Still, we have to wait a bit until it's solved.

@vpisarev
Owner

Hi Jason!

2 more things:
1) can you please update the documentation to reflect the API? E.g. the structure description should be removed.

2) we will integrate your pull request early next week, because this week we are preparing the 2.4.3.x release, and we want to minimize the possibility of conflicts during the after-release 2.4 -> master merges. Is that fine with you?

@nevion

I'll get on 1 sometime today. Re 2: we're tantalizingly close, and the core code is stable and fairly well tested... I'd hope we could get it into the next release that allows such things in, whenever that is. I also hope we're not missing one, though that seems like a bug-fix release. Btw, come January I'm going to be on a long set of work-related field tests, so it works best for me to finish up before that... or after, though I'd have to fight the distraction.

@vpisarev
Owner

Jason,
don't worry; as long as it's integrated (and it's only the documentation that remains to be fixed), we will put it into the nearest OpenCV release.

Putting it into 2.4.3.x is too late; it's a very long release process going through multiple QA steps, and it's already on the way. So the earliest release we can put it in is 2.4.4, which is scheduled for 2013 Feb.

Regards,
Vadim

@kirill-kornyakov

Vadim, actually the earliest possible release is 2.5, since this is new functionality and it is targeted for the "master" branch. But I don't think that will cause any issues, since after the merge to master this will be a part of OpenCV!

@vpisarev
Owner

oh, that's right; the pull request should actually be retargeted to the 2.4 branch in order to be put into 2.4.4

@kirill-kornyakov

But I thought that 2.4 should accept only bug-fixes and optimizations =) Is it a "must have" for 2.4.4?

@vpisarev
Owner

there are "should" and "could". 2.4.4 can accept anything that does not break 2.4 API :) So, we can potentially add this functionality, no probs

@nevion

Ok I've pushed an update to the docs. Re 2.4, should I rebase off 2.4 and submit a pull request to that branch?

@vpisarev
Owner

Hi Jason,

Let's put it into master to keep things going. We can still move it to 2.4 later.

@nevion

Looks like the Android buildbot is freaking out again - other than that, I see no warnings relevant to connected components, just histogram equalization and a cascade classifier. Not sure what to make of the docs failure, but the topic of interest isn't in its logs.

@vpisarev
Owner

:+1:

@vpisarev
Owner

:shipit:

@opencv-pushbot opencv-pushbot merged commit 4cb25e9 into Itseez:master
Commits on Nov 5, 2012
  1. connectedComponents: warning free version
     Jason Newton authored
  2. connectedComponents: peep-hole optimizations, mostly surrounding the fact that cv::Mat::at is expensive in a tight loop; also added a "blobstats" version
     Jason Newton authored
  3. connectedcomponents: use opencv integral types, add to docs, fix up things for a python export
     Jason Newton authored
Commits on Nov 23, 2012
  1. adjust output type to return int32... it should at least be unsigned but this breaks python bindings; remove non-8bit input type support, not worth the binary size
     Jason Newton authored
Commits on Nov 27, 2012
  1. A few changes to comply with upstream requirements for merge. Change input/output order from (out Labeled, in Image) to (in Image, out Labeled) and convert to Input/OutputArrays in the process. Adopt OutputArray for statistics export so that the algorithm is "wrapper friendly" and does not require a new struct in language bindings, at the expense of using doubles for everything and slowing statistics computation down.
     Jason Newton authored
Commits on Dec 9, 2012
  1. use vector instead of non-standard stack allocation; also correct program argument borkage
     Jason Newton authored
  2. use a ltype parameter to determine result Label image type; export stats with differing types over different outputarrays
     Jason Newton authored
Commits on Dec 10, 2012
  1. connectedcomponents test case
     Jason Newton authored
Commits on Dec 15, 2012
  1. use opencv's integer type convention
     Jason Newton authored
  2. disable windows build warning for connectedcomponents template argument comparisons
     Jason Newton authored
  3. @vpisarev
  4. @vpisarev
Commits on Dec 17, 2012
  1. drop usage of macros... the type is already there!
     Jason Newton authored
Commits on Dec 18, 2012
  1. probably fixed build problems on Android
     vpisarev authored, Jason Newton committed
Commits on Dec 19, 2012
32 modules/imgproc/doc/structural_analysis_and_shape_descriptors.rst
@@ -118,6 +118,38 @@ These values are proved to be invariants to the image scale, rotation, and refle
.. seealso:: :ocv:func:`matchShapes`
+connectedComponents
+-----------------------
+computes the connected components labeled image of boolean image ``image`` with 4- or 8-way connectivity; returns N, the total number of labels ``[0, N-1]``, where 0 represents the background label. ``ltype`` specifies the output label image type, an important consideration based on the total number of labels (or, alternatively, the total number of pixels in the source image).
+
+.. ocv:function:: int connectedComponents(InputArray image, OutputArray labels, int connectivity = 8, int ltype=CV_32S)
+
+.. ocv:function:: int connectedComponentsWithStats(InputArray image, OutputArray labels, OutputArray stats, OutputArray centroids, int connectivity = 8, int ltype=CV_32S)
+
+ :param image: the image to be labeled
+
+ :param labels: destination labeled image
+
+ :param connectivity: 8 or 4 for 8-way or 4-way connectivity respectively
+
+ :param ltype: output image label type. Currently CV_32S and CV_16U are supported.
+
+ :param stats: statistics output for each label, including the background label; see below for available statistics. Statistics are accessed via ``stats(label, COLUMN)``, where the available columns are defined below.
+
+ * **CC_STAT_LEFT** The leftmost (x) coordinate which is the inclusive start of the bounding box in the horizontal
+ direction.
+
+ * **CC_STAT_TOP** The topmost (y) coordinate which is the inclusive start of the bounding box in the vertical
+ direction.
+
+ * **CC_STAT_WIDTH** The horizontal size of the bounding box
+
+ * **CC_STAT_HEIGHT** The vertical size of the bounding box
+
+ * **CC_STAT_AREA** The total area (in pixels) of the connected component
+
+ :param centroids: floating point centroid (x,y) output for each label, including the background label
+
findContours
----------------
14 modules/imgproc/include/opencv2/imgproc/imgproc.hpp
@@ -1102,6 +1102,20 @@ enum { TM_SQDIFF=0, TM_SQDIFF_NORMED=1, TM_CCORR=2, TM_CCORR_NORMED=3, TM_CCOEFF
CV_EXPORTS_W void matchTemplate( InputArray image, InputArray templ,
OutputArray result, int method );
+enum { CC_STAT_LEFT=0, CC_STAT_TOP=1, CC_STAT_WIDTH=2, CC_STAT_HEIGHT=3, CC_STAT_AREA=4, CC_STAT_MAX = 5};
+
+// computes the connected components labeled image of boolean image ``image``
+// with 4 or 8 way connectivity - returns N, the total
+// number of labels [0, N-1] where 0 represents the background label.
+// ltype specifies the output label image type, an important
+// consideration based on the total number of labels or
+// alternatively the total number of pixels in the source image.
+CV_EXPORTS_W int connectedComponents(InputArray image, OutputArray labels,
+ int connectivity = 8, int ltype=CV_32S);
+CV_EXPORTS_W int connectedComponentsWithStats(InputArray image, OutputArray labels,
+ OutputArray stats, OutputArray centroids,
+ int connectivity = 8, int ltype=CV_32S);
+
//! mode of the contour retrieval algorithm
enum
{
411 modules/imgproc/src/connectedcomponents.cpp
@@ -0,0 +1,411 @@
+/*M///////////////////////////////////////////////////////////////////////////////////////
+//
+// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
+//
+// By downloading, copying, installing or using the software you agree to this license.
+// If you do not agree to this license, do not download, install,
+// copy or use the software.
+//
+//
+// Intel License Agreement
+// For Open Source Computer Vision Library
+//
+// Copyright (C) 2000, Intel Corporation, all rights reserved.
+// Third party copyrights are property of their respective owners.
+//
+// Redistribution and use in source and binary forms, with or without modification,
+// are permitted provided that the following conditions are met:
+//
+// * Redistribution's of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+//
+// * Redistribution's in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+//
+// * The name of Intel Corporation may not be used to endorse or promote products
+// derived from this software without specific prior written permission.
+//
+// This software is provided by the copyright holders and contributors "as is" and
+// any express or implied warranties, including, but not limited to, the implied
+// warranties of merchantability and fitness for a particular purpose are disclaimed.
+// In no event shall the Intel Corporation or contributors be liable for any direct,
+// indirect, incidental, special, exemplary, or consequential damages
+// (including, but not limited to, procurement of substitute goods or services;
+// loss of use, data, or profits; or business interruption) however caused
+// and on any theory of liability, whether in contract, strict liability,
+// or tort (including negligence or otherwise) arising in any way out of
+// the use of this software, even if advised of the possibility of such damage.
+//
+// 2011 Jason Newton <nevion@gmail.com>
+//M*/
+//
+#include "precomp.hpp"
+#include <vector>
+
+namespace cv{
+ namespace connectedcomponents{
+
+ struct NoOp{
+ NoOp(){
+ }
+ void init(int /*labels*/){
+ }
+ inline
+ void operator()(int r, int c, int l){
+ (void) r;
+ (void) c;
+ (void) l;
+ }
+ void finish(){}
+ };
+ struct Point2ui64{
+ uint64 x, y;
+ Point2ui64(uint64 _x, uint64 _y):x(_x), y(_y){}
+ };
+
+ struct CCStatsOp{
+ const _OutputArray* _mstatsv;
+ cv::Mat statsv;
+ const _OutputArray* _mcentroidsv;
+ cv::Mat centroidsv;
+ std::vector<Point2ui64> integrals;
+
+ CCStatsOp(OutputArray _statsv, OutputArray _centroidsv): _mstatsv(&_statsv), _mcentroidsv(&_centroidsv){
+ }
+ inline
+ void init(int nlabels){
+ _mstatsv->create(cv::Size(CC_STAT_MAX, nlabels), cv::DataType<int>::type);
+ statsv = _mstatsv->getMat();
+ _mcentroidsv->create(cv::Size(2, nlabels), cv::DataType<double>::type);
+ centroidsv = _mcentroidsv->getMat();
+
+ for(int l = 0; l < (int) nlabels; ++l){
+ int *row = (int *) &statsv.at<int>(l, 0);
+ row[CC_STAT_LEFT] = INT_MAX;
+ row[CC_STAT_TOP] = INT_MAX;
+ row[CC_STAT_WIDTH] = INT_MIN;
+ row[CC_STAT_HEIGHT] = INT_MIN;
+ row[CC_STAT_AREA] = 0;
+ }
+ integrals.resize(nlabels, Point2ui64(0, 0));
+ }
+ void operator()(int r, int c, int l){
+ int *row = &statsv.at<int>(l, 0);
+ if(c > row[CC_STAT_WIDTH]){
+ row[CC_STAT_WIDTH] = c;
+ }else{
+ if(c < row[CC_STAT_LEFT]){
+ row[CC_STAT_LEFT] = c;
+ }
+ }
+ if(r > row[CC_STAT_HEIGHT]){
+ row[CC_STAT_HEIGHT] = r;
+ }else{
+ if(r < row[CC_STAT_TOP]){
+ row[CC_STAT_TOP] = r;
+ }
+ }
+ row[CC_STAT_AREA]++;
+ Point2ui64 &integral = integrals[l];
+ integral.x += c;
+ integral.y += r;
+ }
+ void finish(){
+ for(int l = 0; l < statsv.rows; ++l){
+ int *row = &statsv.at<int>(l, 0);
+ row[CC_STAT_LEFT] = std::min(row[CC_STAT_LEFT], row[CC_STAT_WIDTH]);
+ row[CC_STAT_WIDTH] = row[CC_STAT_WIDTH] - row[CC_STAT_LEFT] + 1;
+ row[CC_STAT_TOP] = std::min(row[CC_STAT_TOP], row[CC_STAT_HEIGHT]);
+ row[CC_STAT_HEIGHT] = row[CC_STAT_HEIGHT] - row[CC_STAT_TOP] + 1;
+
+ Point2ui64 &integral = integrals[l];
+ double *centroid = &centroidsv.at<double>(l, 0);
+ double area = ((unsigned*)row)[CC_STAT_AREA];
+ centroid[0] = double(integral.x) / area;
+ centroid[1] = double(integral.y) / area;
+ }
+ }
+ };
+
+ //Find the root of the tree of node i
+ template<typename LabelT>
+ inline static
+ LabelT findRoot(const LabelT *P, LabelT i){
+ LabelT root = i;
+ while(P[root] < root){
+ root = P[root];
+ }
+ return root;
+ }
+
+ //Make all nodes in the path of node i point to root
+ template<typename LabelT>
+ inline static
+ void setRoot(LabelT *P, LabelT i, LabelT root){
+ while(P[i] < i){
+ LabelT j = P[i];
+ P[i] = root;
+ i = j;
+ }
+ P[i] = root;
+ }
+
+ //Find the root of the tree of the node i and compress the path in the process
+ template<typename LabelT>
+ inline static
+ LabelT find(LabelT *P, LabelT i){
+ LabelT root = findRoot(P, i);
+ setRoot(P, i, root);
+ return root;
+ }
+
+ //unite the two trees containing nodes i and j and return the new root
+ template<typename LabelT>
+ inline static
+ LabelT set_union(LabelT *P, LabelT i, LabelT j){
+ LabelT root = findRoot(P, i);
+ if(i != j){
+ LabelT rootj = findRoot(P, j);
+ if(root > rootj){
+ root = rootj;
+ }
+ setRoot(P, j, root);
+ }
+ setRoot(P, i, root);
+ return root;
+ }
+
+ //Flatten the Union Find tree and relabel the components
+ template<typename LabelT>
+ inline static
+ LabelT flattenL(LabelT *P, LabelT length){
+ LabelT k = 1;
+ for(LabelT i = 1; i < length; ++i){
+ if(P[i] < i){
+ P[i] = P[P[i]];
+ }else{
+ P[i] = k; k = k + 1;
+ }
+ }
+ return k;
+ }
+
+ //Based on "Two Strategies to Speed up Connected Components Algorithms", the SAUF (Scan array union find) variant
+ //using decision trees
+ //Kesheng Wu, et al
+ //Note: rows are encoded as position in the "rows" array to save lookup times
+ //reference for 4-way: {{-1, 0}, {0, -1}};//b, d neighborhoods
+ const int G4[2][2] = {{1, 0}, {0, -1}};//b, d neighborhoods
+ //reference for 8-way: {{-1, -1}, {-1, 0}, {-1, 1}, {0, -1}};//a, b, c, d neighborhoods
+ const int G8[4][2] = {{1, -1}, {1, 0}, {1, 1}, {0, -1}};//a, b, c, d neighborhoods
+ template<typename LabelT, typename PixelT, typename StatsOp = NoOp >
+ struct LabelingImpl{
+ LabelT operator()(const cv::Mat &I, cv::Mat &L, int connectivity, StatsOp &sop){
+ CV_Assert(L.rows == I.rows);
+ CV_Assert(L.cols == I.cols);
+ CV_Assert(connectivity == 8 || connectivity == 4);
+ const int rows = L.rows;
+ const int cols = L.cols;
+ size_t Plength = (size_t(rows + 3 - 1)/3) * (size_t(cols + 3 - 1)/3);
+ if(connectivity == 4){
+ Plength = 4 * Plength;//a quick and dirty upper bound, an exact answer exists if you want to find it
+ //the 4 comes from the fact that a 3x3 block can never have more than 4 unique labels
+ }
+ LabelT *P = (LabelT *) fastMalloc(sizeof(LabelT) * Plength);
+ P[0] = 0;
+ LabelT lunique = 1;
+ //scanning phase
+ for(int r_i = 0; r_i < rows; ++r_i){
+ LabelT *Lrow = (LabelT *)(L.data + L.step.p[0] * r_i);
+ LabelT *Lrow_prev = (LabelT *)(((char *)Lrow) - L.step.p[0]);
+ const PixelT *Irow = (PixelT *)(I.data + I.step.p[0] * r_i);
+ const PixelT *Irow_prev = (const PixelT *)(((char *)Irow) - I.step.p[0]);
+ LabelT *Lrows[2] = {
+ Lrow,
+ Lrow_prev
+ };
+ const PixelT *Irows[2] = {
+ Irow,
+ Irow_prev
+ };
+ if(connectivity == 8){
+ const int a = 0;
+ const int b = 1;
+ const int c = 2;
+ const int d = 3;
+ const bool T_a_r = (r_i - G8[a][0]) >= 0;
+ const bool T_b_r = (r_i - G8[b][0]) >= 0;
+ const bool T_c_r = (r_i - G8[c][0]) >= 0;
+ for(int c_i = 0; Irows[0] != Irow + cols; ++Irows[0], c_i++){
+ if(!*Irows[0]){
+ Lrow[c_i] = 0;
+ continue;
+ }
+ Irows[1] = Irow_prev + c_i;
+ Lrows[0] = Lrow + c_i;
+ Lrows[1] = Lrow_prev + c_i;
+ const bool T_a = T_a_r && (c_i + G8[a][1]) >= 0 && *(Irows[G8[a][0]] + G8[a][1]);
+ const bool T_b = T_b_r && *(Irows[G8[b][0]] + G8[b][1]);
+ const bool T_c = T_c_r && (c_i + G8[c][1]) < cols && *(Irows[G8[c][0]] + G8[c][1]);
+ const bool T_d = (c_i + G8[d][1]) >= 0 && *(Irows[G8[d][0]] + G8[d][1]);
+
+ //decision tree
+ if(T_b){
+ //copy(b)
+ *Lrows[0] = *(Lrows[G8[b][0]] + G8[b][1]);
+ }else{//not b
+ if(T_c){
+ if(T_a){
+ //copy(c, a)
+ *Lrows[0] = set_union(P, *(Lrows[G8[c][0]] + G8[c][1]), *(Lrows[G8[a][0]] + G8[a][1]));
+ }else{
+ if(T_d){
+ //copy(c, d)
+ *Lrows[0] = set_union(P, *(Lrows[G8[c][0]] + G8[c][1]), *(Lrows[G8[d][0]] + G8[d][1]));
+ }else{
+ //copy(c)
+ *Lrows[0] = *(Lrows[G8[c][0]] + G8[c][1]);
+ }
+ }
+ }else{//not c
+ if(T_a){
+ //copy(a)
+ *Lrows[0] = *(Lrows[G8[a][0]] + G8[a][1]);
+ }else{
+ if(T_d){
+ //copy(d)
+ *Lrows[0] = *(Lrows[G8[d][0]] + G8[d][1]);
+ }else{
+ //new label
+ *Lrows[0] = lunique;
+ P[lunique] = lunique;
+ lunique = lunique + 1;
+ }
+ }
+ }
+ }
+ }
+ }else{
+ //B & D only
+ const int b = 0;
+ const int d = 1;
+ const bool T_b_r = (r_i - G4[b][0]) >= 0;
+ for(int c_i = 0; Irows[0] != Irow + cols; ++Irows[0], c_i++){
+ if(!*Irows[0]){
+ Lrow[c_i] = 0;
+ continue;
+ }
+ Irows[1] = Irow_prev + c_i;
+ Lrows[0] = Lrow + c_i;
+ Lrows[1] = Lrow_prev + c_i;
+ const bool T_b = T_b_r && *(Irows[G4[b][0]] + G4[b][1]);
+ const bool T_d = (c_i + G4[d][1]) >= 0 && *(Irows[G4[d][0]] + G4[d][1]);
+ if(T_b){
+ if(T_d){
+ //copy(d, b)
+ *Lrows[0] = set_union(P, *(Lrows[G4[d][0]] + G4[d][1]), *(Lrows[G4[b][0]] + G4[b][1]));
+ }else{
+ //copy(b)
+ *Lrows[0] = *(Lrows[G4[b][0]] + G4[b][1]);
+ }
+ }else{
+ if(T_d){
+ //copy(d)
+ *Lrows[0] = *(Lrows[G4[d][0]] + G4[d][1]);
+ }else{
+ //new label
+ *Lrows[0] = lunique;
+ P[lunique] = lunique;
+ lunique = lunique + 1;
+ }
+ }
+ }
+ }
+ }
+
+ //analysis
+ LabelT nLabels = flattenL(P, lunique);
+ sop.init(nLabels);
+
+ for(int r_i = 0; r_i < rows; ++r_i){
+ LabelT *Lrow_start = (LabelT *)(L.data + L.step.p[0] * r_i);
+ LabelT *Lrow_end = Lrow_start + cols;
+ LabelT *Lrow = Lrow_start;
+ for(int c_i = 0; Lrow != Lrow_end; ++Lrow, ++c_i){
+ const LabelT l = P[*Lrow];
+ *Lrow = l;
+ sop(r_i, c_i, l);
+ }
+ }
+
+ sop.finish();
+ fastFree(P);
+
+ return nLabels;
+ }//End function LabelingImpl operator()
+
+ };//End struct LabelingImpl
+}//end namespace connectedcomponents
+
+//L's type must have an appropriate depth for the number of pixels in I
+template<typename StatsOp>
+static
+int connectedComponents_sub1(const cv::Mat &I, cv::Mat &L, int connectivity, StatsOp &sop){
+ CV_Assert(L.channels() == 1 && I.channels() == 1);
+ CV_Assert(connectivity == 8 || connectivity == 4);
+
+ int lDepth = L.depth();
+ int iDepth = I.depth();
+ using connectedcomponents::LabelingImpl;
+ //warn if L's depth is not sufficient?
+
+ CV_Assert(iDepth == CV_8U || iDepth == CV_8S);
+
+ if(lDepth == CV_8U){
+ return (int) LabelingImpl<uchar, uchar, StatsOp>()(I, L, connectivity, sop);
+ }else if(lDepth == CV_16U){
+ return (int) LabelingImpl<ushort, uchar, StatsOp>()(I, L, connectivity, sop);
+ }else if(lDepth == CV_32S){
+ //note that signed types don't really make sense here and not being able to use unsigned matters for scientific projects
+ //OpenCV: how should we proceed? .at<T> typechecks in debug mode
+ return (int) LabelingImpl<int, uchar, StatsOp>()(I, L, connectivity, sop);
+ }
+
+ CV_Error(CV_StsUnsupportedFormat, "unsupported label/image type");
+ return -1;
+}
+
+}
+
+int cv::connectedComponents(InputArray _img, OutputArray _labels, int connectivity, int ltype){
+ const cv::Mat img = _img.getMat();
+ _labels.create(img.size(), CV_MAT_DEPTH(ltype));
+ cv::Mat labels = _labels.getMat();
+ connectedcomponents::NoOp sop;
+ if(ltype == CV_16U){
+ return connectedComponents_sub1(img, labels, connectivity, sop);
+ }else if(ltype == CV_32S){
+ return connectedComponents_sub1(img, labels, connectivity, sop);
+ }else{
+ CV_Error(CV_StsUnsupportedFormat, "the type of labels must be 16u or 32s");
+ return 0;
+ }
+}
+
+int cv::connectedComponentsWithStats(InputArray _img, OutputArray _labels, OutputArray statsv,
+ OutputArray centroids, int connectivity, int ltype)
+{
+ const cv::Mat img = _img.getMat();
+ _labels.create(img.size(), CV_MAT_DEPTH(ltype));
+ cv::Mat labels = _labels.getMat();
+ connectedcomponents::CCStatsOp sop(statsv, centroids);
+ if(ltype == CV_16U){
+ return connectedComponents_sub1(img, labels, connectivity, sop);
+ }else if(ltype == CV_32S){
+ return connectedComponents_sub1(img, labels, connectivity, sop);
+ }else{
+ CV_Error(CV_StsUnsupportedFormat, "the type of labels must be 16u or 32s");
+ return 0;
+ }
+}
108 modules/imgproc/test/test_connectedcomponents.cpp
@@ -0,0 +1,108 @@
+/*M///////////////////////////////////////////////////////////////////////////////////////
+//
+// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
+//
+// By downloading, copying, installing or using the software you agree to this license.
+// If you do not agree to this license, do not download, install,
+// copy or use the software.
+//
+//
+// License Agreement
+// For Open Source Computer Vision Library
+//
+// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
+// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
+// Third party copyrights are property of their respective owners.
+//
+// Redistribution and use in source and binary forms, with or without modification,
+// are permitted provided that the following conditions are met:
+//
+// * Redistribution's of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+//
+// * Redistribution's in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+//
+// * The name of the copyright holders may not be used to endorse or promote products
+// derived from this software without specific prior written permission.
+//
+// This software is provided by the copyright holders and contributors "as is" and
+// any express or implied warranties, including, but not limited to, the implied
+// warranties of merchantability and fitness for a particular purpose are disclaimed.
+// In no event shall the Intel Corporation or contributors be liable for any direct,
+// indirect, incidental, special, exemplary, or consequential damages
+// (including, but not limited to, procurement of substitute goods or services;
+// loss of use, data, or profits; or business interruption) however caused
+// and on any theory of liability, whether in contract, strict liability,
+// or tort (including negligence or otherwise) arising in any way out of
+// the use of this software, even if advised of the possibility of such damage.
+//
+//M*/
+
+#include "test_precomp.hpp"
+#include <string>
+
+using namespace cv;
+using namespace std;
+
+class CV_ConnectedComponentsTest : public cvtest::BaseTest
+{
+public:
+ CV_ConnectedComponentsTest();
+ ~CV_ConnectedComponentsTest();
+protected:
+ void run(int);
+};
+
+CV_ConnectedComponentsTest::CV_ConnectedComponentsTest() {}
+CV_ConnectedComponentsTest::~CV_ConnectedComponentsTest() {}
+
+void CV_ConnectedComponentsTest::run( int /* start_from */)
+{
+ string exp_path = string(ts->get_data_path()) + "connectedcomponents/ccomp_exp.png";
+ Mat exp = imread(exp_path, 0);
+ Mat orig = imread(string(ts->get_data_path()) + "connectedcomponents/concentric_circles.png", 0);
+
+ if (orig.empty())
+ {
+ ts->set_failed_test_info( cvtest::TS::FAIL_INVALID_TEST_DATA );
+ return;
+ }
+
+ Mat bw = orig > 128;
+ Mat labelImage;
+ int nLabels = connectedComponents(bw, labelImage, 8, CV_32S);
+
+ for(int r = 0; r < labelImage.rows; ++r){
+ for(int c = 0; c < labelImage.cols; ++c){
+ int l = labelImage.at<int>(r, c);
+ bool pass = l >= 0 && l <= nLabels;
+ if(!pass){
+ ts->set_failed_test_info( cvtest::TS::FAIL_INVALID_OUTPUT );
+ return;
+ }
+ }
+ }
+
+ if( exp.empty() || orig.size() != exp.size() )
+ {
+ imwrite(exp_path, labelImage);
+ exp = labelImage;
+ }
+
+ if (0 != norm(labelImage > 0, exp > 0, NORM_INF))
+ {
+ ts->set_failed_test_info( cvtest::TS::FAIL_MISMATCH );
+ return;
+ }
+ if (nLabels != norm(labelImage, NORM_INF)+1)
+ {
+ ts->set_failed_test_info( cvtest::TS::FAIL_MISMATCH );
+ return;
+ }
+ ts->set_failed_test_info(cvtest::TS::OK);
+}
+
+TEST(Imgproc_ConnectedComponents, regression) { CV_ConnectedComponentsTest test; test.safe_run(); }
+
11 modules/python/src2/cv2.cpp
@@ -410,7 +410,7 @@ static bool pyopencv_to(PyObject* obj, bool& value, const char* name = "<unknown
static PyObject* pyopencv_from(size_t value)
{
- return PyLong_FromUnsignedLong((unsigned long)value);
+ return PyLong_FromSize_t(value);
}
static bool pyopencv_to(PyObject* obj, size_t& value, const char* name = "<unknown>")
@@ -497,9 +497,16 @@ static bool pyopencv_to(PyObject* obj, float& value, const char* name = "<unknow
static PyObject* pyopencv_from(int64 value)
{
- return PyFloat_FromDouble((double)value);
+ return PyLong_FromLongLong(value);
}
+#if !defined(__LP64__)
+static PyObject* pyopencv_from(uint64 value)
+{
+ return PyLong_FromUnsignedLongLong(value);
+}
+#endif
+
static PyObject* pyopencv_from(const string& value)
{
return PyString_FromString(value.empty() ? "" : value.c_str());
36 samples/cpp/connected_components.cpp
@@ -11,25 +11,21 @@ int threshval = 100;
static void on_trackbar(int, void*)
{
Mat bw = threshval < 128 ? (img < threshval) : (img > threshval);
-
- vector<vector<Point> > contours;
- vector<Vec4i> hierarchy;
-
- findContours( bw, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );
-
- Mat dst = Mat::zeros(img.size(), CV_8UC3);
-
- if( !contours.empty() && !hierarchy.empty() )
- {
- // iterate through all the top-level contours,
- // draw each connected component with its own random color
- int idx = 0;
- for( ; idx >= 0; idx = hierarchy[idx][0] )
- {
- Scalar color( (rand()&255), (rand()&255), (rand()&255) );
- drawContours( dst, contours, idx, color, CV_FILLED, 8, hierarchy );
- }
+ Mat labelImage(img.size(), CV_32S);
+ int nLabels = connectedComponents(bw, labelImage, 8);
+ std::vector<Vec3b> colors(nLabels);
+ colors[0] = Vec3b(0, 0, 0);//background
+ for(int label = 1; label < nLabels; ++label){
+ colors[label] = Vec3b( (rand()&255), (rand()&255), (rand()&255) );
}
+ Mat dst(img.size(), CV_8UC3);
+ for(int r = 0; r < dst.rows; ++r){
+ for(int c = 0; c < dst.cols; ++c){
+ int label = labelImage.at<int>(r, c);
+ Vec3b &pixel = dst.at<Vec3b>(r, c);
+ pixel = colors[label];
+ }
+ }
imshow( "Connected Components", dst );
}
@@ -45,14 +41,14 @@ static void help()
const char* keys =
{
- "{@image |stuff.jpg|image for converting to a grayscale}"
+ "{@image|stuff.jpg|image for converting to a grayscale}"
};
int main( int argc, const char** argv )
{
help();
CommandLineParser parser(argc, argv, keys);
- string inputImage = parser.get<string>(1);
+ string inputImage = parser.get<string>("@image");
img = imread(inputImage.c_str(), 0);
if(img.empty())