Multiple class detection through text file containing list of models #98

Open · wants to merge 141 commits into base: unstable
903d5cb
committed image-net trained example directly
liuliu Feb 9, 2014
e046b08
added Jacobi method for eigenvectors / eigenvalues
liuliu Feb 10, 2014
4c626b9
support data augmentation with PCA
liuliu Feb 11, 2014
3c7c668
try to add travis.yml file
liuliu Feb 11, 2014
c0818bb
minimal dependency to pass unittests
liuliu Feb 11, 2014
ac57bb1
try libatlas on TravisCI
liuliu Feb 11, 2014
a52cf50
last attempt before go to sleep
liuliu Feb 11, 2014
e499822
link against blas instead
liuliu Feb 11, 2014
e30d99a
relax one unittest, added build status to README.md
liuliu Feb 11, 2014
e26b3c5
make sure if fails, it will fail all
liuliu Feb 11, 2014
cb2eac6
intentional to fail a unittest to test CI
liuliu Feb 11, 2014
136ed5c
fix the unittest
liuliu Feb 11, 2014
b5bd91a
try again to make sure fail
liuliu Feb 11, 2014
1af1f49
fix unittest again
liuliu Feb 11, 2014
9f73dbf
fix a crash on net sgd
liuliu Feb 12, 2014
3a2badb
renamed a few functions to make it sound better
liuliu Feb 13, 2014
18f32d0
change what make and make test do in /test
liuliu Feb 13, 2014
5ce8d68
check dispatch header and then do the blocks runtime
liuliu Feb 14, 2014
4cf9b73
workaround clang 3.4 issues on raspberry pi
liuliu Feb 15, 2014
06f0d0f
forget to add 3rdparty.tests
liuliu Feb 15, 2014
5b829bf
unittests pass on raspberry pi
liuliu Feb 15, 2014
5e34e21
changed how mean activity calculated, and changed channel gain value …
liuliu Feb 17, 2014
fef7db3
fix unittests
liuliu Feb 17, 2014
b216bd8
fix compiler warnings on raspberrypi-slave
liuliu Feb 17, 2014
e7440f4
try to trigger buildbot webhook instead
liuliu Feb 17, 2014
cd17a08
fix autoconf on Mac OSX
liuliu Feb 18, 2014
3294891
continue fix on mac osx unittests
liuliu Feb 18, 2014
763bc4f
fix a few static analyzer reports, still a few: http://ci.libccv.org/…
liuliu Feb 18, 2014
3a00516
using five-stencil to compute numeric diff, unittest should be less f…
liuliu Feb 18, 2014
aeaf9ff
change the eps a bit to pass unittests
liuliu Feb 18, 2014
398d80f
fix static analyzer complains for ccv_numeric and ccv_basic
liuliu Feb 19, 2014
de370d5
fixed analyzer complaints on ccv_icf as well, now only ccv_resample
liuliu Feb 19, 2014
8ce752a
bypass static analyzer for ccv_resample.c
liuliu Feb 20, 2014
1e76acb
change unittest for more stable comparison method
liuliu Feb 20, 2014
3156ec0
not crop middle art only for training
liuliu Feb 23, 2014
2742097
change the way we use learn_rate
liuliu Feb 25, 2014
95e7086
support partition the convnet
liuliu Feb 26, 2014
dd35f15
fix unittests, and few style changes to cwc_convnet
liuliu Feb 27, 2014
55551ec
made cwc-bench-runtime work again
liuliu Feb 28, 2014
8d9ba55
add libccv.org site source file to the tree
liuliu Mar 1, 2014
381263f
add compile binaries to CI
liuliu Mar 1, 2014
a83239d
start to modernize ccv_convnet (CPU convnet impl)
liuliu Mar 1, 2014
c71f3fa
updated ccv_convnet to support partition as well
liuliu Mar 2, 2014
6b19307
fix build failure on cwc-bench-runtime
liuliu Mar 2, 2014
4714eb3
change ccv_comp_t to have substruct of ccv_classification_t
liuliu Mar 3, 2014
e01609c
added cnnclassify binary
liuliu Mar 3, 2014
07e6d42
make cnnclassify clear in valgrind
liuliu Mar 3, 2014
a136c7a
correct averaging
liuliu Mar 4, 2014
df195f6
added cnndraw.rb
liuliu Mar 4, 2014
36f84be
I forget to fix ./serve
liuliu Mar 4, 2014
30e484e
speed up ccv_convnet_classify from 700ms to 300ms
liuliu Mar 5, 2014
a3b655f
finished HTTP mapping for convnet/classify
liuliu Mar 5, 2014
24b09f4
copy the new image-net.sqlite3 to samples
liuliu Mar 5, 2014
fdc7235
starting to write doc for convnet
liuliu Mar 6, 2014
d843c29
fix typo. change license for files included in ./doc ./samples, ./sit…
liuliu Mar 6, 2014
18331cd
fixed some typos
liuliu Mar 6, 2014
be8db89
fix a bug in cnnclassify, more improvements on convnet.md
liuliu Mar 7, 2014
e20a0a1
fix a bug for swt that crashes if no region available
liuliu Mar 7, 2014
fad6adb
fix some problem with http server
liuliu Mar 7, 2014
d7284d9
updated doc for convnet
liuliu Mar 7, 2014
7d254ad
fix a layout with doc in website
liuliu Mar 7, 2014
f009aa3
add max_dimension to REST interface
liuliu Mar 12, 2014
ac54d1d
this is the right max_dimension
liuliu Mar 12, 2014
07c95b9
fftw plan creation is not thread safe
liuliu Mar 13, 2014
477babd
add tesseract-ocr for REST API
liuliu Mar 13, 2014
d895cbd
handle tesseract return nil correctly
liuliu Mar 13, 2014
e0451e8
filter result so that I don't need to escape
liuliu Mar 13, 2014
b253476
forget to include 0-9
liuliu Mar 14, 2014
0cc5f95
added lib doc for convnet
liuliu Mar 16, 2014
cb10a22
add doc for HTTP API
liuliu Mar 16, 2014
c985ff1
I forget to implement relu in full connect layer
liuliu Mar 17, 2014
8f7fbf4
refactored how to do cuda allocation
liuliu Mar 24, 2014
4e012b3
fix build break
liuliu Mar 24, 2014
a4801d5
have a working cnnclassify from GPU
liuliu Mar 25, 2014
9dbf65e
fix memory leak & build break
liuliu Mar 25, 2014
a0e7880
finally get similar results as my monkey patch
liuliu Mar 25, 2014
a86dd9c
modified CPU ccv_convnet_classify implementation and start to gather
liuliu Mar 26, 2014
05af785
fix output issue with ccv_convnet_classify
liuliu Mar 26, 2014
e2b714a
using a look up table for cube root in ICF
liuliu Mar 26, 2014
3c247ce
fixed documentation for convnet.md
liuliu Mar 27, 2014
38db768
added a new post and updated cifar-10 to be runnable
liuliu Mar 27, 2014
1ca6c3d
prepare to cut 0.6
liuliu Mar 27, 2014
9d4db30
assert on gemm if no blas library linked, fixed a typo in doc
liuliu Mar 27, 2014
e098d56
fix a typo on the main site
liuliu Mar 28, 2014
803dc42
added imageNet 2012 pretrained model, followed Matt's parameters.
liuliu Apr 25, 2014
c92e9fc
fix a few typo
liuliu Apr 25, 2014
68e2458
another typo
liuliu Apr 25, 2014
8d43375
add data for CPU implementation on imageNet 2012
liuliu Apr 25, 2014
c7177a3
new post
liuliu Apr 26, 2014
f799ac5
fix compilation warning in clang 3.5
liuliu Apr 26, 2014
05fc1f8
try to fix clang compilation complains.
liuliu Apr 27, 2014
8b7223b
making unittest to run with stable generated random numbers
liuliu Apr 28, 2014
f2100c9
share inline methods between cuda and cpu impl
liuliu Apr 28, 2014
cd9e1c3
added simple SSE support for convolutional layer, very early
liuliu Apr 28, 2014
c38b61b
Revert "added simple SSE support for convolutional layer, very early"
liuliu Apr 29, 2014
7b7ee6f
added proper SSE support
liuliu Apr 29, 2014
e4d68d3
fix build break
liuliu Apr 29, 2014
9743bc6
further speed improvement
liuliu Apr 30, 2014
34aae8e
fix compile with gcc and gsl
liuliu May 2, 2014
bc335ff
reorganize sse2 part in ccv_convnet.c
liuliu May 14, 2014
81d96d1
added basic NEON support
liuliu May 14, 2014
928e50a
fix compile error
liuliu May 14, 2014
6806408
fix build break
liuliu May 14, 2014
a3c6428
cleaner and faster for SIMD
liuliu May 15, 2014
15c58a9
this seems have better performance on ARM NEON but I need to make gpr…
liuliu May 15, 2014
6d26275
make cwc-bench works again
liuliu May 21, 2014
fc97be7
only 1ms improvement
liuliu May 21, 2014
629e621
fix a tiny issue with assertion in kernel
liuliu May 23, 2014
3f700d6
move NEON / SSE related header inclusion to each file
liuliu May 27, 2014
b1ad32a
pass strides as template parameter for marginal speed up
liuliu May 28, 2014
39da2ee
get rid of the hack of loading
liuliu May 28, 2014
42a640b
move parameter positions
liuliu Jun 5, 2014
abd142a
shave off 20ms from reordering matrix
liuliu Jun 5, 2014
931bb0b
shave off another 100ms
liuliu Jun 5, 2014
39e3df8
expanding parameter search and improve another 20ms
liuliu Jun 5, 2014
90b0360
added bench for full mattnet so that I can optimize on
liuliu Jun 6, 2014
f031b56
just use gemv for bias propagation
liuliu Jun 9, 2014
ebd81a7
replace gemm with gemv
liuliu Jun 9, 2014
c3ca453
commit and track bench / verification separately
liuliu Jun 10, 2014
59951a5
minor change in the makefile of cuda bench
liuliu Jun 14, 2014
da34a2c
tried to bench AlexNet 12 and AlexNet 14
liuliu Jun 14, 2014
8cf2e96
shave off another 60ms by careful arrange copying
liuliu Jun 17, 2014
4009282
reorganized code a bit
liuliu Jun 17, 2014
a5be42c
small change
liuliu Jun 17, 2014
3e2629f
updated doc and will start multi-GPU impl soon
liuliu Jun 18, 2014
a92735b
fix leaks on namespaces for some local methods
Jul 9, 2014
043b081
added wnid file
liuliu Jul 10, 2014
b044e58
add more synchronization points to prepare for model / data mix paral…
liuliu Jul 15, 2014
fec7df8
fix build failure
liuliu Jul 15, 2014
1f23483
add verify for fc pass. fixed CPU version training when we cached wei…
liuliu Jul 17, 2014
dd1d828
Merge pull request #1 from liuliu/unstable
ml7013 Jul 24, 2014
83be631
Added multiple model files support, through single text file holding …
ml7013 Jul 25, 2014
ed4c360
Forgot ccfree(models) before returning (in main).
ml7013 Jul 25, 2014
c0bdbec
Improvement: made it robust to blank lines in model list file.
ml7013 Jul 29, 2014
63deaba
Fix: freed and closed file pointers.
ml7013 Jul 29, 2014
e6b376b
Wrong variable name fixed.
ml7013 Jul 29, 2014
56b5b19
Comments added.
ml7013 Jul 29, 2014
11362cd
TODOs for 1st week of August
Jul 29, 2014
46a2b9a
[Temporary] Revert to pull-request commit.
ml7013 Jul 29, 2014
394bf5d
Revert "Wrong variable name fixed." (next commits moved to project-te…
ml7013 Jul 29, 2014
61ac22d
Wrong variable name fixed.
ml7013 Aug 1, 2014
2 changes: 1 addition & 1 deletion .gitignore
@@ -3,7 +3,6 @@
*.swp
tags
gh-pages
site
*.tests
*.pyc
lib/.CC
@@ -31,3 +30,4 @@ data/*
tool/*
config.mk
build
site/_site
6 changes: 6 additions & 0 deletions .travis.yml
@@ -0,0 +1,6 @@
language: c
compiler: clang
before_install:
- sudo apt-get update -qq
- sudo apt-get install -qq libpng-dev libjpeg-dev libblas-dev libgsl0-dev
script: cd lib && ./configure && make && cd ../bin && make && cd ../test && make test
2 changes: 2 additions & 0 deletions COPYING
@@ -1,3 +1,5 @@
Files in directories ./doc, ./samples and ./site are licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Copyright (c) 2010, Liu Liu
All rights reserved.

2 changes: 2 additions & 0 deletions README.md
@@ -1,6 +1,8 @@
Intro
=====

[![Build Status](https://travis-ci.org/liuliu/ccv.png?branch=unstable)](https://travis-ci.org/liuliu/ccv)

Around 2010, when Lian and I was working on our gesture recognition demo, out
of the frustration to abstract redundant image preprocessing operations into a
set of clean and concise functions, I started to consider moving away from the
4 changes: 2 additions & 2 deletions bin/bbfdetect.c
@@ -25,7 +25,7 @@ int main(int argc, char** argv)
for (i = 0; i < seq->rnum; i++)
{
ccv_comp_t* comp = (ccv_comp_t*)ccv_array_get(seq, i);
printf("%d %d %d %d %f\n", comp->rect.x, comp->rect.y, comp->rect.width, comp->rect.height, comp->confidence);
printf("%d %d %d %d %f\n", comp->rect.x, comp->rect.y, comp->rect.width, comp->rect.height, comp->classification.confidence);
}
printf("total : %d in time %dms\n", seq->rnum, elapsed_time);
ccv_array_free(seq);
@@ -51,7 +51,7 @@ int main(int argc, char** argv)
for (i = 0; i < seq->rnum; i++)
{
ccv_comp_t* comp = (ccv_comp_t*)ccv_array_get(seq, i);
printf("%s %d %d %d %d %f\n", file, comp->rect.x, comp->rect.y, comp->rect.width, comp->rect.height, comp->confidence);
printf("%s %d %d %d %d %f\n", file, comp->rect.x, comp->rect.y, comp->rect.width, comp->rect.height, comp->classification.confidence);
}
ccv_array_free(seq);
ccv_matrix_free(image);
109 changes: 62 additions & 47 deletions bin/cifar-10.c
@@ -14,6 +14,7 @@ int main(int argc, char** argv)
.rows = 31,
.cols = 31,
.channels = 3,
.partition = 1,
},
},
.output = {
@@ -24,41 +25,44 @@
.border = 2,
.strides = 1,
.count = 32,
.partition = 1,
},
},
},
{
.type = CCV_CONVNET_MAX_POOL,
.type = CCV_CONVNET_LOCAL_RESPONSE_NORM,
.input = {
.matrix = {
.rows = 31,
.cols = 31,
.channels = 32,
.partition = 1,
},
},
.output = {
.pool = {
.rnorm = {
.size = 3,
.strides = 2,
.border = 0,
.kappa = 1,
.alpha = 1e-4,
.beta = 0.75,
},
},
},
{
.type = CCV_CONVNET_LOCAL_RESPONSE_NORM,
.type = CCV_CONVNET_MAX_POOL,
.input = {
.matrix = {
.rows = 15,
.cols = 15,
.rows = 31,
.cols = 31,
.channels = 32,
.partition = 1,
},
},
.output = {
.rnorm = {
.pool = {
.size = 3,
.kappa = 1,
.alpha = 0.0001,
.beta = 0.75,
.strides = 2,
.border = 0,
},
},
},
@@ -71,6 +75,7 @@
.rows = 15,
.cols = 15,
.channels = 32,
.partition = 1,
},
},
.output = {
@@ -81,41 +86,44 @@
.border = 2,
.strides = 1,
.count = 32,
.partition = 1,
},
},
},
{
.type = CCV_CONVNET_AVERAGE_POOL,
.type = CCV_CONVNET_LOCAL_RESPONSE_NORM,
.input = {
.matrix = {
.rows = 15,
.cols = 15,
.channels = 32,
.partition = 1,
},
},
.output = {
.pool = {
.rnorm = {
.size = 3,
.strides = 2,
.border = 0,
.kappa = 1,
.alpha = 1e-4,
.beta = 0.75,
},
},
},
{
.type = CCV_CONVNET_LOCAL_RESPONSE_NORM,
.type = CCV_CONVNET_AVERAGE_POOL,
.input = {
.matrix = {
.rows = 7,
.cols = 7,
.rows = 15,
.cols = 15,
.channels = 32,
.partition = 1,
},
},
.output = {
.rnorm = {
.pool = {
.size = 3,
.kappa = 1,
.alpha = 0.0001,
.beta = 0.75,
.strides = 2,
.border = 0,
},
},
},
@@ -128,6 +136,7 @@
.rows = 7,
.cols = 7,
.channels = 32,
.partition = 1,
},
},
.output = {
@@ -138,6 +147,7 @@
.border = 2,
.strides = 1,
.count = 64,
.partition = 1,
},
},
},
@@ -148,6 +158,7 @@
.rows = 7,
.cols = 7,
.channels = 64,
.partition = 1,
},
},
.output = {
@@ -167,19 +178,21 @@
.rows = 3,
.cols = 3,
.channels = 64,
.partition = 1,
},
.node = {
.count = 3 * 3 * 64,
},
},
.output = {
.full_connect = {
.relu = 0,
.count = 10,
},
},
},
};
ccv_convnet_t* convnet = ccv_convnet_new(1, params, sizeof(params) / sizeof(ccv_convnet_layer_param_t));
ccv_convnet_t* convnet = ccv_convnet_new(1, ccv_size(32, 32), params, sizeof(params) / sizeof(ccv_convnet_layer_param_t));
assert(ccv_convnet_verify(convnet, 10) == 0);
assert(argc == 5);
int num1 = atoi(argv[2]);
@@ -195,18 +208,18 @@
{
fread(bytes, 32 * 32 + 1, 1, r1);
int c = bytes[0];
ccv_dense_matrix_t* a = ccv_dense_matrix_new(31, 31, CCV_32F | CCV_C3, 0, 0);
for (i = 0; i < 31; i++)
for (j = 0; j < 31; j++)
a->data.f32[(j + i * 31) * 3] = bytes[j + i * 32 + 1] / 255.0 * 2 - 1;
ccv_dense_matrix_t* a = ccv_dense_matrix_new(32, 32, CCV_32F | CCV_C3, 0, 0);
for (i = 0; i < 32; i++)
for (j = 0; j < 32; j++)
a->data.f32[(j + i * 32) * 3] = bytes[j + i * 32 + 1];
fread(bytes, 32 * 32, 1, r1);
for (i = 0; i < 31; i++)
for (j = 0; j < 31; j++)
a->data.f32[(j + i * 31) * 3 + 1] = bytes[j + i * 32] / 255.0 * 2 - 1;
for (i = 0; i < 32; i++)
for (j = 0; j < 32; j++)
a->data.f32[(j + i * 32) * 3 + 1] = bytes[j + i * 32];
fread(bytes, 32 * 32, 1, r1);
for (i = 0; i < 31; i++)
for (j = 0; j < 31; j++)
a->data.f32[(j + i * 31) * 3 + 2] = bytes[j + i * 32] / 255.0 * 2 - 1;
for (i = 0; i < 32; i++)
for (j = 0; j < 32; j++)
a->data.f32[(j + i * 32) * 3 + 2] = bytes[j + i * 32];
ccv_categorized_t categorized = ccv_categorized(c, a, 0);
ccv_array_push(categorizeds, &categorized);
}
@@ -215,47 +228,47 @@
{
fread(bytes, 32 * 32 + 1, 1, r2);
int c = bytes[0];
ccv_dense_matrix_t* a = ccv_dense_matrix_new(31, 31, CCV_32F | CCV_C3, 0, 0);
for (i = 0; i < 31; i++)
for (j = 0; j < 31; j++)
a->data.f32[(j + i * 31) * 3] = bytes[j + i * 32 + 1] / 255.0 * 2 - 1;
ccv_dense_matrix_t* a = ccv_dense_matrix_new(32, 32, CCV_32F | CCV_C3, 0, 0);
for (i = 0; i < 32; i++)
for (j = 0; j < 32; j++)
a->data.f32[(j + i * 32) * 3] = bytes[j + i * 32 + 1];
fread(bytes, 32 * 32, 1, r2);
for (i = 0; i < 31; i++)
for (j = 0; j < 31; j++)
a->data.f32[(j + i * 31) * 3 + 1] = bytes[j + i * 32] / 255.0 * 2 - 1;
for (i = 0; i < 32; i++)
for (j = 0; j < 32; j++)
a->data.f32[(j + i * 32) * 3 + 1] = bytes[j + i * 32];
fread(bytes, 32 * 32, 1, r2);
for (i = 0; i < 31; i++)
for (j = 0; j < 31; j++)
a->data.f32[(j + i * 31) * 3 + 2] = bytes[j + i * 32] / 255.0 * 2 - 1;
for (i = 0; i < 32; i++)
for (j = 0; j < 32; j++)
a->data.f32[(j + i * 32) * 3 + 2] = bytes[j + i * 32];
ccv_categorized_t categorized = ccv_categorized(c, a, 0);
ccv_array_push(tests, &categorized);
}
ccv_convnet_layer_train_param_t layer_params[9];
memset(layer_params, 0, sizeof(layer_params));

layer_params[0].w.decay = 0.005;
layer_params[0].w.learn_rate = 0.0005;
layer_params[0].w.learn_rate = 0.001;
layer_params[0].w.momentum = 0.9;
layer_params[0].bias.decay = 0;
layer_params[0].bias.learn_rate = 0.001;
layer_params[0].bias.momentum = 0.9;

layer_params[3].w.decay = 0.005;
layer_params[3].w.learn_rate = 0.0005;
layer_params[3].w.learn_rate = 0.001;
layer_params[3].w.momentum = 0.9;
layer_params[3].bias.decay = 0;
layer_params[3].bias.learn_rate = 0.001;
layer_params[3].bias.momentum = 0.9;

layer_params[6].w.decay = 0.005;
layer_params[6].w.learn_rate = 0.0005;
layer_params[6].w.learn_rate = 0.001;
layer_params[6].w.momentum = 0.9;
layer_params[6].bias.decay = 0;
layer_params[6].bias.learn_rate = 0.001;
layer_params[6].bias.momentum = 0.9;

layer_params[8].w.decay = 0.01;
layer_params[8].w.learn_rate = 0.0005;
layer_params[8].w.learn_rate = 0.001;
layer_params[8].w.momentum = 0.9;
layer_params[8].bias.decay = 0;
layer_params[8].bias.learn_rate = 0.001;
@@ -265,6 +278,8 @@
.max_epoch = 999,
.mini_batch = 128,
.iterations = 500,
.symmetric = 1,
.color_gain = 0,
.layer_params = layer_params,
};
ccv_convnet_supervised_train(convnet, categorizeds, tests, "cifar-10.sqlite3", params);
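The cifar-10.c hunks above switch the data loading from pre-normalized 31×31 inputs (`/ 255.0 * 2 - 1`) to raw 32×32 byte values, presumably leaving normalization to the convnet's own preprocessing. Each CIFAR-10 binary record is one label byte followed by 3072 pixel bytes stored planar (all red, then green, then blue), row-major 32×32. A self-contained sketch of parsing one record the way the updated loops do — this is an illustration of the file format, not ccv code:

```c
#include <stdio.h>

/* Parse one CIFAR-10 binary record: 1 label byte, then 3 planar channels
 * of 32x32 bytes each. Fills out[row][col][channel] with raw byte values,
 * matching the interleaved float layout the updated loops build. */
static int read_cifar_record(FILE* fp, int* label, float out[32][32][3])
{
	unsigned char bytes[1 + 32 * 32 * 3];
	if (fread(bytes, sizeof(bytes), 1, fp) != 1)
		return -1;
	*label = bytes[0];
	int c, y, x;
	for (c = 0; c < 3; c++)
		for (y = 0; y < 32; y++)
			for (x = 0; x < 32; x++)
				out[y][x][c] = bytes[1 + c * 32 * 32 + y * 32 + x];
	return 0;
}
```

The diff reads the same 3073 bytes with three separate `fread` calls (label plus red plane first, then green and blue planes); reading the whole record at once is equivalent.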