1.0

@shelhamer released this Apr 18, 2017 · 117 commits to master since this release

This release marks the convergence of development into a stable, reference release of the framework and a shift into maintenance mode. Let's review the progress culminating in our 1.0:

  • research: nearly 4,000 citations, usage by award papers at CVPR/ECCV/ICCV, and tutorials at ECCV'14 and CVPR'15
  • industry: adopted by Facebook, NVIDIA, Intel, Sony, Yahoo! Japan, Samsung, Adobe, A9, Siemens, Pinterest, the Embedded Vision Alliance, and more
  • community: 250+ contributors, 15k+ subscribers on GitHub, and 7k+ members of the mailing list
  • development: 10k+ forks, >1 contribution/day on average, and dedicated branches for OpenCL and Windows
  • downloads: 10k+ downloads and updates a month, ~50k unique visitors to the home page every two weeks, and >100k unique downloads of the reference models
  • winner of the ACM MM open source award 2014 and presented as a talk at ICML MLOSS 2015

Thanks for all of your efforts leading us to Caffe 1.0! Your part in development, community, feedback, and framework usage brought us here. As part of 1.0 we will be welcoming collaborators old and new to join as members of the Caffe core.

Stay tuned for the next steps in DIY deep learning with Caffe. As development is never truly done, there's always 1.1!

Now that 1.0 is done, the next generation of the framework—Caffe2—is ready to keep up the progress on DIY deep learning in research and industry. While Caffe 1.0 development will continue with 1.1, Caffe2 is the new framework line for future development led by Yangqing Jia. Although Caffe2 is a departure from the development line of Caffe 1.0, we are planning a migration path for models just as we have future-proofed Caffe models in the past.

Happy brewing,
The Caffe Crew

☕️

release candidate 5

@shelhamer released this Feb 21, 2017 · 226 commits to master since this release

This packages up 42 commits by 15 contributors to help home in on 1.0.
Thanks all!

As with every release, run make clean && make superclean to clear out old build materials before compiling the new release.

  • set soversion properly #5296
  • documentation: improved dockerfiles and usage notes #5153, links and fixes #5227
  • build: groom cmake build #4609, find veclib more reliably on mac #5236
  • pycaffe: give Net a layer dictionary #4347
  • matcaffe: destroy individual nets and solvers #4737

Fixes

  • restore solvers for resuming multi-GPU training #5215
  • draw net helper #5010

☕️

release candidate 4

@shelhamer released this Jan 20, 2017 · 268 commits to master since this release

It's a new year and a new release candidate. This packages up 348 commits by 68 authors. Thanks all!

This is intended to be the last release candidate before 1.0. We hope to catch any lurking issues, improve the documentation, and polish the packaging before then.

As with every release, run make clean && make superclean to clear out old build materials before compiling the new release. See all merged PRs since the last release.

  • RNNs + LSTMs #3948
  • layers
    • Parameter layer for learning any bottom #2047
    • Crop layer for aligning coordinate maps for FCNs #3570
    • Tied weights with transpose for InnerProduct layer #3612
    • Batch Norm docs, numerics, and robust proto def #4704 #5184
    • Sigmoid Cross Entropy Loss on GPU #4908 and with ignore #4986
  • pycaffe
    • solver callbacks #3020
    • net spec coordinate mapping and cropping for FCNs #3613
    • N-D blob interface #3703
    • python3 compatibility by six #3716
    • dictionary-style net spec #3747
    • Python layer can have phase #3995
  • Docker image #3518
  • expose all NetState options for all-in-one nets #3863
  • force backprop on or off by propagate_down #3942
  • cuDNN v5 #4159
  • multi-GPU parallelism through NCCL + multi-GPU python interface #4563
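One item above, Sigmoid Cross Entropy Loss with an ignore label (#4908, #4986), comes down to simple per-element math. A standalone Python sketch of the numerically stable formulation with ignored targets (an illustration of the loss, not Caffe's GPU implementation; function and argument names are our own):

```python
import math

def sigmoid_cross_entropy_with_ignore(scores, targets, ignore_label=-1):
    """Mean sigmoid cross-entropy over entries whose target != ignore_label.

    Uses the numerically stable form of -[t*log(p) + (1-t)*log(1-p)]
    with p = sigmoid(x): max(x, 0) - x*t + log(1 + exp(-|x|)).
    """
    total, count = 0.0, 0
    for x, t in zip(scores, targets):
        if t == ignore_label:
            continue  # ignored entries contribute neither loss nor count
        total += max(x, 0.0) - x * t + math.log(1.0 + math.exp(-abs(x)))
        count += 1
    return total / max(count, 1)
```

For example, a score of 0 against target 1 gives log(2), and entries marked with the ignore label drop out of the average entirely.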

Fixes

  • Net upgrade tools catch mixed versions, handle input fields, and log outputs #3755
  • Exp layer for base e and shift != 0 #3937
  • Crop layer checks only the crop dimensions it should #3993

Dependencies

  • cuDNN compatibility is now at v5 and v4; cuDNN v3 and earlier are not supported
  • NCCL is now required for multi-GPU operation

As a reminder, the OpenCL and Windows branches continue to make progress with the community leadership of Fabian Tschopp and Guillaume Dumont respectively.

☕️

release candidate 3

@shelhamer released this Jan 30, 2016 · 616 commits to master since this release

A lot has happened since the last release! This packages up ~800 commits by 119 authors. Thanks all!

As with every release, run make clean && make superclean to clear out old build materials before compiling the new release.

  • solvers: Adam #2918, RMSProp #2867, AdaDelta #2782
    • accumulate gradients to decouple computational and learning batch size #1977
    • de-duplicate solver code #2518
    • make solver type a string and split classes #3166; you should update your solver definitions
  • MSRA #1946 and bilinear interpolation #2213 weight fillers
  • N-D blobs #1970 and convolution #2049 for higher dimensional data and filters
  • tools:
    • test caffe command line tool execution #1926
    • network summarization tool #3090
    • snapshot on signal / before quit #2253
    • report ignored layers when loading weights #3305
    • caffe command fine-tunes from multiple caffemodels #1456
  • pycaffe:
    • python net spec #2086 #2813 #2959
    • handle python exceptions #2462
    • python layer arguments #2871
    • python layer weights #2944
    • snapshot in pycaffe #3082
    • top + bottom names in pycaffe #2865
    • python3 compatibility improvements
  • matcaffe: totally new interface with examples and tests #2505
  • cuDNN: switch to v2 #2038, switch to v3 #3160, make v4 compatible #3439
  • separate IO dependencies for configurable build #2523
  • large model and solverstate serialization through hdf5 #2836
  • train by multi-GPU data parallelism #2903 #2921 #2924 #2931 #2998
  • dismantle layer headers so every layer has its own include #3315
  • workflow: adopt build versioning #3311 #3593, contributing guide #2837, and badges for build status and license #3133
  • SoftmaxWithLoss normalization options #3296
  • dilated convolution #3487
  • expose Solver Restore() to C++ and Python #2037
  • set mode once and only once in testing #2511
  • turn off backprop by skip_propagate_down #2095
  • flatten layer learns axis #2082
  • trivial slice and concat #3014
  • hdf5 data layer: loads integer data #2978, can shuffle #2118
  • cross platform adjustments #3300 #3320 #3321 #3362 #3361 #3378
  • speed-ups for GPU solvers #3519 and CPU im2col #3536
  • make and cmake build improvements
  • and more!
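For the solver type change in #3166 noted above, the enum field gives way to a string, so existing solver definitions need a one-line update. A minimal before/after sketch of the relevant solver.prototxt line (the enum form is the deprecated one):

```
# before (deprecated enum field)
solver_type: ADAM

# after (string type)
type: "Adam"
```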

Fixes

  • #2866 fix weight sharing to (1) reduce memory usage and computation (2) correct momentum and other solver computations
  • #2972 fix concat (broken in #1970)
  • #2964 #3162 fix MVN layer
  • #2321 fix contrastive loss layer to match Hadsell et al. 2006
  • fix deconv backward #3095 and conv reshape #3096 (broken in #2049)
  • #3393 fix in-place reshape and flatten
  • #3152 fix silence layer to not zero bottom on backward
  • #3574 disable cuDNN max pooling (incompatible with in-place)
  • make backward compatible with negative LR #3007
  • #3332 fix pycaffe forward_backward_all()
  • #1922 fix cross-channel LRN for large channel band
  • #1457 fix shape of C++ feature extraction demo output
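The contrastive loss fix in #2321 aligns Caffe with the formulation of Hadsell et al. 2006: similar pairs pay the squared distance, dissimilar pairs pay the squared hinge on the margin. A standalone Python sketch of that corrected form (an illustration, not Caffe's code; names and the 1/(2N) averaging convention are ours):

```python
def contrastive_loss(pairs, margin=1.0):
    """Contrastive loss per Hadsell et al. 2006.

    `pairs` is a list of (d, y) where d is the Euclidean distance
    between two feature vectors and y is 1 for similar pairs, 0 otherwise.
    """
    total = 0.0
    for d, y in pairs:
        if y:
            total += d * d                       # pull similar pairs together
        else:
            total += max(margin - d, 0.0) ** 2   # push dissimilar pairs past the margin
    return total / (2 * len(pairs))
```

Dissimilar pairs already farther apart than the margin contribute nothing, which is exactly the property the pre-fix layer got wrong.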

Dependencies:

  • hdf5 is required
  • cuDNN compatibility is now at v3 and v4; cuDNN v1 and v2 are not supported
  • IO dependencies (lmdb, leveldb, opencv) are now optional #2523
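With #2523, the IO dependencies can be switched off at build time. A sketch of the corresponding Makefile.config switches (flag names as in Makefile.config.example; uncomment or set as needed for your build):

```
# Makefile.config: opt out of optional IO dependencies (#2523)
USE_OPENCV := 0
USE_LEVELDB := 0
USE_LMDB := 0
```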

☕️

release candidate

@shelhamer released this Feb 20, 2015 · 1412 commits to master since this release

This is the release candidate for Caffe 1.0 once more with feeling. See #1849 for details.

With documentation, fixes, and feedback this could soon be 1.0!

release candidate

@shelhamer released this Sep 19, 2014 · 1953 commits to master since this release

This is the release candidate for Caffe 1.0. See #1112 for details.

  • documentation
  • standard model format and model zoo for sharing models
  • cuDNN acceleration

cold-brew

@shelhamer released this Aug 8, 2014 · 2410 commits to master since this release

See #880 for details.

Dependencies: lmdb and gflags are required. CPU-only Caffe without any GPU / CUDA dependencies is turned on by setting CPU_ONLY := 1 in your Makefile.config.

Deprecations: the new caffe tool includes commands for model training and testing, querying devices, and timing models. The corresponding train_net.bin, finetune_net.bin, test_net.bin, device_query.bin, and net_speed_benchmark.bin are deprecated.
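The mapping from the deprecated binaries to caffe subcommands can be sketched as follows (model and solver paths are placeholders; see caffe --help for the authoritative flags):

```
caffe train --solver=solver.prototxt                       # replaces train_net.bin / finetune_net.bin
caffe test --model=net.prototxt --weights=net.caffemodel   # replaces test_net.bin
caffe device_query --gpu=0                                 # replaces device_query.bin
caffe time --model=net.prototxt                            # replaces net_speed_benchmark.bin
```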

kona-snow

@shelhamer released this May 20, 2014 · 2970 commits to master since this release

See #429 for details.

Please upgrade your models! Caffe's proto definition was changed in #208 and #219 for extensibility. The upgrade_net_proto_binary.bin and upgrade_net_proto_text.bin tools are provided to convert current models. Caffe will attempt to automagically upgrade old models when loaded, but doesn't save the changes.

Update your Makefile.config! Caffe has a new Makefile and Makefile.config that learned to auto-configure themselves a bit better. Look at the new Makefile.config.example and update your configuration accordingly.

Dependencies: Caffe's matrix and vector computations can be done with ATLAS, OpenBLAS, or MKL. The hard dependency on MKL is no more!

Deprecation: V0 model definitions. While Caffe will try to automagically upgrade old models when loaded, see tools/upgrade_net_proto* to make the permanent upgrade since this will be dropped.

polyculture

@shelhamer released this Mar 20, 2014 · 3423 commits to master since this release

See #231 for details.

New Dependency: hdf5 is now required. Caffe learned how to load blobs and (multiple!) labels from hdf5.

  • sudo apt-get install libhdf5-serial-dev on Ubuntu.
  • brew install homebrew/science/hdf5 on OS X.

Deprecation: padding layers. See 2848aa1 for an example of how to update your model schema and note that an automated tool is coming for this and other model schema updates #219.