Merge master@{2016-05-09} into windows
sasagalic-MSFT committed May 9, 2016
2 parents 4b8fa07 + c6bd853 commit ad52ef5
Showing 36 changed files with 431 additions and 179 deletions.
2 changes: 1 addition & 1 deletion Makefile
@@ -272,7 +272,7 @@ endif
ifeq ($(OSX), 1)
CXX := /usr/bin/clang++
ifneq ($(CPU_ONLY), 1)
CUDA_VERSION := $(shell $(CUDA_DIR)/bin/nvcc -V | grep -o 'release \d' | grep -o '\d')
CUDA_VERSION := $(shell $(CUDA_DIR)/bin/nvcc -V | grep -o 'release [0-9.]*' | grep -o '[0-9.]*')
ifeq ($(shell echo | awk '{exit $(CUDA_VERSION) < 7.0;}'), 1)
CXXFLAGS += -stdlib=libstdc++
LINKFLAGS += -stdlib=libstdc++
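
For context: `nvcc -V` prints a banner along the lines of `Cuda compilation tools, release 7.5, V7.5.17`. The old `release \d` pattern therefore captured only a single digit, reducing CUDA 7.5 to `7`; the new `release [0-9.]*` pattern preserves the full `major.minor` version that the `awk '{exit $(CUDA_VERSION) < 7.0;}'` check on the next line compares against.
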
1 change: 1 addition & 0 deletions Makefile.config.example
@@ -98,6 +98,7 @@ LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

2 changes: 1 addition & 1 deletion docker/Makefile
@@ -22,7 +22,7 @@ docker_files: standalone_files

standalone_files: standalone/cpu/Dockerfile standalone/gpu/Dockerfile

FROM_GPU = "nvidia/cuda:cudnn"
FROM_GPU = "nvidia/cuda:7.5-cudnn4-devel-ubuntu14.04"
FROM_CPU = "ubuntu:14.04"
GPU_CMAKE_ARGS = -DUSE_CUDNN=1
CPU_CMAKE_ARGS = -DCPU_ONLY=1
2 changes: 1 addition & 1 deletion docker/standalone/gpu/Dockerfile
@@ -1,4 +1,4 @@
FROM nvidia/cuda:cudnn
FROM nvidia/cuda:7.5-cudnn4-devel-ubuntu14.04
MAINTAINER caffe-maint@googlegroups.com

RUN apt-get update && apt-get install -y --no-install-recommends \
32 changes: 19 additions & 13 deletions docs/installation.md
@@ -5,13 +5,23 @@ title: Installation
# Installation

Prior to installing, have a glance through this guide and take note of the details for your platform.
We install and run Caffe on Ubuntu 14.04 and 12.04, OS X 10.10 / 10.9 / 10.8, and AWS.
The official Makefile and `Makefile.config` build are complemented by an automatic CMake build from the community.
We install and run Caffe on Ubuntu 16.04–12.04, OS X 10.11–10.8, and through Docker and AWS.
The official Makefile and `Makefile.config` build are complemented by a [community CMake build](#cmake-build).

**Step-by-step Instructions**:

- [Docker setup](https://github.com/BVLC/caffe/tree/master/docker) *out-of-the-box brewing*
- [Ubuntu installation](install_apt.html) *the standard platform*
- [OS X installation](install_osx.html)
- [RHEL / CentOS / Fedora installation](install_yum.html)
- [Windows](https://github.com/BVLC/caffe/tree/windows) *see the Windows branch led by Microsoft*
- [OpenCL](https://github.com/BVLC/caffe/tree/opencl) *see the OpenCL branch led by Fabian Tschopp*

**Overview**:

- [Prerequisites](#prerequisites)
- [Compilation](#compilation)
- [Hardware](#hardware)
- Platforms: [Ubuntu guide](install_apt.html), [OS X guide](install_osx.html), and [RHEL / CentOS / Fedora guide](install_yum.html)

When updating Caffe, it's best to `make clean` before re-compiling.

@@ -20,7 +30,7 @@ When updating Caffe, it's best to `make clean` before re-compiling.
Caffe has several dependencies:

* [CUDA](https://developer.nvidia.com/cuda-zone) is required for GPU mode.
* library version 7.0 and the latest driver version are recommended, but 6.* is fine too
* library version 7+ and the latest driver version are recommended, but 6.* is fine too
* 5.5 and 5.0 are compatible but considered legacy
* [BLAS](http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms) via ATLAS, MKL, or OpenBLAS.
* [Boost](http://www.boost.org/) >= 1.55
@@ -30,14 +40,14 @@ Optional dependencies:

* [OpenCV](http://opencv.org/) >= 2.4 including 3.0
* IO libraries: `lmdb`, `leveldb` (note: leveldb requires `snappy`)
* cuDNN for GPU acceleration (v3)
* cuDNN for GPU acceleration (v4)

Pycaffe and Matcaffe interfaces have their own natural needs.

* For Python Caffe: `Python 2.7` or `Python 3.3+`, `numpy (>= 1.7)`, boost-provided `boost.python`
* For MATLAB Caffe: MATLAB with the `mex` compiler.

**cuDNN Caffe**: for fastest operation Caffe is accelerated by drop-in integration of [NVIDIA cuDNN](https://developer.nvidia.com/cudnn). To speed up your Caffe models, install cuDNN then uncomment the `USE_CUDNN := 1` flag in `Makefile.config` when installing Caffe. Acceleration is automatic. The current version is cuDNN v3; older versions are supported in older Caffe.
**cuDNN Caffe**: for fastest operation Caffe is accelerated by drop-in integration of [NVIDIA cuDNN](https://developer.nvidia.com/cudnn). To speed up your Caffe models, install cuDNN then uncomment the `USE_CUDNN := 1` flag in `Makefile.config` when installing Caffe. Acceleration is automatic. The current version is cuDNN v4; older versions are supported in older Caffe.

**CPU-only Caffe**: for cold-brewed CPU-only Caffe uncomment the `CPU_ONLY := 1` flag in `Makefile.config` to configure and build Caffe without CUDA. This is helpful for cloud or cluster deployment.

@@ -82,10 +92,6 @@ Install MATLAB, and make sure that its `mex` is in your `$PATH`.

*Caffe's MATLAB interface works with versions 2015a, 2014a/b, 2013a/b, and 2012b.*

#### Windows

There is an unofficial Windows port of Caffe at [niuzhiheng/caffe:windows](https://github.com/niuzhiheng/caffe). Thanks [@niuzhiheng](https://github.com/niuzhiheng)!

## Compilation

Caffe can be compiled with either Make or CMake. Make is officially supported while CMake is supported by the community.
@@ -113,7 +119,7 @@ Be sure to set your MATLAB and Python paths in `Makefile.config` first!

Now that you have installed Caffe, check out the [MNIST tutorial](gathered/examples/mnist.html) and the [reference ImageNet model tutorial](gathered/examples/imagenet.html).

### Compilation with CMake
### CMake Build

In lieu of manually editing `Makefile.config` to configure the build, Caffe offers an unofficial CMake build thanks to @Nerei, @akosiorek, and other members of the community. It requires CMake version >= 2.8.7.
The basic steps are as follows:
@@ -129,9 +135,9 @@ See [PR #1667](https://github.com/BVLC/caffe/pull/1667) for options and details.

## Hardware

**Laboratory Tested Hardware**: Berkeley Vision runs Caffe with K40s, K20s, and Titans including models at ImageNet/ILSVRC scale. We also run on GTX series cards (980s and 770s) and GPU-equipped MacBook Pros. We have not encountered any trouble in-house with devices with CUDA capability >= 3.0. All reported hardware issues thus-far have been due to GPU configuration, overheating, and the like.
**Laboratory Tested Hardware**: Berkeley Vision runs Caffe with Titan Xs, K80s, GTX 980s, K40s, K20s, Titans, and GTX 770s including models at ImageNet/ILSVRC scale. We have not encountered any trouble in-house with devices with CUDA capability >= 3.0. All reported hardware issues thus-far have been due to GPU configuration, overheating, and the like.

**CUDA compute capability**: devices with compute capability <= 2.0 may have to reduce CUDA thread numbers and batch sizes due to hardware constraints. Your mileage may vary.
**CUDA compute capability**: devices with compute capability <= 2.0 may have to reduce CUDA thread numbers and batch sizes due to hardware constraints. Brew with caution; we recommend compute capability >= 3.0.

Once installed, check your times against our [reference performance numbers](performance_hardware.html) to make sure everything is configured properly.

2 changes: 2 additions & 0 deletions examples/cifar10/convert_cifar_data.cpp
@@ -91,6 +91,8 @@ void convert_dataset(const string& input_folder, const string& output_folder,
}

int main(int argc, char** argv) {
FLAGS_alsologtostderr = 1;

if (argc != 4) {
printf("This script converts the CIFAR dataset to the leveldb format used\n"
"by caffe to perform classification.\n"
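
`FLAGS_alsologtostderr` is glog's standard flag for mirroring every log message to stderr in addition to the log files, which makes the converter's progress visible on the console. A minimal self-contained sketch of the same pattern (outside Caffe, so `InitGoogleLogging` is called explicitly here):

```
#include <glog/logging.h>

int main(int argc, char** argv) {
  // By default glog sends INFO messages only to log files; this flag
  // duplicates them to stderr so the user sees progress immediately.
  FLAGS_alsologtostderr = 1;
  google::InitGoogleLogging(argv[0]);
  LOG(INFO) << "Processed 1000 files.";
  return 0;
}
```
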
2 changes: 1 addition & 1 deletion examples/cpp_classification/readme.md
@@ -42,7 +42,7 @@ script:
The ImageNet labels file (also called the *synset file*) is also
required in order to map a prediction to the name of the class:
```
./data/ilsvrc12/get_ilsvrc_aux.sh.
./data/ilsvrc12/get_ilsvrc_aux.sh
```
Using the files that were downloaded, we can classify the provided cat
image (`examples/images/cat.jpg`) using this command:
6 changes: 5 additions & 1 deletion examples/finetune_flickr_style/readme.md
@@ -57,7 +57,11 @@ The prototxts in this example assume this, and also assume the presence of the I

We'll also need the ImageNet-trained model, which you can obtain by running `./scripts/download_model_binary.py models/bvlc_reference_caffenet`.

Now we can train! (You can fine-tune in CPU mode by leaving out the `-gpu` flag.)
Now we can train! The key to fine-tuning is the `-weights` argument in the
command below, which tells Caffe that we want to load weights from a pre-trained
Caffe model.

(You can fine-tune in CPU mode by leaving out the `-gpu` flag.)

caffe % ./build/tools/caffe train -solver models/finetune_flickr_style/solver.prototxt -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel -gpu 0

89 changes: 14 additions & 75 deletions examples/mnist/convert_mnist_data.cpp
@@ -27,12 +27,15 @@
#include <fstream> // NOLINT(readability/streams)
#include <string>

#include "boost/scoped_ptr.hpp"
#include "caffe/proto/caffe.pb.h"
#include "caffe/util/db.hpp"
#include "caffe/util/format.hpp"

#if defined(USE_LEVELDB) && defined(USE_LMDB)

using namespace caffe; // NOLINT(build/namespaces)
using boost::scoped_ptr;
using std::string;

DEFINE_string(backend, "lmdb", "The backend for storing the result");
@@ -72,43 +75,10 @@ void convert_dataset(const char* image_filename, const char* label_filename,
image_file.read(reinterpret_cast<char*>(&cols), 4);
cols = swap_endian(cols);

// lmdb
MDB_env *mdb_env;
MDB_dbi mdb_dbi;
MDB_val mdb_key, mdb_data;
MDB_txn *mdb_txn;
// leveldb
leveldb::DB* db;
leveldb::Options options;
options.error_if_exists = true;
options.create_if_missing = true;
options.write_buffer_size = 268435456;
leveldb::WriteBatch* batch = NULL;

// Open db
if (db_backend == "leveldb") { // leveldb
LOG(INFO) << "Opening leveldb " << db_path;
leveldb::Status status = leveldb::DB::Open(
options, db_path, &db);
CHECK(status.ok()) << "Failed to open leveldb " << db_path
<< ". Is it already existing?";
batch = new leveldb::WriteBatch();
} else if (db_backend == "lmdb") { // lmdb
LOG(INFO) << "Opening lmdb " << db_path;
CHECK_EQ(mkdir(db_path, 0744), 0)
<< "mkdir " << db_path << "failed";
CHECK_EQ(mdb_env_create(&mdb_env), MDB_SUCCESS) << "mdb_env_create failed";
CHECK_EQ(mdb_env_set_mapsize(mdb_env, 1099511627776), MDB_SUCCESS) // 1TB
<< "mdb_env_set_mapsize failed";
CHECK_EQ(mdb_env_open(mdb_env, db_path, 0, 0664), MDB_SUCCESS)
<< "mdb_env_open failed";
CHECK_EQ(mdb_txn_begin(mdb_env, NULL, 0, &mdb_txn), MDB_SUCCESS)
<< "mdb_txn_begin failed";
CHECK_EQ(mdb_open(mdb_txn, NULL, 0, &mdb_dbi), MDB_SUCCESS)
<< "mdb_open failed. Does the lmdb already exist? ";
} else {
LOG(FATAL) << "Unknown db backend " << db_backend;
}

scoped_ptr<db::DB> db(db::GetDB(db_backend));
db->Open(db_path, db::NEW);
scoped_ptr<db::Transaction> txn(db->NewTransaction());

// Storing to db
char label;
@@ -130,59 +100,28 @@ void convert_dataset(const char* image_filename, const char* label_filename,
string key_str = caffe::format_int(item_id, 8);
datum.SerializeToString(&value);

// Put in db
if (db_backend == "leveldb") { // leveldb
batch->Put(key_str, value);
} else if (db_backend == "lmdb") { // lmdb
mdb_data.mv_size = value.size();
mdb_data.mv_data = reinterpret_cast<void*>(&value[0]);
mdb_key.mv_size = key_str.size();
mdb_key.mv_data = reinterpret_cast<void*>(&key_str[0]);
CHECK_EQ(mdb_put(mdb_txn, mdb_dbi, &mdb_key, &mdb_data, 0), MDB_SUCCESS)
<< "mdb_put failed";
} else {
LOG(FATAL) << "Unknown db backend " << db_backend;
}
txn->Put(key_str, value);

if (++count % 1000 == 0) {
// Commit txn
if (db_backend == "leveldb") { // leveldb
db->Write(leveldb::WriteOptions(), batch);
delete batch;
batch = new leveldb::WriteBatch();
} else if (db_backend == "lmdb") { // lmdb
CHECK_EQ(mdb_txn_commit(mdb_txn), MDB_SUCCESS)
<< "mdb_txn_commit failed";
CHECK_EQ(mdb_txn_begin(mdb_env, NULL, 0, &mdb_txn), MDB_SUCCESS)
<< "mdb_txn_begin failed";
} else {
LOG(FATAL) << "Unknown db backend " << db_backend;
}
txn->Commit();
}
}
// write the last batch
if (count % 1000 != 0) {
if (db_backend == "leveldb") { // leveldb
db->Write(leveldb::WriteOptions(), batch);
delete batch;
delete db;
} else if (db_backend == "lmdb") { // lmdb
CHECK_EQ(mdb_txn_commit(mdb_txn), MDB_SUCCESS) << "mdb_txn_commit failed";
mdb_close(mdb_env, mdb_dbi);
mdb_env_close(mdb_env);
} else {
LOG(FATAL) << "Unknown db backend " << db_backend;
}
LOG(ERROR) << "Processed " << count << " files.";
txn->Commit();
}
LOG(INFO) << "Processed " << count << " files.";
delete[] pixels;
db->Close();
}

int main(int argc, char** argv) {
#ifndef GFLAGS_GFLAGS_H_
namespace gflags = google;
#endif

FLAGS_alsologtostderr = 1;

gflags::SetUsageMessage("This script converts the MNIST dataset to\n"
"the lmdb/leveldb format used by Caffe to load data.\n"
"Usage:\n"
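
The bulk of this file's 75 deleted lines is backend-specific LMDB and LevelDB plumbing that the commit replaces with Caffe's `db::DB` / `db::Transaction` abstraction. A condensed sketch of the resulting pattern, mirroring the names used in the diff above (Datum serialization elided):

```
#include <string>

#include "boost/scoped_ptr.hpp"
#include "caffe/util/db.hpp"
#include "caffe/util/format.hpp"

using boost::scoped_ptr;
using std::string;

void write_items(const string& db_backend, const string& db_path,
                 int num_items) {
  scoped_ptr<caffe::db::DB> db(caffe::db::GetDB(db_backend));  // "lmdb"/"leveldb"
  db->Open(db_path, caffe::db::NEW);
  scoped_ptr<caffe::db::Transaction> txn(db->NewTransaction());
  int count = 0;
  for (int item_id = 0; item_id < num_items; ++item_id) {
    string key_str = caffe::format_int(item_id, 8);  // zero-padded key
    txn->Put(key_str, "serialized Datum bytes");     // value is a string
    if (++count % 1000 == 0) {                       // commit in batches
      txn->Commit();
      txn.reset(db->NewTransaction());
    }
  }
  if (count % 1000 != 0) { txn->Commit(); }          // write the last batch
  db->Close();
}
```
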
2 changes: 1 addition & 1 deletion examples/mnist/readme.md
@@ -248,7 +248,7 @@ These messages tell you the details about each layer, its connections and its ou
I1203 solver.cpp:36] Solver scaffolding done.
I1203 solver.cpp:44] Solving LeNet

Based on the solver setting, we will print the training loss function every 100 iterations, and test the network every 1000 iterations. You will see messages like this:
Based on the solver setting, we will print the training loss function every 100 iterations, and test the network every 500 iterations. You will see messages like this:

I1203 solver.cpp:204] Iteration 100, lr = 0.00992565
I1203 solver.cpp:66] Iteration 100, loss = 0.26044
9 changes: 9 additions & 0 deletions include/caffe/layers/crop_layer.hpp
@@ -44,6 +44,7 @@ class CropLayer : public Layer<Dtype> {
vector<int> offsets;

private:
// Recursive copy function.
void crop_copy(const vector<Blob<Dtype>*>& bottom,
const vector<Blob<Dtype>*>& top,
const vector<int>& offsets,
@@ -53,6 +54,14 @@ class CropLayer : public Layer<Dtype> {
Dtype* dest_data,
bool is_forward);

// Recursive copy function: this is similar to crop_copy() but loops over all
// but the last two dimensions to allow for ND cropping while still relying on
// a CUDA kernel for the innermost two dimensions for performance reasons. An
// alternative implementation could rely on the kernel more by passing
// offsets, but this is problematic because of its variable length.
// Since in the standard (N,C,W,H) case N,C are usually not cropped, a speedup
// could be achieved by not looping the application of the copy_kernel around
// these dimensions.
void crop_copy_gpu(const vector<Blob<Dtype>*>& bottom,
const vector<Blob<Dtype>*>& top,
const vector<int>& offsets,
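
The added comment describes the ND-cropping strategy: recurse over all but the innermost two dimensions, then hand each 2-D slab to the CUDA copy kernel. An illustration of that recursion only, not the layer's actual code (`launch_2d_copy` is a hypothetical stand-in for the kernel invocation):

```
#include <vector>

// Hypothetical stand-in for the real copy_kernel launch on the last two dims.
static void launch_2d_copy(const std::vector<int>& indices,
                           const std::vector<int>& offsets) {
  // ... would copy one cropped 2-D slab here ...
}

// Loop over all but the innermost two dimensions, then delegate each
// remaining 2-D slab to the kernel, as the comment above describes.
static void crop_recurse(const std::vector<int>& shape,
                         const std::vector<int>& offsets,
                         std::vector<int>& indices, int cur_dim) {
  if (cur_dim + 2 < static_cast<int>(shape.size())) {
    for (int i = 0; i < shape[cur_dim]; ++i) {
      indices[cur_dim] = i;
      crop_recurse(shape, offsets, indices, cur_dim + 1);
    }
  } else {
    launch_2d_copy(indices, offsets);
  }
}
```
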
45 changes: 45 additions & 0 deletions include/caffe/layers/parameter_layer.hpp
@@ -0,0 +1,45 @@
#ifndef CAFFE_PARAMETER_LAYER_HPP_
#define CAFFE_PARAMETER_LAYER_HPP_

#include <vector>

#include "caffe/layer.hpp"

namespace caffe {

template <typename Dtype>
class ParameterLayer : public Layer<Dtype> {
public:
explicit ParameterLayer(const LayerParameter& param)
: Layer<Dtype>(param) {}
virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,
const vector<Blob<Dtype>*>& top) {
if (this->blobs_.size() > 0) {
LOG(INFO) << "Skipping parameter initialization";
} else {
this->blobs_.resize(1);
this->blobs_[0].reset(new Blob<Dtype>());
this->blobs_[0]->Reshape(this->layer_param_.parameter_param().shape());
}
top[0]->Reshape(this->layer_param_.parameter_param().shape());
}
virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
const vector<Blob<Dtype>*>& top) { }
virtual inline const char* type() const { return "Parameter"; }
virtual inline int ExactNumBottomBlobs() const { return 0; }
virtual inline int ExactNumTopBlobs() const { return 1; }

protected:
virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
const vector<Blob<Dtype>*>& top) {
top[0]->ShareData(*(this->blobs_[0]));
top[0]->ShareDiff(*(this->blobs_[0]));
}
virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom)
{ }
};

} // namespace caffe

#endif
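
The new `ParameterLayer` takes no bottoms and exposes a single learnable blob, shaped by `parameter_param.shape`, as its top; because the top shares both data and diff with the blob, the solver updates it like any other weight. A hedged usage sketch — the programmatic setup below is illustrative only; in a real net the layer would presumably be declared in a prototxt with a `parameter_param { shape { ... } }` block, following the `parameter_param().shape()` accessors used above:

```
#include <vector>

#include "caffe/blob.hpp"
#include "caffe/layers/parameter_layer.hpp"

using namespace caffe;  // NOLINT(build/namespaces)

int main() {
  // Describe a 1x10 learnable blob through the layer's parameter_param.
  LayerParameter layer_param;
  BlobShape* shape = layer_param.mutable_parameter_param()->mutable_shape();
  shape->add_dim(1);
  shape->add_dim(10);

  ParameterLayer<float> layer(layer_param);
  Blob<float> top_blob;
  std::vector<Blob<float>*> bottom;             // ExactNumBottomBlobs() == 0
  std::vector<Blob<float>*> top(1, &top_blob);

  layer.SetUp(bottom, top);    // allocates blobs_[0] and reshapes the top
  layer.Forward(bottom, top);  // top now shares the parameter's data/diff
  return 0;
}
```
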
1 change: 1 addition & 0 deletions include/caffe/layers/python_layer.hpp
@@ -26,6 +26,7 @@ class PythonLayer : public Layer<Dtype> {
}
self_.attr("param_str") = bp::str(
this->layer_param_.python_param().param_str());
self_.attr("phase") = static_cast<int>(this->phase_);
self_.attr("setup")(bottom, top);
}
virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
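
With this one-line addition, a Python layer can read `self.phase` from its `setup` (and later `reshape`/`forward`/`backward`) methods; the value is the layer's `Phase` enum cast to an int (`TRAIN = 0`, `TEST = 1` in `caffe.proto`), so a layer can, for example, apply random data augmentation only during training.
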
13 changes: 8 additions & 5 deletions include/caffe/util/db_lmdb.hpp
@@ -3,6 +3,7 @@
#define CAFFE_UTIL_DB_LMDB_HPP

#include <string>
#include <vector>

#include "lmdb.h"

@@ -54,14 +55,16 @@ class LMDBCursor : public Cursor {

class LMDBTransaction : public Transaction {
public:
explicit LMDBTransaction(MDB_dbi* mdb_dbi, MDB_txn* mdb_txn)
: mdb_dbi_(mdb_dbi), mdb_txn_(mdb_txn) { }
explicit LMDBTransaction(MDB_env* mdb_env)
: mdb_env_(mdb_env) { }
virtual void Put(const string& key, const string& value);
virtual void Commit() { MDB_CHECK(mdb_txn_commit(mdb_txn_)); }
virtual void Commit();

private:
MDB_dbi* mdb_dbi_;
MDB_txn* mdb_txn_;
MDB_env* mdb_env_;
vector<string> keys, values;

void DoubleMapSize();

DISABLE_COPY_AND_ASSIGN(LMDBTransaction);
};
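
`LMDBTransaction` now keeps only the environment handle plus buffered key/value pairs; all LMDB work is deferred to `Commit()`, and the new `DoubleMapSize()` hook lets the map grow on demand instead of being pre-sized (note the fixed 1 TB `mdb_env_set_mapsize` call deleted from `convert_mnist_data.cpp` above). The matching `db_lmdb.cpp` is not shown in this diff, so the following is only a sketch of what the buffered, self-resizing commit plausibly looks like:

```
// Hedged sketch -- the real implementation lives in src/caffe/util/db_lmdb.cpp,
// which this commit page does not display.
void LMDBTransaction::Commit() {
  MDB_dbi mdb_dbi;
  MDB_txn* mdb_txn;
  MDB_CHECK(mdb_txn_begin(mdb_env_, NULL, 0, &mdb_txn));
  MDB_CHECK(mdb_dbi_open(mdb_txn, NULL, 0, &mdb_dbi));
  for (size_t i = 0; i < keys.size(); i++) {
    MDB_val mdb_key, mdb_data;
    mdb_key.mv_size = keys[i].size();
    mdb_key.mv_data = const_cast<char*>(keys[i].data());
    mdb_data.mv_size = values[i].size();
    mdb_data.mv_data = const_cast<char*>(values[i].data());
    int rc = mdb_put(mdb_txn, mdb_dbi, &mdb_key, &mdb_data, 0);
    if (rc == MDB_MAP_FULL) {
      // Map exhausted: abort this attempt, enlarge the map, retry everything.
      mdb_txn_abort(mdb_txn);
      DoubleMapSize();
      Commit();
      return;
    }
    MDB_CHECK(rc);
  }
  MDB_CHECK(mdb_txn_commit(mdb_txn));
  keys.clear();
  values.clear();
}
```
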