TH++, C++ interface to the torch7 TH library
Latest commit 899817d (Dec 18, 2016), Mathieu Baudet committed with facebook-github-bot: fbcode: remove unused includes from .cpp files with no #if and #define
Summary:
This is a first diff to remove the "easiest" unused includes in fbcode.

* For safety, we only touch .cpp files without #if and #define.
* We do not try to remove redundant system headers (a.k.a. "packing").

The diff was generated as follows:
```
# List C++ directories (excluding external code) and run the include
# analysis over them, writing one diff per directory to /tmp/ffmr-diff-1.
foundation/scripts/ls-cpp-dirs | grep -v '^\(\.\.\|external/\|.*/external\)' | xargs ffmr -o /tmp/ffmr-diff-1 codegraph/scripts/ffmr/analyze_includes_no_headers_no_packing_skipping_ifdefs.sh

# Apply the generated diffs, commit, and send the change out for review.
cat /tmp/ffmr-diff-1/*.diff | patch -p2
hg commit -m something
arc diff --prepare --nolint --nounit --less-context --excuse refactoring
```

Note: `grep -v` is just an optimization. The actual configuration is in these two files:

* diffusion/FBS/browse/master/fbcode/codegraph/analysis/config.py
* diffusion/FBS/browse/master/fbcode/codegraph/scripts/ffmr/analyze_includes_no_headers_no_packing_skipping_ifdefs.sh

See the task for more context, as well as the recent "safety" improvements to the tool.

Depends on D4317825 for the few cases where `nolint` had to be added manually.

Reviewed By: igorsugak

Differential Revision: D4312617

fbshipit-source-id: ecc1f0addfd0651fa4770fcc43cd1314661a311a


TH++: A C++ tensor library

TH++ is a C++ tensor library, implemented as a wrapper around the TH library (the low-level tensor library in Torch). There is unfortunately little documentation for TH, but its interface mimics the Lua Torch Tensor interface.

The core of the library is the Tensor<T> class template, where T is a numeric type (usually floating point, float or double). A tensor is a multi-dimensional array, usually stored in C (row-major) order, but many operations (transpose, slice, etc.) are performed by permuting indexes and changing offsets, after which the data may no longer be contiguous or in row-major order. Read the numpy.ndarray documentation for more details about the strided indexing scheme.
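For illustration, the snippet below sketches basic use of Tensor<T>. The constructor and method forms shown (the size-list constructor, fill(), transpose()) are assumptions based on this description and on the Lua Tensor interface it mimics; check <thpp/Tensor.h> for the exact signatures.

```cpp
#include <thpp/Tensor.h>

using namespace thpp;

int main() {
  // Sketch only: exact signatures are assumptions; see <thpp/Tensor.h>.
  Tensor<float> a({4, 3});   // a 4 x 3 tensor of floats
  a.fill(1.0f);              // set every element to 1

  // Transposing only permutes sizes and strides (metadata); no data is
  // copied, so the transposed tensor is no longer in row-major order.
  a.transpose();
  return 0;
}
```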

Tensors may also share memory with other tensors; operations that manipulate metadata (select, slice, transpose, etc.) will make the destination tensor share memory with the source. To ensure you have a unique copy, call force(Tensor<T>::UNIQUE) on the tensor. Similarly, to ensure you have a contiguous C (row-major) tensor, call force(Tensor<T>::CONTIGUOUS), which may also create a unique copy.
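As a hedged sketch of the sharing behaviour described above: only force(Tensor<T>::UNIQUE) and force(Tensor<T>::CONTIGUOUS) come directly from the text, while the default constructor and the select() form shown are assumptions, so verify them against <thpp/Tensor.h>.

```cpp
#include <thpp/Tensor.h>

using namespace thpp;

int main() {
  Tensor<double> a({4, 3});
  a.fill(0.0);

  // Sketch only: assumed select() form; it narrows the destination to one
  // row of the source, sharing memory rather than copying.
  Tensor<double> row;
  row.select(a, 0, 2);                  // view of a's third row

  // Detach the view: force() guarantees a unique copy, so later writes to
  // row no longer affect a.
  row.force(Tensor<double>::UNIQUE);
  // Likewise, force(Tensor<double>::CONTIGUOUS) guarantees a contiguous
  // row-major layout, which may also entail a unique copy.
  row.fill(1.0);
  return 0;
}
```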

Please see the header file <thpp/Tensor.h> for more details.