This repository was archived by the owner on Apr 28, 2023. It is now read-only.
Merged
4 changes: 2 additions & 2 deletions .jenkins/build.sh
@@ -72,7 +72,7 @@ if [[ "$DISTRIB_RELEASE" == 14.04 ]]; then
source activate tc-env
conda install -y pyyaml mkl-include
conda install -yc conda-forge pytest
-conda install -y pytorch-nightly=2018.04.17 -c pytorch
+conda install -y pytorch -c pytorch
WITH_PYTHON_C2=OFF CORES=$(nproc) CLANG_PREFIX=/usr/local/clang+llvm-tapir5.0 BUILD_TYPE=Release ./build.sh --all
else
echo "Building TC in non-conda env"
@@ -87,7 +87,7 @@ if [[ "$DISTRIB_RELEASE" == 16.04 ]]; then
source activate tc-env
conda install -y pyyaml mkl-include
conda install -yc conda-forge pytest
-conda install -y pytorch-nightly=2018.04.17 cuda90 -c pytorch
+conda install -y pytorch cuda90 -c pytorch
WITH_PYTHON_C2=OFF CORES=$(nproc) CLANG_PREFIX=/usr/local/clang+llvm-tapir5.0 BUILD_TYPE=Release ./build.sh --all
else
echo "Building TC in non-conda env"
2 changes: 1 addition & 1 deletion docker/README.md
@@ -78,7 +78,7 @@ git submodule update --init --recursive
# build TC
conda install -y mkl-include pyyaml
conda install -yc conda-forge pytest
-conda install -y pytorch-nightly=2018.04.17 -c pytorch # OR conda install -y pytorch-nightly=2018.04.17 cuda90 -c pytorch
+conda install -y pytorch -c pytorch # OR conda install -y pytorch cuda90 -c pytorch
CORES=$(nproc) WITH_CAFFE2=ON CLANG_PREFIX=/usr/local/clang+llvm-tapir5.0 BUILD_TYPE=Release ./build.sh --all
# Test the TC build is fine
./test.sh
14 changes: 9 additions & 5 deletions docker/common/install_base.sh
@@ -34,11 +34,15 @@ apt-get install -y --no-install-recommends \
apt-get clean
rm -rf /var/lib/apt/lists/*
# setup gcc
-add-apt-repository ppa:ubuntu-toolchain-r/test
-apt-get update
-apt-get install -y --no-install-recommends libcilkrts5 gcc-$GCC_VERSION g++-$GCC_VERSION
-update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-$GCC_VERSION 50
-update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-$GCC_VERSION 50
+if [[ "$GCC_VERSION" == 4.9 ]]; then
Contributor: So this installs from the PPA when GCC 4.9 is requested, but what is the version shipped with the system?

Contributor Author: The system version is 4.8.4.

Contributor: Then I don't really understand how this is supposed to make it use gcc 5.4, as the commit message says.

Contributor Author: For the trusty distrib we use gcc 4.9, but the default shipped is 4.8.4. For xenial we use gcc 5.4, but the default shipped by the ubuntu toolchain PPA is 5.5 as of April 24.

Contributor: So how do we get gcc 5.4 then? We only install from the PPA if 4.9 is requested, so when 5.4 is requested nothing happens. I assume the default 5.5 is then used, which sort of contradicts your commit message.

Contributor Author: If gcc 4.9 is requested, we get it from the PPA; otherwise we don't use the toolchain PPA and apt installs it. Earlier, gcc 5 was coming from the PPA as well, and that used to be gcc 5.4 before April 24.

Contributor: I guess the part that is missing from the description is that xenial has 5.4 by default.

Contributor Author: Got it, I understand now what you mean. I'll edit the description. :)

Contributor: Thanks! Sorry it took so long.
+    add-apt-repository ppa:ubuntu-toolchain-r/test
+    apt-get update
+    apt-get install -y --no-install-recommends libcilkrts5 gcc-$GCC_VERSION g++-$GCC_VERSION
+    update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-$GCC_VERSION 50
+    update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-$GCC_VERSION 50
+else
+    apt-get install -y --no-install-recommends libcilkrts5 gcc g++
+fi

# Install ccache from source. Needs 3.4 or later for ccbin support
# Needs specific branch to work with nvcc (ccache/ccache#145)
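The thread above resolves to a simple rule: only GCC 4.9 needs the ubuntu-toolchain-r PPA, while any other requested version (e.g. 5.4 on xenial) already ships in the distro archives. A minimal sketch of that decision, using a hypothetical `gcc_source` helper (not part of the actual script), which only reports where a version would come from:

```shell
#!/bin/sh
# Hypothetical helper mirroring the branch added to install_base.sh:
# report which package source a requested GCC version would be installed from.
gcc_source() {
  if [ "$1" = "4.9" ]; then
    # trusty ships only 4.8.4, so 4.9 must come from the PPA
    echo "ppa:ubuntu-toolchain-r/test"
  else
    # e.g. xenial already ships 5.4 in its own archives
    echo "distro-archive"
  fi
}

gcc_source 4.9   # -> ppa:ubuntu-toolchain-r/test
gcc_source 5.4   # -> distro-archive
```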
6 changes: 3 additions & 3 deletions tensor_comprehensions/tc_unit.py
@@ -66,7 +66,7 @@ def set_gflags(
def check_cache_file_exists(cache_file):
# for autotuning, we save two files: .cuda and .options, we will check that
# these two files exists for the validity of cache
-if os.path.exists(cache_file + ".options") and os.path.exists(cache_file + ".cuda"):
+if os.path.exists(cache_file + ".options"):
return True
return False

@@ -254,7 +254,7 @@ def autotune(self, *inputs, **kwargs):
cache_file = "/tmp/{}_{}".format(hash_key, str(uuid.uuid4()))
elif isinstance(kwargs["cache"], str):
cache_file = kwargs["cache"]
-logger.info('Autotuning cache will be saved to: {}.cuda/options'.format(cache_file))
+logger.info('Autotuning cache will be saved to: {}.options'.format(cache_file))
else:
logger.warning("Autotuning results won't be cached. 'cache' option is not set")

@@ -297,7 +297,7 @@ def autotune(self, *inputs, **kwargs):

if cache_file:
cache_file = cache_file + "_backward"
-logger.info('Backwards autotuning cache will be saved to: {}.cuda/options'.format(cache_file))
+logger.info('Backwards autotuning cache will be saved to: {}.options'.format(cache_file))
kwargs["type"] = "backward"
options = get_options_from_kwargs_and_tuner_cache(backward_name, cache_file, options_cache, *inputs, **kwargs)
backward_best_options = self.tune_and_store(
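After this change the tuner persists only the `.options` file, so cache validity reduces to a single existence check. A standalone sketch of the simplified predicate (pure stdlib; the `.options` suffix is taken from the diff, the demo paths are made up):

```python
import os
import tempfile

def check_cache_file_exists(cache_file):
    # Only the ".options" file is saved now, so its presence alone
    # determines whether the autotuning cache is valid.
    return os.path.exists(cache_file + ".options")

# Quick demonstration in a throwaway directory.
with tempfile.TemporaryDirectory() as d:
    cache = os.path.join(d, "matmul_100_400_500")
    print(check_cache_file_exists(cache))   # False: nothing written yet
    open(cache + ".options", "w").close()
    print(check_cache_file_exists(cache))   # True once .options exists
```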
2 changes: 2 additions & 0 deletions test_python/test_tc.py
@@ -79,7 +79,9 @@ def matmul(float(M,N) A, float(N,K) B) -> (output) {
inputs = [mat1, mat2]
handle = cu.compile("matmul", [mat1, mat2], options="mlp")
outputs = cu.run(handle, "matmul", inputs)
+torch.cuda.synchronize()
expected = torch.mm(mat1, mat2)
+torch.cuda.synchronize()
diff = outputs[0] - expected
self.assert_almost_equal(diff, inputs, 4)

2 changes: 1 addition & 1 deletion test_python/test_tc_torch.py
@@ -181,7 +181,7 @@ def test_autotuner_cachefile_first(self):
def test_autotuner_cachefile_load_automatic(self):
lang = MATMUL_LANG
cache_file = "{}/matmul_100_400_500".format(PATH_PREFIX) # use argparse if input from command line
-assert os.path.isfile("{}.cuda".format(cache_file)), "looks like the cache_file doesn't exist"
+assert os.path.isfile("{}.options".format(cache_file)), "looks like the cache_file doesn't exist"

matmul = tc.define(lang, name="matmul")
mat1, mat2 = torch.randn(100, 400).cuda(), torch.randn(400, 500).cuda()