
Support for Redhat, Centos and many superclusters #110

Closed
trungnt13 opened this issue Nov 11, 2015 · 61 comments

@trungnt13 commented Nov 11, 2015

Many cluster systems use environment modules with Red Hat or CentOS < 7, which ship glibc 2.12.

Since Bazel requires glibc 2.14 and the prebuilt TensorFlow binaries for Linux require glibc 2.17, it seems hopeless to get TensorFlow running on these clusters.

See this related issue reported on Bazel: bazelbuild/bazel#583
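
For anyone checking whether their cluster is affected, the installed glibc version can be read directly on a node. A quick sanity check (assuming a typical GNU/Linux node; `getconf` may not report it on non-glibc systems, so `ldd` is the fallback):

```shell
# Print the glibc version on this node. Bazel needed >= 2.14 and the
# prebuilt TensorFlow Linux wheels at the time needed >= 2.17.
v=$(getconf GNU_LIBC_VERSION 2>/dev/null || ldd --version 2>/dev/null | head -n 1)
echo "$v"
```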

@vrv (Contributor) commented Nov 11, 2015

Since we depend on bazel, this sounds like a bazel issue.

Feel free to re-open if bazel ends up supporting 2.12 or lower, and we can see what we can do.

@vrv closed this Nov 11, 2015
@alantus commented Nov 30, 2015

Am I right that you depend on Bazel only at build time? If so, this can be viewed as something you could address on your side too. You could also release statically linked packages, which would be very useful to people stuck on clusters with old libraries.

@urimerhav commented Dec 17, 2015

Has anyone found a way past this problem? I'm using Red Hat 6.4, as is my entire corporation, and we're stuck on it. I'm not sure how to get TensorFlow running on such a machine.

@ttrouill commented Jan 20, 2016

I managed to get it running on CentOS 6.7: http://stackoverflow.com/a/34897674/1990516 :)
Tell me if it works for you.

Edit: I proposed an alternative solution also: http://stackoverflow.com/a/34900471/1990516

@urimerhav commented Jan 20, 2016

Thanks man! I'll look into it as soon as I can.


@altaetran commented Jan 30, 2016

Could you let me know if this worked? I can't seem to get any of these other solutions working.

@urimerhav commented Feb 22, 2016

@ttrouill only reported it working on 6.7, so I haven't actually verified whether it works on 6.4...

@rdipietro (Contributor) commented Feb 29, 2016

Both solutions seem to work, but they're not optimal. TensorFlow and Python run okay, but if I try to run IPython, the first solution gives an Invalid ELF error, and with the second there is a memory leak: IPython consumes more and more memory over time. I believe this can also happen with other Python imports that rely on libraries compiled against the older libc.

I'd love to see a straightforward how-to-compile-bazel-with-old-glibc guide, but I haven't come across one yet.

@rdipietro (Contributor) commented Feb 29, 2016

Also, bazelbuild/bazel#760 is relevant, but it's far from straightforward, and my attempt to build Bazel using that guide failed. Hopefully within the next few weeks I can give it more time and continue that thread with the errors I run into.

ilblackdragon added a commit to ilblackdragon/tensorflow that referenced this issue Mar 9, 2016
@rdipietro (Contributor) commented Mar 26, 2016

Compiling on CentOS still isn't all that straightforward, but I figured I'd give an overview here for now. This works for me with CentOS 6.7 and gcc 4.8.2, with GPU support (Cuda 7.0, cuDNN 4.0.7). A bazel modification for building with a custom gcc is in the works (bazelbuild/bazel#760) and should help streamline this later on.

The instructions here are specific to my base gcc path of /cm/shared/apps/gcc/4.8.2, but they should work for other configurations if you just modify the base path.

Paths for reference:
gcc path: /cm/shared/apps/gcc/4.8.2/bin/gcc
cpp path: /cm/shared/apps/gcc/4.8.2/bin/cpp
lib64 path: /cm/shared/apps/gcc/4.8.2/lib64
include1 dir: /cm/shared/apps/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2/include
include2 dir: /cm/shared/apps/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2/include-fixed
include3 dir: /cm/shared/apps/gcc/4.8.2/include/c++/4.8.2

Bazel

  1. git clone https://github.com/bazelbuild/bazel.git && cd bazel
  2. Edit tools/cpp/CROSSTOOL
    • Replace all occurrences of /usr/bin/gcc with gcc path
    • Replace all occurrences of /usr/bin/cpp with cpp path
    • After the toolpath containing gcc path, add the lines
      • linker_flag: "-Wl,-Rlib64 path"
      • cxx_builtin_include_directory: "include1 dir"
      • cxx_builtin_include_directory: "include2 dir"
      • cxx_builtin_include_directory: "include3 dir"
  3. Edit scripts/bootstrap/buildenv.sh
    • Comment out atexit "rm -fr ${DIR}"
  4. export EXTRA_BAZEL_ARGS='-s --verbose_failures --ignore_unsupported_sandboxing --genrule_strategy=standalone --spawn_strategy=standalone --jobs 8'
  5. ./compile.sh
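
The CROSSTOOL edits in step 2 can be scripted rather than done by hand. A sketch with GNU sed, demonstrated on a minimal stand-in file (GCC_BASE and the two-line excerpt are assumptions based on the paths above; in a real checkout you would run the same sed commands against tools/cpp/CROSSTOOL):

```shell
GCC_BASE=/cm/shared/apps/gcc/4.8.2   # site-specific; adjust to your toolchain
CROSSTOOL=/tmp/crosstool_demo

# Minimal stand-in for tools/cpp/CROSSTOOL, for demonstration only.
cat > "$CROSSTOOL" <<'EOF'
tool_path { name: "cpp" path: "/usr/bin/cpp" }
tool_path { name: "gcc" path: "/usr/bin/gcc" }
EOF

# Step 2a: point the toolchain at the custom gcc and cpp.
sed -i -e "s|/usr/bin/gcc|$GCC_BASE/bin/gcc|g" \
       -e "s|/usr/bin/cpp|$GCC_BASE/bin/cpp|g" "$CROSSTOOL"

# Step 2b: after the gcc tool_path line, add the runtime linker path and
# the builtin include directories of the custom toolchain.
sed -i "\|path: \"$GCC_BASE/bin/gcc\"|a\\
linker_flag: \"-Wl,-R$GCC_BASE/lib64\"\\
cxx_builtin_include_directory: \"$GCC_BASE/lib/gcc/x86_64-unknown-linux-gnu/4.8.2/include\"\\
cxx_builtin_include_directory: \"$GCC_BASE/lib/gcc/x86_64-unknown-linux-gnu/4.8.2/include-fixed\"\\
cxx_builtin_include_directory: \"$GCC_BASE/include/c++/4.8.2\"" "$CROSSTOOL"

cat "$CROSSTOOL"
```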

TensorFlow

  1. git clone --recurse-submodules https://github.com/tensorflow/tensorflow && cd tensorflow
  2. Edit third_party/gpus/crosstool/CROSSTOOL, making the same changes we made for Bazel. (/usr/bin/gcc etc. likely won't need to be replaced, though.)
  3. Edit third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc
    • Replace all /usr/bin/gcc with gcc path.
    • Undo the temporary "fix" to find as by commenting out the line cmd = 'PATH=' + PREFIX_DIR + ' ' + cmd. (For me, this is necessary to find as.)
  4. ./configure
  5. export EXTRA_BAZEL_ARGS='-s --verbose_failures --ignore_unsupported_sandboxing --genrule_strategy=standalone --spawn_strategy=standalone --jobs 8'
  6. bazel build -c opt --config=cuda --linkopt '-lrt' --copt="-DGPR_BACKWARDS_COMPATIBILITY_MODE" --conlyopt="-std=c99" //tensorflow/tools/pip_package:build_pip_package
    • Why the strange flags? Because otherwise, after building with the older libc, we'll get an error about secure_getenv.
  7. bazel-bin/tensorflow/tools/pip_package/build_pip_package ~/tensorflow_pkg
  8. pip install ~/tensorflow_pkg/*
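
Step 3's wrapper edits can likewise be scripted. A sketch on a two-line stand-in file (the variable name and excerpt are assumptions; in a real checkout, run the sed commands against third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc):

```shell
GCC_BASE=/cm/shared/apps/gcc/4.8.2   # site-specific; adjust
WRAPPER=/tmp/crosstool_wrapper_demo

# Minimal stand-in for the crosstool wrapper script, for demonstration only.
cat > "$WRAPPER" <<'EOF'
LLVM_HOST_COMPILER_PATH = ('/usr/bin/gcc')
  cmd = 'PATH=' + PREFIX_DIR + ' ' + cmd
EOF

# Replace /usr/bin/gcc with the custom gcc, and comment out the PATH
# override so the assembler (as) can still be found.
sed -i -e "s|/usr/bin/gcc|$GCC_BASE/bin/gcc|g" \
       -e "s|^\( *\)\(cmd = 'PATH=' + PREFIX_DIR + ' ' + cmd\)$|\1# \2|" "$WRAPPER"

cat "$WRAPPER"
```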

@rdipietro (Contributor) commented May 17, 2016

Update: the previous process was for a commit after release 7.

Here are the necessary changes for commit 1d4fd06, which is after release 8:

  1. You need Bazel 0.2.x. As of this writing, with appropriate environment variables, Bazel at HEAD compiles simply with ./compile.sh. Thank you @damienmg !
  2. You still need to make the above changes to the TensorFlow files, including the changes to CROSSTOOL etc. (For some reason the bazel auto config doesn't work here.)
  3. Edit third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc
    and replace #!/usr/bin/env python2.7 with
    #!/usr/bin/env /full/path/to/python2.7. This is a hack to keep bazel's confined environment from failing to pick up our custom Python location.
  4. Edit bazel-out/host/bin/tensorflow/swig and add
    export LD_LIBRARY_PATH=custom:paths:$LD_LIBRARY_PATH
    before swig is run. Otherwise swig won't find libraries that exist in our LD_LIBRARY_PATH. This is another hack to get around the confined environment.
  5. Use the same bazel build command from above: bazel build -c opt --config=cuda --linkopt '-lrt' --copt="-DGPR_BACKWARDS_COMPATIBILITY_MODE" --conlyopt="-std=c99" //tensorflow/tools/pip_package:build_pip_package
  6. cd bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles and cp -r __main__/* .. This is a hack associated with #2040.
  7. Finally we can bazel-bin/tensorflow/tools/pip_package/build_pip_package ~/tensorflow_pkg, and
  8. pip install ~/tensorflow_pkg/*
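
The two hacks in steps 3 and 4 can also be scripted. A sketch on stand-in files (the /full/path/to/python2.7 placeholder is kept from the step above, and custom:paths is kept as the placeholder library path; substitute your own):

```shell
PYTHON_BIN=/full/path/to/python2.7   # placeholder; use your interpreter's full path
F1=/tmp/wrapper_shebang_demo
F2=/tmp/swig_script_demo

# Stand-ins for the two generated files edited in steps 3 and 4.
printf '#!/usr/bin/env python2.7\n' > "$F1"
printf '#!/bin/bash\nswig "$@"\n' > "$F2"

# Step 3: hard-code the full interpreter path in the shebang.
sed -i "1s|^#!/usr/bin/env python2\.7$|#!/usr/bin/env $PYTHON_BIN|" "$F1"

# Step 4: export LD_LIBRARY_PATH before swig runs (placeholder paths).
sed -i '2i\
export LD_LIBRARY_PATH=custom:paths:$LD_LIBRARY_PATH' "$F2"

cat "$F1" "$F2"
```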

@trungnt13 (Author) commented May 18, 2016

Our administrator managed to run the pip-installed TensorFlow package on an RHEL 6.7 server (without building Bazel or TensorFlow from source); the core idea is to use a separate, newer version of glibc:

Fast test:

import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
a = tf.constant(10)
b = tf.constant(32)
print(sess.run(a + b))

Note: this approach only works for running Python scripts. Remember that every time you add $libcroot to your path, all shell commands break (i.e., you cannot use ls, cd, ...). Use bash -l, screen, or byobu before trying this so you don't mess up your own session.
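
The "separate newer glibc" idea typically works by invoking the newer tree's dynamic loader directly, so only the Python process sees the new libc and the interactive shell stays untouched. A hedged sketch (every path here is hypothetical; the thread does not record the admin's exact layout, and actually running the launcher requires a glibc >= 2.17 tree unpacked under LIBC_ROOT, so here we only generate and syntax-check it):

```shell
LIBC_ROOT=$HOME/opt/glibc-2.17   # hypothetical location of the unpacked newer glibc

# Generate a launcher that runs Python under the newer dynamic loader.
cat > /tmp/tf_with_new_glibc.sh <<EOF
#!/bin/sh
exec "$LIBC_ROOT/lib/ld-linux-x86-64.so.2" \\
  --library-path "$LIBC_ROOT/lib:/usr/lib64:/lib64" \\
  "\$(command -v python)" "\$@"
EOF
chmod +x /tmp/tf_with_new_glibc.sh
sh -n /tmp/tf_with_new_glibc.sh && echo "launcher OK"
```

With the glibc tree in place, something like /tmp/tf_with_new_glibc.sh script.py would run the fast test above without corrupting the shell's own environment.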

@rdipietro (Contributor) commented May 18, 2016

Yeah, that was described here a while back, but as you mention, it's not ideal. For example, running Jupyter leads to a memory leak / crash (at least on the system I tried it with).

@kskp commented Jun 23, 2016

@rdipietro

Edit tools/cpp/CROSSTOOL
After the toolpath containing gcc path, add the lines
linker_flag: "-Wl,-Rlib64 path"
cxx_builtin_include_directory: "include1 dir"
cxx_builtin_include_directory: "include2 dir"
cxx_builtin_include_directory: "include3 dir"

Should these lines be added after every occurrence of the toolpath containing the gcc path, i.e., in both places where I changed /usr/bin/gcc?

@rdipietro (Contributor) commented Jun 23, 2016

I'm not sure what you mean by twice; I'm pretty sure I only inserted those lines once, although inserting them in multiple places probably wouldn't do any harm.

@damienmg (Member) commented Jun 24, 2016

@kskp @rdipietro: is that still needed with the latest version of Bazel? If so, we have an issue in the C++ detection code.

@rdipietro (Contributor) commented Jun 24, 2016

Bazel compiles out of the box as long as I set CC correctly. I haven't tried with TensorFlow 0.9, but as of 0.8, I still had to make manual changes on CentOS.

@damienmg (Member) commented Jun 24, 2016

You mean changes to the CUDA crosstool file?


@rdipietro (Contributor) commented Jun 24, 2016

Yes. My May 17 comment above includes everything I needed to do: specifically, I needed to edit CROSSTOOL and introduce two hacks so that bazel could find things outside of its isolated environment.

@kskp commented Jun 24, 2016

@rdipietro Thanks for your reply. Sorry for my ignorance, but could you please tell me what the toolpath is? I am assuming it is the block of code where the gcc path had to be changed. I did that twice in the entire file (since it said to replace all occurrences of /usr/bin/gcc). So do I have to add those lines after each block of code where I changed the /usr/bin/gcc path?

@kskp commented Jun 24, 2016

@rdipietro @damienmg I am not using the latest version of Bazel; I need version 0.2.2b. I ultimately have to run SyntaxNet on CentOS 6.7.

@damienmg (Member) commented Jun 24, 2016

0.2.2b should work too.


@kskp commented Jun 24, 2016

Oh, I tried a couple of weeks ago but it did not work. Will do it again today. Thanks for your reply.

@damienmg (Member) commented Jun 24, 2016

Note that you still have to make the CUDA CROSSTOOL modification when building with --config=cuda.

@yliu120 commented Dec 17, 2016

I built the latest TensorFlow (GitHub master branch) with GPU support at a supercomputing center (CentOS 6.7 with gcc 4.9.2, i.e., with a customized C/C++ toolchain in general). I pointed out the environment-variable settings that are necessary for a successful build, documented here for future reference:

http://biophysics.med.jhmi.edu/~yliu120/tensorflow.html

@VittalP commented Jan 23, 2017

Thanks @rdipietro! I have been able to successfully install r0.12 with Bazel 0.4.3 on a cluster. Some of your suggestions needed to be modified to accommodate the changes in the new versions of TF and Bazel, but they provided a solid starting point. When I get the time, I will write up the changes I had to make.

@rdipietro (Contributor) commented Jan 23, 2017

You're welcome @VittalP :)

I have an updated set of notes that works as of 1.0.0 alpha:

First of all, Bazel finally just works: download the newest 0.4.x source code (the dist zip version), run ./compile.sh, and add the printed output path to PATH.

TensorFlow unfortunately still doesn't just work, so (replacing my paths with yours):

  1. In configure, replace bazel clean --expunge with bazel clean --expunge_async

  2. In third_party/gpus/crosstool/CROSSTOOL.tpl, replace all occurrences of /usr/bin/cpp with /cm/shared/apps/gcc/4.8.2/bin/cpp

  3. In third_party/gpus/crosstool/CROSSTOOL.tpl, after the line -B/usr/bin/, add the lines

linker_flag: "-Wl,-R/cm/shared/apps/gcc/4.8.2/lib64"
cxx_builtin_include_directory: "/cm/shared/apps/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2/include"
cxx_builtin_include_directory: "/cm/shared/apps/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2/include-fixed"
cxx_builtin_include_directory: "/cm/shared/apps/gcc/4.8.2/include/c++/4.8.2"

  4. In third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc.tpl, replace NVCC_PATH = CURRENT_DIR + '/../../../cuda/bin/nvcc' with NVCC_PATH = ('/cm/shared/apps/cuda/7.5/bin/nvcc')

  5. In third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc.tpl, replace LLVM_HOST_COMPILER_PATH = ('/usr/bin/gcc') with LLVM_HOST_COMPILER_PATH = ('/cm/shared/apps/gcc/4.8.2/bin/gcc')

  6. In third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc.tpl, comment out the line cmd = 'PATH=' + PREFIX_DIR + ' ' + cmd

I configured with CUDA 7.5, cuDNN 5, compute capability 3.5, and built with bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

@yliu120 commented Jan 23, 2017

@rdipietro @VittalP I wrote up an explanation of installing the latest TensorFlow right before @VittalP's post, but you both seem to have missed it. As a JHU-er, I'll kindly note that I sent my instructions to the MARCC people and there is already a TensorFlow module on MARCC.

If you'd like to read my post to see what's different: http://biophysics.med.jhmi.edu/~yliu120/tensorflow.html

If something needs to be updated, please inform me of that.

@rdipietro (Contributor) commented Jan 23, 2017

Sorry! I didn't notice that you had posted here. But note that you are making changes that I didn't need to make. Probably depends on specific versions of TF / cuda / gcc / whatever.

Side note: I still compile on MARCC because they only installed TF for Python 2.x, whereas I'm using 3.x.

@yliu120 commented Jan 23, 2017

I have updated my webpage for building tensorflow 1.0.0 with python 3.5.2. I provided two wheels on the webpage as well.

Please refer to:
http://biophysics.med.jhmi.edu/~yliu120/tensorflow.html

@fraudies commented Mar 10, 2017

For whoever wants to compile TensorFlow 1.0 on RedHat 6 and with Python 2.7, I provide a detailed step-by-step guide here: https://www.linkedin.com/pulse/compiling-tensorflow-10-python-27-redhat-6-florian-raudies

@rdipietro (Contributor) commented May 25, 2017

And here we go again for r1.2. (Note: since r1.0, the Bazel configuration file organization has been mucked with.)

Bazel: you need a newish version; 0.4.3 did not work, but 0.4.5 did. Again, Bazel now compiles easily even with older CentOS / glibc, so this part is straightforward.

Required edits for TensorFlow:

vim third_party/gpus/crosstool/CROSSTOOL_nvcc.tpl
%s~/usr/bin/cpp~/cm/shared/apps/gcc/4.8.2/bin/cpp~g
And after linker_flag: "-B/usr/bin/" add

  linker_flag: "-Wl,-R/cm/shared/apps/gcc/4.8.2/lib64"
  cxx_builtin_include_directory: "/cm/shared/apps/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2/include"
  cxx_builtin_include_directory: "/cm/shared/apps/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2/include-fixed"
  cxx_builtin_include_directory: "/cm/shared/apps/gcc/4.8.2/include/c++/4.8.2"

vim third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc.tpl
NVCC_PATH = '/cm/shared/apps/cuda/7.5/bin/nvcc'

Final notes: this wouldn't work with CUDA 7.5 / cuDNN 5 (CUDA compilation errors), but succeeded with CUDA 8.0 / cuDNN 5.

lukeiwanski pushed a commit to codeplaysoftware/tensorflow that referenced this issue Oct 26, 2017
* [OpenCL] Provides atomic free MaxPool3DGrad

Atomic support in SYCL is not designed in a way that plays nicely with
Tensorflow and Eigen. Here we provide a new implementation for
MaxPool3DGrad which does not rely on atomics, and so avoids any such
problems.

* [OpenCL] Provides atomic free MaxPoolGrad

Atomic support in SYCL is not designed in a way that plays nicely with
Tensorflow and Eigen. Here we provide a new implementation for
MaxPoolGrad which does not rely on atomics, and so avoids any such
problems.

* [OpenCL] Changes expected NaN behaviour in test

The new SYCL kernels provide the same behaviour as the CUDA and cuDNN
kernels when an input tensor only contains NaN and the test needs to
reflect this.

As NaN cannot be compared to any other float value, it makes little
sense to decide which of the NaNs is the maximum, and so which NaN
should have the error propagated to it.

* [OpenCL] Removes unneeded SYCL atomic functions

* [OpenCL] Tidies SYCL MaxPoolGrad kernels

Some tidying up, and also adds a local accumulator value which is written to memory at the end of the kernel, to decrease the number of memory writes in the kernel.
@JoyChopra1298 commented Feb 24, 2018

I am working on a CentOS 6 cluster which uses the Lustre filesystem. I am unable to make Bazel work on it, since it can't use file locking. Refer to this issue. So would it be possible for TensorFlow to support other build tools?

Edit: Error: unexpected result from F_SETLK: Function not implemented. Also refer to the hyperlink above.

@yliu120 commented Feb 24, 2018

@JoyChopra1298
Up in this thread, lots of people have built Bazel and TF on CentOS 6, so I am sure it can be built. Since you didn't paste any error message, I am not sure what your problem is. But if Bazel can't work with Lustre, you can move Bazel's output_user_root to /tmp/bazel; tmpfs is usually a locally mounted filesystem on a single node.
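
For reference, --output_user_root is a Bazel startup option, so it must come before the command verb (bazel --output_user_root=... build ...). A minimal wrapper sketch (assumes /tmp is node-local storage; the wrapper is only written and syntax-checked here, since running it requires bazel on PATH):

```shell
# Wrapper keeping Bazel's working tree off Lustre, which lacks the POSIX
# F_SETLK file locking that Bazel relies on.
cat > /tmp/bazel-local <<'EOF'
#!/bin/sh
exec bazel --output_user_root=/tmp/bazel "$@"
EOF
chmod +x /tmp/bazel-local
sh -n /tmp/bazel-local && echo "wrapper OK"
```

Usage would then be, e.g., /tmp/bazel-local build //tensorflow/tools/pip_package:build_pip_package.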

@JoyChopra1298 commented Feb 24, 2018

@yliu120 Thank you, using Bazel's output_user_root option worked.

@owenyoung75 commented Jul 15, 2018

I have a similar problem here, specifically at step 2 for TF.
My cluster on campus uses environment modules with Red Hat, which has glibc 2.12.
I successfully installed Bazel 0.15.0, but when I tried to move forward and bazel build TF, I got a long log, part of which appears as:

/home2/my_name/.cache/bazel/_bazel_my_name/b9c3b9594c932d1e804df44467c1c0d2/external/boringssl/BUILD:115:1: C++ compilation of rule '@boringssl//:crypto' failed (Exit 1)
external/boringssl/linux-x86_64/crypto/fipsmodule/rsaz-avx2.S: Assembler messages:
external/boringssl/linux-x86_64/crypto/fipsmodule/rsaz-avx2.S:37: Error: suffix or operands invalid for `vpxor'
external/boringssl/linux-x86_64/crypto/fipsmodule/rsaz-avx2.S:80: Error: no such instruction: `vpbroadcastq .Land_mask(%rip),%ymm15'
external/boringssl/linux-x86_64/crypto/fipsmodule/rsaz-avx2.S:91: Error: suffix or operands invalid for `vpaddq'
external/boringssl/linux-x86_64/crypto/fipsmodule/rsaz-avx2.S:92: Error: no such instruction: `vpbroadcastq 0-128(%rsi),%ymm10'
external/boringssl/linux-x86_64/crypto/fipsmodule/rsaz-avx2.S:93: Error: suffix or operands invalid for `vpaddq'
external/boringssl/linux-x86_64/crypto/fipsmodule/rsaz-avx2.S:95: Error: suffix or operands invalid for `vpaddq'
external/boringssl/linux-x86_64/crypto/fipsmodule/rsaz-avx2.S:97: Error: suffix or operands invalid for `vpaddq'
external/boringssl/linux-x86_64/crypto/fipsmodule/rsaz-avx2.S:99: Error: suffix or operands invalid for `vpaddq'
external/boringssl/linux-x86_64/crypto/fipsmodule/rsaz-avx2.S:101: Error: suffix or operands invalid for `vpaddq'
external/boringssl/linux-x86_64/crypto/fipsmodule/rsaz-avx2.S:103: Error: suffix or operands invalid for `vpaddq'
external/boringssl/linux-x86_64/crypto/fipsmodule/rsaz-avx2.S:105: Error: suffix or operands invalid for `vpaddq'
external/boringssl/linux-x86_64/crypto/fipsmodule/rsaz-avx2.S:107: Error: suffix or operands invalid for `vpxor'
external/boringssl/linux-x86_64/crypto/fipsmodule/rsaz-avx2.S:110: Error: suffix or operands invalid for `vpmuludq'
external/boringssl/linux-x86_64/crypto/fipsmodule/rsaz-avx2.S:111: Error: no such instruction: `vpbroadcastq 32-128(%rsi),%ymm11'
...

And when I used --verbose_failures to monitor the build process, I obtained the output captured in error_records.txt (attached).

Can anyone help with this issue?

@jw447 commented Feb 15, 2019

@owenyoung75 Did you solve this problem? I'm facing similar situation.

tensorflow-copybara pushed a commit that referenced this issue Aug 31, 2019
- more highlighting: numbers, elemental types inside shaped types
- add some more keywords

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes #110

COPYBARA_INTEGRATE_REVIEW=tensorflow/mlir#110 from bondhugula:vim 029777db0ecb95bfc6453c0869af1c233d84d521
PiperOrigin-RevId: 266487768
xinan-jiang pushed a commit to xinan-jiang/tensorflow that referenced this issue Oct 4, 2019