System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 14.04
- TensorFlow installed from (source or binary): Source
- TensorFlow version (use command below): ('v1.0.0-1783-g4c3bb1a', '1.0.0')
- Bazel version (if compiling from source): 0.4.5
- CUDA/cuDNN version: -
- GPU model and memory: -
- Exact command to reproduce: -
Problem description
As part of my Google Summer of Code project, I am trying to build TensorFlow with a Polly-enabled LLVM. To do this, I have written my own BUILD file that builds TensorFlow against a custom LLVM repository with Polly checked out as well. I have managed to get a clean build and am now looking to incorporate Polly's passes into XLA's optimization pipeline.
In XLA, the LLVM module passes are registered here.
Polly registers its passes in LLVM through the following steps:
```cpp
static llvm::RegisterStandardPasses RegisterPollyOptimizerEarly(
    llvm::PassManagerBuilder::EP_ModuleOptimizerEarly,
    registerPollyEarlyAsPossiblePasses);
```
Corresponding file - <polly-src>/lib/Support/RegisterPasses.cpp.
```cpp
polly::initializePollyPasses(Registry);
```
Corresponding file - <polly-src>/lib/Polly.cpp
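For reference, here is a standalone sketch of how I understand these two steps fitting together with a PassManagerBuilder-driven module pipeline. This is not XLA code; it assumes Polly's public polly/RegisterPasses.h header and LLVM's legacy pass manager, and the function name is only for illustration:

```cpp
// Illustration only: how Polly's registration is expected to interact with a
// PassManagerBuilder-based module pipeline (similar to what the XLA CPU
// backend appears to use).
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/IR/Module.h"
#include "llvm/PassRegistry.h"
#include "llvm/Transforms/IPO/PassManagerBuilder.h"
#include "polly/RegisterPasses.h"  // declares polly::initializePollyPasses

void RunModulePassesWithPolly(llvm::Module* module) {
  // Step from <polly-src>/lib/Polly.cpp: make Polly's passes known to
  // LLVM's global pass registry.
  llvm::PassRegistry* registry = llvm::PassRegistry::getPassRegistry();
  polly::initializePollyPasses(*registry);

  // The static llvm::RegisterStandardPasses object from RegisterPasses.cpp
  // installs a global EP_ModuleOptimizerEarly callback at program start.
  // For that to happen in a Bazel build, the object file containing it must
  // actually be linked in (its symbols are never referenced directly, so it
  // may need alwayslink = 1 or an equivalent linker option to survive).
  llvm::PassManagerBuilder builder;
  builder.OptLevel = 3;  // EP_ModuleOptimizerEarly extensions only fire when OptLevel > 0

  // populateModulePassManager() fires the registered extension points, so
  // this is where Polly's passes would be inserted into the pipeline
  // (provided Polly is actually enabled, e.g. via -polly).
  llvm::legacy::PassManager module_passes;
  builder.populateModulePassManager(module_passes);
  module_passes.run(*module);
}
```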
I have built the object files for both of these files, but I want to check whether Polly is actually being invoked in the pipeline, so my questions are:
- Are these steps enough to use Polly in the Bazel build of TensorFlow?
- How can I pass configuration flags to LLVM from within TensorFlow to check whether Polly is being used?
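For concreteness, here is a rough sketch of what I mean by the second question. It assumes Polly's registration callback is gated on the -polly option, as it appears to be in RegisterPasses.cpp; the flag names may differ across Polly/LLVM versions, and -debug-only requires an assertions-enabled LLVM build:

```cpp
// Sketch: forcing LLVM/Polly command-line options from inside a host program
// (e.g. before the XLA pipeline runs) instead of on a command line.
// "-polly" is the option that Polly's registration callback checks before
// adding any passes; "-debug-only=polly-scops" prints SCoP information but
// only works with an assertions-enabled LLVM build.
#include "llvm/Support/CommandLine.h"

void SetPollyFlags() {
  const char* argv[] = {
      "dummy-program-name",
      "-polly",
      "-debug-only=polly-scops",
  };
  llvm::cl::ParseCommandLineOptions(
      static_cast<int>(sizeof(argv) / sizeof(argv[0])), argv,
      "Polly flags for the XLA pipeline\n");
}
```

Calling llvm::cl::ParseCommandLineOptions like this is only a stand-in; if TensorFlow/XLA already has a mechanism for threading such backend options through, that would answer my question.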
As a reference, please find my BUILD file here.