Drop the const modifier for CompilationContext (#3032)

Summary:
*Drop the const modifier for CompilationContext, because I need to add a member variable that can be modified later.*

I propose to add a member variable (LogContext) to the CompilationContext class in the next PR.
The LogContext will be able to:
- Keep track of the log scope (together with another to-be-implemented class, ScopedLogBlock)
- Store all the log strings
- Provide a method to dump all the log strings

The coupling between LogContext and CompilationContext will make sure that (a rough sketch follows this list):
- Logging will not be affected by multiple compilations that happen in an interleaved fashion.
- Each LogContext has its own private stack, instead of one static stack for the entire run of Glow, so logging is thread safe.
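
None of this logging machinery lands in this commit; purely as an illustration of the intended coupling, a minimal sketch could look like the following (LogContext, ScopedLogBlock, and every member shown here are hypothetical until the follow-up PR):

```
// Hypothetical sketch only: LogContext and ScopedLogBlock are proposed for a
// follow-up PR; names and members are illustrative, not the actual Glow API.
#include <string>
#include <vector>

class LogContext {
public:
  // Each LogContext owns its own scope stack, so compilations that interleave
  // (each holding its own CompilationContext) cannot corrupt each other's logs.
  void pushLogScope(const std::string &name) { scopes_.push_back(name); }
  void popLogScope() { scopes_.pop_back(); }

  // Store a log string under the currently open scope.
  void log(const std::string &msg) {
    logs_.push_back(currentScope() + ": " + msg);
  }

  // Dump all stored log strings as one blob.
  std::string dumpLog() const {
    std::string out;
    for (const auto &line : logs_) {
      out += line + "\n";
    }
    return out;
  }

private:
  std::string currentScope() const {
    return scopes_.empty() ? std::string("<top>") : scopes_.back();
  }
  std::vector<std::string> scopes_;
  std::vector<std::string> logs_;
};

// RAII helper: pushes a scope on construction, pops it on destruction.
class ScopedLogBlock {
public:
  ScopedLogBlock(LogContext &ctx, const std::string &name) : ctx_(ctx) {
    ctx_.pushLogScope(name);
  }
  ~ScopedLogBlock() { ctx_.popLogScope(); }

private:
  LogContext &ctx_;
};

// The non-const CompilationContext makes room for a mutable member such as:
//   struct CompilationContext { /* ... */ LogContext logContext; };
```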
Pull Request resolved: #3032

Differential Revision: D15609310

Pulled By: ZchiPitt

fbshipit-source-id: 7a322895730b05945ba94ac8752528b60989d8e6
ZchiPitt authored and facebook-github-bot committed Jun 3, 2019
1 parent 00e009d commit 2f773c980e283796de87d510c710290cecbe4a96
@@ -52,7 +52,7 @@ are two pure virtual functions all backends must implement:

Additionally, there are virtual functions that backends can override:

- `virtual bool transformPostLowering(Function *F, const CompilationContext &cctx) const;`
- `virtual bool transformPostLowering(Function *F, CompilationContext &cctx) const;`

- Allow the backend to transform the `Function *F` after [node
lowering](https://github.com/pytorch/glow/blob/master/docs/IR.md#node-lowering)
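
As a rough illustration of how a backend uses this hook with the new signature, a minimal override might look like the sketch below (the backend class and its early-out are made up; only the non-const `CompilationContext &` parameter reflects this change, and the other required Backend overrides are elided):

```
// Hypothetical backend override; only the non-const CompilationContext &
// parameter mirrors this commit, the rest is illustrative.
class MyBackend : public Backend {
public:
  bool transformPostLowering(Function *F, CompilationContext &cctx) const override {
    // Example of consulting the context: skip the rewrite in training mode.
    if (cctx.compMode == CompilationMode::Train) {
      return false;
    }
    bool changed = false;
    // ... walk F->getNodes() and replace nodes with backend-specific ones,
    //     setting changed = true whenever the graph is modified ...
    return changed;
  }
  // (other required Backend overrides elided)
};
```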
@@ -87,7 +87,7 @@ different modes.

```
llvm::Error glow::optimizeFunction(Function *F, const Backend &B,
const CompilationContext &cctx);
CompilationContext &cctx);
```

An error is returned if something goes wrong during the optimization pipeline,
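
With the const dropped, callers need a mutable CompilationContext in scope; a minimal call site might look like the sketch below (assuming `F` and a backend `B` already exist, and that the compilation mode is set the same way it is elsewhere in this diff):

```
// Sketch of invoking the updated entry point; F and B are assumed to be set up.
CompilationContext cctx;
cctx.compMode = CompilationMode::Infer; // choose the optimization pipeline mode
if (llvm::Error err = glow::optimizeFunction(F, B, cctx)) {
  // Something went wrong in the pipeline (e.g., an unsupported node remained).
  llvm::errs() << "optimizeFunction failed\n";
  llvm::consumeError(std::move(err));
}
```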
@@ -154,7 +154,8 @@ int main(int argc, char **argv) {
TypeRef inputType = module->uniqueType(ElemKind::FloatTy, inputShape);
input = loadResnet50Model(inputType, module.get(), 0);
phList = module->getPlaceholders();
EXIT_ON_ERR(hostManager->addNetwork(std::move(module), CompilationContext(),
CompilationContext cctx;
EXIT_ON_ERR(hostManager->addNetwork(std::move(module), cctx,
/*saturateHost*/ true));

LOG(INFO) << "Loading files from " << inputDirectory;
@@ -86,7 +86,7 @@ class Backend {
/// cleaning up after itself.
/// \returns True if the graph was modified.
virtual bool transformPostLowering(Function *F,
const CompilationContext &cctx) const {
CompilationContext &cctx) const {
return false;
}

@@ -105,7 +105,7 @@ class ExecutionEngine final {
/// then the function will be added to the collection of previously compiled
/// functions otherwise any previously compiled functions will be removed
/// first. This method should be invoked before the run method.
void compile(Function *F, const CompilationContext &cctx,
void compile(Function *F, CompilationContext &cctx,
bool clearOtherFunctions = true);

/// A convenience function for the most common type of compile.
@@ -118,8 +118,8 @@ class ExecutionEngine final {
/// Make \p networkName the function name for
/// the entry point of the network and prepend all generated
/// files with this name.
void save(Function *F, const CompilationContext &cctx,
llvm::StringRef outputDir, llvm::StringRef networkName);
void save(Function *F, CompilationContext &cctx, llvm::StringRef outputDir,
llvm::StringRef networkName);

/// Context aware single execution of a function. If more than one
/// function has been compiled by this ExecutionEngine then a name must be
@@ -34,10 +34,10 @@ class Placeholder;
/// Perform optimizations on the IR representation.
void optimize(IRFunction &M, bool shouldShareBuffers);
/// Perform optimizations on the graph representation.
void optimize(Function *F, const CompilationContext &cctx);
void optimize(Function *F, CompilationContext &cctx);
void optimize(Function *F, CompilationMode mode);
/// Fold nodes that were expressed lowered in the input model.
void fold(Function *F, const CompilationContext &cctx);
void fold(Function *F, CompilationContext &cctx);
void fold(Function *F, CompilationMode mode);

/// Lower the high-level neural network nodes found in \p F into low-level
@@ -76,7 +76,7 @@ std::unique_ptr<IRFunction> generateAndOptimizeIR(Function *F, const Backend &B,
/// \returns success if all nodes in the final resulting optimized Function are
/// supported by \p B; if not, this represents a compiler error.
llvm::Error optimizeFunction(Function *F, const Backend &B,
const CompilationContext &cctx);
CompilationContext &cctx);

} // namespace glow

@@ -244,12 +244,12 @@ class Partitioner {
llvm::Error
createDAGWithoutPartition(BackendKind backendKind,
std::map<BackendKind, BackendInfo> &backendMap,
const CompilationContext &cctx);
CompilationContext &cctx);

/// Decompose each function in a module. Now we support partitioning a module
/// among different type of devices. \p cctx is used during optimization of
/// the Function. \returns whether there was an error encountered.
llvm::Error Partition(const CompilationContext &cctx = CompilationContext());
llvm::Error Partition(CompilationContext &cctx);

/// Get the partitions.
DAGListTy &getPartitionResult() { return partitions_; }
@@ -93,8 +93,7 @@ class HostManager final {
/// optimized based on \p cctx. If \p saturateHost is set to true the
/// HostManager will try to use all available devices on the host.
llvm::Error addNetwork(std::unique_ptr<Module> module,
const CompilationContext &cctx = CompilationContext(),
bool saturateHost = false);
CompilationContext &cctx, bool saturateHost = false);

/// Given \p networkName removes that network from the host. This also
/// removes the network from any backends setup to execute it.
@@ -40,7 +40,7 @@ class CPUBackend : public LLVMBackend {
std::string getBackendName() const override { return "CPU"; }

bool transformPostLowering(Function *F,
const CompilationContext &cctx) const override;
CompilationContext &cctx) const override;

bool isOpSupported(const NodeInfo &NI) const override;

@@ -120,7 +120,7 @@ static Node *optimizeCPUMaxSplat(MaxNode *MN, Function *F) {
}

bool CPUBackend::transformPostLowering(Function *F,
const CompilationContext &) const {
CompilationContext &) const {
bool changed = false;
for (auto &node : F->getNodes()) {
// Try to replace generic convolution with cpu-optimized version.
@@ -1556,8 +1556,8 @@ bool surroundTileWithReshapes(Function *F, TileNode &tile) {

} // namespace

bool HabanaBackend::transformPostLowering(
Function *F, const CompilationContext &cctx) const {
bool HabanaBackend::transformPostLowering(Function *F,
CompilationContext &cctx) const {
bool changed = false;
for (auto &node : F->getNodes()) {
// Separate any Slice nodes into several that only slice in one dimension
@@ -243,7 +243,7 @@ class HabanaBackend final : public Backend {
bool shouldLower(const Node *N) const override;

bool transformPostLowering(Function *F,
const CompilationContext &cctx) const override;
CompilationContext &cctx) const override;

bool shouldShareBuffers() const override { return false; }
/// @}
@@ -202,7 +202,7 @@ class OCLBackend final : public BackendUsingGlowIR {
compile(Function *F, const BackendOptions &opts) const override;

bool transformPostLowering(Function *F,
const CompilationContext &cctx) const override;
CompilationContext &cctx) const override;

bool isOpSupported(const NodeInfo &NI) const override;

@@ -27,7 +27,7 @@ using namespace glow;

/// Perform OpenCL specific post-lowering graph transformation.
bool OCLBackend::transformPostLowering(Function *F,
const CompilationContext &cctx) const {
CompilationContext &cctx) const {
// NCHW transformation is not supported in training mode yet, because of some
// issues with gradient nodes.
if (cctx.compMode == CompilationMode::Train)
@@ -241,7 +241,7 @@ void ExecutionEngine::compile(CompilationMode mode, Function *F,
compile(F, cctx, clearOtherFunctions);
}

void ExecutionEngine::compile(Function *F, const CompilationContext &cctx,
void ExecutionEngine::compile(Function *F, CompilationContext &cctx,
bool clearOtherFunctions) {
llvm::StringRef name = F->getName();

@@ -266,7 +266,7 @@ void ExecutionEngine::compile(Function *F, const CompilationContext &cctx,
insertCompiledFunction(name, std::move(func));
}

void ExecutionEngine::save(Function *F, const CompilationContext &cctx,
void ExecutionEngine::save(Function *F, CompilationContext &cctx,
llvm::StringRef outputDir,
llvm::StringRef networkName) {
EXIT_ON_ERR(::glow::optimizeFunction(F, *backend_, cctx));
@@ -53,7 +53,8 @@ void HostManagerBackendId::runNetwork(const Graph *graph,
}

onnxStatus HostManagerBackendId::addNetwork(std::unique_ptr<Module> module) {
auto err = hostManager_->addNetwork(std::move(module));
CompilationContext cctx;
auto err = hostManager_->addNetwork(std::move(module), cctx);

if (errToBool(std::move(err))) {
return ONNXIFI_STATUS_INTERNAL_ERROR;
@@ -2585,7 +2585,7 @@ static void foldChannelShuffle(Function *F) {
}
}

void glow::fold(Function *F, const CompilationContext &cctx) {
void glow::fold(Function *F, CompilationContext &cctx) {
(void)cctx;
// Get Reshape nodes merged into constants to simplify folding.
optimizeReshape(F);
@@ -2606,7 +2606,7 @@ void glow::fold(Function *F, CompilationMode mode) {
fold(F, cctx);
}

void glow::optimize(Function *F, const CompilationContext &cctx) {
void glow::optimize(Function *F, CompilationContext &cctx) {
// Optimize may be called after backend specific transformations and some
// nodes may have become unused. It is a good idea to remove them, before
// proceeding with any further optimizations.
@@ -2731,7 +2731,7 @@ static llvm::Error checkAllNodesSupported(const Function &F, const Backend &B) {
/// PrecisionConfiguration found in \p cctx. This could include quantization,
/// profiling, and FP16 conversion.
static void transformForPrecisionMode(const Backend &B, Function *F,
const CompilationContext &cctx) {
CompilationContext &cctx) {
const PrecisionConfiguration &precConfig = cctx.precisionConfig;

switch (precConfig.quantMode) {
@@ -2764,7 +2764,7 @@ static void transformForPrecisionMode(const Backend &B, Function *F,
// NOTE: When updating this function, please also update the documentation in
// docs/GraphOptimizationPipeline.md
llvm::Error glow::optimizeFunction(Function *F, const Backend &B,
const CompilationContext &cctx) {
CompilationContext &cctx) {
// Verify the function pre-optimization/lowering.
assert(F->verify() && "Function must be valid");

@@ -754,7 +754,7 @@ void Partitioner::getBackendMap(

llvm::Error Partitioner::createDAGWithoutPartition(
BackendKind backendKind, std::map<BackendKind, BackendInfo> &backendMap,
const CompilationContext &cctx) {
CompilationContext &cctx) {
for (auto F : module_->getFunctions()) {
if (!optimized_) {
auto backend = backendMap[backendKind].backend;
@@ -781,7 +781,7 @@ llvm::Error Partitioner::createDAGWithoutPartition(
return llvm::Error::success();
}

llvm::Error Partitioner::Partition(const CompilationContext &cctx) {
llvm::Error Partitioner::Partition(CompilationContext &cctx) {
// Prepare the mapping between BackendKind and BackendInfo.
std::map<BackendKind, BackendInfo> backendMap;
std::vector<Backend *> backends;
@@ -62,7 +62,7 @@ HostManager::init(std::vector<std::unique_ptr<DeviceConfig>> configs) {
HostManager::~HostManager() { llvm::toString(clearHost()); }

llvm::Error HostManager::addNetwork(std::unique_ptr<Module> module,
const CompilationContext &cctx,
CompilationContext &cctx,
bool saturateHost) {
std::lock_guard<std::mutex> networkLock(networkLock_);
auto functions = module->getFunctions();
@@ -304,7 +304,8 @@ class HostManagerBenchmark : public RuntimeBenchmark<BackendTy> {
}

// Add the module to the HostManager instance.
bool error = errToBool(hostManager_->addNetwork(std::move(mod)));
CompilationContext cctx;
bool error = errToBool(hostManager_->addNetwork(std::move(mod), cctx));
if (error) {
state.SkipWithError("Unable to set up host manager - failed to add "
"module!");
@@ -60,7 +60,8 @@ void addAndRemoveNetwork(HostManager *manager, unsigned int functionNumber) {

// Expect this to be an Error because multiple networks with the same name
// have been added to HostManager
errToBool(manager->addNetwork(std::move(module)));
CompilationContext cctx;
errToBool(manager->addNetwork(std::move(module), cctx));
EXPECT_FALSE(errToBool(
manager->removeNetwork("function" + std::to_string(functionNumber))));
}
@@ -70,7 +71,8 @@ TEST_F(HostManagerTest, newHostManager) { createHostManager(BackendKind::CPU); }
TEST_F(HostManagerTest, addNetwork) {
auto module = setupModule(6);
auto hostManager = createHostManager(BackendKind::CPU);
ASSERT_FALSE(errToBool(hostManager->addNetwork(std::move(module))));
CompilationContext cctx;
ASSERT_FALSE(errToBool(hostManager->addNetwork(std::move(module), cctx)));
}

TEST_F(HostManagerTest, runNetwork) {
@@ -88,7 +90,8 @@ TEST_F(HostManagerTest, runNetwork) {
context->getPlaceholderBindings()->allocate(save->getPlaceholder());

auto hostManager = createHostManager(BackendKind::CPU);
ASSERT_FALSE(errToBool(hostManager->addNetwork(std::move(module))));
CompilationContext cctx;
ASSERT_FALSE(errToBool(hostManager->addNetwork(std::move(module), cctx)));

std::promise<void> runNetwork;
auto ready = runNetwork.get_future();
@@ -152,7 +152,8 @@ TEST_F(PartitionerTest, Basic1) {
{3072, BackendKind::Interpreter},
{3072, BackendKind::Interpreter}};
Partitioner myPartitioner(&mod_, devices, false, true);
auto err = myPartitioner.Partition();
CompilationContext cctx;
auto err = myPartitioner.Partition(cctx);
EXPECT_FALSE(errToBool(std::move(err)));
DAGListTy myList = std::move(myPartitioner.getPartitionResult());
ASSERT_EQ(mod_.getFunctions().size(), 3);
@@ -226,7 +227,8 @@ TEST_F(PartitionerTest, Basic2) {
{2048, BackendKind::Interpreter},
{2048, BackendKind::Interpreter}};
Partitioner myPartitioner(&mod_, devices, /* saturateHost */ true);
auto err = myPartitioner.Partition();
CompilationContext cctx;
auto err = myPartitioner.Partition(cctx);
EXPECT_FALSE(errToBool(std::move(err)));
DAGListTy myList = std::move(myPartitioner.getPartitionResult());
ASSERT_EQ(mod_.getFunctions().size(), 2);
@@ -304,7 +306,8 @@ TEST_F(PartitionerTest, Error1) {

std::vector<DeviceInfo> devices = {{2048}};
Partitioner myPartitioner(&mod_, devices);
auto err = myPartitioner.Partition();
CompilationContext cctx;
auto err = myPartitioner.Partition(cctx);
EXPECT_TRUE(errToBool(std::move(err)));
}

@@ -375,7 +378,8 @@ TEST_F(PartitionerTest, Basic1Roofline) {
{3072, BackendKind::Interpreter, 100, 10, 0.1, 1, 0.05},
{3072, BackendKind::Interpreter, 100, 10, 0.1, 1, 0.05}};
Partitioner myPartitioner(&mod_, devices);
auto err = myPartitioner.Partition();
CompilationContext cctx;
auto err = myPartitioner.Partition(cctx);
EXPECT_FALSE(errToBool(std::move(err)));

DAGListTy myList = std::move(myPartitioner.getPartitionResult());
@@ -423,7 +427,8 @@ TEST_F(PartitionerTest, SelectRepFunc) {

Partitioner myPartitioner(&mod_, {{1000000}, {1000000}, {1000000}});

auto err = myPartitioner.Partition();
CompilationContext cctx;
auto err = myPartitioner.Partition(cctx);
EXPECT_FALSE(errToBool(std::move(err)));
}

@@ -506,7 +511,8 @@ TEST_F(PartitionerTest, SimpleHeterogeneousPartitioning) {
{3072, BackendKind::Interpreter}};
auto partitioner =
Partitioner(&mod_, devices, backends, /* saturateHost */ true);
auto err = partitioner.Partition();
CompilationContext cctx;
auto err = partitioner.Partition(cctx);
EXPECT_FALSE(errToBool(std::move(err)));
DAGListTy myList = std::move(partitioner.getPartitionResult());
ASSERT_EQ(mod_.getFunctions().size(), 2);
@@ -621,7 +621,8 @@ class RecommendationSystemTest : public BackendTest {
std::cout << numDevices << " devices of size " << memSize << "\n";
std::vector<DeviceInfo> devices(numDevices, {memSize, backendKind});
Partitioner myPartitioner(&mod_, devices);
EXIT_ON_ERR(myPartitioner.Partition());
CompilationContext cctx;
EXIT_ON_ERR(myPartitioner.Partition(cctx));

DAGListTy myList = std::move(myPartitioner.getPartitionResult());
std::cout << "Partitions = " << mod_.getFunctions().size() << std::endl;
