
[pytorch] Run resnet #3202

Closed
wants to merge 1 commit

@jackm321 (Contributor) commented Jul 2, 2019

Summary:
Runs PyTorch ResNet/ResNeXt models on Glow, except for AdaptiveAvgPool and FC.

Documentation:
doxygen

Test Plan:
Added operator tests.

python setup.py test

To run the example:
python setup.py develop --run_cmake --release
then
python examples/resnet_example.py

@@ -407,9 +583,17 @@ void PyTorchModelLoader::load() {
auto *save = f_->createSave("save", outputNodeValue);
outputPlaceholders_.push_back(save->getPlaceholder());
}

static int32_t dagNum = 0;
f_->dumpDAG(glow::strFormat("GlowFunction%d.dot", dagNum++));

@jackm321 (Author, Contributor), Jul 2, 2019

will remove this

@jackm321 jackm321 force-pushed the jackm321:pytorch_resnet branch 2 times, most recently from c902a19 to 6ba2521 Jul 2, 2019

@@ -46,12 +46,19 @@ class PyTorchModelLoader {
/// Mapping from PyTorch Values to Glow NodeValues created during loading.
std::unordered_map<const torch::jit::Value *, glow::NodeValue> valueMap_;

@putivsky (Contributor), Jul 8, 2019

What if the same input is used more than once? Does that mean the Value* will be the same? Maybe Value::uniqueName is a better key?

@jackm321 (Author, Contributor), Jul 9, 2019

> What if the same input used more than one time

Unless I misunderstand your question, it should be fine. glow::NodeValue can also be used multiple times.
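To illustrate with a minimal standalone sketch (the types below are placeholders standing in for torch::jit::Value and glow::NodeValue, not the real classes): keying the map by pointer means every use of the same input resolves to the same entry, and the mapped value can be handed to any number of consumers.

#include <cassert>
#include <string>
#include <unordered_map>

// Stand-ins for torch::jit::Value and glow::NodeValue.
struct JitValue {};
using NodeValue = std::string;

int main() {
  std::unordered_map<const JitValue *, NodeValue> valueMap;

  JitValue input;
  valueMap[&input] = "nodeValueForInput";

  // The same input used twice: both uses hit the same key and get the
  // same NodeValue, which is fine to reuse for multiple consumers.
  const NodeValue &firstUse = valueMap.at(&input);
  const NodeValue &secondUse = valueMap.at(&input);
  assert(&firstUse == &secondUse);
}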

void PyTorchModelLoader::loadNode(const torch::jit::Node *node) {
  auto kind = node->kind();
  assert(nodeLoaderMapping_.count(kind));
  return nodeLoaderMapping_.at(kind)(node);

@putivsky (Contributor), Jul 8, 2019

Double search:

  auto it = nodeLoaderMapping_.find(kind);
  assert(it != nodeLoaderMapping_.end());
  return it->second(node);

@jackm321 (Author, Contributor), Jul 9, 2019

The double search only occurs in an assert (which compiles out in release builds); I'd prefer to keep it this way if you don't mind.
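For reference, a small standalone sketch of why this is a debug-only cost (generic types here, not the actual PyTorchModelLoader members): the assert, including the count() lookup inside it, is removed entirely when NDEBUG is defined, leaving a single at() lookup in Release builds.

#include <cassert>
#include <functional>
#include <unordered_map>

using Node = int; // stand-in for const torch::jit::Node *
using LoaderFn = std::function<void(Node)>;

std::unordered_map<int, LoaderFn> nodeLoaderMapping;

void loadNode(int kind, Node node) {
  // Debug builds: count() + at(), i.e. two lookups.
  // Release builds (NDEBUG): the assert disappears, leaving only at().
  assert(nodeLoaderMapping.count(kind));
  nodeLoaderMapping.at(kind)(node);
}

int main() {
  nodeLoaderMapping[0] = [](Node) { /* pretend to load the op */ };
  loadNode(0, 42);
}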

void PyTorchModelLoader::loadBatchNorm(const torch::jit::Node *ptNode) {
  auto inputs = ptNode->inputs();
  auto outputs = ptNode->outputs();
  assert(inputs.size() == 9);

@rdzhabarov (Contributor), Jul 9, 2019

DCHECK everywhere?

@jackm321 (Author, Contributor), Jul 9, 2019

I will go through in a follow-up and change things to DCHECK or llvm::Error now that things are more settled.

@jackm321 jackm321 force-pushed the jackm321:pytorch_resnet branch 2 times, most recently from b44dc73 to b129d32 Jul 9, 2019

@zrphercule (Member) left a comment

overall LGTM! Great!

@@ -167,6 +170,7 @@ void PyTorchModelLoader::loadAdd(const torch::jit::Node *ptNode) {
  const auto scalarHandle = getGlowConstantHandle<int32_t>(inputs[2]);
  assert(scalarHandle.size() == 1);
  assert(scalarHandle.raw(0) == 1);
  (void)scalarHandle;

@zrphercule (Member), Jul 10, 2019

Wondering what this is for?

@jackm321 (Author, Contributor), Jul 10, 2019

scalarHandle is only used in assert(), so it would look unused when compiled in Release; the (void) cast keeps the compiler from complaining about that.
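A minimal sketch of the pattern (the variable and helper below are made up for illustration, not taken from this PR): the value is only consumed by assert(), which vanishes under NDEBUG, so the (void) cast is what keeps the unused-variable warning quiet in Release builds.

#include <cassert>

int expectedScalar() { return 1; }

int main() {
  const int scalar = expectedScalar();
  assert(scalar == 1); // only use of `scalar`; removed when NDEBUG is defined
  (void)scalar;        // cast to void so Release builds don't warn about an unused variable
}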

@@ -298,6 +445,9 @@ void PyTorchModelLoader::loadConstant(const torch::jit::Node *ptNode) {
    t.getHandle<int32_t>().raw(0) = to32Bit(val);
  } else if (iVal.isIntList()) {
    const auto vals = iVal.toIntListRef();
    if (vals.empty()) {
      return;

@zrphercule (Member), Jul 10, 2019

Shouldn't we have some message indicating that an empty list is being used here?

}
return false;
static std::once_flag flag;
std::call_once(flag, [&]() {

@zrphercule (Member), Jul 10, 2019

this chunk of fancy code has already been put in "Rui's note: how to write graceful C++"

@jackm321 (Author, Contributor), Jul 10, 2019

> graceful C++

If it exists, I don't think this is it 😛
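For context, the idiom under discussion sketched in isolation (the lambda body is a made-up stand-in for the actual registration work): std::call_once with a static std::once_flag runs the setup exactly once, even if the enclosing function is called repeatedly or from several threads.

#include <iostream>
#include <mutex>

void ensureRegisteredOnce() {
  static std::once_flag flag;
  std::call_once(flag, []() {
    // One-time setup; placeholder for the real registration code.
    std::cout << "registering\n";
  });
}

int main() {
  ensureRegisteredOnce();
  ensureRegisteredOnce(); // second call is a no-op
}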

@jfix71 (Contributor) left a comment

Awesome work!

@@ -209,6 +214,20 @@ void PyTorchModelLoader::loadConvolution(const torch::jit::Node *ptNode) {
  assert(inputs.size() == 12);
  assert(outputs.size() == 1);

  // Node inputs:

@jfix71 (Contributor), Jul 10, 2019

nit: Maybe create an enum for this? Since you're already listing it here anyway, it wouldn't add lines of code, and it would replace uses of explicit indices with readable names and reduce the likelihood of bugs. E.g.

enum ConvInputs : unsigned { 
  Input = 0,
  Weights = 1,
  ...
};

@zrphercule (Member), Jul 10, 2019

Then we will have a different enum for each operator; I wonder if it's worth it...

@jfix71 (Contributor), Jul 10, 2019

If we're already going to list out the values in a comment for each operator anyway then IMO there's no real downside, since it'll be the same number of lines of code. And each enum would be contained to the scope of the operator's load function (e.g. loadConvolution()).

@jackm321 (Author, Contributor), Jul 10, 2019

Interesting idea. I had to use an old-school "enum class" (a class with an anonymous enum in it) so that the values don't leak into the function and I don't need to static_cast everything. Kinda weird, but I think it's still better.
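Roughly what that looks like, as a hedged sketch (the input names below are illustrative, not the exact convolution input list): wrapping an anonymous enum in a class keeps the names scoped the way an enum class would, but the enumerators still convert implicitly to an integer, so they can index inputs[] without static_cast.

#include <cassert>
#include <vector>

void loadConvolution(const std::vector<int> &inputs) {
  // "Old school enum class": the names are scoped (ConvInputs::weights) and
  // don't leak into the function, yet they still convert implicitly to an
  // index, so no static_cast is needed.
  struct ConvInputs {
    enum { input = 0, weights = 1, bias = 2 };
  };

  assert(inputs.size() == 3);
  const int bias = inputs[ConvInputs::bias]; // no static_cast needed
  (void)bias;
}

int main() {
  loadConvolution({10, 20, 30});
}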

@jackm321 jackm321 force-pushed the jackm321:pytorch_resnet branch from b129d32 to 3e27c22 Jul 10, 2019

@facebook-github-bot left a comment

@jackm321 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot commented Jul 11, 2019

@jackm321 merged this pull request in db6804c.
