
Remove more uses of DimensionedTensorType #23060

Closed · wants to merge 1 commit

Conversation

Krovatkin (Contributor) commented:

No description provided.

@pytorchbot added labels on Jul 19, 2019: oncall: jit (add this issue/PR to JIT oncall triage queue), module: internals (related to internal abstractions in c10 and ATen), module: onnx (related to torch.onnx), module: pybind (related to our Python bindings / interactions with other Python libraries)
@Krovatkin changed the title from "[WIP] Remove more uses of DimensionedTensorType" to "Remove more uses of DimensionedTensorType" on Jul 19, 2019
@Krovatkin requested a review from zdevito on Jul 23, 2019
@facebook-github-bot (Contributor) left a comment:

@Krovatkin has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@zdevito (Contributor) left a comment:

Looks good. Thanks for cleaning these up.

    -  auto tensor_type = input->type()->cast<DimensionedTensorType>();
       // As of now, we do the decomposition for batchnorm/layernorm on GPU device only
    -  if (!tensor_type || tensor_type->device().is_cpu()) {
    +  if (!input->type()->isSubclass(TypeKind::TensorType)) {
@zdevito commented on this line:

The canonical way to do this is: !input->type()->isSubtype(TensorType::get())
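A minimal sketch of that suggestion (the helper name is illustrative, not from the PR, and it uses the `isSubtypeOf` spelling from the JIT type API, which the comment above abbreviates); the same pattern applies at the two call sites flagged below:

```cpp
#include <ATen/core/jit_type.h>  // c10::TensorType
#include <torch/csrc/jit/ir.h>   // torch::jit::Value (2019-era path)

// Hypothetical helper mirroring the reviewer's suggestion: query the
// subtype relation against the most general tensor type rather than
// testing for one concrete TypeKind. This accepts any tensor-typed
// value, whichever TensorType subclass it happens to carry.
bool isNonTensor(torch::jit::Value* input) {
  return !input->type()->isSubtypeOf(c10::TensorType::get());
}
```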

    @@ -182,13 +182,16 @@ struct GraphFuser {
       }

       bool isFusableDevice(Value *v) {
    -    auto tensor_type = v->type()->cast<DimensionedTensorType>();
    -    if (!tensor_type) {
    +    if (!v->type()->isSubclass(TypeKind::TensorType)) {
@zdevito commented on this line:

Same here.

    @@ -29,8 +29,10 @@ Node* InsertCastForCond(Value* cond_val, Graph* graph, Node* consumer_node) {

       bool IsCondCastRequired(Value* cond_val) {
         const auto& type = cond_val->type();
    -    if (type->isSubclass(TypeKind::DimensionedTensorType)) {
    -      return type->expect<DimensionedTensorType>()->scalarType() != c10::kBool;
    +    if (type->isSubclass(TypeKind::TensorType)) {
@zdevito commented on this line:

and here

    @@ -35,6 +35,13 @@
     if sys.version_info[0] > 2:
         import pathlib

    +def _get_type_attr(type, aname):
@zdevito commented on this line:

I'd rather have the 'dim' and scalarType methods handle returning None rather than throw/catch an exception. It makes things hard to debug (catch throw now catches spurious things), and it can mask a real exception that is not the one that is expected.
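A minimal sketch of the style being asked for (the type and fields below are illustrative, not PyTorch's): report a missing property through `c10::optional`, which pybind11's optional caster surfaces to Python as `None`, so a caller such as `_get_type_attr` never needs a try/except.

```cpp
#include <c10/core/ScalarType.h>  // c10::ScalarType
#include <c10/util/Optional.h>    // c10::optional, c10::nullopt

// Hypothetical type whose accessors return c10::nullopt (None on the
// Python side) when the rank or dtype has not been recorded, instead
// of throwing an exception the caller has to catch.
struct ExampleTensorType {
  c10::optional<int64_t> dim() const { return dim_; }
  c10::optional<c10::ScalarType> scalarType() const { return dtype_; }

  c10::optional<int64_t> dim_;           // nullopt when rank is unknown
  c10::optional<c10::ScalarType> dtype_; // nullopt when dtype is unknown
};
```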

Commits pushed to the branch during review:

- fix a bug in an assertion
- remove extra new lines
- more changes to remove DimensionedTensorType
- insert a newline
- assert -> torch_check
- only use `ProfiledTensorCreate` on tensors and insert checks elsewhere
- add a missing import
- fixing a call to _get_type_attr
- pass a type instead of a value into _get_type_attr
- don't rely on a tensor type to be dimensioned
- address Zach's feedback
@apaszke (Contributor) left a comment:

Is there any reason to even keep TensorType instead of using ProfiledTensorType with everything varying as the default?

@Krovatkin (Contributor, Author) replied:

> Is there any reason to even keep TensorType instead of using ProfiledTensorType with everything varying as the default?

I think that's the plan, eventually :-)

@facebook-github-bot (Contributor) left a comment:

@Krovatkin has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zdevito pushed a commit to zdevito/ATen that referenced this pull request on Aug 2, 2019:
Summary: Pull Request resolved: pytorch/pytorch#23060

Differential Revision: D16460391

Pulled By: Krovatkin

fbshipit-source-id: b50ee87d22ad18b8cbfff719b199ea876ef172f1
@facebook-github-bot commented:

@Krovatkin merged this pull request in 3d15ee1.
