
Conversation

mfeliz-cruise (Contributor)

Description

Repeated aten::index ops in a convertible block would cause layer name conflicts. Add node_info to the layer names to avoid this.

Error Code 4: Internal Error (Repeated layer name: compute_dim0_0 (layers must have distinct names))
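A minimal sketch (not the actual Torch-TensorRT code) of the failure mode and the fix: if every `aten::index` converter registers its layers under the same base name such as `compute_dim0_0`, two occurrences of the op in one block collide. Appending per-node info (e.g. the node's debug string) to the base name makes each layer name unique. The helper names below are hypothetical.

```python
def make_layer_name(base, node_info):
    # Hypothetical helper: append the node's debug string so that two
    # aten::index nodes with the same base name get distinct layer names.
    return f"{base} [{node_info}]"

registered = set()

def register_layer(name):
    # Mimics TensorRT's requirement that layer names be distinct.
    if name in registered:
        raise ValueError(
            f"Repeated layer name: {name} (layers must have distinct names)")
    registered.add(name)

# Two aten::index nodes in the same convertible block:
nodes = [
    "%out1 : Tensor = aten::index(%x, %idx1)",
    "%out2 : Tensor = aten::index(%x, %idx2)",
]

for n in nodes:
    # With node_info appended, both registrations succeed; with the bare
    # base name "compute_dim0_0" the second call would raise.
    register_layer(make_layer_name("compute_dim0_0", n))
```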

Fixes # (issue)

Type of change

Please delete options that are not relevant and/or add your own.

  • Bug fix (non-breaking change which fixes an issue)

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that the relevant reviewers are notified

@github-actions github-actions bot added component: conversion Issues re: Conversion stage component: converters Issues re: Specific op converters component: core Issues re: The core compiler component: tests Issues re: Tests labels Sep 24, 2022
@narendasan (Collaborator) left a comment

LGTM

@fatemebafghi

Hello there. I am getting the same repeated-layer-name error when casting from Int32 to Float32, as shown below.
[screenshot of the error]

I think it may be related to this bug. I would appreciate it if you could help me fix it.

@narendasan narendasan merged commit dd88afc into pytorch:master Oct 4, 2022