dongruoping commented Dec 16, 2020

Proposed change(s)

Fix the model inference issue with Barracuda 1.2.1 by getting the batch size implicitly from mu and avoiding runtime broadcasting.
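
For reference, a minimal sketch of the idea (not the actual diff; `GaussianHead` and its layer names are hypothetical): instead of leaving a `(1, act_size)` sigma tensor in the exported graph for Barracuda to broadcast against mu at inference time, tie sigma to mu so the exported tensor already carries the batch dimension.

```python
import torch
from torch import nn


class GaussianHead(nn.Module):
    """Hypothetical distribution head, used only to illustrate the export fix."""

    def __init__(self, hidden_size: int, act_size: int):
        super().__init__()
        self.mu = nn.Linear(hidden_size, act_size)
        # log_sigma is a learned parameter with no batch dimension.
        self.log_sigma = nn.Parameter(torch.zeros(1, act_size))

    def forward(self, hidden: torch.Tensor):
        mu = self.mu(hidden)
        # Before: exporting torch.exp(self.log_sigma) directly leaves a
        # (1, act_size) constant that Barracuda has to broadcast against mu
        # at runtime, which fails with Barracuda 1.2.1.
        # After: derive the batch size implicitly from mu so the exported
        # sigma already has shape (batch, act_size).
        log_sigma = mu * 0 + self.log_sigma
        return mu, torch.exp(log_sigma)
```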

Also added memory_size_vector to model serialization to suppress the torch ONNX export warning about converting a tensor to a Python constant.
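
A minimal sketch of the serialization side, assuming a simplified actor (`ExportableActor` and its layers are hypothetical, not the actual ML-Agents classes): the memory size is registered once as a constant, non-trainable tensor and returned as a model output, so tracing never has to turn a tensor into a Python constant inside forward().

```python
import torch
from torch import nn


class ExportableActor(nn.Module):
    """Hypothetical actor exposing its memory size as an ONNX output."""

    def __init__(self, obs_size: int, act_size: int, memory_size: int):
        super().__init__()
        self.policy = nn.Linear(obs_size, act_size)
        # Register the memory size as a constant tensor at construction time,
        # rather than building it (and converting a traced tensor to a Python
        # number) inside forward() during torch.onnx.export tracing.
        self.memory_size_vector = nn.Parameter(
            torch.tensor([float(memory_size)]), requires_grad=False
        )

    def forward(self, obs: torch.Tensor):
        action = self.policy(obs)
        return action, self.memory_size_vector
```

Exporting such a module with torch.onnx.export then emits the memory size as a plain constant output without raising the tracer warning.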

Useful links (Github issues, JIRA tickets, ML-Agents forum threads etc.)

Types of change(s)

  • Bug fix
  • New feature
  • Code refactor
  • Breaking change
  • Documentation update
  • Other (please describe)

Checklist

  • Added tests that prove my fix is effective or that my feature works
  • Updated the changelog (if applicable)
  • Updated the documentation (if applicable)
  • Updated the migration guide (if applicable)

Other comments

Co-authored-by: Ervin T. <ervin@unity3d.com>
dongruoping merged commit fbd4bd7 into master on Dec 17, 2020
The delete-merged-branch bot deleted the develop-fix-export branch on December 17, 2020 at 01:36