
A non-experimental model is displayed in the output box. #1746

Closed
vzhuqin opened this issue Aug 30, 2021 · 7 comments
Labels
Needs Repro, Priority:1 (Work that is critical for the release, but we could probably ship without), Reported by: Test, Stale

Comments

@vzhuqin

vzhuqin commented Aug 30, 2021

System Information (please complete the following information):

  • ML.Net Model Builder: 16.7.3.2142702 (Main Branch)
  • Windows OS: Windows-10-Enterprise-21H1
  • Microsoft Visual Studio Enterprise 2019: 16.11.2

Describe the bug

  • On which step of the process did you run into an issue: Experiment Results in Output box
  • Clear description of the problem: A non-experimental model is displayed in the output box.

To Reproduce
Steps to reproduce the behavior:

  1. Select Create a new project from the Visual Studio 2019 start window;
  2. Choose the C# Console App (.NET Core) project template targeting .NET 5.0;
  3. Add Model Builder by right-clicking on the project;
  4. Select the Data classification scenario and complete training;
  5. Observe that a model that is not part of the experiment results is displayed in the Output box.

Expected behavior
The non-experimental model should probably not be displayed.

Screenshots
If applicable, add screenshots to help explain your problem.
NetCore: (screenshot attached)
Net472: (screenshot attached)

Additional context
Add any other context about the problem here.

@beccamc
Contributor

beccamc commented Aug 30, 2021

So the problem here is that the final model is being reported late? These final models should be printed in the list instead of shown later in the output.

@beccamc beccamc added Pretriaged Priority:1 Work that is critical for the release, but we could probably ship without Priority:2 Work that is important, but not critical for the release and removed Priority:1 Work that is critical for the release, but we could probably ship without labels Aug 30, 2021
@vzhuqin
Author

vzhuqin commented Sep 1, 2021

It seems not. I don't see the trainer for this score in the experiment list.
(screenshot attached)

@beccamc
Contributor

beccamc commented Sep 2, 2021

Per Bri, remove the number column (first column) in "Top x models explored".

@beccamc beccamc added Priority:1 Work that is critical for the release, but we could probably ship without and removed Pretriaged Priority:2 Work that is important, but not critical for the release labels Sep 2, 2021
@github-actions github-actions bot added the Stale label Oct 3, 2021
@vzhuqin
Author

vzhuqin commented Nov 10, 2021

A similar issue occurs with the mlnet CLI 16.9.2, in the recommendation scenario:
(screenshot attached)

@luisquintanilla
Contributor

@v-Hailishi can you please try to repro using the latest version?

@v-Hailishi

@luisquintanilla The bug does not repro on the latest main branch 16.13.10.2247001.
(screenshot attached)

@luisquintanilla
Contributor

Thanks for confirming. Closing this issue.
