
Visualizations on W&B #3404

Open
AyushExel opened this issue Aug 25, 2021 · 10 comments
Labels
enhancement Improvements or good new features

Comments

@AyushExel

🚀 Feature

Allow visualization of training progress using media panels and tables.

Motivation & Examples

This is based on this issue.
I'm an engineer at W&B, and I've been working on object detection tasks. I regularly use some of the following visualizations on W&B dashboards. Would these features be useful to other detection users? I'd like to know what the maintainers think.

Bounding box & Segmentation maps debugger

W&B supports interactive media panels, where you can track how training progresses by adjusting the steps, confidence scores, and classes of predictions in real-time.

Try it live here
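
For context, here's a minimal sketch of what logging an image with predicted boxes to a media panel can look like with the `wandb` client; the project name, image path, class labels, and box values below are placeholders:

```python
import wandb

# Sketch: log one image with predicted boxes so it appears in an
# interactive media panel (placeholder values throughout).
run = wandb.init(project="detectron2-wandb-demo")  # hypothetical project name

class_labels = {0: "person", 1: "car"}

img = wandb.Image(
    "example.jpg",  # path or numpy array of the frame to visualize
    boxes={
        "predictions": {
            "box_data": [
                {
                    "position": {"minX": 34, "minY": 50, "maxX": 200, "maxY": 240},
                    "domain": "pixel",  # pixel coordinates (omit for 0-1 relative coords)
                    "class_id": 0,
                    "scores": {"confidence": 0.92},
                }
            ],
            "class_labels": class_labels,
        }
    },
)
wandb.log({"predictions": img})
run.finish()
```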

The Media panel also supports debugging segmentation maps.
[Image: semantic segmentation media panel]
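
Logging a segmentation map follows the same pattern, with a per-pixel class-id mask instead of boxes. A sketch, assuming a run has already been started with `wandb.init()` and using a placeholder mask:

```python
import numpy as np
import wandb

# Sketch: log a segmentation overlay; `mask` is a placeholder 2-D array
# of per-pixel class ids with the same height/width as the image.
class_labels = {0: "background", 1: "road", 2: "car"}
mask = np.zeros((480, 640), dtype=np.uint8)

img = wandb.Image(
    "example.jpg",
    masks={
        "predictions": {
            "mask_data": mask,
            "class_labels": class_labels,
        }
    },
)
wandb.log({"segmentation": img})
```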

Dataset visualization and versioning

With W&B tables, you can visualize, query, and filter your datasets in your browser.

Quickly compare results across different training epochs, datasets, hyperparameter choices, model architectures, etc. For example, take a look at this comparison of two models on the same test images.

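A minimal sketch of building such a table with the `wandb` client; the column names and the sample row are placeholders:

```python
import wandb

# Sketch: a W&B Table for browsing evaluation samples in the browser.
run = wandb.init(project="detectron2-wandb-demo")  # hypothetical project name

table = wandb.Table(columns=["image", "ground_truth", "prediction", "score"])
for path, gt, pred, score in [("example.jpg", "car", "car", 0.91)]:  # placeholder rows
    table.add_data(wandb.Image(path), gt, pred, score)

wandb.log({"eval_samples": table})
run.finish()
```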

Model versioning and DAGs

We can have the user set

  • the model logging period
  • and the desired metric they'd like to optimize.

Based on that, we'll log the model after every model logging period and give it the alias `best` if the current model performs best on the desired metric.
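
Roughly, the versioning logic could look like the sketch below, using W&B Artifacts with aliases; the logging period, metric, and checkpoint path are placeholders rather than a proposed API:

```python
import wandb

# Sketch: every `log_period` iterations, save the current checkpoint as
# an artifact and tag it "best" when the tracked metric improves.
run = wandb.init(project="detectron2-wandb-demo")  # hypothetical project name
best_metric = float("-inf")

def maybe_log_model(iteration, metric, ckpt_path="model_final.pth", log_period=5000):
    global best_metric
    if iteration % log_period != 0:
        return
    artifact = wandb.Artifact("detector", type="model")
    artifact.add_file(ckpt_path)
    aliases = ["latest"]
    if metric > best_metric:
        best_metric = metric
        aliases.append("best")
    run.log_artifact(artifact, aliases=aliases)
```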

AyushExel added the enhancement label Aug 25, 2021
@AyushExel
Author

@ppwwyyxx what do you think about this? Would this be a useful add-on?

@nbardy

nbardy commented Sep 22, 2021

Would be useful for me! I ended up adding it myself to debug some models.

@BenSpex

BenSpex commented Oct 24, 2021

Hi @nbardy would you mind sharing the code on how you added this?

@nbardy

nbardy commented Oct 25, 2021

@BenSpex I'm unfortunately busy, focused on synthesis at the moment, but I'm looking at a detectron2 integration next quarter. I'm happy to upstream the logging once I iron it out.

Their Fully Connected forums often have examples of community integrations.

@BenSpex

BenSpex commented Oct 25, 2021

@nbardy awesome, thanks for getting back to me. Let me know when you need someone to test the integration.

@AyushExel
Author

@BenSpex responded to you in the W&B forum :)

@ppwwyyxx
Contributor

Hi all, the visualizations look pretty awesome and would be a great addition to detectron2!
However, we're not familiar with W&B and are uncertain how much work is needed to support it.
If you'd like to contribute this to detectron2, could you provide an initial design (or even code, if available) of what changes would need to be added? That would give us a better idea of how to integrate it.

@AyushExel
Author

@ppwwyyxx thanks. What would be the best way to share the code? Should I open a WIP draft PR to facilitate discussion? I'm happy to do it via other mediums if you prefer.

Regarding changes - I think the only thing needed to log these visualizations efficiently is for some predictions to be stored in event storage after the EvalHook is called.
In my WIP, I didn't change anything in the existing detectron2 codebase; I just created a new temporary EvalHookv2 to manually run inference and save predictions in event storage. This adds overhead, so it would be nice if a subset of predictions were stored at the point where they are already computed during evaluation. What do you think? Happy to discuss in a dedicated thread.

@ppwwyyxx
Contributor

ppwwyyxx commented Oct 29, 2021

Yeah a draft PR could be a good starting point.

If what is needed is just to access the predictions, it seems a better approach is to implement a new evaluator (subclass of DatasetEvaluator https://detectron2.readthedocs.io/en/latest/modules/evaluation.html#detectron2.evaluation.DatasetEvaluator) that will work with the existing evaluation logic. All evaluators have access to the inputs & predictions during inference, so there is no need to store them anywhere or recompute them. This is also how we do some visualizations internally.
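
For illustration, a minimal sketch of such an evaluator; the class name, the image cap, and the elided W&B conversion are hypothetical, not existing detectron2 or W&B API:

```python
import wandb
from detectron2.evaluation import DatasetEvaluator

class WandbVisEvaluator(DatasetEvaluator):
    """Sketch: log a few inputs/predictions to W&B during evaluation."""

    def __init__(self, max_images=32):
        self._max_images = max_images
        self._logged = 0

    def reset(self):
        self._logged = 0

    def process(self, inputs, outputs):
        # inputs: list of dataset dicts; outputs: list of dicts holding "instances"
        for inp, out in zip(inputs, outputs):
            if self._logged >= self._max_images:
                return
            instances = out["instances"].to("cpu")
            # ... convert instances.pred_boxes / pred_classes / scores into the
            # wandb.Image box_data format and wandb.log() them here ...
            self._logged += 1

    def evaluate(self):
        # Nothing to aggregate; this evaluator only logs visualizations.
        return {}
```

An evaluator like this could be combined with the existing COCO evaluator via `DatasetEvaluators([...])`, so the visualizations are produced in the same inference pass as the metrics.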

@AyushExel
Author

@ppwwyyxx I've made a draft PR with a design proposal. Sorry for the delay, I was waiting on some UI changes.
