Selecting the right metrics for a complete dashboard #425
Regarding Rascal metrics:
I hope this is helpful; please hit me with any other questions.
I cannot completely comment on this aspect as I do not use the dashboard. Maybe @mhow2 could have more recommendations. In any case, regarding NLP tools that I know have a dashboard and that you have not selected, I could recommend:
I might be wrong, but I think the number of available visualizations is greater than the number of available dashboards. However, that doesn't mean that all the visualizations would generate nice graphs. And, as well, I do not know the difference between the dashboards and the visualizations well. For example, I'm not sure there is a severity dashboard, which for projects using Bugzilla could generate nice graphs with the different severity levels, e.g.:
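To make the severity idea concrete, here is a toy sketch of the per-severity aggregation such a dashboard would plot. The severity labels follow Bugzilla's defaults, and the input list is made up for illustration; the real dashboard would aggregate this server-side:

```python
from collections import Counter

# Toy aggregation of issues per severity level -- the kind of breakdown
# a severity dashboard would chart. Input data is invented.
bug_severities = ["minor", "major", "minor", "critical", "normal", "minor"]
counts = Counter(bug_severities)
print(counts.most_common())
# → [('minor', 3), ('major', 1), ('critical', 1), ('normal', 1)]
```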
Hi @tdegueul, all,
Well, no: the dashboard you link to contains only 3 projects (the 2 defaults + ee4jjersey). Maybe it's because of the nature of the analysed projects, and we would have more data with different projects? In that case we'll just drop this particular dashboard if it's not useful for our use case. Thanks for the tip.
OK, so I'll add the numberOfChanges* metrics and remove the changedMethods one. Thanks! :-) Thanks a lot for the quick and useful response Thomas, have a wonderful evening/day! :-)
Hi @creat89, all,
Great! Thanks! :-)
Hmm, I'm not sure I understand that. Of course not all metrics will produce a good viz; we have to select them according to that criterion (among others), I guess. AFAIU, if we think these metrics are worth it we could add them to the dashboard (and the analysis). I can add them to the list of computed metrics, @valeriocos WDYT? Can you add these to the dashboards? Thank you all, have a good day! :-)
@valeriocos Following up on one of your (very useful) emails ;-) Just to make sure that we're on the same page and I'm not missing something: do you mean that if we add all these metrics we have a complete dashboard? (on top of the code-related metrics discussed with Thomas, of course)

METRIC PROVIDER IDs: numberOfBreakingChanges.historic

BTW, they do not look like usual metric IDs. How is it that we have different IDs for the metrics? I don't get it. I guess I can map them to metrics in the instance/metrics.json, right?
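The mapping question above could be sanity-checked with a small script. Note that the `{"id": ...}` schema assumed here for instance/metrics.json is purely an illustration, not the actual SCAVA file format:

```python
import json

# Hypothetical cross-check: which of the dashboard's metric provider IDs
# have no entry in the (assumed) instance/metrics.json registry?
def missing_metrics(dashboard_ids, metrics_json_text):
    """Return the dashboard IDs absent from the registry, sorted."""
    registered = {entry["id"] for entry in json.loads(metrics_json_text)}
    return sorted(set(dashboard_ids) - registered)

# Invented registry content for the sake of the example.
registry = json.dumps([
    {"id": "trans.rascal.api.changedMethods"},
    {"id": "numberOfBreakingChanges.historic"},
])
print(missing_metrics(["numberOfBreakingChanges.historic",
                       "severity.historic"], registry))
# → ['severity.historic']
```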
Sorry for the late reply! I missed this issue.

from #425 (comment):
trans.rascal.api.changedMethods is not shown in the dashboard.

from #425 (comment):
The metrics will be shown in the debug dashboard if they are exposed via visualization endpoints. If you want to add one or more visualizations to a specific dashboard, we can work together to add them. Do you have a deadline for this?

from #425 (comment):
Creating a dashboard (or adding visualizations to it) with metrics exposed via visualization endpoints is pretty easy. @borisbaldassari if you agree, we could have a call to explain how to create a dashboard/visualization and save it to Kibana (@tdegueul and I had a call of this kind in the past). Then I can take care of exporting it and including it in the docker-compose, WDYT?

from #425 (comment):
I took these names from the project page within the SCAVA UI. For instance, if you access http://crossminer.bitergia.net:5601/#/project/configure/technologyepf and click on metrics (within the grey box at the very end of the page), you will see the list of metrics.
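The export step mentioned above could be scripted. A minimal sketch, assuming the Kibana 6.x dashboard import/export API (`/api/kibana/dashboards/export`); newer Kibana versions use the saved-objects export API instead, so check the version your instance runs:

```python
from urllib.parse import urlencode

# Sketch only: the endpoint path is an assumption based on Kibana 6.x.
def kibana_export_url(base_url, dashboard_id):
    """Build the URL that returns a dashboard (and its visualizations)
    as importable JSON."""
    query = urlencode({"dashboard": dashboard_id})
    return f"{base_url.rstrip('/')}/api/kibana/dashboards/export?{query}"

url = kibana_export_url("http://crossminer.bitergia.net:5601", "Severity")
print(url)
# → http://crossminer.bitergia.net:5601/api/kibana/dashboards/export?dashboard=Severity
```

The JSON fetched from that URL could then be committed to the repository and re-imported at container start-up, which is presumably how the dashboard would end up in the docker-compose setup.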
Hiho @valeriocos
Removed it from the list.
The deadline is the one we all have, so I guess it is set to our next meeting. I'd be interested to learn how to do that, however, so I don't bother you each and every time.. ;-)
Same as before: yes, I'm interested. We'll need to find some time for that; the next slot for me is next Friday. Would that fit your schedule?
Done, I've updated the list of metrics in the task creation script with the full list of fully-qualified IDs. Thanks to @mhow2 for pointing me in the right direction too! ;-)
Hi @borisbaldassari, next Friday is perfect (and even the days before, in case your schedule changes), thanks.
We consider that the current set of metrics, as described in https://github.com/crossminer/scava-scripts/blob/master/scava_create_task.pl, will fill up the dashboard with nice graphics. We can close the issue, thank you all!
We have an ongoing discussion through emails to select the correct set of metrics for the analysis. As of today most graphics show empty data; we want to select only the metrics required to have a nice and complete dashboard while preserving the performance of the analysis.
The reference dashboard is the one maintained by Konstantinos:
http://83.212.75.210/
The list of metrics currently selected (for the next round of analyses) can be found at
https://github.com/crossminer/scava-scripts/blob/master/scava_create_task.pl
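For readers who don't want to open the Perl script, the selection boils down to passing an explicit list of metric provider IDs when creating an analysis task. A minimal Python sketch, where the field names, the project name, and the two example IDs are assumptions for illustration, not the actual scava_create_task.pl interface:

```python
# Illustrative only: metric IDs and task fields are invented; the real
# list of selected metrics lives in scava_create_task.pl.
SELECTED_METRICS = [
    "trans.rascal.api.numberOfChanges",
    "numberOfBreakingChanges.historic",
]

def build_task(project, metrics):
    """Assemble a minimal analysis-task description."""
    return {"project": project, "metricProviders": sorted(metrics)}

task = build_task("technologyepf", SELECTED_METRICS)
print(task["metricProviders"])
# → ['numberOfBreakingChanges.historic', 'trans.rascal.api.numberOfChanges']
```

Keeping the metric list explicit (rather than "all providers") is what avoids the empty graphics described above while keeping analysis time bounded.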