I think we can improve the documentation for the metrics used in calls to model.compile and how they relate to what one sees in a fit callback. I've recently been trying them out and ran into a few challenges:
- Knowing which metrics are available: the docs for compile only mention 'accuracy'. A list of the other supported metrics would be nice.
- Mapping these metrics to what is passed to a callback like onBatchEnd. So far I've noticed that 'loss' is always present, even if no metrics are explicitly passed to model.compile. I also noticed that when I pass 'accuracy' to model.compile, the actual key in the callback logs is 'acc' (and 'val_acc' and 'val_loss' appear as well when I have a validation set). It would be nice to document which metric keys show up in which callbacks.
- Understanding how https://js.tensorflow.org/api/latest/#metrics.binaryAccuracy fits into the picture: can these functions be passed to model.compile, etc.?
There could be a number of ways to approach this: writing a guide, updating the API docs, adding new section header information to the relevant parts (or all three). I'm happy to help with this if we know the general direction we want to take.
cc @caisq @ericdnielsen @bileschi