The reason to separate embedding from the other functions is that this function uses TensorFlow's internal code to dump data, rather than `SummaryWriter` (I fiddled around with it, but in vain). Also, the embedding folder depth can currently only be 1. This is a documented bug in an ongoing tutorial. XD. Another concern is pytorch/pytorch#2230, which breaks the training procedure if you are using multiple GPUs, so I think the port is still premature. Finally, the official tensorboard seems to be unifying its data format as tensorSummary; if that happens, we can solve the above problems at once and move it back into the `SummaryWriter` class.
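The depth-1 limitation mentioned above can be illustrated with a small path check. This is a hypothetical helper for illustration only, not the library's actual API (the real code simply fails on deeper paths):

```python
import os

def check_embedding_dir(save_path):
    """Reject nested save paths, mirroring the documented depth-1 limit.

    Hypothetical helper: illustrates the constraint, not the real check.
    """
    # Normalize the path and count its components.
    depth = len(os.path.normpath(save_path).split(os.sep))
    if depth > 1:
        raise ValueError(
            "embedding folder depth can only be 1, got: %s" % save_path)
    return save_path

check_embedding_dir("embedding_run1")           # OK: depth 1
try:
    check_embedding_dir("runs/exp1/embedding")  # nested: rejected
except ValueError as e:
    print(e)
```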
Some advantages of implementing it in `SummaryWriter`: all data for one run could live under `runs/XXX` (currently the embedding must be written into a separate directory). Also, in your README.txt, `add_embedding` is listed under the `SummaryWriter` section. If the original tensorboard doesn't have such a mechanism, we could wrap `SummaryWriter` and `add_embedding` into brand-new unified APIs.