Ok, I've cleaned it up a bit now. Removed the …. Also applied the changes to CUDA tensors. From my side this seems done for now. One idea I have that could improve it further: allow coloring the separators.
And now I've added a tutorial section explaining the pretty printing. Now I'm actually done.
Ah, and of course we immediately run into regressions of float printing between Nim version 1.0 and current devel. I'll put the float printing tests behind a Nim version check.
Rebased onto current master to resolve a merge conflict about the slicing tutorial. Otherwise this would introduce breaking changes to tensor printing (and be ugly).
Thanks, that's awesome!
(will write in a few min)
edit:
Well, a few minutes quickly turned into an hour, haha.
This finally improves the pretty printing of ND tensors. Instead of special casing each dimension, this is a generic approach that should scale to N dimensions. I've only tested up to 5D (see tests), but that works fine. At even higher dimensions the algorithm should still work correctly, but the layout of the separator lines might not look perfect.
The idea is simple:
In addition, we add separators between the vertical / horizontal splits based on the current rank. On the outside of each printed block, the current index along that axis is printed.
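The recursion described above might be sketched roughly as follows. This is an illustrative Python sketch of the idea only, not the actual Nim implementation; the function name `format_nd`, the label format, and the exact separator choice are all made up for illustration.

```python
# Hypothetical sketch of the generic ND approach: one recursion over the
# dimensions instead of a special case per rank. Between sub-blocks we
# emit a separator whose strength grows with the remaining rank, and
# each sub-block is labelled on the outside with its index along the
# current axis. `t` is a nested list whose leaves are already strings.
def format_nd(t, rank):
    if rank == 1:
        # Innermost axis: just one row of (pre-aligned) fields.
        return " ".join(t)
    # More blank lines between blocks the higher the rank of the split.
    sep = "\n" * (rank - 1)
    blocks = []
    for i, sub in enumerate(t):
        body = format_nd(sub, rank - 1)
        # The index along the current axis is printed outside the block.
        blocks.append(f"[axis index {i}]\n{body}")
    return sep.join(blocks)
```

For a 2x2 input of strings, `format_nd([["1", "2"], ["3", "4"]], 2)` produces two labelled rows separated by a newline; at rank 3 and above the blank-line separators between blocks grow accordingly.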
The required alignment of the fields is performed by an initial string conversion of all elements, picking the longest string as a reference. If one stores a novel in a `Tensor[string]` it'll thus be ugly. :P This leads to perfect alignment independent of the scale of the numbers stored. If very large and very small numbers appear, it leads to a lot of whitespace between the small numbers. I preferred that to aligning only sub-tensors.

There remains one question: the Haskell implementation referenced by @mratsim in #5 and #500 prints the number of the axis on the outside instead of the index along that axis. I personally prefer the approach I implemented, as it allows one to read off the exact index rather easily even in large outputs, without manual counting. It does require one to think about which number corresponds to which axis, though.
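The global alignment step can be sketched in a few lines. Again a hedged Python illustration under assumptions, not the Nim code; the helper name `aligned_strings` is hypothetical.

```python
# Sketch of the alignment idea: convert every element to a string
# first, take the longest one as the reference width, and pad all
# fields to that width. A single extreme value therefore widens
# every field, which is the whitespace trade-off mentioned above.
def aligned_strings(elements):
    strs = [str(x) for x in elements]
    width = max(len(s) for s in strs)  # longest string is the reference
    return [s.rjust(width) for s in strs]

# aligned_strings([3, 1415, 9, 265358])
# → ['     3', '  1415', '     9', '265358']
```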
A note on performance: pretty printing a tensor is slow. I did not (nor did the existing code) make any attempt at ensuring all the string operations are particularly fast!
An example:
edit2:
As the CI is still failing, maybe it's a good idea to make another PR to fix it first and then rebase.
Aside from that, I should probably add a pretty printing `Tensor[float]` example as well.
edit3:
Ah, another thing I forgot to mention: if we consider merging this, should I remove the existing `disp3d` and `disp4d` procedures?