Improve RBTree with static leaf #543
Comments
Are you sure that this is not something the compiler already optimizes away? I have a hunch that it could, and if it doesn't, that it should :-)
It doesn't seem so. According to my measurements it saves 24 bytes per node of that form (each `#leaf` costs 12 bytes and it doesn't get optimized away).
This other, more efficient representation that is mentioned in the comments of RBTree.mo would benefit just the same:
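One plausible shape for such a representation (a hedged sketch; the exact type spelled out in the RBTree.mo comment may differ in field order and names) folds the color into the variant tag, so no separate `Color` word is stored per node:

```motoko
// Hypothetical sketch: color merged into the variant tag.
// Field order and names are illustrative, not the actual RBTree.mo comment.
type Tree<K, V> = {
  #red : (Tree<K, V>, K, V, Tree<K, V>);
  #black : (Tree<K, V>, K, V, Tree<K, V>);
  #leaf;
};
```

With a shared static `#leaf`, only the `#red`/`#black` nodes would still allocate.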
As Joachim says, singleton typed objects (such as
I am running

with output
Bug is confirmed:

```
$ cat tree.mo
type Tree = { #leaf; #node : (Tree, Tree) };
func leaf() : Tree = #leaf;
func node(l : Tree, r : Tree) : Tree = #node (l, r);
ignore node(leaf(), leaf());
```

I get allocations for

```wasm
(func $leaf (type 16) (param $clos i32) (result i32)
  (local $heap_object i32)
  i32.const 3
  call 142
  local.tee $heap_object
  i32.const 15
  i32.store offset=1
  local.get $heap_object
  i32.const 1202717598
  i32.store offset=5
  local.get $heap_object
  i32.const 0
  i32.store offset=9
  local.get $heap_object)
```
Can you open an issue in the motoko repo? I don't recall a reason why we don't have it already, probably just oversight. Maybe do
I am working on a patch. See dfinity/motoko#3878.
A compiler patch will fix the same for the

After the patch, what will be the most compact representation for a node in RBTree? Ignoring the K-V pair, the current type comes to 32 bytes per `#node`; the proposed type (as per the comment in RBTree.mo) to 28 bytes per `#red`/`#black`; but the most compact representation would be 20 bytes.
Variants with constant payloads should be constant and end up in the static heap. This will speed up all tree-like data structures that have unit-payload leaves (not to speak of allocation wins!). See dfinity/motoko-base#543 for motivation. For the program below

```motoko
import P = "mo:⛔️";
type Tree = { #leaf; #node : (Tree, Tree) };
func leaf() : (Tree, (Nat, Nat)) = (#leaf, (5, 42));
P.debugPrint(debug_show leaf());
```

the function `leaf()` now looks like

```wasm
(func $leaf (type 5) (param $clos i32)
  i32.const 2097267
  i32.const 2097279
  global.set 18
  global.set 17)
```

which is as expected. It also runs happily:

```
$ wasmtime tree.wasm
(#leaf, (5, 42))
```
The current implementation seems to be creating a lot of individual leaves on the heap that are structurally all the same (`#leaf`). There could be one static `let leaf_ = #leaf;` in the module, and nodes could be created as `#node (c, leaf_, k, v, leaf_)` instead of `#node (c, #leaf, k, v, #leaf)`, saving a lot of memory.