Some improvement ideas:
Do not pass `node` and `edge` objects directly. Instead, expose methods such as `createNode()`, `createEdge()`, `updateNode()`, and so on.
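A minimal sketch of what such an API could look like (the class and method names here are assumptions for illustration, not this library's actual API):

```typescript
// Hypothetical graph API sketch: the graph owns its nodes and edges,
// so every mutation goes through a method the library can observe.
interface NodeSpec {
  x: number;
  y: number;
  label?: string;
}

class Graph {
  private nodes = new Map<number, NodeSpec>();
  private edges = new Map<number, [number, number]>();
  private nextId = 0;

  createNode(spec: NodeSpec): number {
    const id = this.nextId++;
    this.nodes.set(id, { ...spec });
    // Because the library sees this call, it could also insert the node
    // into a spatial index and pre-render its Path2D right here.
    return id;
  }

  createEdge(from: number, to: number): number {
    const id = this.nextId++;
    this.edges.set(id, [from, to]);
    return id;
  }

  updateNode(id: number, spec: Partial<NodeSpec>): void {
    const node = this.nodes.get(id);
    if (node) Object.assign(node, spec);
  }

  getNode(id: number): NodeSpec | undefined {
    return this.nodes.get(id);
  }
}
```

Since every change flows through these methods, the library always knows exactly what changed and when, which is what makes the optimizations below possible.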
Building on the first point, this enables many optimizations and a much simpler API. We can keep nodes and edges in a QuadTree: whenever a node or edge is created or updated, we insert or update it in the tree, which greatly speeds up hover detection. We can also pre-render paths with Path2D on those same events, which greatly speeds up drawing. A further benefit is that users no longer need to call anything like requestDraw or setData.
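A minimal point-quadtree sketch of the hover-detection idea (an illustrative shape, not the real implementation): hit testing only inspects candidates near the cursor instead of scanning every node.

```typescript
type Point = { id: number; x: number; y: number };
type Rect = { x: number; y: number; w: number; h: number };

function contains(r: Rect, x: number, y: number): boolean {
  return x >= r.x && x < r.x + r.w && y >= r.y && y < r.y + r.h;
}
function intersects(a: Rect, b: Rect): boolean {
  return a.x < b.x + b.w && b.x < a.x + a.w && a.y < b.y + b.h && b.y < a.y + a.h;
}

class QuadTree {
  private points: Point[] = [];
  private children: QuadTree[] | null = null;

  constructor(private bounds: Rect, private capacity = 4) {}

  insert(p: Point): boolean {
    if (!contains(this.bounds, p.x, p.y)) return false;
    if (this.points.length < this.capacity && !this.children) {
      this.points.push(p);
      return true;
    }
    if (!this.children) this.subdivide();
    return this.children!.some((c) => c.insert(p));
  }

  // Collect all points inside `range` -- e.g. a small box around the cursor.
  query(range: Rect, out: Point[] = []): Point[] {
    if (!intersects(this.bounds, range)) return out;
    for (const p of this.points) if (contains(range, p.x, p.y)) out.push(p);
    if (this.children) for (const c of this.children) c.query(range, out);
    return out;
  }

  private subdivide(): void {
    const { x, y, w, h } = this.bounds;
    const hw = w / 2, hh = h / 2;
    this.children = [
      new QuadTree({ x, y, w: hw, h: hh }, this.capacity),
      new QuadTree({ x: x + hw, y, w: hw, h: hh }, this.capacity),
      new QuadTree({ x, y: y + hh, w: hw, h: hh }, this.capacity),
      new QuadTree({ x: x + hw, y: y + hh, w: hw, h: hh }, this.capacity),
    ];
  }
}
```

On hover, querying a tiny rectangle around the pointer returns the handful of nearby nodes, and only those need the precise (and slower) Path2D hit test.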
Use two more canvases: one for rendering the background, and one for rendering the node/edge currently being moved. When a node is dragged, only that node and its edges change, so there is no need to re-render everything else.
When scaling up, we don't need to re-render everything; we can just draw the previous canvas, scaled.
When scaling down or panning the view, we can draw the previous canvas with the new transform applied, and then re-render only the regions that were exposed.
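For the pan case, the exposed region is at most two edge strips. A sketch of computing them (a hypothetical helper, not existing API): the previous frame is blitted shifted by `(dx, dy)`, and only the returned strips are re-rendered from scratch.

```typescript
type Rect = { x: number; y: number; w: number; h: number };

// After blitting the old frame shifted by (dx, dy), return the viewport
// rectangles the old frame no longer covers.
function exposedStrips(view: { w: number; h: number }, dx: number, dy: number): Rect[] {
  const strips: Rect[] = [];
  // Horizontal strip exposed by vertical movement.
  if (dy > 0) strips.push({ x: 0, y: 0, w: view.w, h: dy });
  else if (dy < 0) strips.push({ x: 0, y: view.h + dy, w: view.w, h: -dy });
  // Vertical strip exposed by horizontal movement (corner overlap excluded).
  const y0 = Math.max(0, dy);
  const h = view.h - Math.abs(dy);
  if (dx > 0) strips.push({ x: 0, y: y0, w: dx, h });
  else if (dx < 0) strips.push({ x: view.w + dx, y: y0, w: -dx, h });
  return strips;
}
```

For a small pan the strips cover only a sliver of the viewport, so combined with the QuadTree the re-render touches very few shapes.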
Let the user draw the node/edge shape and content themselves. The previous shape-definition format is inflexible and not user-friendly; it needs a rewrite.
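One possible direction is a render-callback API (all names here are assumptions): instead of a declarative shape format, the user registers a function that receives the context and the node and draws whatever it wants. A tiny structural subset of `CanvasRenderingContext2D` is used below so the sketch stays self-contained; real code would pass the actual 2D context.

```typescript
interface CanvasLike {
  beginPath(): void;
  arc(x: number, y: number, r: number, a0: number, a1: number): void;
  fill(): void;
  fillText(text: string, x: number, y: number): void;
}

type DrawFn<T> = (ctx: CanvasLike, node: T) => void;

class NodeRenderer<T extends { x: number; y: number }> {
  constructor(private draw: DrawFn<T>) {}
  render(ctx: CanvasLike, node: T): void {
    this.draw(ctx, node);
  }
}

// User-supplied drawing: a labelled circle, entirely under user control.
const circleRenderer = new NodeRenderer<{ x: number; y: number; label: string }>(
  (ctx, n) => {
    ctx.beginPath();
    ctx.arc(n.x, n.y, 12, 0, Math.PI * 2);
    ctx.fill();
    ctx.fillText(n.label, n.x, n.y);
  }
);
```

This keeps the library out of the business of predicting every shape a user might want, while still letting it cache the result (e.g. as a Path2D or offscreen bitmap) between redraws.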
Experiment with different ways to find the intersection point between an edge and a node's outline. One idea is binary search combined with isPointInPath. This might be a bit slow, but it is very easy to implement, and if all the points above are done, the number of hit tests should be minimal, so it should be acceptable.
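A sketch of the binary-search idea. `isPointInPath` is swapped for a plain geometric predicate here so the example is self-contained and testable; in the browser the same loop would call `ctx.isPointInPath(path, p.x, p.y)` instead.

```typescript
type Pt = { x: number; y: number };

// Find where the segment from `outside` to `inside` crosses the shape
// boundary, given only a containment test. Each iteration halves the
// interval, so ~20 iterations gives sub-pixel precision.
function boundaryPoint(
  outside: Pt,
  inside: Pt,
  containsPt: (p: Pt) => boolean,
  iterations = 20
): Pt {
  let lo = outside;
  let hi = inside;
  for (let i = 0; i < iterations; i++) {
    const mid = { x: (lo.x + hi.x) / 2, y: (lo.y + hi.y) / 2 };
    if (containsPt(mid)) hi = mid;
    else lo = mid;
  }
  return { x: (lo.x + hi.x) / 2, y: (lo.y + hi.y) / 2 };
}

// Example predicate: a circle of radius 10 centered at the origin.
const inCircle = (p: Pt) => p.x * p.x + p.y * p.y <= 100;
```

The appeal is that this works for any shape the user draws, because it only needs a yes/no containment test, never an analytic description of the outline.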
Support touchscreen input. One idea is to ship no built-in input bindings at all (such as Shift+Click to create a node) and let the user implement them.
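A sketch of what "no built-in bindings" could look like (event shape and class names are assumptions): the library only emits normalized pointer events, with mouse and touch arriving in the same shape, and the user decides what Shift+Click or a long-press should mean.

```typescript
type GraphPointerEvent = {
  x: number;
  y: number;
  shift: boolean;
  pointerType: "mouse" | "touch" | "pen";
};

class InputHub {
  private handlers: Array<(e: GraphPointerEvent) => void> = [];

  on(handler: (e: GraphPointerEvent) => void): void {
    this.handlers.push(handler);
  }

  // The library would call this from its pointerdown/touchstart listeners.
  emit(e: GraphPointerEvent): void {
    for (const h of this.handlers) h(e);
  }
}

// User-defined binding: Shift+tap creates a node at the pointer position.
const hub = new InputHub();
const createdAt: Array<[number, number]> = [];
hub.on((e) => {
  if (e.shift) createdAt.push([e.x, e.y]);
});
```

Since the library never interprets gestures itself, touch support reduces to forwarding Pointer Events, and every binding decision stays with the user.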
Known Bugs:
Detection of whether a node or edge is inside the view is buggy and needs a rewrite.
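A common source of bugs here is using a containment test where an overlap test is needed: an item that merely straddles the viewport edge must still be drawn. A minimal axis-aligned bounding-box sketch (types assumed, not the library's actual code):

```typescript
type Box = { x: number; y: number; w: number; h: number };

// True when any part of the item's bounding box overlaps the view,
// including items that only partially cross a viewport edge.
function isInView(item: Box, view: Box): boolean {
  return (
    item.x < view.x + view.w &&
    view.x < item.x + item.w &&
    item.y < view.y + view.h &&
    view.y < item.y + item.h
  );
}
```

For edges, the same check can be applied to the segment's (or curve's) bounding box before any more precise test.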