
Separate the rendering and widgets #6447

Closed
AlixANNERAUD opened this issue Jun 30, 2024 · 22 comments

@AlixANNERAUD

AlixANNERAUD commented Jun 30, 2024

Introduce the problem

It would be good to be able to get rid of the rendering part when needed (with a #define in lv_conf.h), somewhat like having a "client" side and a "server" side.

Proposal

The client side would only be responsible for setting up and configuring the data structures, and the server side for the rendering and input devices. Currently, LVGL compiled to WASM occupies ~600 KB, which is significant. I think we can reduce this by getting rid of the rendering part.
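The proposed switch could look like a single `lv_conf.h` flag. Note that `LV_USE_RENDERING` is an invented name used only to illustrate the idea; no such option exists in LVGL today:

```c
/* Hypothetical lv_conf.h fragment -- LV_USE_RENDERING is an invented flag,
 * shown only to sketch the proposed client/server split. */

/* "Client" build: widget creation/configuration only (small WASM module) */
#define LV_USE_RENDERING 0

/* "Server" build: full pipeline -- draw_tasks, draw units, SW renderer */
/* #define LV_USE_RENDERING 1 */
```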

@kisvegabor
Member

Hi,

By rendering do you mean e.g. the software renderer which actually draws rectangles, text, etc., or also the upper layer which creates draw_tasks and dispatches them to draw_units?

What would be the use case for this separation?

Who will do the rendering in the end?

@AlixANNERAUD
Author

This can be useful in cases where WASM is used (notably with WAMR on embedded devices) where the client application is sandboxed, preventing direct access to host memory. A naive approach would be to have a buffer per client application (each instance running its own LVGL "rendering engine"), but this is not very memory-efficient.

A better approach would be to compile LVGL as a WASM module without the rendering part (which is likely the heaviest part). The client application would then load this WASM LVGL module, create and set the necessary properties on its objects/widgets, and send references to the parent objects to the host containing the "rendering engine". These references would be attached by the host to one of the screens and then rendered.

Minimizing the size of the WASM part is crucial, since the entire code base is included when building it as a WASM module (similar to a shared library).

@kisvegabor
Member

I see now, but I'm not sure it will work. Or have you already tried mixing widgets created by multiple LVGL instances?

My concern is that (I assume) each instance will create its widgets in its own memory pool. However, it can happen that another LVGL instance deletes/frees those widgets. In that case, free needs to act on the other instance's memory. If you are using LVGL's built-in malloc and free, I believe free will crash, as it assumes that the memory was allocated in the current memory pool (and not in the other instance's memory pool).

cc @liamHowatt

@AlixANNERAUD
Author

AlixANNERAUD commented Jul 9, 2024

Yes, that was also one of the points where I was not sure how to proceed. We would need a kind of lv_set_parent_custom(lv_obj_t* parent, lv_obj_t* obj) which does not remove the object's entry from the old parent (which does not actually exist in the host memory space).
Thus, the whole process would be:

  • An object is created in the WASM application (allocated and initialized): lv_obj_create.
  • The pointer is then passed via FFI to the host and translated to the host memory space.
  • The host then calls lv_set_parent_custom.

And normally this should resolve the issues between the different allocators, with the host owning the parent/screen and the WASM application owning the object in question.
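The attach-without-detach behavior of the proposed function can be sketched with a minimal mock; the types below are simplified stand-ins, not LVGL's real lv_obj_t layout, and all names are hypothetical:

```c
/* Minimal mock of the proposed lv_set_parent_custom(): adopt an object into
 * a new (host) parent without touching the old (guest) parent's child array,
 * which lives in a different memory space/allocator. */
#include <stdlib.h>

#define MAX_CHILDREN 8

typedef struct mock_obj {
    struct mock_obj *parent;
    struct mock_obj *children[MAX_CHILDREN];
    int child_cnt;
} mock_obj_t;

/* Step 1: the guest allocates and initializes the object (lv_obj_create). */
mock_obj_t *mock_obj_create(mock_obj_t *parent) {
    mock_obj_t *obj = calloc(1, sizeof(mock_obj_t));
    obj->parent = parent;
    if (parent) parent->children[parent->child_cnt++] = obj;
    return obj;
}

/* Step 3: the host adopts the (address-translated) guest object. */
void mock_set_parent_no_detach(mock_obj_t *new_parent, mock_obj_t *obj) {
    obj->parent = new_parent;                            /* re-point parent */
    new_parent->children[new_parent->child_cnt++] = obj; /* add to host tree */
    /* Deliberately no removal from the old parent: its child array is owned
     * by the guest allocator and must not be freed/realloc'd by the host. */
}
```

The key difference from a regular reparent is the omitted removal step; the stale entry in the guest parent is tolerated because the host never walks that tree.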

@liamHowatt
Collaborator

Could you please go into a bit more detail about what exactly lv_set_parent_custom would do? Does it prevent lv_free from being called on it? What would the parent parameter be when called in the host? Thanks.

@kisvegabor
Member

Using a custom allocator which uses the host memory could be an option too. But I guess it defeats the purpose of sandboxing.

@AlixANNERAUD
Author

AlixANNERAUD commented Jul 11, 2024

Could you please go into a bit more detail about what exactly lv_set_parent_custom would do? Does it prevent lv_free from being called on it? What would the parent parameter be when called in the host? Thanks.

This function would do the same thing as lv_set_parent:

  1. Remove the reference from the old parent and decrease the size of the old parent's children array.
  2. Add the reference to the new parent and increase the size of the new parent's array.

However, we would not have the first part, because lv_free would not work (it would crash) in the host memory space, since the previous parent is located in the WASM application's memory space. The new parent could be any kind of screen/object, but in the host memory space.

@AlixANNERAUD
Author

AlixANNERAUD commented Jul 11, 2024

Using a custom allocator which uses the host memory could be an option too. But I guess it defeats the purpose of sandboxing.

Yes, used naively, this is not a solution for executing unverified code, since the passed host pointer could be modified at any point by the guest application.
Another approach I had thought of (and this will be the case if this issue is not accepted) was to have a hashmap for each application instance that binds an identifier to a pointer, and a custom allocator that, depending on the instance (I can determine the instance from the current thread), allocates memory in the correct instance and returns the identifier instead of the pointer. Functions would then take identifiers instead of pointers (e.g. lv_set_pos(uint32_t obj, ...)).
However, each call to an LVGL function then requires a binding where the correspondence between pointer and identifier is made, which represents a lot of boilerplate code.
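The per-instance handle table described above can be sketched in a few lines; the names (handle_table_t, handle_register, handle_resolve) are illustrative, not an existing API:

```c
/* Sketch of the handle/identifier indirection: guest code only ever holds
 * opaque uint32_t ids; the host resolves them to real pointers. A real
 * implementation would keep one table per WASM instance. */
#include <stdint.h>
#include <stddef.h>

#define MAX_HANDLES 256

typedef struct {
    void *slots[MAX_HANDLES];
    uint32_t next;            /* last issued id; 0 is reserved as "invalid" */
} handle_table_t;

/* Host side: register a real object, hand back an id for the guest. */
uint32_t handle_register(handle_table_t *t, void *obj) {
    if (t->next + 1 >= MAX_HANDLES) return 0;   /* table full */
    t->slots[++t->next] = obj;
    return t->next;
}

/* Host side: resolve an id coming back over the FFI boundary. */
void *handle_resolve(const handle_table_t *t, uint32_t id) {
    if (id == 0 || id > t->next) return NULL;   /* reject bogus guest ids */
    return t->slots[id];
}
```

Every exported binding then follows the lv_set_pos(uint32_t obj, ...) pattern from the comment: look the id up, reject NULL, and call the real function; that per-function shim is exactly the boilerplate being lamented.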

@xlbao123

If I understand it correctly, do you just need to write a virtual draw driver?

@liamHowatt
Collaborator

Yeah, you could write a draw driver that serializes the draw descriptors and coords, and the host's LVGL deserializes the commands and calls lv_draw_(rect|label|image|etc). Images could be an exception, where a const pointer is sent instead of expensively serializing the data buffers.

In order to get any binary size savings, the serializer would have to be smaller than the rendering code it's replacing.

I think we can reduce this by getting rid of the rendering part.

It would be good to actually measure the compiled code size somehow so this isn't a wasted effort.

Fyi: #6313
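The serializing driver could be as simple as a flat command struct crossing the FFI boundary. The wire format below is invented for illustration (a real one would pack fields explicitly rather than memcpy a struct, for ABI stability across the WASM boundary):

```c
/* Sketch of the serializing "draw driver" idea: the guest packs each draw
 * task into a flat command; the host unpacks it and would dispatch to
 * lv_draw_rect/lv_draw_label/lv_draw_image. Format is hypothetical. */
#include <stdint.h>
#include <string.h>
#include <stddef.h>

typedef enum { DRAW_CMD_RECT = 1, DRAW_CMD_LABEL = 2, DRAW_CMD_IMAGE = 3 } draw_cmd_type_t;

typedef struct {
    uint8_t  type;            /* draw_cmd_type_t */
    int32_t  x1, y1, x2, y2;  /* draw area coords */
    uint32_t color;           /* 0xRRGGBB, used by rect/label */
    uint64_t data_ptr;        /* images: ship the const pointer, not pixels */
} draw_cmd_t;

/* Guest side: serialize one command into the shared byte buffer. */
size_t draw_cmd_write(uint8_t *buf, const draw_cmd_t *cmd) {
    memcpy(buf, cmd, sizeof *cmd);   /* same-process demo only */
    return sizeof *cmd;
}

/* Host side: deserialize; dispatch to the real lv_draw_* is stubbed out. */
size_t draw_cmd_read(const uint8_t *buf, draw_cmd_t *out) {
    memcpy(out, buf, sizeof *out);
    return sizeof *out;
}
```

Batching many commands per FFI call (rather than one call per command) would address the FFI-overhead concern raised later in the thread.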

@kisvegabor
Member

However, we would not have the first part, because lv_free would not work (it would crash) in the host memory space, since the previous parent is located in the WASM application's memory space. The new parent could be any kind of screen/object, but in the host memory space.

How can we handle edge cases like this:

  1. The application creates a container with some buttons. If one of the buttons is pressed, the container is deleted.
  2. The container is attached to a window in the host.
  3. The delete button is pressed; the callback from the application is called to delete the container, but it will delete it only from the application.

As the problem is only the deletion, a solution could be this:

  1. Create the container as above.
  2. Attach it to a parent in the host and also attach a delete_event_callback which will detach it from the host container (just remove the reference).
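The delete-hook idea can be demonstrated with a self-contained mock; the obj_t type and function names below are simplified stand-ins (in real LVGL this would be an LV_EVENT_DELETE handler registered with lv_obj_add_event_cb):

```c
/* Mock of the "attach a delete_event_callback" idea: when the guest deletes
 * its container, a host-registered hook removes the now-dangling reference
 * from the host's child list. */
#include <stdlib.h>

#define MAX_CHILDREN 8

typedef struct obj obj_t;
struct obj {
    obj_t *children[MAX_CHILDREN];
    int child_cnt;
    void (*on_delete)(obj_t *self, void *user_data); /* ~ LV_EVENT_DELETE cb */
    void *user_data;
};

/* Host-side handler: drop the reference from the host parent (user_data). */
void detach_from_host(obj_t *self, void *user_data) {
    obj_t *host_parent = user_data;
    for (int i = 0; i < host_parent->child_cnt; i++) {
        if (host_parent->children[i] == self) {
            host_parent->children[i] =
                host_parent->children[--host_parent->child_cnt];
            return;
        }
    }
}

/* Guest-side delete: fire the delete event before freeing, as LVGL does. */
void obj_delete(obj_t *obj) {
    if (obj->on_delete) obj->on_delete(obj, obj->user_data);
    free(obj);
}
```

After attaching the container to the host screen, setting on_delete = detach_from_host with the host screen as user_data makes a guest-initiated delete clean up the host side automatically.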

Now the application has full freedom, but what can the host do?

  • Attach the application to a parent and let it work as defined by the application
  • Call the application's API functions to push some data
  • Attach callbacks to be called e.g. when a button is pressed (e.g. "Connect to Bluetooth")
  • Use subjects/observers to share data.
  • Direct manipulation or creation of the application's widgets from the host is dangerous, as it might involve memory reallocation (e.g. when adding a style), so these cannot be initiated from the host.

What do you think?

@AlixANNERAUD
Author

Yeah, you could write a draw driver that serializes the draw descriptors and coords, and the host's LVGL deserializes the commands and calls lv_draw_(rect|label|image|etc). Images could be an exception, where a const pointer is sent instead of expensively serializing the data buffers.

In order to get any binary size savings, the serializer would have to be smaller than the rendering code it's replacing.

I think we can reduce this by getting rid of the rendering part.

It would be good to actually measure the compiled code size somehow so this isn't a wasted effort.

Fyi: #6313

Yes, this approach seems quite good to me as well, because it completely eliminates the problem of choosing the allocator. However, we need to ensure that these per-render API calls via FFI are not too frequent, as FFI calls are quite expensive in terms of CPU time. The same concern applies to serialization/deserialization (we could also use CBOR, by the way).

@AlixANNERAUD
Author

How can we handle edge cases like this:

1. The application creates a container with some buttons. If one of the buttons is pressed, the container is deleted.

2. The container is attached to a window in the host.

3. The delete button is pressed; the callback from the application is called to delete the container, but it will delete it only from the application.

As the problem is only the deletion, a solution could be this:

1. Create the container as above.

2. Attach it to a parent in the host and also attach a `delete_event_callback` which will detach it from the host container (just remove the reference).

Now the application has full freedom, but what can the host do?

* Attach the application to a parent and let it work as defined by the application

* Call the application's API functions to push some data

* Attach callbacks to be called e.g. when a button is pressed (e.g. "Connect to Bluetooth")

* Use [subjects/observers](https://docs.lvgl.io/master/others/observer.html) to share data.

* Direct manipulation or creation of the application's widgets from the host is dangerous, as it might involve memory reallocation (e.g. when adding a style), so these cannot be initiated from the host.

What do you think?

If I understand correctly, you want to create a new type of widget (a kind of container dedicated to FFI?). This is a very good idea and quite efficient. The fact that the host cannot do anything with the application's widgets doesn't seem too problematic to me, because I don't see many scenarios where that would be useful.

@AlixANNERAUD
Author

AlixANNERAUD commented Jul 15, 2024

Yeah, you could write a draw driver that serializes the draw descriptors and coords, and the host's LVGL deserializes the commands and calls lv_draw_(rect|label|image|etc). Images could be an exception, where a const pointer is sent instead of expensively serializing the data buffers.

In order to get any binary size savings, the serializer would have to be smaller than the rendering code it's replacing.

I think we can reduce this by getting rid of the rendering part.

It would be good to actually measure the compiled code size somehow so this isn't a wasted effort.

Fyi: #6313

But yes, to start with, in order to be sure that all this is worth it, I can proceed with adding flags to exclude the rendering part from the compilation. However, before I start, I would like to know whether you think this is feasible without major refactoring (with just the addition of #if here and there), because I don't know to what extent the rendering part is intertwined with the widget "setting/getting" part.

@kisvegabor
Member

Actually, rendering is quite intertwined with the widgets, as all widgets call lv_draw_* functions to draw themselves. Regardless, #if can work. We can either

  1. Disable the internals of the lv_draw_* functions
  2. Disable the rendering and other unnecessary parts in all widgets

Option 2) would save much more flash, as the extra rendering code can be quite long. See e.g. here.

However, I don't think it would work. If the rendering code is not there, when the host asks the application's widgets to render themselves, nothing will happen, as the rendering logic has been removed from the application.

This also means it won't be possible for applications to define custom draw events, e.g. this. In light of that, maybe the serialization path that @liamHowatt recommended could be better.

A simpler approach could be to disable all rendering units (e.g. LV_USE_DRAW_SW 0) first. During rendering, LVGL will create draw_tasks here, and we "just" need to pass the created draw_tasks to the host.

If we pass the application's main container's pointer to the host, the host will see the widgets as if they were created there, and the draw_tasks will be created by the host too, in the host display's layer.

It just came to my mind: can we read the application's memory from the host?

@AlixANNERAUD
Author

AlixANNERAUD commented Jul 17, 2024

Actually, rendering is quite intertwined with the widgets, as all widgets call lv_draw_* functions to draw themselves. Regardless, #if can work. We can either

1. Disable the internals of the `lv_draw_*` functions

2. Disable the rendering and other unnecessary parts in all widgets

Option 2) would save much more flash, as the extra rendering code can be quite long. See e.g. here.

However, I don't think it would work. If the rendering code is not there, when the host asks the application's widgets to render themselves, nothing will happen, as the rendering logic has been removed from the application.

This also means it won't be possible for applications to define custom draw events, e.g. this. In light of that, maybe the serialization path that @liamHowatt recommended could be better.

A simpler approach could be to disable all rendering units (e.g. LV_USE_DRAW_SW 0) first. During rendering, LVGL will create draw_tasks here, and we "just" need to pass the created draw_tasks to the host.

If we pass the application's main container's pointer to the host, the host will see the widgets as if they were created there, and the draw_tasks will be created by the host too, in the host display's layer.

It just came to my mind: can we read the application's memory from the host?

I didn't know that one could customize the rendering pipeline to this extent with LVGL. Yes, indeed, it seems like a good solution. However, how do you think we could proceed to add the application's draw_tasks to the host's draw_tasks? And yes, the host can access the guest application's memory; you just need to perform address translation beforehand.
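The address translation mentioned here reduces to resolving a guest "pointer" (an offset into the module's linear memory) against the host-side memory base, with a bounds check; WAMR exposes this as wasm_runtime_addr_app_to_native(). A minimal stand-alone sketch (the function name is illustrative):

```c
/* Minimal model of WASM guest->host address translation: a guest pointer is
 * an offset into the instance's linear memory, which the host owns. */
#include <stdint.h>
#include <stddef.h>

void *wasm_addr_to_native(uint8_t *linear_mem_base, uint32_t mem_size,
                          uint32_t app_addr) {
    if (app_addr >= mem_size) return NULL;  /* reject out-of-bounds guest ptr */
    return linear_mem_base + app_addr;
}
```

Note that any struct crossing this boundary must also have a layout the host understands, and the translation must be redone if the guest grows its linear memory (which can move the base).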

@lvgl-bot

lvgl-bot commented Aug 1, 2024

We need some feedback on this issue.

Now we mark this as "stale" because there was no activity here for 14 days.

Remove the "stale" label or comment else this will be closed in 7 days.

@lvgl-bot lvgl-bot added the stale label Aug 1, 2024
@kisvegabor
Member

I'm sorry, I missed your reply. 🙁

Let's say you have an lv_display with a screen in the host. You add an application to the screen like:

lv_obj_t * wasm_attach_lvgl_app(lv_obj_t * parent_in_host, lv_obj_t * root_widget_of_the_app);

This function manually adds a new child to parent_in_host and translates the address if required. From this point, when LVGL traverses the widgets, it will see root_widget_of_the_app as a normal child of parent_in_host, call its callbacks, handle its drawing, etc. So it will look as if it were created in the host and will be handled transparently.

Do we need the address translation if LVGL calls the callbacks and uses the children in C? I hope not. 😄 🤞

@lvgl-bot lvgl-bot removed the stale label Aug 3, 2024
@AlixANNERAUD
Author

AlixANNERAUD commented Aug 11, 2024

Yes, that should be the most appropriate solution (essentially what I meant with my lv_set_parent_custom function). However, if wasm_attach_lvgl_app uses lv_obj_set_parent, it would remove the child pointer from the old parent, which is not possible because the child is actually a WASM app screen. That's why we need a custom lv_obj_set_parent function. In addition, wouldn't the lack of a registered display be a problem on the WASM side when creating a screen?

For the callbacks (I assume you mean the event callbacks), a custom function should be added to register event callbacks (specifically for address translation). However, the lv_event_t * memory is owned by the host, correct? So it cannot be read within the WASM application. Perhaps the WASM app should pass a pointer to a memory location it owns when registering the callback, which would then be passed back to the callback when it is called.
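The guest-owned user_data idea could look like the sketch below: the host never hands the guest a host lv_event_t *; it only stores the guest's callback id and the guest pointer (a linear-memory offset) and passes both back verbatim on dispatch. All type and function names here are hypothetical:

```c
/* Sketch of guest-owned event callback data: the host treats both values as
 * opaque and re-enters the WASM instance through a runtime trampoline. */
#include <stdint.h>

#define MAX_CBS 32

typedef struct {
    uint32_t guest_cb_id;     /* index into the guest's own handler table    */
    uint32_t guest_user_data; /* guest pointer = offset in guest linear mem  */
} guest_cb_slot_t;

typedef struct {
    guest_cb_slot_t slots[MAX_CBS];
    int count;
    /* trampoline that re-enters the WASM instance (runtime-provided) */
    void (*call_guest)(uint32_t cb_id, uint32_t user_data);
} guest_cb_table_t;

int guest_cb_register(guest_cb_table_t *t, uint32_t cb_id, uint32_t user_data) {
    if (t->count >= MAX_CBS) return -1;
    t->slots[t->count].guest_cb_id = cb_id;
    t->slots[t->count].guest_user_data = user_data;
    return t->count++;
}

/* Host side, fired from e.g. a clicked event: never pass the host
 * lv_event_t*, only the two guest-owned values. */
void guest_cb_dispatch(const guest_cb_table_t *t, int slot) {
    t->call_guest(t->slots[slot].guest_cb_id, t->slots[slot].guest_user_data);
}

/* Example trampoline standing in for a real WASM re-entry call: it just
 * records the dispatched pair so the flow can be observed. */
uint32_t last_cb_id, last_user_data;
void record_call(uint32_t cb_id, uint32_t user_data) {
    last_cb_id = cb_id;
    last_user_data = user_data;
}
```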

Another issue is that the WASM LVGL part is still quite large in terms of binary size when compiled as a WASM module. This is somewhat expected, since all of the drawing code is still included (even though it is completely unnecessary): the compiler treats a WASM module like a dynamic library and does not remove unused code, I believe. We could link LVGL statically to allow further optimization; however, that would mean each WASM module instance has its own copy of the LVGL code, which is not ideal in memory-constrained environments.

@kisvegabor
Member

kisvegabor commented Aug 12, 2024

Regarding set_parent: we can leave the app's container in its tree (in the app) because it will never be used by the app (the host does everything by calling lv_timer_handler). We just need to make it part of the host's tree. Anyway, I think we are mostly on the same page. It would be great to see a proof of concept for that, just to test whether we can really "channel" widgets from an app to the host.

Regarding the events: I think/hope events registered in the app will work in the app, without any further action. However, I see how events added to the app from the host can be tricky.

Regarding memory size: it would be huge to use LVGL as a library which can be linked to multiple apps. However, as I described above, the drawing routines probably cannot be removed, so that users can still use them in custom draw events. But maybe that's not so important, because it seems we can have this setup:

  • 1 common full LVGL shared library
  • 1 host using the LVGL library
  • Multiple apps using the same LVGL library

What do you think?

@lvgl-bot

We need some feedback on this issue.

Now we mark this as "stale" because there was no activity here for 14 days.

Remove the "stale" label or comment else this will be closed in 7 days.

@lvgl-bot lvgl-bot added the stale label Aug 28, 2024
@lvgl-bot

lvgl-bot commented Sep 5, 2024

As there was no activity here for a while we close this issue. But don't worry, the conversation is still here and you can get back to it at any time.

So feel free to comment if you have remarks or ideas on this topic.

@lvgl-bot lvgl-bot closed this as not planned Sep 5, 2024