
feat: auto-retry with single-threaded writer on OOM #253

Closed · Ushie opened this issue Oct 10, 2023 · 14 comments

Labels: Feature request (Requesting a new feature that's not implemented yet)

@Ushie (Member) commented Oct 10, 2023

Type

Functionality

Issue

When running on limited memory, there may not be enough to patch a specific app, and the process ends up throwing an OOM, as seen in #193

Feature

ReVanced Patcher handles the OOM and restarts with a single thread at the appropriate step
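A minimal sketch of the requested behavior, assuming a writer step that can be re-run with a chosen thread count. The `DexWriter` interface and method names below are invented for illustration and are not ReVanced Patcher's actual API; note also that catching `OutOfMemoryError` on the JVM is best-effort and not always reliable.

```java
import java.util.List;

public class OomRetrySketch {
    /** Hypothetical stand-in for the multi-threaded dex writing step. */
    interface DexWriter {
        List<String> write(int threadCount);
    }

    /**
     * Runs the writer with the requested thread count; on OOM, gives the GC
     * a chance and retries once with a single thread at the same step.
     */
    static List<String> writeWithFallback(DexWriter writer, int threadCount) {
        try {
            return writer.write(threadCount);
        } catch (OutOfMemoryError oom) {
            System.gc(); // best-effort: free memory before the retry
            System.err.println("OOM with " + threadCount + " threads, retrying single-threaded");
            return writer.write(1);
        }
    }

    public static void main(String[] args) {
        // Simulated writer that only succeeds when run single-threaded.
        DexWriter flaky = threads -> {
            if (threads > 1) throw new OutOfMemoryError("simulated");
            return List.of("classes.dex");
        };
        System.out.println(writeWithFallback(flaky, 4)); // [classes.dex]
    }
}
```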

Motivation

ReVanced Patcher supports both single- and multi-threaded writing; multi-threading is preferred at all times because the patching process is faster overall

This moves the hassle from the end user to ReVanced Patcher, hopefully resulting in a seamless transition, with a few additional logs stating what happened

Additional context

The reason I'm opening this in the ReVanced Patcher repository, instead of ReVanced Library or expecting it to be done by the clients, is that ReVanced Patcher can hopefully restart ONLY the necessary section instead of having to start over from the beginning

Acknowledgements

  • I have searched the existing issues; this is a new issue and not a duplicate of, or related to, another open issue.
  • I have written a short but informative title.
  • I have filled out all of the requested information in this issue properly.
@Ushie added the Feature request label Oct 10, 2023
@Ushie changed the title from "feat: auto-retry with multi-threaded writer on OOM" to "feat: auto-retry with single-threaded writer on OOM" Oct 10, 2023
@LisoUseInAIKyrios (Contributor)

Perhaps the patcher could also check how much memory is available, and if it's below some preset threshold, always use single-threaded mode from the start.
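A sketch of this suggestion, using the heap ceiling the VM reports. The 512 MB threshold is an arbitrary placeholder, not a value from ReVanced Patcher:

```java
public class MemoryHeuristic {
    // Placeholder threshold; a real value would need calibration per app.
    static final long SINGLE_THREAD_THRESHOLD = 512L * 1024 * 1024;

    /** Picks a writer thread count based on the memory the VM may use. */
    static int pickThreadCount(long maxHeapBytes, int availableProcessors) {
        return maxHeapBytes < SINGLE_THREAD_THRESHOLD ? 1 : availableProcessors;
    }

    public static void main(String[] args) {
        long max = Runtime.getRuntime().maxMemory(); // heap ceiling of this VM
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Writer threads: " + pickThreadCount(max, cores));
    }
}
```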

@oSumAtrIX (Member)

@LisoUseInAIKyrios It depends on the app. Some apps, such as YouTube, can be patched with 500 MB using multiple threads, but TikTok requires 500 MB with a single thread due to having far more classes.

@LisoUseInAIKyrios (Contributor) commented Oct 10, 2023

Then we could alternatively check the total number of classes/methods, or the total file size of the original class/dex files, and use a threshold based on that. Compare what YouTube and TikTok size up to, and pick a threshold that falls somewhere between the two, above which the patcher always uses single-threaded mode.
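The same idea as a sketch, keyed on app size rather than available memory. Both cutoff values are invented; the comment above suggests calibrating them between what YouTube and TikTok size up to:

```java
public class SizeHeuristic {
    // Hypothetical calibration points somewhere between the two apps.
    static final int CLASS_COUNT_CUTOFF = 30_000;
    static final long DEX_BYTES_CUTOFF = 40L * 1024 * 1024;

    /** True if the app is large enough that only single-threading is safe. */
    static boolean shouldUseSingleThread(int totalClassCount, long totalDexBytes) {
        return totalClassCount >= CLASS_COUNT_CUTOFF || totalDexBytes >= DEX_BYTES_CUTOFF;
    }

    public static void main(String[] args) {
        // A small app stays multi-threaded; a TikTok-sized one does not.
        System.out.println(shouldUseSingleThread(12_000, 20L * 1024 * 1024)); // false
        System.out.println(shouldUseSingleThread(55_000, 90L * 1024 * 1024)); // true
    }
}
```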

@oSumAtrIX (Member)

By default, 1 << 16 (65,536) classes are compiled into each dex file. Reducing this number causes more dex files to be compiled and no longer consumes as much memory, but more dex files mean the app will load more slowly.

@oSumAtrIX (Member)

I am considering closing this issue, as it doesn't really make sense from ReVanced Patcher's standpoint to handle memory issues. It provides the option to select either single- or multi-threading, but what you make of that option is no longer ReVanced Patcher's concern.

@oSumAtrIX added the Waiting on author label Feb 11, 2024
@Ushie (Member, Author) commented Feb 11, 2024

A client can't implement a better solution than one in ReVanced Patcher itself. If ReVanced Patcher can figure out whether to use single- or multi-threading, the issue is solved universally across all clients, and clients no longer have to restart the process in single-threaded mode when they encounter an OOM.

@oSumAtrIX (Member)

It is not ReVanced Patcher's responsibility to care about the environment it runs in, even if it seems simpler to move the matter to ReVanced Patcher. ReVanced Patcher is what the name suggests; that is its single responsibility, and matters like the environment have to be handled by the environment. For that, it allows configuring itself according to the environment's needs.

@Ushie (Member, Author) commented Feb 11, 2024

What about moving this to ReVanced Library? It fits the Library's motive and responsibility of unifying common code across clients.

@LisoUseInAIKyrios (Contributor)

Will the new Manager resolve this issue? It seems to have fixed the memory limit.

@Ushie (Member, Author) commented Aug 17, 2024

Yep

@oSumAtrIX (Member) commented Aug 17, 2024

The main memory issue is caused by the classes being held in memory after the writer writes them back. Dexlib initially instantiates classes lazily, so it only reads the dex when fields are accessed; but because we are writing the classes, the writer needs to read all fields. Something references these classes in memory, so the garbage collector is unable to free them after the classes have been read and written.

I have already tried copying the list to a local variable and then clearing the list allocated on the heap, in hopes of getting rid of the class list referencing the classes in case something holds on to that list, but that had no effect. Patches are objects, so they are kept in memory as static fields; perhaps some patch references the classes in a field, and the garbage collector can't get rid of them. I tried profiling but couldn't figure out much through that either.

In general, memory issues should be completely resolved, with a maximum required memory for patching of about 100 MB, if we can find a way to release the classes from memory after they have been written. Compose Manager tried to work around this issue by using a separate process: it launched a new ART process and configured the VM to not enforce any memory limitations. Since processes don't run in the VM, they aren't limited by its memory either, so in theory, as long as your device has enough memory, OOMs should not occur. The issue is that this is a workaround, and one problem following from it is that memory allocation still grows linearly with the number of classes until all classes have been written and the garbage collector can finally free them.
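The retention pattern described here can be illustrated in isolation. The static field below is a stand-in for whatever is suspected of still referencing the classes; it shows why clearing the class list alone cannot make the entries collectible:

```java
import java.util.ArrayList;
import java.util.List;

public class RetainedClassesSketch {
    // Stands in for a static field in some patch holding a class definition.
    static Object lingeringPatchField;

    /** Reproduces the "copy then clear the list" attempt and why it fails. */
    static boolean classStillReachableAfterClearingList() {
        List<Object> classes = new ArrayList<>();
        classes.add(new Object());
        lingeringPatchField = classes.get(0); // a "patch" grabs a class def

        // The attempt: copy to a local variable, then clear the heap list.
        List<Object> local = new ArrayList<>(classes);
        classes.clear();
        local.clear();

        // Both lists are empty, yet the object is still strongly reachable
        // through the static field, so the GC cannot free it.
        return classes.isEmpty() && local.isEmpty() && lingeringPatchField != null;
    }

    public static void main(String[] args) {
        System.out.println(classStillReachableAfterClearingList()); // true
    }
}
```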

@LisoUseInAIKyrios (Contributor)

> Patches are objects so they are kept in memory as static fields, perhaps some patch references the classes in a field and the garbage collector cant get rid of them.

At least for YT, many of the shared hook patches hold target method and class references, so they won't be cleared out. Modifying these patches to clear their fields on close should fix it, but if even a single patch doesn't do that, the issue is not fixed. Changing the patches to not be static objects and clearing all patch instances would be the most foolproof, and the only guaranteed, way to do that.
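A sketch of the proposed fix. The `Patch`/`close` names mirror the discussion rather than the real patch API, and the `Object` fields stand in for dexlib method and class references:

```java
public class HookPatchSketch {
    static class Patch implements AutoCloseable {
        Object targetMethod; // cached reference into the dex class list
        Object targetClass;

        void apply(Object method, Object clazz) {
            targetMethod = method;
            targetClass = clazz;
        }

        @Override
        public void close() {
            // Drop the references so written classes become collectible.
            targetMethod = null;
            targetClass = null;
        }
    }

    public static void main(String[] args) {
        Patch patch = new Patch();
        patch.apply(new Object(), new Object());
        patch.close();
        System.out.println(patch.targetMethod == null && patch.targetClass == null); // true
    }
}
```

As noted above, this only helps if every patch does it; a single patch that keeps its references defeats the fix.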

@oSumAtrIX (Member)

Well, if only a couple of classes were kept in memory, it would be fine; fingerprints are also objects and reference classes. As it seems, though, every class is kept in memory, so there must be a different issue. What I have observed is that without applying any patch, just reading and writing the classes keeps memory fairly low, but without profiling skills it's hard to figure out the problem. Classes (instead of objects) may solve the issue, though with the upcoming DSL, instances are stored in top-level fields, so this may need a different idea.

@oSumAtrIX (Member)

> What about moving this to ReVanced Library? it fits the Library's motive and responsibility of unifying common code across clients

ReVanced Library doesn't have any APIs yet to run ReVanced Patcher. It may also make sense to introduce another library specific to the purpose of managers, built on top of the base ReVanced Library, as I have found it difficult to structure manager-specific code in ReVanced Library because it is more of a general-purpose library. Once we set up such a project, we can implement backend-agnostic manager logic there, so that clients like CLI or Manager only have to implement a frontend of their choice while the backend relies on the library as much as possible to share code.

@oSumAtrIX closed this as not planned Aug 18, 2024
@oSumAtrIX removed the Waiting on author label Aug 18, 2024