Persistent job runs (successfully) forever #172
Comments
I tried using another serialization method (use …
About deserialization, you don't seem to assign it in the configuration. You need to implement the JobSerializer interface and let the JobManager call it when necessary. About going crazy, that's unacceptable. I'll check the logs now, but meanwhile, is it possible for you to extract the part that goes crazy so I can debug it in an app?
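For reference, a minimal sketch of such a serializer, assuming the v1-style SqliteJobQueue.JobSerializer interface and a jobSerializer hook on Configuration.Builder (package names, generic bounds, and builder methods vary between v1 and v2, so treat every name here as an assumption):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// v1-style package names; v2 moved to com.birbit.android.jobqueue.
import com.path.android.jobqueue.Job;
import com.path.android.jobqueue.persistentQueue.sqlite.SqliteJobQueue;

// A JobSerializer sketch using plain Java serialization, i.e. roughly what the
// library's default serializer does.
public class MyJobSerializer implements SqliteJobQueue.JobSerializer {
    @Override
    public byte[] serialize(Object object) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(object);
        }
        return bytes.toByteArray();
    }

    @Override
    public <T extends Job> T deserialize(byte[] bytes)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            @SuppressWarnings("unchecked")
            T job = (T) in.readObject();
            return job;
        }
    }
}
```

It would then be assigned when building the manager, along these lines (inside e.g. Application.onCreate(), where a Context is available):

```java
JobManager jobManager = new JobManager(context,
        new Configuration.Builder(context)
                .jobSerializer(new MyJobSerializer())  // assumed builder method
                .build());
```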
@yigit I use the default configuration, which should automatically add a default serializer. About an app so you can debug it, I'm looking at what I can do.
Actually, I'm afraid I broke it in v2 because the base class was depending on it.
BTW, the problem doesn't seem to be serialization/deserialization itself, but what JobManager does with it. I could use hardcoded ByteBuffer serialization with a custom JobSerializer and still get the problem. The behavior is that when the job is added (in the background), it's serialized and persisted, but then, instead of just running what it already has in memory (when the network is available), it retrieves the just-added Job from storage and deserializes it, which is already wrong behavior, then starts running the job, then redoes these steps (retrieving from storage, deserializing, running, and again...)
It is not a mistake that it always deserializes; rather, it's a design decision to make things consistent.
But it uses the storage unnecessarily. Shouldn't this just be a check, run only in debug builds, to detect serialization issues ASAP?
Do you think using v1 for now would be the best quick workaround for this bug, so I can meet my deadline set for tomorrow evening (GMT+2)? I'm wondering if the backwards refactoring is worthwhile.
No, it is not unnecessary storage. For a persistent job, it HAS TO BE saved to disk before it can be accepted. JobManager makes the guarantee that when jobManager.add returns, the job is saved to disk for sure and will survive an app crash.
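A sketch of what such a persistent job looks like (v1-style API; the class name and priority are hypothetical):

```java
import com.path.android.jobqueue.Job;
import com.path.android.jobqueue.Params;

// persist() marks the job as persistent: JobManager writes it to disk before
// accepting it, so it survives a process death or crash.
public class SendMessageJob extends Job {
    public SendMessageJob() {
        super(new Params(1).requireNetwork().persist());
    }

    @Override
    public void onAdded() {
        // For a persistent job, this runs only after the job has been saved to disk.
    }

    @Override
    public void onRun() throws Throwable {
        // The actual work; deferred until the required network is available.
    }

    @Override
    protected void onCancel() {
        // Clean up when the job is cancelled.
    }

    @Override
    protected boolean shouldReRunOnThrowable(Throwable throwable) {
        return false;  // no retries in this sketch
    }
}
```

With that, jobManager.addJob(new SendMessageJob()) blocks until the job is on disk; the addJobInBackground variant only moves that write off the caller's thread, it does not skip it.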
@yigit I know it has to be saved; I was talking about reading it back from storage right after, and deserializing it before running it, while the app didn't stop.
Oh sorry, but still, it is better to make it consistent.
I pushed a version into https://github.com/yigit/android-priority-jobqueue/tree/transient-job.
Actually, I pushed a one-off repo here: You can download the zip file, unzip it to a folder and add it as a local Maven repository. BTW, this includes another WIP change that keeps job data in files rather than SQLite. That is also not ready to be merged but should be OK for you today. Let me know if you get a chance to try it. At least we can figure out whether serialization is the issue or not.
Thank you very much for this, I'm going to try it out! In the meantime, I made a sample to reproduce the issue, and I got the bug! How I saw the bug in my sample: I had added all the libraries and made a Job using the GitHub REST API, but I forgot to include the LoganSquare library (which generates model parsing code). The sources were in the third-party Retrofit library, so it compiled and ran, but the code wasn't generated, making the job fail. I hope GitHub won't get mad because of this small DDoS attack... The red cross to kill the app in Android Studio is very useful in this case. You can download the sample project as a zip here: https://drive.google.com/open?id=0B59c-Qkht7tYSTRpOEJoTDMxWDA It also includes the logs I got, but I guess the buffer was too small to contain the first lines; I couldn't kill the app any faster 😆
@yigit OK, I tried with your alpha4 SNAPSHOT... and it's now working perfectly (even after killing and restarting the app) on the sample I linked above! I'm trying it with my app, but I guess it'll work perfectly too 😄 Thanks a lot for your help!! BTW, I'm wondering about backward compatibility for old jobs from previous versions (v2 alpha1, 2, 3 and v1) that may be waiting in the queue when the app is updated to a post-v2-alpha4 version... Fortunately, I'm not in this case, as it's a first release, but I wonder whether jobs get lost, or anything else happens, for developers who may be using your library and will update.
Cool, so to clarify: once #174 gets in, your issue is gone, right? That means we can single this issue out to custom serialization, which is good. About backward compatibility: since v2 is alpha, I'm not paying too much attention to that (v2 is incompatible with v1). Also, with #174, it will be a lot easier, since changes in BaseJob will not affect the app's compatibility. Glad it is resolved.
Yes, I guess your next version will close this issue.
Hi!
First, I observed the bug on v2 alpha2 and alpha3 (after refactoring according to the changes).
I have one important job that needs to be sent, so I made it serializable, but I'm not using the half-default method (half because I saw serialization methods in the Job class): if I need to update the Job implementation, a job serialized on an older app version needs to be properly deserialized on a newer version. I implemented the serialization proxy pattern (as seen in Effective Java, the last item of the 2nd edition) to keep only the crucial information (a timestamp, an id and a boolean), and to use the public constructor to ensure backward and forward compatibility.
It seems to work, as there's no error during serialization and deserialization.
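For illustration, a sketch of that serialization proxy approach, with hypothetical field and class names (the pattern is the last item of Effective Java, 2nd edition; the Job/Params API is v1-style):

```java
import java.io.InvalidObjectException;
import java.io.ObjectInputStream;
import java.io.Serializable;

import com.path.android.jobqueue.Job;
import com.path.android.jobqueue.Params;

public class UploadJob extends Job implements Serializable {
    private final long timestamp;
    private final String id;
    private final boolean urgent;

    // The public constructor is the single entry point, for normal construction
    // and for deserialization alike, which keeps old payloads compatible.
    public UploadJob(long timestamp, String id, boolean urgent) {
        super(new Params(1).requireNetwork().persist());
        this.timestamp = timestamp;
        this.id = id;
        this.urgent = urgent;
    }

    // Serialize a compact proxy instead of this instance.
    private Object writeReplace() {
        return new SerializationProxy(this);
    }

    // Refuse to be deserialized directly; only the proxy may be.
    private void readObject(ObjectInputStream in) throws InvalidObjectException {
        throw new InvalidObjectException("Proxy required");
    }

    // The proxy carries only the crucial information: a timestamp, an id, a boolean.
    private static final class SerializationProxy implements Serializable {
        private static final long serialVersionUID = 1L;
        private final long timestamp;
        private final String id;
        private final boolean urgent;

        SerializationProxy(UploadJob job) {
            this.timestamp = job.timestamp;
            this.id = job.id;
            this.urgent = job.urgent;
        }

        // Rebuild the real job through its public constructor on deserialization.
        private Object readResolve() {
            return new UploadJob(timestamp, id, urgent);
        }
    }

    // Lifecycle methods stubbed out for brevity.
    @Override public void onAdded() {}
    @Override public void onRun() throws Throwable {}
    @Override protected void onCancel() {}
    @Override protected boolean shouldReRunOnThrowable(Throwable t) { return false; }
}
```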
However, after adding the job once, the JobManager doesn't just run it: after doing weird things (from my point of view, looking at the logs), it deserializes the job, runs it, then re-deserializes it, runs it again, and repeats, forever, spamming the servers...
I saw that the job is being deserialized and run in a loop by JobManager after I added Thread.dumpStack() in the constructor of my Job and analyzed the stack trace. You can see the logs attached, from the user event which triggered the job's creation and adding, to the point where I had to kill my app to keep a trace of the logs. You can clearly see that JobManager is way too chatty (I'm worried about performance...), and that the same job is being deserialized and run again and again.
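Concretely, that debugging aid amounts to one line in the public constructor of the UploadJob sketch above (again, all names are hypothetical):

```java
public UploadJob(long timestamp, String id, boolean urgent) {
    super(new Params(1).requireNetwork().persist());
    this.timestamp = timestamp;
    this.id = id;
    this.urgent = urgent;
    // Because the serialization proxy's readResolve() funnels every
    // deserialization through this constructor, each stack dump below
    // reveals who is rebuilding (and thus re-running) the job.
    Thread.dumpStack();
}
```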
Logs AndroidPriorityJobQueue issue.txt
I'll try to use my own JobSerializer, but if it doesn't work, I won't be able to use this library (which seemed awesome), as my app needs to be dogfood-tested next week, then shipped ASAP...
Here's the relevant code (a generic initialization sketch follows this list):
The problematic job
Its superclass
The superclass of its superclass
The code used in my Application class to initialize the JobManager
The logger seen in the JobManager Configuration.Builder
The code that created the Job
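The linked snippets were not preserved here; as a stand-in, a generic sketch of what such an initialization usually looks like, with a custom logger wired into Configuration.Builder (v1-style names throughout; none of this is the reporter's actual code):

```java
import android.app.Application;
import android.util.Log;

import com.path.android.jobqueue.JobManager;
import com.path.android.jobqueue.config.Configuration;
import com.path.android.jobqueue.log.CustomLogger;

public class App extends Application {
    private JobManager jobManager;

    @Override
    public void onCreate() {
        super.onCreate();
        Configuration configuration = new Configuration.Builder(this)
                .customLogger(new CustomLogger() {
                    private static final String TAG = "JOBS";

                    @Override
                    public boolean isDebugEnabled() {
                        // Gate the chatty output behind debug builds.
                        return BuildConfig.DEBUG;
                    }

                    @Override
                    public void d(String text, Object... args) {
                        Log.d(TAG, String.format(text, args));
                    }

                    @Override
                    public void e(Throwable t, String text, Object... args) {
                        Log.e(TAG, String.format(text, args), t);
                    }

                    @Override
                    public void e(String text, Object... args) {
                        Log.e(TAG, String.format(text, args));
                    }
                })
                .minConsumerCount(1)  // always keep at least one consumer alive
                .maxConsumerCount(3)  // up to three concurrent consumers
                .build();
        jobManager = new JobManager(this, configuration);
    }

    public JobManager getJobManager() {
        return jobManager;
    }
}
```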