Session expired #536
Comments
Hello @sylido,
Hi @dr-dimitru,
Here's the log with
Part of the same log, with the error object expanded to get more details:
Server side logs
Below are the logs when the
The server-side logs seem to complain about memory allocation. I'll try giving more memory to my Ubuntu VM; this wasn't there with
Hello @sylido, thank you for the update. Could you please post all "debug" logs from the client and server (everything starting with
Speaking of hitting the memory limit - yes, this is a reasonable limit, and it will produce exactly the same error; it's as if the file was never created. To try to avoid the memory issue I recommend limiting
Hey @dr-dimitru sure thing, hope it helps. The first and second log paragraphs I quoted above are everything that comes out of the package. I'll list the same stuff below, but without the traces and formatted as code.
As far as the memory error I got, I think it's because I didn't stop and restart the Meteor server when I switched branches, so it might be a false positive; I haven't gotten it before or since. I'll try setting streams to 1 and chunkSize to something constant. Setting the chunkSize to a big number (i.e. a 20 MB chunk) yielded some unresponsiveness before. I guess something related to the whole issue is the speed of the upload: even though I'm working locally, it doesn't seem like I can get more than, say, 120 KB/s. Is there any way to improve the speed? (This is weirdly constant on both DDP and HTTP; it doesn't seem to make any difference.)
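The "streams to 1 and chunkSize to something constant" idea can be sketched as a client-side options fragment. The option names (`chunkSize`, `streams`, `transport`) are real Meteor-Files insert options; the specific 256 KB value and the variable names are only illustrative assumptions:

```javascript
// Hypothetical fixed upload settings for FilesCollection#insert (client side),
// replacing the 'dynamic' defaults while debugging memory/timeout behavior.
const FIXED_CHUNK = 256 * 1024; // 256 KB per chunk (illustrative value)

const uploadOptions = {
  chunkSize: FIXED_CHUNK, // constant instead of 'dynamic'
  streams: 1,             // one chunk stream at a time to cap memory use
  transport: 'http'       // or 'ddp'; the reporter saw ~120 KB/s on both
};

// With these settings a 14 MB file uploads in sequential chunks:
const chunksFor14MB = Math.ceil(14e6 / FIXED_CHUNK); // 54 chunks
```

A smaller constant chunk trades throughput for predictable memory use, which matters here since the server log complained about allocation.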
@sylido thank you for the update.
Hey @dr-dimitru, thanks for the suggestions. I think it's becoming clearer why this is happening.
And here is a more detailed server side log - something that wasn't showing up before
It seems that our

The next step is deleting the uploaded file using

After that we do an update on the Files collection, referencing the file by its id - in this operation we update the potentially new name of the file, as well as the path and versions.original.path with the path + "new name" value.

The upload error seems to get triggered just after the deletion of the file from the disk, but doesn't actually get console-logged until after the file is re-written to disk. I think this is due to the observer being a bit slow to realize the file is missing from disk; with smaller files the observer probably doesn't even notice since it happens too fast, but with bigger files that doesn't seem to be the case.

Maybe the logic can be tweaked a little here to avoid this from happening, but I would like to avoid marking the upload as successful before the file has been re-encrypted and re-written to disk. Open to any suggestions you might have for dealing with this.
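The collection update described above (new name plus both path fields) can be sketched as a plain Mongo modifier. The helper name, base path, and file name below are hypothetical, not the project's actual code:

```javascript
// Illustrative sketch of the Files-collection update described above:
// after re-encrypting and re-writing the file under a new name, patch the
// stored record's name, path, and versions.original.path in one $set.
function buildRenameModifier(basePath, newName) {
  const newPath = `${basePath}/${newName}`;
  return {
    $set: {
      name: newName,
      path: newPath,
      'versions.original.path': newPath
    }
  };
}

const modifier = buildRenameModifier('/uploadFiles', 'report.pdf.enc');
// Applied server side as something like:
//   Files.collection.update(fileRef._id, modifier);
```

Doing this as a single update keeps the record's two path fields from ever disagreeing, which is exactly the window in which the observer sees a missing file.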
Not sure why this may happen. Here are some ideas:
My suggestions:
Got one more idea:
WDYT?
Hey @dr-dimitru, thanks for the suggestions. I guess the next step is going to be to actually implement your second suggestion and update a variable

Thanks again, much appreciated.
Great, let's keep this thread open until you find a solution to your case.
Hey @dr-dimitru, I was able to successfully implement this, though it took some time because of some aggregation and reactivity issues. I'm satisfied with the current solution, but I still feel like the file should be marked as uploaded successfully before we get to the

Thanks for the help again!
Hi @sylido, I'm glad you have accomplished this task. As I've said before - feel free to share your experience with our community; I believe there's quite a popular demand for encrypted Client - Server upload. Send a PR to the wiki, or publish a post to Medium, and we will link it from our wiki.
A file record never exists in both collections; first it's fully removed from
Hello @dr-dimitru, I'll try and make a PR for the wiki - I'm thinking a super simple one-page Meteor program that uses the Meteor Files package and several other cryptography libraries to accomplish the client-side encryption. Since I'm having a bit of trouble with RSA (it's slow), I'll use AES instead for this one. As far as
I'm afraid it's not possible, at least in the current implementation.
Hi @dr-dimitru, I made a sample project for Client-Side Encryption based on the simple insert-file project demo. Feel free to check it out and see if you can get it to run; I did include some libraries like lodash/crypto-js and base64-arraybuffer. You would also need to make an /uploadFiles directory in your root and give it full permission, to store the encrypted files there. Of course this can be changed in the code if you prefer a different default. For one of the files - a web worker - I use watchify, browserify and uglifyjs; the instructions are in the encFile.js file. It doesn't actually need to run, since I've included the browserified, mangled version that actually gets used as encFileMangled.js.
Uploading files times out and throws an error in the .on("end") callback.
Here's the client-side log -
Followed immediately by my console.error of the error returned by the callback of the upload end event.

I am doing some pretty custom stuff, as described in issue #505 - it includes reading the file, converting it to base64, getting workers to do RSA encryption, then returning the data to the main thread and feeding the encrypted data to the upload function. So it seems like the whole process just takes too long - more than 3 minutes for a 14 MB file - and maybe the HTTP/DDP session expires, although I'm not sure how the default Meteor DDP session would expire since I'm still logged in; maybe it's just SockJS(?).
Using 1.9.3, the latest version of the files package results in the following server side log:
While the client-side logs look like this:
and the console.error looks like
The weird part is that I assume the file upload failed, since I get an error. This doesn't seem to be true though: the uploaded file actually gets to the server successfully, as I can download it afterwards with no degradation whatsoever. The problem is that it might freak out the user and make it hard to distinguish between successful and unsuccessful uploads.

I also need the document id, as the post-upload function triggered in .on("end") tries to link the file to another object - this obviously fails because the doc data doesn't contain the _id. It does contain all the other file metadata in 1.8.2, though; in 1.9.3 I get an object of this form for the document: { error : {} }.

Any suggestions on what might be causing this? I think having a way to extend the HTTP request/response timeout might be the solution. I noticed that the constructor has a config.responseHeaders function that allows you to modify the headers - does that sound like the correct place to play with to see if I can get around it?
P.S. For some reason this was not happening a month or so ago - all uploads, even 20 MB ones (the current maximum allowed), were succeeding without this happening.