
Session expired #536

Closed
sylido opened this issue Dec 12, 2017 · 15 comments · Fixed by #537

Comments

@sylido

sylido commented Dec 12, 2017

Uploading files times out and throws an error in the .on("end") callback.

  • when using ddp it happens with files ~10mb or bigger
  • when using http as the transport it happens with files ~14mb or bigger
  • Meteor@1.5.1 & ostrio:files@1.8.2 (1.9.3 has the same problem, but the error message is missing)

Here's the client-side log:

POST http://dan-vm:3000/cdn/storage/Files/__upload 500 (Internal Server Error)
HTTP.call @ httpcall_client.js:189
UploadInstance.sendEOF @ client.coffee:524
emitEvent @ event-emitter.jsx:373
(anonymous) @ client.coffee:503
(anonymous) @ httpcall_client.js:83
(anonymous) @ underscore.js?hash=c…0189e39c665183c:794
xhr.onreadystatechange @ httpcall_client.js:172
XMLHttpRequest.send (async)
HTTP.call @ httpcall_client.js:189
UploadInstance.sendChunk @ client.coffee:485
emitEvent @ event-emitter.jsx:373
worker.onmessage @ client.coffee:715

Followed immediately by my console.error of the error returned by the callback of the upload end event:

Error: failed [500] {"error":{"isClientSafe":true,"error":408,"reason":"Can't continue upload, session expired. Start upload again.","message":"Can't continue upload, session expired. Start upload again. [408]","errorType":"Meteor.Error"}}
at makeErrorByStatus (httpcall_common.js:13)
at XMLHttpRequest.xhr.onreadystatechange (httpcall_client.js:170)
(anonymous) @ manageFiles.js:222
emitEvent @ event-emitter.jsx:373
UploadInstance.end @ client.coffee:442
emitEvent @ event-emitter.jsx:373
(anonymous) @ client.coffee:538
(anonymous) @ httpcall_client.js:83
(anonymous) @ underscore.js?hash=cde485f60699ff9aced3305f70189e39c665183c:794
xhr.onreadystatechange @ httpcall_client.js:172
XMLHttpRequest.send (async)
HTTP.call @ httpcall_client.js:189
UploadInstance.sendEOF @ client.coffee:524
emitEvent @ event-emitter.jsx:373
(anonymous) @ client.coffee:503
(anonymous) @ httpcall_client.js:83
(anonymous) @ underscore.js?hash=cde485f60699ff9aced3305f70189e39c665183c:794
xhr.onreadystatechange @ httpcall_client.js:172
XMLHttpRequest.send (async)
HTTP.call @ httpcall_client.js:189
UploadInstance.sendChunk @ client.coffee:485
emitEvent @ event-emitter.jsx:373
worker.onmessage @ client.coffee:715

I am doing some pretty custom stuff as described in issue #505 - reading the file, converting it to base64, having workers do RSA encryption, then returning the data to the main thread and feeding the encrypted data to the upload function. So it seems like the whole process just takes too long (more than 3 minutes for a 14mb file) and maybe the http/ddp session expires. I'm not sure how the default Meteor DDP session would expire, though, since I'm still logged in - maybe it's just sockjs(?).

Using 1.9.3, the latest version of the files package results in the following server side log:

[FilesCollection] [Upload] [HTTP] Exception: [TypeError: Cannot convert undefined or null to object]

The client-side log looks like this:

httpcall_client.js:189 POST http://dan-vm:3000/cdn/storage/Files/__upload 500 (Internal Server Error)

and the console.error looks like

File upload error -> Error: failed [500] {"error":{}}
at makeErrorByStatus (httpcall_common.js:13)
at XMLHttpRequest.xhr.onreadystatechange (httpcall_client.js:170)

The weird part is that I assumed the file upload failed, since I get an error. That doesn't seem to be true, though: the uploaded file actually reaches the server successfully, and I can download it afterwards with no degradation whatsoever. The problem is that this might alarm the user, and it makes it hard to distinguish successful from unsuccessful uploads. I also need the document _id, because the post-upload function triggered in .on("end") tries to link the file to another object - this obviously fails because the doc data doesn't contain the _id. In 1.8.2 the doc does contain all the other file metadata; in 1.9.3 I get a document of the form { error : {} }.

Any suggestions on what might be causing this? I think having a way to extend the http request/response timeout might be the solution. I noticed that the constructor has a config.responseHeaders function that allows you to modify the headers - does that sound like the right place to experiment with a workaround?

P.S. For some reason this was not happening a month or so ago - all uploads, even 20mb ones (the current maximum allowed) were succeeding without this happening.

dr-dimitru added a commit that referenced this issue Dec 13, 2017
 - Minor changes in order to debug or fix #536
@dr-dimitru dr-dimitru mentioned this issue Dec 13, 2017
@dr-dimitru dr-dimitru reopened this Dec 13, 2017
@dr-dimitru
Member

dr-dimitru commented Dec 13, 2017

Hello @sylido ,

  1. The upload timeout is controlled by the continueUploadTTL option, which is 3 hours by default, so I think the issue lies somewhere else.
  2. Have you tried uploading a file with the debug option disabled?
  3. Since you're already using debug mode, could you please post the full logs from Client and Server? On the Client, please use Chrome, Chromium or Canary.
  4. What browser and OS does this issue appear on? Have you tried others?
  5. Please upgrade to the latest v1.9.4; I've added some changes to better understand the roots of this issue.
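For reference, the option from point 1 is set in the collection constructor on the server. This is only a minimal sketch assuming the standard ostrio:files constructor options; the collection name is hypothetical:

```javascript
import { FilesCollection } from 'meteor/ostrio:files';

// continueUploadTTL is given in seconds; 10800 s = 3 hours (the default).
const Files = new FilesCollection({
  collectionName: 'Files',
  continueUploadTTL: 10800, // window in which an unfinished upload may resume
  debug: true               // verbose logging while investigating this issue
});
```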

@sylido
Author

sylido commented Dec 15, 2017

Hi @dr-dimitru,

  1. Yeah, I thought so as well, but maybe there's some other place it can be set - if you think of something, I can try it out.

  2. debug was actually set to false; after the update to 1.9.4 it gave me pretty much the same errors - I'll paste them below. I also set debug to true, and I'll give you that log as well.

  3. I am always using Chrome.

  4. Win 7 x64, Chrome 63.0.3239.84. As far as I remember, the issue is the same under Firefox.

  5. Updated, thanks!

Here's the log with debug set to true

> [FilesCollection] [insert()]
> core.js:88 [FilesCollection] [FileUpload] [constructor]
> core.js:88 [FilesCollection] [insert] using WebWorkers
> core.js:88 [FilesCollection] [UploadInstance] [createStreams]
> upload.js:275 loadFile coffeeMittens.jpg: 4669.999755859375ms
> core.js:88 [FilesCollection] [UploadInstance] [sendEOF] false
> core.js:88 [FilesCollection] [insert] [Tracker] [pause]
> core.js:88 [FilesCollection] [insert] [.pause()]

Part of the same log, error object expanded

> VM2060:1 POST http://dan-vm:3000/cdn/storage/Files/__upload 503 (Service Unavailable)
> (anonymous) @ VM2060:1
> HTTP.call @ httpcall_client.js:189
> sendEOF @ upload.js:345
> emit @ ostrio_files.js?hash=92c3c7f2e0870d6c86d67fc008ad44982070861b:2515
> (anonymous) @ upload.js:320
> (anonymous) @ httpcall_client.js:83
> (anonymous) @ underscore.js?hash=cde485f60699ff9aced3305f70189e39c665183c:794
> xhr.onreadystatechange @ httpcall_client.js:172
> XMLHttpRequest.send (async)
> (anonymous) @ VM2060:1
> HTTP.call @ httpcall_client.js:189
> sendChunk @ upload.js:299
> emit @ ostrio_files.js?hash=92c3c7f2e0870d6c86d67fc008ad44982070861b:2516
> worker.onmessage @ upload.js:582
> core.js:88 [FilesCollection] [UploadInstance] [end] coffeeMittens.jpg
> upload.js:227 insert coffeeMittens.jpg: 140898.94970703125ms
> core.js:88 [FilesCollection] [insert] [end] Error: Error: failed [503] Unexpected error.
>     at makeErrorByStatus (httpcall_common.js:13)
>     at XMLHttpRequest.xhr.onreadystatechange (httpcall_client.js:170)
> core.js:88 [FilesCollection] [insert] [.abort()]
> core.js:88 [FilesCollection] [insert] [.pause()]
> upload.js:677 insert coffeeMittens.jpg: 0ms

Part of the same log, I just expanded the error object to get more details

> manageFiles.js:234 File upload error ->  Error: failed [503] Unexpected error.
>     at makeErrorByStatus (httpcall_common.js:13)
>     at XMLHttpRequest.xhr.onreadystatechange (httpcall_client.js:170)
> (anonymous) @ manageFiles.js:234
> emit @ ostrio_files.js?hash=92c3c7f2e0870d6c86d67fc008ad44982070861b:2517
> end @ upload.js:244
> emit @ ostrio_files.js?hash=92c3c7f2e0870d6c86d67fc008ad44982070861b:2517
> (anonymous) @ upload.js:366
> (anonymous) @ httpcall_client.js:83
> (anonymous) @ underscore.js?hash=cde485f60699ff9aced3305f70189e39c665183c:794
> xhr.onreadystatechange @ httpcall_client.js:172
> XMLHttpRequest.send (async)
> (anonymous) @ VM2060:1
> HTTP.call @ httpcall_client.js:189
> sendEOF @ upload.js:345
> emit @ ostrio_files.js?hash=92c3c7f2e0870d6c86d67fc008ad44982070861b:2515
> (anonymous) @ upload.js:320
> (anonymous) @ httpcall_client.js:83
> (anonymous) @ underscore.js?hash=cde485f60699ff9aced3305f70189e39c665183c:794
> xhr.onreadystatechange @ httpcall_client.js:172
> XMLHttpRequest.send (async)
> (anonymous) @ VM2060:1
> HTTP.call @ httpcall_client.js:189
> sendChunk @ upload.js:299
> emit @ ostrio_files.js?hash=92c3c7f2e0870d6c86d67fc008ad44982070861b:2516
> worker.onmessage @ upload.js:582
> manageFiles.js:263 doc passed to postUpload 2 ->  {}

Server side logs

> [FilesCollection] [Abort Method]: nfE28mQqmCje4ZyZu - 
> [FilesCollection] [download(/cdn/storage/Files/nfE28mQqmCje4ZyZu/original/nfE28mQqmCje4ZyZu.jpg, original)]

Below are the logs when the debug option is set to false

> POST http://dan-vm:3000/cdn/storage/Files/__upload 503 (Service Unavailable)
> (anonymous) @ VM2399:1
> HTTP.call @ httpcall_client.js:189
> sendEOF @ upload.js:345
> emit @ ostrio_files.js?hash=92c3c7f2e0870d6c86d67fc008ad44982070861b:2515
> (anonymous) @ upload.js:320
> (anonymous) @ httpcall_client.js:83
> (anonymous) @ underscore.js?hash=cde485f60699ff9aced3305f70189e39c665183c:794
> xhr.onreadystatechange @ httpcall_client.js:172
> XMLHttpRequest.send (async)
> (anonymous) @ VM2399:1
> HTTP.call @ httpcall_client.js:189
> sendChunk @ upload.js:299
> emit @ ostrio_files.js?hash=92c3c7f2e0870d6c86d67fc008ad44982070861b:2516
> worker.onmessage @ upload.js:582
> 

> File upload error ->  Error: failed [503] Unexpected error.
>     at makeErrorByStatus (httpcall_common.js:13)
>     at XMLHttpRequest.xhr.onreadystatechange (httpcall_client.js:170)
> (anonymous) @ manageFiles.js:234
> emit @ ostrio_files.js?hash=92c3c7f2e0870d6c86d67fc008ad44982070861b:2517
> end @ upload.js:244
> emit @ ostrio_files.js?hash=92c3c7f2e0870d6c86d67fc008ad44982070861b:2517
> (anonymous) @ upload.js:366
> (anonymous) @ httpcall_client.js:83
> (anonymous) @ underscore.js?hash=cde485f60699ff9aced3305f70189e39c665183c:794
> xhr.onreadystatechange @ httpcall_client.js:172
> XMLHttpRequest.send (async)
> (anonymous) @ VM2399:1
> HTTP.call @ httpcall_client.js:189
> sendEOF @ upload.js:345
> emit @ ostrio_files.js?hash=92c3c7f2e0870d6c86d67fc008ad44982070861b:2515
> (anonymous) @ upload.js:320
> (anonymous) @ httpcall_client.js:83
> (anonymous) @ underscore.js?hash=cde485f60699ff9aced3305f70189e39c665183c:794
> xhr.onreadystatechange @ httpcall_client.js:172
> XMLHttpRequest.send (async)
> (anonymous) @ VM2399:1
> HTTP.call @ httpcall_client.js:189
> sendChunk @ upload.js:299
> emit @ ostrio_files.js?hash=92c3c7f2e0870d6c86d67fc008ad44982070861b:2516
> worker.onmessage @ upload.js:582

The server-side log seems to complain about memory allocation, so I'll try giving more memory to my Ubuntu VM. This wasn't there with debug set to true, or in version 1.8.2.

> <--- Last few GCs --->
>   527997 ms: Mark-sweep 680.8 (1886.6) -> 680.8 (1886.6) MB, 267.2 / 0 ms [allocation failure] [GC in old space requested].
>   528263 ms: Mark-sweep 680.8 (1914.1) -> 680.8 (1914.1) MB, 258.0 / 0 ms [allocation failure] [GC in old space requested].
>   528527 ms: Mark-sweep 680.8 (1941.7) -> 680.4 (1940.7) MB, 256.2 / 2 ms [last resort gc].
>   528733 ms: Mark-sweep 680.4 (1940.7) -> 679.4 (1940.7) MB, 204.4 / 0 ms [last resort gc].
> <--- JS stacktrace --->
> ==== JS stack trace =========================================
> Security context: 0x3b4af7b37399 <JS Object>
>     1: Join(aka Join) [native array.js:133] [pc=0x6c1fb8c467] (this=0x3b4af7b04131 <undefined>,o=0x3b4af7b8c459 <JS Array[28881984]>,v=28881984,C=0x3b4af7b04631 <String[0]: >,B=0x328277e6fe11 <JS Function ConvertToString (SharedFunctionInfo 0x3b4af7b516f9)>)
>     2: InnerArrayJoin(aka InnerArrayJoin) [native array.js:331] [pc=0x6c1fb8b9ea] (this=0x3b4af7b04131 <undefined>,C=0x3b4af7b04631 <Strin...
>     
> (STDERR) FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
> => Exited from signal: SIGABRT

@dr-dimitru
Member

dr-dimitru commented Dec 15, 2017

Hello @sylido, thank you for the update.

Could you please post all "debug" logs from client and server (everything starting with [FilesCollection.storagePath] Set to:..., without expanding traces), and place them into a

code  block

Speaking of hitting the memory limit - yes, that is a plausible cause, and it would produce exactly the same error; it's as if the file was never created.

To try to avoid the memory issue, I recommend limiting streams to 1 and setting chunkSize to a constant 262144 (play with this value to get better results); more info here
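As a sketch, those client-side settings might look like this, assuming the standard ostrio:files insert() API; the Files collection and the file input are hypothetical:

```javascript
// streams: 1 limits concurrency; chunkSize: 262144 fixes each chunk at 256 KB.
Files.insert({
  file: event.currentTarget.files[0], // assumed <input type="file"> change handler
  streams: 1,
  chunkSize: 262144,
  transport: 'http' // or 'ddp'
}, false); // autoStart = false: call .start() after attaching event handlers
```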

@sylido
Author

sylido commented Dec 16, 2017

Hey @dr-dimitru sure thing, hope it helps.

The first and second log blocks I quoted above are everything that comes out of the package. I'll list the same output below, without the traces and formatted as code.

[FilesCollection] [insert()]
[FilesCollection] [FileUpload] [constructor]
[FilesCollection] [insert] using WebWorkers
[FilesCollection] [UploadInstance] [createStreams]
[FilesCollection] [UploadInstance] [sendEOF] false
[FilesCollection] [insert] [Tracker] [pause]
[FilesCollection] [insert] [.pause()]
[FilesCollection] [UploadInstance] [end] coffeeMittens.jpg
[FilesCollection] [insert] [end] Error: Error: failed [503] Unexpected error.
  at makeErrorByStatus (httpcall_common.js:13)
  at XMLHttpRequest.xhr.onreadystatechange (httpcall_client.js:170)
[FilesCollection] [insert] [.abort()]
[FilesCollection] [insert] [.pause()]

As for the memory error I got, I think it happened because I didn't stop and restart the meteor server when I switched branches - it might be a false positive, since I haven't gotten it before or since.

I'll try setting streams to 1 and chunkSize to something constant. Setting the chunkSize to a big number (i.e. a 20mb chunk) caused some unresponsiveness before. I guess something related to the whole issue is the upload speed: even though I'm working locally, I don't seem to get more than ~120kb/s. Is there any way to improve the speed? (This is weirdly constant on both ddp and http; the transport doesn't seem to make any difference.)

@dr-dimitru
Member

@sylido thank you for the update.

  1. Try setting chunkSize to a smaller constant value; start with 1KB (1024). Playing with this value should improve the speed. Also set the upload transport to http, which is faster by default.
  2. You've posted only the Client logs; please update this thread with the full server logs, from your meteor launch command right up to the error.

@sylido
Author

sylido commented Dec 18, 2017

Hey @dr-dimitru, thanks for the suggestions - I think it's becoming clearer why this is happening.

  1. Tried setting it to a low number, but that increased the upload time significantly; now I'm back to 5mb chunks, and the upload seems to happen really fast.

  2. Here is the meteor launch script

export MONGO_URL='mongodb://localhost:27017/rthree'
export ROOT_URL=http://dan-vm/
killall node
killall node
meteor npm install
nohup meteor --port 3000 --settings settings-local.json > meteor.out &

And here is a more detailed server side log - something that wasn't showing up before

[FilesCollection.storagePath] Set to: .
Kadira: completed instrumenting the app
[FilesCollection] [File Start HTTP] coffeeMittens.jpg - D6u6xP9xxMCocjyx5
[FilesCollection] [Upload] [HTTP Start Method] Got #-1/13 chunks, dst: coffeeMittens.jpg
[FilesCollection] [Upload] [HTTP] Got #1/13 chunks, dst: coffeeMittens.jpg
[FilesCollection] [Upload] [HTTP] Got #2/13 chunks, dst: coffeeMittens.jpg
[FilesCollection] [Upload] [HTTP] Got #3/13 chunks, dst: coffeeMittens.jpg
[FilesCollection] [Upload] [HTTP] Got #4/13 chunks, dst: coffeeMittens.jpg
[FilesCollection] [Upload] [HTTP] Got #5/13 chunks, dst: coffeeMittens.jpg
[FilesCollection] [Upload] [HTTP] Got #6/13 chunks, dst: coffeeMittens.jpg
[FilesCollection] [Upload] [HTTP] Got #7/13 chunks, dst: coffeeMittens.jpg
[FilesCollection] [Upload] [HTTP] Got #8/13 chunks, dst: coffeeMittens.jpg
[FilesCollection] [Upload] [HTTP] Got #9/13 chunks, dst: coffeeMittens.jpg
[FilesCollection] [Upload] [HTTP] Got #10/13 chunks, dst: coffeeMittens.jpg
[FilesCollection] [Upload] [HTTP] Got #11/13 chunks, dst: coffeeMittens.jpg
[FilesCollection] [Upload] [HTTP] Got #12/13 chunks, dst: coffeeMittens.jpg
[FilesCollection] [Upload] [HTTP] Got #13/13 chunks, dst: coffeeMittens.jpg
[FilesCollection] [Upload] [HTTP] Got #-1/13 chunks, dst: coffeeMittens.jpg
[FilesCollection] [Upload] [finish(ing)Upload] -> /var/www/filesmount/2/002/2/D6u6xP9xxMCocjyx5.jpg
[FilesCollection] [Upload] [finish(ed)Upload] -> /var/www/filesmount/2/002/2/D6u6xP9xxMCocjyx5.jpg
[FilesCollection] [_preCollectionCursor.observe] [changed]: D6u6xP9xxMCocjyx5
[FilesCollection] [findOne({"_id":"D6u6xP9xxMCocjyx5"}, undefined)]
file read
search for file done
decrypt file: 121536ms
file encrypted
newFile written to disk
[FilesCollection] [_preCollectionCursor.observe] [removed]: D6u6xP9xxMCocjyx5
  (STDERR) [FilesCollection] [Upload] [HTTP] Exception: [TypeError: Cannot convert undefined or null to object]
  (STDERR) Trace
  (STDERR)     at handleError (packages/ostrio:files/server.js:425:23)
  (STDERR)     at packages/ostrio:files/server.js:567:17
  (STDERR)     at packages/ostrio:files/server.js:22:50
  (STDERR)     at runWithEnvironment (packages/meteor.js:1180:24)
Updated file collection
[FilesCollection] [Abort Method]: D6u6xP9xxMCocjyx5 - 

It seems that our onAfterUpload modifications are breaking things for bigger files.
The pseudocode in there is as follows: the file contents get read with fs.readFileSync, then we decode the contents using RSA - this takes around 120 seconds for a 16mb file, so it's pretty slow. Then the decrypted file contents get re-encrypted, but with AES.

The next step is deleting the uploaded file using fs.removeSync(the path of the file).
Immediately after that the newly encrypted file contents are written to disk with fs.writeFileSync using the same path as before, but potentially a different name - all of this done with base64 encoding.

After that we do an update on the Files collection, referencing the file by its id - in this operation we update the potentially new name of the file, as well as the path and versions.original.path with the path + "new name" value.

The upload error seems to be triggered just after the file is deleted from disk, but it doesn't actually get logged to the console until after the file is re-written to disk. I think this is because the observer is a bit slow to realize the file is missing from disk; with smaller files the observer probably doesn't even notice, since it all happens too fast, but with bigger files that doesn't seem to be the case.

Maybe the logic can be tweaked a little here to avoid this from happening, but I would like to avoid marking the upload as successful before the file has been re-encrypted and re-written to disk. Open to any suggestions you might have for dealing with this.

@dr-dimitru
Member

Not sure why this may happen. Here are some ideas:

  1. The message [FilesCollection] [_preCollectionCursor.observe] [removed] indicates that the upload has nearly finished: the file is written to the FS, and only cleaning it from memory is left;
  2. It feels like the AES decryption is blocking other processes, although that's almost impossible, as they are asynchronous and should be in different Fibers;
  3. As many of the processes are asynchronous, they may interfere in some weird way with the AES decryption;

My suggestions:

  1. Wrap onAfterUpload in process.nextTick();
  2. Delay onAfterUpload with a timeout by wrapping it in Meteor.setTimeout; try different timings here;
  3. Wrap onAfterUpload in a separate Fiber using the fibers NPM package directly, like Fiber(() => {/*..*/}).run(); - see the sleep example in the official docs: https://www.npmjs.com/package/fibers#sleep
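A rough sketch of the three options; the heavyPostProcess helper and the collection config are hypothetical:

```javascript
import Fiber from 'fibers';
import { Meteor } from 'meteor/meteor';
import { FilesCollection } from 'meteor/ostrio:files';

const Files = new FilesCollection({
  collectionName: 'Files',
  onAfterUpload(fileRef) {
    // Option 1: defer past the current tick.
    // process.nextTick(() => heavyPostProcess(fileRef));

    // Option 2: delay with a timeout (try different timings).
    // Meteor.setTimeout(() => heavyPostProcess(fileRef), 100);

    // Option 3: run in a separate Fiber.
    Fiber(() => heavyPostProcess(fileRef)).run();
  }
});
```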

@dr-dimitru
Member

dr-dimitru commented Dec 18, 2017

Got one more idea:

  1. Add an encrypted {Boolean} option to the file's object;
  2. Listen to the files collection with observe for new entries;
  3. If a new entry has encrypted === true, set encrypted to "decrypting" and start the decryption process, again in a separate Fiber, as suggested in n.3 above;
  4. Once the file is fully decrypted, change the encrypted option to false;
  5. Publish/select/use only files where encrypted === false.

wdyt?
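The suggested state machine could look roughly like this. A sketch only: the field semantics follow the list above, while the Files collection, publication name, and decryptInPlace helper are hypothetical:

```javascript
// Watch for newly uploaded, still-encrypted files.
Files.collection.find({ encrypted: true }).observe({
  added(fileRef) {
    // Mark as in-progress so the observer doesn't pick the file up twice.
    Files.collection.update(fileRef._id, { $set: { encrypted: 'decrypting' } });
    Fiber(() => {
      decryptInPlace(fileRef); // hypothetical; heavy work runs in its own Fiber
      Files.collection.update(fileRef._id, { $set: { encrypted: false } });
    }).run();
  }
});

// Publish/select only fully decrypted files.
Meteor.publish('files.ready', () => Files.collection.find({ encrypted: false }));
```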

@sylido
Author

sylido commented Dec 19, 2017

Hey @dr-dimitru thanks for the suggestions.
I tried the Meteor.setTimeout approach, and it seems to run the onPostUpload work in a separate, non-blocking way, so the upload completes almost immediately. The con of this method is that the actual decryption, re-encryption, deletion of the original file, and update of the Files collection record don't happen until a couple of seconds afterwards, so the user gets notified that everything is done when it's not. Still, this fixed the timeout error, and it's nice that the upload is much, much faster now, even with streams and chunks set to dynamic.

I guess the next step is to actually implement your second suggestion and update an encrypted variable when everything completes in that thread, which will in turn make the file available for download. I'll update and close the issue if I succeed with this method.

Thanks again, much appreciated.

@dr-dimitru
Member

Great, let's keep this thread open until you find a solution for your case.

@sylido
Author

sylido commented Dec 20, 2017

Hey @dr-dimitru, I was able to implement this successfully, although it took some time because of some aggregation and reactivity issues. I'm satisfied with the current solution, but I still feel like the file should be marked as uploaded successfully before we get to the onAfterUpload callback. Not sure what the [FilesCollection] [_preCollectionCursor.observe] observer does, but I don't think it needs to track the actual path and availability of the file, since that's what messes things up, for me at least. If it's pretty much an edge case, let's close this issue - I consider it resolved for my purposes.

Thanks for the help again !

@dr-dimitru
Member

Hi @sylido ,

I'm glad you have accomplished this task. As I've said before, feel free to share your experience with our community; I believe encrypted Client - Server upload is in quite popular demand. Send a PR to the wiki, or publish a post to Medium and we will link it from our wiki.

_preCollection is the cache/middleware in front of the collection where successfully uploaded files are stored. It helps to avoid unfinished, corrupted, and stale uploads. The backend event-system is built on top of this observer. It tracks all records via the isFinished field.

A file record never exists in both collections: first it's fully removed from _preCollection, then moved to the FilesCollection.

@sylido
Author

sylido commented Dec 20, 2017

Hello @dr-dimitru ,

I'll try to make a PR for the wiki - I'm thinking of a super simple one-page meteor program that uses the Meteor files package and several other cryptography libraries to accomplish the client-side encryption. Since I'm having a bit of trouble with RSA (it's slow), I'll use AES instead for this one.

As for _preCollection, thanks for the explanation - it makes sense, but the problem is that I would like to be able to mark the file as finished before we do the whole decrypt/encrypt/delete-file/write-file-again-to-disk routine. Being able to mark it as done first should help with the error, and would enable the non-asynchronous notification logic I was hoping for. I guess asynchronous might be better in some cases, but I'd love to have the option.

@sylido sylido closed this as completed Dec 20, 2017
@dr-dimitru
Member

non-asynchronous notification logic that I was hoping for

I'm afraid that's not possible, at least in the current implementation.
The event-based logic is asynchronous, and right now there is no way to alter that.

@sylido
Author

sylido commented Dec 18, 2018

Hi @dr-dimitru, I made a sample project for Client Side Encryption based on the simple insert-file demo project. Feel free to check it out and see if you can get it to run; I did include some libraries like lodash/crypto-js and base64-arraybuffer.

You would also need to create a /uploadFiles directory in your root and give it full permissions, so the encrypted files can be stored there. Of course this can be changed in the code if you prefer a different default.

For one of the files - a web worker - I use watchify, browserify and uglifyjs; the instructions are in the encFile.js file. It doesn't actually need to be run, since I've included the browserified, mangled version that actually gets used, encFileMangled.js.
