Storing doc attachments in the main bucket is inefficient, limits them to something like 40 MB, and puts lots of extra documents in the bucket (which we want to avoid).
We should instead store them externally, probably as a content-addressable store the way Couchbase Lite does. CBFS would be the best way to do this, since the files need to be accessible from anywhere in the cluster.
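To illustrate the content-addressable idea: the blob's key is derived purely from its contents, so identical attachments dedupe automatically and the doc only needs to carry a digest. A minimal sketch in Go (the key prefix and the choice of SHA-1 here are assumptions for illustration, not what CBFS or Couchbase Lite actually mandates):

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// attachmentKey derives a storage key from the attachment bytes alone.
// Identical content always maps to the same key, so duplicates are
// stored once, and the doc metadata only needs the digest to fetch it.
func attachmentKey(data []byte) string {
	sum := sha1.Sum(data)
	return "att:sha1-" + hex.EncodeToString(sum[:])
}

func main() {
	blob := []byte("attachment bytes")
	key := attachmentKey(blob)
	fmt.Println(key) // e.g. att:sha1-3f8a...
	// The doc would then carry only {"digest": key, "length": len(blob)},
	// with the blob itself living in the external store under that key.
}
```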
But I think it will create other issues. When I first started with CouchCocoa and iriscouch, I frequently saw sync issues with large files (20-40 MB) from mobile devices. The problem was that a stable connection could not be held open long enough for the binary attachment transfer to complete, often resulting in 5+ attempts to sync those files.
I realize it would probably break the CouchDB replication protocol, but if sync_gateway is going to support large attachments, there really needs to be a way to chunk the data during replication to cope with unreliable mobile connections.
At the app level, I have been looking at using a channel to represent a binary file, with that channel containing multiple documents, each holding a chunk of the data. I use a hash tree so the chunks can be validated and reassembled, out of order if necessary.
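As a rough sketch of that approach (the chunk size, SHA-256, and the pairing rule are all illustrative choices, not what I actually ship): each chunk becomes its own document, and a Merkle root over the chunk hashes lets the receiver verify chunks individually as they arrive, in any order, and pinpoint exactly which chunk is corrupt.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const chunkSize = 1 << 20 // 1 MB per chunk document (assumed size)

// split breaks a blob into fixed-size chunks, one per document.
func split(data []byte) [][]byte {
	var chunks [][]byte
	for len(data) > 0 {
		n := chunkSize
		if len(data) < n {
			n = len(data)
		}
		chunks = append(chunks, data[:n])
		data = data[n:]
	}
	return chunks
}

// merkleRoot hashes each chunk, then pairwise-hashes the levels up to a
// single root. Any single chunk can be verified against the tree without
// waiting for the others, which is what allows out-of-order reassembly.
func merkleRoot(chunks [][]byte) [32]byte {
	level := make([][32]byte, len(chunks))
	for i, c := range chunks {
		level[i] = sha256.Sum256(c)
	}
	for len(level) > 1 {
		var next [][32]byte
		for i := 0; i < len(level); i += 2 {
			if i+1 == len(level) {
				next = append(next, level[i]) // odd node carries up
				continue
			}
			pair := append(level[i][:], level[i+1][:]...)
			next = append(next, sha256.Sum256(pair))
		}
		level = next
	}
	return level[0]
}

func main() {
	blob := make([]byte, 3*chunkSize+100) // a ~3 MB attachment
	chunks := split(blob)
	fmt.Printf("%d chunks, root %x\n", len(chunks), merkleRoot(chunks))
}
```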
But if this were built into replication itself, it would make syncing large attachments with mobile devices a lot more practical.