
File Transport Checksums Between Server and Desktop #3735

Closed
MTRichards opened this issue Sep 2, 2015 · 60 comments


@MTRichards commented Sep 2, 2015

As an ownCloud admin, I want ownCloud to ensure that files transported from the desktop client to the server are not corrupted in transit, so that we can catch bugs and avoid corrupting customer data.

Acceptance Criteria:

  • Desktop sync client takes a checksum of files on the desktop when they are discovered and stores the checksum in the database
  • When the file is transmitted to the ownCloud server, the checksum is also transmitted
  • The server checksums the file when it arrives and compares it to the checksum transmitted by the client
  • If the checksums match, a success return code is sent
  • If the checksums fail to match, the file has been corrupted and the desktop skips that file. The file will be retransmitted on the next upload.
  • If the file is corrupted in transit, the file transfer log on the desktop reflects that this file was corrupted in transit and that it will be retried on the next sync run
    Note: this links to server issue XXX
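The verification round-trip described by these criteria can be sketched as follows. This is a minimal illustration only: the header name and hex encoding here are assumptions, not the actual wire format used by ownCloud.

```python
import zlib

def adler32_hex(data: bytes) -> str:
    # Render the Adler32 value as 8 hex digits.
    return format(zlib.adler32(data), "08x")

def client_upload_header(data: bytes) -> str:
    # Hypothetical type-prefixed header value (e.g. "Adler32:1f2c0489"),
    # modeled on the format discussed later in this thread.
    return "Adler32:" + adler32_hex(data)

def server_verify(received: bytes, header: str) -> bool:
    # Server side: recompute the checksum of the received bytes and
    # compare it to the value the client transmitted.
    ctype, _, expected = header.partition(":")
    if ctype.lower() != "adler32":
        return True  # unknown type: skip verification rather than fail
    return adler32_hex(received) == expected
```

If `server_verify` returns False, the server would respond with an error code, and per the criteria the client would skip the file and retry it on the next sync run.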

@MTRichards MTRichards added this to the 2.1-next milestone Sep 2, 2015

@dragotin dragotin self-assigned this Sep 7, 2015

@ckamm ckamm added the p2-high label Sep 30, 2015

@ckamm (Member) commented Oct 1, 2015

What needs to be done in the client?

  • Determine whether the server accepts checksums, possibly via capabilities. Currently checksumming needs to be explicitly enabled in the config file.
  • Store checksums in the database. (for performance optimization only, I guess?)
  • Check that checksum errors are handled nicely.
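Capability discovery could look roughly like this. The JSON key names below are assumptions for illustration, since the actual capabilities format was still being prepared at this point in the thread.

```python
import json

# Hypothetical capabilities payload; the real key names may differ.
sample = json.loads("""
{
  "capabilities": {
    "checksums": {
      "supportedTypes": ["Adler32", "SHA1"]
    }
  }
}
""")

def supported_checksum_types(caps: dict) -> list:
    # Default to an empty list, matching the initial client patch,
    # which "always returns the empty list and thus has no effect".
    return (caps.get("capabilities", {})
                .get("checksums", {})
                .get("supportedTypes", []))
```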

ckamm added a commit that referenced this issue Oct 1, 2015

Checksums: Prepare 'supported checksums' capability #3735
It currently always returns the empty list and thus has no effect.
@ckamm (Member) commented Oct 14, 2015

We also need to make sure getting an empty checksum header from the server during downloads isn't considered a validation failure - currently that seems to be the case. I wonder why b05ca52 did that -- @dragotin?

@dragotin (Contributor) commented Oct 14, 2015

@ckamm the patch you're pointing to checks the expected checksum type as in Adler32 or SHA or so. If that is empty, we do not have checksumming and assume validated() - which means all is ok.

@ckamm (Member) commented Oct 14, 2015

@dragotin Yes, but why would the checksum type configured in the config file matter when one validates a checksum the server sends?

And shouldn't an empty checksum header still yield validated? It sounds like only files that were uploaded by new clients will have checksums when using the owncloud server.

@ckamm (Member) commented Oct 15, 2015

@moscicki I'm touching the checksumming code and wonder: Do you use Adler32 or ADLER32? What does the server accept and what does it send? I'd prefer to just use one spelling and there's a small chance you're requiring one for uploads and the other for downloads...

@moscicki (Contributor) commented Oct 15, 2015

We treat the request header name as case-insensitive. So all these are accepted: Adler32, adler32, ADLER32

Response given by the server: Adler32
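The matching client-side comparison can be sketched in one function, assuming the client also treats the type name case-insensitively:

```python
def checksum_type_matches(received: str, configured: str) -> bool:
    # Compare checksum type names case-insensitively, mirroring the
    # server behaviour described above (Adler32 == adler32 == ADLER32).
    return received.strip().lower() == configured.strip().lower()
```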

@ckamm (Member) commented Oct 21, 2015

@moscicki Great, then the client will standardize on Adler32.

EDIT: Note that with 'standardize' I meant the spelling of this particular checksum type (instead of also sometimes sending ADLER32).

@ogoffart (Collaborator) commented Oct 21, 2015

The problem with Adler32 is that it is not safe against collisions, so we cannot use it to detect copies or moves. (Or can we?)

@dragotin (Contributor) commented Oct 21, 2015

@moscicki can you please enlighten us?

@moscicki (Contributor) commented Oct 21, 2015

What exactly do you want to use the checksum for? As Etag? Or deduplicate the transfer in case of a copy or simple "touch"? Not sure I understand how moves can be helped with the checksum...?

I think the answer depends on what you want to do with it ;-)



Best regards,
Kuba

@ogoffart (Collaborator) commented Oct 21, 2015

This is not decided yet. But if we have checksums in the database, we might as well use them to detect copies and moves more reliably than we do now.

@moscicki (Contributor) commented Oct 22, 2015

I think this requires an architectural discussion, also in the context of the new storage architecture promised for the ownCloud 9 server. IMO the sync client should be flexible enough to handle storage backends that already come with checksums (e.g. S3 with MD5, EOS with Adler32, ...). It is the storage backend behind the ownCloud server that tells the client what the checksum is; you cannot change that unless you want to compute it again yourself in the ownCloud server. For file downloads the checksum type comes in the response header, so the client just uses this information. For file uploads the client may want to discover which checksum type the server supports (this may be per-directory!) in order to compute the expected checksum and add it to the PUT header. Except for the discovery, this is what has already been implemented in the sync client, and for protecting the file transfer itself this should be sufficient.

Another point is what kind of checksum you store locally in the sync state db, whether this could or should be different from the transmission checksum (at the expense of computing the checksum twice), and how that can help you to optimize operations. If it can help you at all, it should be a very strong checksum that guarantees that checksum1 == checksum2 is equivalent to content1 == content2. This is not true for Adler32.
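Adler32's weakness is easy to demonstrate: collisions exist even for three-byte inputs, while a cryptographic hash such as SHA-256 distinguishes the contents. A small illustration (the example inputs are ours, not from the thread):

```python
import hashlib
import zlib

# Two different three-byte inputs with the same Adler32 checksum:
a, b = b"ACE", b"BAF"

same_adler = zlib.adler32(a) == zlib.adler32(b)  # a genuine collision
same_sha = hashlib.sha256(a).digest() == hashlib.sha256(b).digest()
```

So equal Adler32 values say nothing about equal content, which rules it out for copy/move detection; a strong hash would be needed for that.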

Having said that, it probably makes sense to first try to understand what kind of optimizations you want to achieve (deduplicating the transfer when propagating a local copy -- yes, that would be one such case -- but is it worth it?). For moves, I really do not see how this can help the client track them.

And, in addition to all that, checksumming should be considered in the context of delta sync -- this is potentially a big win. But it needs to be verified on real data.

The bottom line is that one should analyze the real usage of the system to get some idea of how important these things are. We are currently doing this kind of data mining on CERNBox, so any ideas about which pieces of information to extract to provide useful, real-life input to this discussion are welcome.

I propose to prepare the ground and aim for a discussion at the CS3 Workshop at ETH Zurich in January.

@dragotin (Contributor) commented Oct 22, 2015

Yes, actually there are three areas where we can use checksums, and often these are confused in discussions:

Checksums for Transportation Verification (#3735)

The idea here is to verify the integrity of a file's content after it has been transmitted over any kind of network, usually after it has been downloaded from the server or uploaded from a client to the server. A checksum is added to the header of the GET or PUT request, and the receiving side recalculates the content checksum of the file in its final location and compares it to the expected value from the header. This adds another level of safety to the transmission.

For this use case, any checksum already available from the storage backend should be used, to keep the effort for the server as low as possible. For this reason, the header that contains the checksum information prefixes the checksum with its type, for example SHA1:543353xadr32323.

If the type is unknown to the receiving side, the transmission verification cannot be done. This is currently not considered an error, but in the long run it could become a configuration option whether it should be.

Checksums to Support Discovering Changes

This use case has three aspects:

  1. Verification of moves: Currently moves are detected by comparing file size, inode and modification time; all values have to be equal. Still, there could be cases where this is not a move but a new file, so additionally verifying that the content checksums are equal would solve that. (See #2983)
  2. Detection of "wrong" change notifications on the local file system (the .eml problem, #3235): Even if the modification time changed, a content checksum could be used to check whether there really was a change.
  3. Avoiding touch changes: Currently ownCloud syncs files that have no content change but only a change of the modification time, e.g. via the touch command. This could be avoided by performing a content checksum check when the modification time changed but the file size did not.
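The touch-avoidance aspect could be sketched as a small decision function. This is an illustration of the proposed logic, not the client's actual code:

```python
import zlib

def needs_upload(new_data: bytes, old_size: int, new_size: int,
                 old_mtime: float, new_mtime: float,
                 stored_checksum: int) -> bool:
    # Proposed logic: when only the modification time changed,
    # consult the stored content checksum before re-uploading.
    if new_size != old_size:
        return True          # size changed: upload
    if new_mtime == old_mtime:
        return False         # nothing changed: skip
    # mtime changed but size did not: likely a `touch`; check content.
    return zlib.adler32(new_data) != stored_checksum
```

A plain `touch` changes only the mtime, so the checksum comparison lets the client skip the upload entirely.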

This use case is mainly isolated to a single site, so any checksum that is available and stored in the journal could be used.

Checksums to Enable Delta Sync

The main difference compared to the other two use cases is that here checksums over blocks of a file are needed, not checksums over the entire file. Delta sync algorithms usually split a file into blocks of equal size, calculate checksums over these blocks, and try to transmit only the blocks that have really changed. The tricky part is how the file is split so as to maximize the probability of finding unchanged blocks. (See #179)

For this use case, strong checksums are needed. Both client and server need to be able to recalculate and possibly store the checksum lists.
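The block-checksum idea can be sketched as follows. This is a naive, aligned comparison for illustration; real delta sync (rsync-style) additionally uses rolling checksums to catch content that has shifted position.

```python
import zlib

BLOCK_SIZE = 4  # tiny for illustration; real implementations use KB-sized blocks

def block_checksums(data: bytes, block_size: int = BLOCK_SIZE) -> list:
    # Checksum each fixed-size block of the file.
    return [zlib.adler32(data[i:i + block_size])
            for i in range(0, len(data), block_size)]

def changed_blocks(old: bytes, new: bytes) -> list:
    # Indices of blocks whose checksums differ; only these blocks
    # would need to be retransmitted.
    return [i for i, (a, b) in enumerate(zip(block_checksums(old),
                                             block_checksums(new)))
            if a != b]
```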

@moscicki (Contributor) commented Oct 22, 2015

@labkode (Member) commented Dec 1, 2015

@dragotin

As explained in comments before, if the server checksum type has a different case than the one configured in the client (Adler32 vs ADLER32, or MD5 vs md5), the client does not accept the checksum.

With EOS it is working correctly. EOS sends Adler32 in the response and the client is configured with Adler32. No problem.

To trigger this error I hijacked the response from EOS to insert the ADLER32 checksum type. The error appears in the client log when the client tries to download the file.

Is the RC1 client supposed to fix the case problem?

Cheers

@dragotin (Contributor) commented Dec 1, 2015

Thanks @labkode

I am not sure this is really an error. I mean, md5 != MD5, so I think this is acceptable behaviour. This feature needs attention from the administrator, so configuring it correctly should be possible.

So I assume we can consider this working?

@labkode (Member) commented Dec 1, 2015

@dragotin The client works correctly on Mac OS X. I have to try it on Linux and Windows, but if the codebase is the same they should also work correctly.

@labkode (Member) commented Dec 2, 2015

@dragotin Tested from Windows and Linux. Both work against EOS with Adler32.

@dragotin (Contributor) commented Jan 8, 2016

Reopening because we have to test that again against ownCloud server.

@dragotin dragotin reopened this Jan 8, 2016

@MTRichards MTRichards modified the milestones: 2.2-next, 2.1 Jan 8, 2016

@bboule bboule added the in progress label Jan 8, 2016

@MTRichards MTRichards referenced this issue Jan 8, 2016
3 of 3 tasks complete
@ckamm (Member) commented Jan 19, 2016

@bboule Are you saying you're currently working on this? Should I assign this to you?

@MorrisJobke (Member) commented Jan 19, 2016

@bboule Are you saying you're currently working on this? Should I assign this to you?

I don't think so. This is just a label of the tool he is using 🙈

@ckamm (Member) commented Jan 19, 2016

@MorrisJobke Thanks, that explains it. As far as I know, testing this can only start when owncloud/core#18716 is done.

@rullzer (Contributor) commented Jan 29, 2016

See owncloud/core#21997 for a WIP PR... it can store checksums provided in the OC-Checksum header, will return the OC-Checksum header if there is a checksum, and on PROPFIND the checksums will be returned as well in <oc:checksums>.

The format for the header field is basically: <TYPE>:<CHECKSUM>(, <TYPE>:<CHECKSUM>)*
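Parsing that format could look like this (a sketch written against the grammar quoted above; malformed parts are simply skipped):

```python
def parse_oc_checksum(header: str) -> dict:
    # Parse "<TYPE>:<CHECKSUM>(, <TYPE>:<CHECKSUM>)*" into a dict
    # mapping checksum type to checksum value.
    result = {}
    for part in header.split(","):
        ctype, sep, value = part.strip().partition(":")
        if sep and ctype and value:
            result[ctype] = value
    return result
```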

@rullzer (Contributor) commented Feb 3, 2016

owncloud/core#21997 is in. However, it was slimmed down, so right now only one checksum can be sent.

But everything should work for 9.0.

@dragotin (Contributor) commented Feb 19, 2016

The acceptance criteria of this issue are not met, as the server in the current implementation (9.0) does not validate the checksum that comes with the client's file upload, but only stores it.

In fact, with that we have file transport verification only on downloads. The server also stores checksums without first verifying them against the uploaded file.

@DeepDiver1975 (Member) commented Feb 23, 2016

The server checksums the file when it arrives at the server, compares it to the checksum transmitted by the client previously
If the checksums match, a success return code is sent
If the checksums fail to match, the file has been corrupted and the desktop skips that file. The file will be retransmitted on the next upload.

These criteria above should have been added to the corresponding server ticket owncloud/core#18716.

@MTRichards @karlitschek @cmonteroluque this somehow got lost while coordinating client and server requirements - hopefully we can avoid this in the future

@dragotin (Contributor) commented Feb 29, 2016

For the sake of completeness, here is the current state with 9.0.

With the server not verifying the uploaded checksum, this can happen:

    1. UserA uploads a file to the server with checksumA.
    2. File gets corrupted during transport.
    3. Server stores the corrupted file and checksumA.
    4. UserB downloads the file and checksum, and they don't match.
    5. UserB will invalidate the file and redownload.

At the same time, UserA thinks his file was properly uploaded to the server, which is not the case.
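The scenario can be simulated in a few lines (a sketch of the 9.0 behaviour described above, with made-up file contents):

```python
import zlib

def adler_hex(data: bytes) -> str:
    return format(zlib.adler32(data), "08x")

original = b"user A's file content"
claimed = adler_hex(original)        # checksumA, sent alongside the upload

# Steps 2 and 3: the bytes are corrupted in transit, yet the 9.0 server
# stores both the corrupted file and checksumA without verifying them.
stored_on_server = b"user A's fiXe content"

# Step 4: only user B's download validation catches the mismatch.
download_ok = adler_hex(stored_on_server) == claimed
```

Verifying the checksum on upload would move the failure to step 1, where user A's client could retry immediately.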

@mcastroSG commented May 11, 2016

Hi hi:

Test 1

Steps Executed

1.- Connect desktop client to a server that has cksum capabilities enabled
2.- Copy a big file (at least 50 MB) to the Shared sync folder
3.- While syncing, connect to the server and modify any chunk, adding some characters to be sure that the final cksum and the initial cksum do not match

Results

As could be expected, the server does not currently validate the cksum on uploads, so in this case we could end up with a corrupted file uploaded to the server; what is described right above could happen.

Test 2

Steps Executed

1.- Connect desktop client to a server that has cksum capabilities enabled
2.- Add a file1 to Shared sync folder
3.- Wait till sync is done
4.- Connect to the server DB and update the checksum value in the filecache table for the file we uploaded at step 3
5.- Remove connection and add a new connection to the same server

Results

An error is raised to let us know that file1 does not match the expected cksum, and sync for that file is retried at the next sync run.

Test 3

Steps executed

1.- Connect desktop client to a server that has cksum capabilities enabled
2.- Add a file2.eml to Shared Sync folder
3.- Wait till sync ends
4.- touch file2.eml
5.- Wait till sync ends again

Results

File2.eml is not synced again.

Test 4

Steps executed

1.- Connect desktop client to a server that has cksum capabilities enabled
2.- Add a file2.txt to Shared Sync folder
3.- Wait till sync ends
4.- touch file2.txt
5.- Wait till sync ends again

Results

File2.txt is uploaded again.

Test 5

Steps executed

1.- Connect desktop client to a server that has cksum capabilities enabled
2.- Add several files to Shared sync folder
3.- Wait till sync ends
4.- Check in the DB that the stored checksum matches the local checksum, e.g. md5

Results

It works fine, checksum matches.

@dragotin what do you think we can do with this issue?

Server OS: Ubuntu
Server Version: 9.1.0 Alpha modified to add cksum capabilities

Client OS: OS X El Capitán 10.11.4
Client Version: Version 2.2.0rc1 (build 3346)

@dragotin (Contributor) commented May 11, 2016

@mcastroSG very good work, all tests make sense and succeed. Thanks. I think you can close this.

@mcastroSG commented May 11, 2016

Thanks a lot !! 😃

@meekjt commented May 15, 2016

What is the config option for enabling support of file transport checksums on the server side? It does not seem to be documented anywhere.

@danimo (Contributor) commented May 15, 2016

@meekjt The option is not there yet on the server side. It will come in one of the next versions.
