
stream_read not returning the requested length for encrypted remote storage #34599

Closed
martink-p opened this issue Feb 24, 2019 · 3 comments

Comments

@martink-p
Contributor

Hello ownCloud developers.

I recently installed ownCloud from the latest available Docker image and configured it to use an external storage (Strato HiDrive) with encryption enabled.
While synchronizing/uploading files I ran into a problem. The files were uploaded and encrypted correctly, but downloading them failed. The logfile showed the error message "missing signature". Because I couldn't wait for an official fix, and because you may not be able to reproduce the issue on a different setup, I investigated on my own. Using the logged stack trace I dug into the problem and found a solution, which I want to share now.

Steps to reproduce

  1. The block size for encrypted storage is always fixed at 8 kB.
  2. parent::stream_read in the class Encryption (Files/Stream/Encryption.php) does not return 8 kB even though that much was requested.
  3. Passing the smaller data block downstream (e.g. to Crypto/Crypt.php of the encryption app) fails at signature checking.
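The short-read behaviour in step 2, and the loop that compensates for it, can be sketched outside ownCloud. This is a standalone illustration, not ownCloud code: ShortReadSource is a made-up stand-in for the underlying remote stream, and the 1000-byte limit per call is arbitrary.

```php
<?php
// A source whose read() can return fewer bytes than requested,
// like parent::stream_read() over a remote storage backend.
class ShortReadSource {
    private $data;
    private $pos = 0;

    public function __construct($data) {
        $this->data = $data;
    }

    // Returns at most 1000 bytes per call, regardless of $count.
    public function read($count) {
        $chunk = substr($this->data, $this->pos, min($count, 1000));
        $this->pos += strlen($chunk);
        return $chunk;
    }
}

// A single read of one 8 kB block comes back short ...
$src = new ShortReadSource(str_repeat('x', 8192));
echo strlen($src->read(8192)), "\n"; // 1000, not 8192

// ... so loop until the block is complete or the source is exhausted.
$src = new ShortReadSource(str_repeat('x', 8192));
$block = '';
$remaining = 8192;
do {
    $chunk = $src->read($remaining);
    $block .= $chunk;
    $remaining -= strlen($chunk);
} while ($remaining > 0 && strlen($chunk) > 0);
echo strlen($block), "\n"; // 8192
```

The same looping idea is what the fix below applies inside the Encryption stream wrapper.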

Reason for failing at signature checking

Signature checking fails because, when the file was encrypted, the header was padded with "---" to fill up an 8 kB block. The marker string "00sig00" does not appear within that padding, and since the data block passed downstream is smaller than 8 kB, there is no signature to check where one is expected.

Solution I came up with

Indeed, parent::stream_read returns a data block of different length on each call: the upstream class apparently returns as much as it currently has available while downloading and caching.
Since stream_read does not guarantee the requested block size on its own, and the fixed block size was introduced because of a PHP bug (according to the source files), the best solution IMHO is to read from the stream in a loop until the block has the required length. This leaves the rest of the system untouched and also enforces the correct size of the data block (which the remaining algorithm appears to rely on). It also does not depend on fixed or unfixed bugs in some library or PHP function, which makes the solution portable.

Here we go:
I simply added a private function to the class Files/Stream/Encryption:

  /**
   * stream_read wrapper that keeps reading until the complete
   * requested block has been collected (or the stream is exhausted).
   *
   * @param int $blockSize number of bytes to read
   * @return string the collected data, at most $blockSize bytes long
   */
  private function stream_read_block($blockSize) {
    $remaining = $blockSize;
    $data = '';

    do {
      $chunk = parent::stream_read($remaining);
      $chunkLen = strlen($chunk);
      $data .= $chunk;
      $remaining -= $chunkLen;
    } while ($remaining > 0 && $chunkLen > 0);

    return $data;
  }

Then I replaced the calls to parent::stream_read with $this->stream_read_block in the functions readCache and skipHeader.

That's it.
Best regards,
Martin.

Here are some details about my configuration:

Server configuration

Operating system:
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=debian
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
Architecture: x86_64

Web server:
Latest Docker Image

Database:
Latest Docker Image

PHP version:
PHP 7.2.10-0ubuntu0.18.04.1

ownCloud version:
10.0.10.4

Updated from an older ownCloud or fresh install:
Fresh Install

Where did you install ownCloud from:
Docker Image

If you have access to your command line run e.g.:
{
    "system": {
        "apps_paths": [
            {
                "path": "/var/www/owncloud/apps",
                "url": "/apps",
                "writable": false
            },
            {
                "path": "/var/www/owncloud/custom",
                "url": "/custom",
                "writable": true
            }
        ],
        "trusted_domains": [
            "localhost"
        ],
        "datadirectory": "/mnt/data/files",
        "dbtype": "mysql",
        "dbhost": "db",
        "dbname": "owncloud",
        "dbuser": "REMOVED SENSITIVE VALUE",
        "dbpassword": "REMOVED SENSITIVE VALUE",
        "dbtableprefix": "oc_",
        "log_type": "owncloud",
        "supportedDatabases": [
            "sqlite",
            "mysql",
            "pgsql"
        ],
        "upgrade.disable-web": true,
        "default_language": "en",
        "overwrite.cli.url": "http://localhost/",
        "htaccess.RewriteBase": "/",
        "logfile": "/mnt/data/files/owncloud.log",
        "loglevel": 2,
        "memcache.local": "\OC\Memcache\APCu",
        "mysql.utf8mb4": "true",
        "filelocking.enabled": true,
        "memcache.distributed": "\OC\Memcache\Redis",
        "memcache.locking": "\OC\Memcache\Redis",
        "redis": {
            "host": "redis",
            "port": "6379"
        },
        "passwordsalt": "REMOVED SENSITIVE VALUE",
        "secret": "REMOVED SENSITIVE VALUE",
        "version": "10.0.10.4",
        "logtimezone": "UTC",
        "installed": true,
        "instanceid": "oc18zxbvnd2i",
        "ldapIgnoreNamingRules": false,
        "mail_domain": "REMOVED SENSITIVE VALUE",
        "mail_from_address": "REMOVED SENSITIVE VALUE",
        "mail_smtpmode": "php",
        "singleuser": false,
        "enable_certificate_management": true,
        "forcessl": false,
        "integrity.ignore.missing.app.signature": [
            "encryption"
        ]
    }
}

@ownclouders
Contributor

GitMate.io thinks the contributor most likely able to help you is @PVince81.

Possibly related issues are #13073 (Option to encrypt just external storage), #3685 (hashed filenames on encrypted storage), #13668 (Cannot download encrypted file version after moving to external storage), #23468 ([DAV] Do not return negative content length for files), and #10383 (Cannot download encrypted file with HTTP range request).

@martink-p
Contributor Author

Was my suggestion merged into OC?

@stale

stale bot commented Sep 19, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 10 days if no further activity occurs. Thank you for your contributions.
